A&A, Volume 643, November 2020
Article Number: A43
Number of page(s): 29
Section: Extragalactic astronomy
DOI: https://doi.org/10.1051/0004-6361/201936278
Published online: 29 October 2020
A comparative analysis of denoising algorithms for extragalactic imaging surveys
1 INAF – Osservatorio Astronomico di Roma, Via Frascati 33, 00040 Monteporzio, Italy
e-mail: valerio.roscani@inaf.it
2 Department of Mathematics, Sapienza University of Rome, P.le Aldo Moro, 5, 00185 Rome, Italy
3 Department of Mathematics and Applications “Renato Caccioppoli”, University of Naples Federico II, Via Cintia, Monte S. Angelo, 80126 Naples, Italy
Received: 9 July 2019
Accepted: 1 September 2020
Aims. We present a comprehensive analysis of the performance of noise-reduction (denoising) algorithms to determine whether they provide advantages in source detection by mitigating noise in extragalactic survey images.
Methods. The methods we analyze here are representative of different algorithmic families: Perona-Malik filtering, bilateral filter, total variation denoising, structure-texture image decomposition, non-local means, wavelets, and block-matching. We tested the algorithms on simulated images of extragalactic fields with resolution and depth typical of the Hubble, Spitzer, and Euclid Space Telescopes, and of ground-based instruments. After choosing their best internal parameter configurations, we assessed their performance as a function of resolution, background level, and image type, in addition to testing their ability to preserve the objects' fluxes and shapes. Finally, we analyzed, in terms of completeness and purity, the catalogs extracted after applying denoising algorithms on a simulated Euclid Wide Survey VIS image and on real H160 and K-band (HAWK-I) observations of the CANDELS GOODS-South field.
Results. Denoising algorithms often outperform the standard approach of filtering with the point spread function (PSF) of the image. Applying structure-texture image decomposition, Perona-Malik filtering, the total variation method by Chambolle, and bilateral filtering on the Euclid-VIS image, we obtain catalogs that are both more pure and complete by 0.2 magnitude than those based on the standard approach. The same result is achieved with the structure-texture image decomposition algorithm applied on the H160 image. The relative advantage of denoising techniques with respect to PSF filtering rises with increasing depth. Moreover, these techniques better preserve the shape of the detected objects with respect to PSF smoothing.
Conclusions. Denoising algorithms provide significant improvements in the detection of faint objects and enhance the scientific return of current and future extragalactic surveys. We identify the most promising denoising algorithms among the 20 techniques considered in this study.
Key words: techniques: image processing / methods: numerical / methods: data analysis / surveys
© ESO 2020
1. Introduction
Measuring the amount of photons that we receive from astronomical sources over a given range of wavelengths is the primary way to gather information about the Universe. Since the advent of digital photography in the 1980s, charge-coupled device (CCD) imaging has been one of the primary ways by which we do so. Currently, CCD devices can reach 100 million pixels, with read noise as low as one electron, almost 100% quantum efficiency, and sensitivity from the X-rays to the near-infrared (see, e.g., Lesser 2015, for a review).
Before being ready for the extraction of meaningful scientific content, astronomical images must be processed in order to, for instance, combine different observations into a single mosaic, correct for flat-field, transients, artifacts, and defects, and subtract a global or local background. Once these preparatory steps are completed, the quality of the image mainly depends on its resolution capability (proportional to λ/D, the ratio between the observed wavelength and the diameter of the telescope, in the case of diffraction-limited instruments such as space observatories, or set by the atmospheric seeing for ground-based facilities) and on its depth (i.e., the magnitude at a given reference signal-to-noise ratio, S/N), which mainly depends on the duration of the observations (the exposure time). Since increasing the latter is often unfeasible or too demanding, searching for alternative methods to increase the S/N is important. A possible solution is the application of noise reduction (denoising) techniques.
Wavelet transforms are a standard and popular tool for denoising and detecting sources in astronomical images. The technique is extremely versatile, as it can be applied across a wide range of scientific cases (e.g., X-ray images such as the XMM-LSS survey (Pierre et al. 2004) and the Fermi catalog (Ackermann et al. 2013; Principe et al. 2018), cosmic microwave background maps (Starck et al. 2004), N-body simulations (Romeo et al. 2003), etc.). Other interesting applications are summarized in Starck et al. (2006). Among the different possible implementations, the most widely used is the so-called “à trous” algorithm (Shensa 1992). This algorithm is an undecimated wavelet transform (UWT), and it is also isotropic, which makes it very efficient for the detection of isotropic objects and a popular choice in astronomical image processing, where many objects (e.g., stars, galaxies, galaxy clusters) are nearly isotropic (Starck et al. 2014).
With regard to extragalactic optical/near-infrared imaging, images are typically convolved with a PSF-shaped kernel to enhance source detection (see an application of the lemma in Neyman & Pearson 1933); this is the most standard example of a denoising algorithm since filtering reduces the noise variance, allowing real sources to rise above the background. In many familiar cases, the typical PSFs of telescopes are quite similar to 2D Gaussians, making the PSF filtering basically indistinguishable from a Gaussian filtering. However, in several non-astronomical applications of image analysis, this approach is often outclassed by other, more refined methods that are designed to be more efficient and to better preserve the borders and edges of the sources.
A special mention is reserved for machine learning and deep learning techniques, which have lately gained popularity in image processing and astronomy. An interesting application was proposed by Beckouche et al. (2013), where a sparse dictionary-learning algorithm was tested on multiple astronomical images. Autoencoder neural networks for image denoising also appear to be very promising (Vincent et al. 2008, 2010; Xie et al. 2012). Some applications of autoencoders in different areas of astronomy can be found in the literature, even if they are primarily used for other purposes (e.g., spectral energy distribution recovery, Frontera-Pons et al. 2017; denoising of gravitational waves, Shen et al. 2017; or stellar cluster detection, Karmakar et al. 2018). Although these techniques are certainly highly appealing, the aim of this work is to perform a comprehensive comparison of “traditional” denoising algorithms based on non-linear partial differential equations (PDEs) and variational methods. The comparison with approaches based on machine learning will be developed in future works.
We compared several classes of denoising techniques to determine which ones yield the best improvements in source detection. To this aim, we performed an extended set of tests. We considered many different noise reduction algorithms, roughly belonging to the following families: Perona-Malik (PM) filtering, bilateral filter, total variation (TV) denoising, structure-texture image decomposition, block-matching, non-local means, and wavelets. The numerical methods employed range from variational methods to PDE-based techniques, in addition to some statistical methods.
We tested these methods using two different datasets. First, we focused on simulated images, created by state-of-the-art codes and prescriptions to mimic different realistic cases. This simplified environment has the advantage of allowing a detailed analysis of the results since the “reality” is perfectly known. For real images, we applied the algorithms that give the best results obtained on the simulated dataset to check if the improvement is confirmed. To our knowledge, this is the first attempt to extensively compare a large number of denoising algorithms in an astrophysical context. In general, the performance of any of these methods depends on the kind of noise that affects the image. Here, we are mainly interested in extragalactic imaging and, in particular, we focus on the next generation of optical and near-infrared instruments and surveys such as Euclid (Laureijs et al. 2011), The Large Synoptic Survey Telescope (LSST Science Collaboration et al. 2009), Dark Energy Survey (DES, Dark Energy Survey Collaboration et al. 2016), and Wide Field Infrared Survey Telescope (WFIRST, Spergel et al. 2015). The paper is organized as follows: in Sect. 2, we list and briefly describe all the denoising methods used in our tests, providing mathematical formulations and code information. In Sect. 3, we present our datasets. In Sect. 4, we describe and discuss all the tests we carried out, presenting our results in Sect. 5. In Sect. 6, we apply the methods on real images from space and ground-based instruments. Finally, Sect. 7 summarizes the main results, along with a discussion of possible future applications. Throughout this paper, we adopt the AB magnitude system (Oke & Gunn 1983) and a ΛCDM cosmology, with Ωm = 0.3, ΩΛ = 0.7, H0 = 70 km s−1 Mpc−1.
2. Denoising techniques
The focus of this paper is a comparison of different denoising techniques that have been proposed in the literature, applied to astronomical images. As stated in the introduction, these images have specific features, so an efficient denoising method is crucial for extracting the information they contain and can serve as a preliminary step for other image processing tasks, such as image segmentation or deblurring.
To select the most efficient denoising approach for extragalactic survey images, we analyzed different method classes covering the main families of noise reduction techniques, namely, non-linear filtering (Sect. 2.2), bilateral filter (Sect. 2.3), TV denoising (Sect. 2.4), image decomposition (Sect. 2.5), wavelets (Sect. 2.6) and non-local means (Sect. 2.7). In the following, we briefly summarize the mathematical formulations of these techniques and provide information on the codes for reproducibility.
2.1. Gaussian smoothing
Here, we consider the intensity function I(x, y) of a noisy image, with (x, y) ∈ Ω, where Ω ⊂ ℝ² is the reconstruction domain. We let Iclean be the desired clean image. An image with an additive Gaussian noise component is

I(x, y) = Iclean(x, y) + η(x, y), (1)

where η ∼ N(μ, σ) is the additive noise component.
Of course, we want to reconstruct Iclean from I. This filter uses a Gaussian function for calculating the transformation to apply to each pixel in the image. Mathematically, applying a Gaussian filter to an image corresponds to convolving the image with a Gaussian function. Since the Fourier transform of a Gaussian is another Gaussian, applying a Gaussian smoothing has the effect of reducing the image’s high-frequency components; a Gaussian filter is, thus, a low-pass filter. In two dimensions, it is the product of two Gaussian functions, one in each dimension, so that the low-pass Gaussian filter is

G(x, y) = 1/(2πσ²) exp[ −(x² + y²)/(2σ²) ], (2)
where x is the distance from the origin in the horizontal axis, y is the distance from the origin in the vertical axis, and σ is the standard deviation of the Gaussian distribution.
Filtering the image I : Ω ⊂ ℝ² → ℝ with a “low-pass” Gaussian filter corresponds to processing it with the heat equation (Gabor 1965; Lindenbaum et al. 1994), that is, solving the linear PDE

∂I/∂t (x, y, t) = ΔI(x, y, t),  I(x, y, 0) = I0(x, y), (3)

which has a diffusive effect on the initial datum I0, for a small fixed time TC > 0. The relation between the Gaussian filter (2) and the problem (3) is that the solution of the heat equation is a convolution with the Gaussian filter, that is,

I(x, y, TC) = (Gσ ∗ I0)(x, y), (4)

with σ = √(2TC). It is well known that applying this filter does not preserve the edges. This edge blurring is due to the isotropic diffusion.
We can get an improvement in three different ways: by modifying the heat equation (see Sect. 2.2); by making convolution “nonlinear” (see Sect. 2.3); by defining an optimization problem (see Sect. 2.4).
Code information. In this paper, we use a simple Gaussian smoothing with a kernel that approximates a PSF of known full width at half maximum (FWHM; Winkler 2011), referring to it as the “PSF”, whereas with the term “Gaussian” we refer to the Gaussian filter with internal parameter σ. We made use of the “gaussian_filter” routine implemented in the Python package Scipy1 (Jones et al. 2001), with

σ = FWHM / (2√(2 ln 2)),

which is easily obtained by defining the Gaussian kernel radius r = √(x² + y²): the kernel maximum is at r = 0, and imposing G(r = FWHM/2) = G(0)/2 yields the relation above.
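As a minimal illustration (not the authors' exact script), the PSF-like smoothing can be reproduced with SciPy's gaussian_filter, deriving σ from an assumed PSF FWHM; the FWHM value and test image below are purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def psf_smooth(image, fwhm_pix):
    """Smooth an image with a Gaussian kernel of a given FWHM (in pixels).

    Sketch only: sigma is derived from the FWHM of the (assumed Gaussian)
    PSF through FWHM = 2 * sqrt(2 * ln 2) * sigma.
    """
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(image, sigma=sigma)

# Illustrative usage on synthetic noise (hypothetical FWHM value):
noisy = np.random.normal(0.0, 1.0, size=(256, 256))
smoothed = psf_smooth(noisy, fwhm_pix=2.0)
```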
2.2. Filters of Perona-Malik type
An improvement on the simple Gaussian filter is obtained by modifying the heat equation. Following the PM model (Perona & Malik 1990), we use large values of |∇I| as an indicator of the edge points of the image in order to stop the diffusion at these points. In this way, we move from an isotropic to an anisotropic diffusion:

∂I/∂t = div( g(|∇I|) ∇I ). (5)
Equation (5) must be complemented with suitable boundary conditions (e.g., homogeneous Neumann boundary conditions) and an initial condition. Perona and Malik pioneered the idea of anisotropic diffusion and proposed two functions for the diffusion coefficient (also called edge-stopping functions):

g1(|∇I|) = exp[ −(|∇I|/K)² ],  g2(|∇I|) = 1 / [ 1 + (|∇I|/K)² ],

where K is the gradient magnitude threshold parameter that controls the amount of diffusion. We also consider three other edge-stopping functions that have been proposed after the original work by Perona and Malik:
Black et al. (1998) proposed an edge-stopping function based on Tukey’s biweight, defined as

g(|∇I|) = ½ [ 1 − (|∇I|/K)² ]²  if |∇I| ≤ K,  and g(|∇I|) = 0 otherwise.
Guo et al. (2012) proposed the following function:
where
Weickert (1998) proposed:
Code information. This method has been implemented by us in C++ and it is available online3.
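The implementation used in the paper is the authors' C++ code; the following is only a minimal Python sketch of explicit Perona-Malik diffusion with the exponential edge-stopping function g1, where the values of K, the time step, and the number of iterations are illustrative assumptions.

```python
import numpy as np

def perona_malik_g1(image, n_iter=50, K=1e-3, dt=0.1):
    """Minimal explicit Perona-Malik diffusion sketch (not the authors' C++ code).

    Uses g1(s) = exp(-(s/K)^2) as the edge-stopping function and replicated
    borders as a crude stand-in for homogeneous Neumann boundary conditions.
    """
    u = image.astype(np.float64).copy()
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        # Finite differences toward the four nearest neighbours
        dn = p[:-2, 1:-1] - u
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        # Edge-stopping weights: diffusion is suppressed across strong gradients
        cn, cs = np.exp(-(dn / K) ** 2), np.exp(-(ds / K) ** 2)
        ce, cw = np.exp(-(de / K) ** 2), np.exp(-(dw / K) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```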
2.3. Bilateral filter
The bilateral filter is an edge-preserving denoising algorithm that was first introduced by Tomasi & Manduchi (1998). It is defined as (see also Banterle et al. 2012)

I(x) = 1/C(x) Σ_{xi ∈ Ω} I0(xi) fr(‖I0(xi) − I0(x)‖) gs(‖xi − x‖),

with normalization factor

C(x) = Σ_{xi ∈ Ω} fr(‖I0(xi) − I0(x)‖) gs(‖xi − x‖),
where I is the filtered image, I0 is the original input image to be filtered, x are the coordinates of the current pixel to be filtered, Ω is the window centered in x, so xi ∈ Ω is another pixel, fr is the range kernel for smoothing differences in intensities (this function can be a Gaussian function), and gs is the spatial (or domain) kernel for smoothing differences in coordinates (this function can be a Gaussian function).
It averages pixels based on their spatial closeness and on their radiometric similarity. Spatial closeness is measured by the Gaussian function of the Euclidean distance between two pixels and a certain standard deviation (sigma_spatial). Radiometric similarity is measured by the Gaussian function of the Euclidean distance between two color values and a certain standard deviation (sigma_color).
Code information. We used the Python routine “denoise_bilateral” available in the Python package SCIKIT-IMAGE2. We noticed that, for our dataset, variations of sigma_spatial were less effective than variations of sigma_color; we therefore set sigma_spatial = 3, which provides the best results.
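A minimal usage sketch with scikit-image follows; sigma_spatial = 3 is the value quoted above, while the sigma_color value and the synthetic test image are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

rng = np.random.default_rng(0)
noisy = np.clip(rng.normal(0.5, 0.05, size=(256, 256)), 0.0, 1.0)

# sigma_spatial fixed to 3 as in the text; sigma_color is the tunable
# internal parameter (the value here is purely illustrative).
denoised = denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=3)
```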
2.4. Total variation denoising
Total variation denoising (also known as total variation regularization) is based on the principle that images with excessive and possibly spurious detail have a high TV, defined as

TV(u, Ω) = ∫_Ω |∇u(x, y)| dx dy

for a function u ∈ C1(Ω) (we note that a similar definition can also be given for L1 functions; Kolmogorov & Fomin 1957). According to this principle, TV denoising looks for an image with a lower TV under the constraint of remaining similar to the input image, a trade-off controlled by the regularization parameter λ: it aims to minimize the functional TV(u, Ω) + (λ/2) ∫_Ω (u − I)² dx dy. This minimization problem leads to the Euler-Lagrange equation, which can be solved via the following evolutive problem:

∂u/∂t = div( ∇u/|∇u| ) − λ (u − I)
for t > 0 and x, y ∈ Ω, with homogeneous Neumann boundary conditions and a given initial condition. Generally, TV denoising tends to produce “cartoon-like” images, that is, piecewise-constant images. The concept was pioneered by Rudin, Osher, and Fatemi in Rudin et al. (1992) and today it is known as the ROF model. In addition, it is remarkably effective at simultaneously preserving edges while smoothing away noise in flat regions, even at low S/N.
Code information. We test the ROF method that was proposed by Chambolle in Chambolle (2004) and the TV denoising using split-Bregman optimization (Goldstein & Osher 2009; Getreuer 2012; Bush 2011). For the implementation of the two aforementioned methods, we used the Python routines, “denoise_tv_chambolle” and “denoise_tv_bregman” belonging to the Python package SCIKIT-IMAGE4 (van der Walt et al. 2014).
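Both routines can be called as in the short sketch below; the weight values are illustrative, not the optimized ones discussed in Sect. 5.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_tv_bregman

rng = np.random.default_rng(1)
noisy = np.clip(rng.normal(0.5, 0.05, size=(256, 256)), 0.0, 1.0)

# "weight" is the denoising weight referred to in Sect. 5.1; a larger weight
# means stronger smoothing for Chambolle (the Bregman weight acts inversely).
tv_chambolle = denoise_tv_chambolle(noisy, weight=0.1)
tv_bregman = denoise_tv_bregman(noisy, weight=10.0)
```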
2.5. Structure-texture image decomposition
A general approach to the denoising problem is based on the assumption that an image I can be regarded as composed of a structural part, u (i.e., the objects in the image), and a textural part, v, which corresponds to the finest details plus the noise. Following the approach described in Aujol et al. (2006), this image decomposition technique is based on the minimization of a functional with two terms: one based on the total variation and a second on a different norm adapted to the texture component. Given an image I defined on a set Ω, with BV(Ω) the space of functions with bounded total variation on Ω, we can decompose I into its two components by minimizing

inf { TV(u, Ω) + λ ‖v‖X^p : (u, v) ∈ BV(Ω) × X, u + v = I },

where ‖⋅‖X denotes the norm of a given space X and the minimum is taken among all functions (u, v) ∈ BV(Ω) × X such that u + v = I. The parameter p is a natural exponent and λ is the so-called splitting parameter, which modifies the relative weights of the two terms. The best decomposition is found at the λ for which the correlation between u and v reaches a minimum.
Code information. In Castellano et al. (2015), the authors proposed a C++ code named Astro-Total Variation Denoiser (ATVD), which implements three versions of the technique, based, respectively, on the TV-L2 (X = L2(Ω)), TV-L1 (X = L1(Ω)), and TVG (X being a Banach space as defined in Aujol et al. 2006) norms. Two thresholds are defined and used in the stopping criteria of the algorithms, called ϵcorr and ϵsol. The ϵcorr defines the correlation algorithm precision, whereas ϵsol defines the method precision (e.g., TVL2, TVG, TVL1). For all our tests, we use ϵcorr = 10−4 and ϵsol = 10−3.
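The ATVD C++ code is not reproduced here; as an illustration of the underlying idea, the following sketch performs a TV-L2-like structure-texture split using scikit-image's TV denoiser as the solver and selects the splitting parameter that minimizes the correlation between u and v. The mapping of λ onto the scikit-image weight, the parameter grid, and the test image are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_l2_split(image, weights):
    """Structure-texture splitting sketch in the spirit of TV-L2 (not the ATVD code).

    For each candidate weight, split image = u + v with a TV denoiser and
    keep the split whose |correlation(u, v)| is smallest.
    """
    best = None
    for w in weights:
        u = denoise_tv_chambolle(image, weight=w)  # structural part
        v = image - u                              # texture + noise part
        corr = abs(np.corrcoef(u.ravel(), v.ravel())[0, 1])
        if best is None or corr < best[0]:
            best = (corr, w, u, v)
    return best  # (correlation, weight, structure, texture)

# Illustrative usage on synthetic data with an arbitrary weight grid:
rng = np.random.default_rng(2)
noisy = 0.1 + rng.normal(0.0, 0.01, size=(128, 128))
corr, w_best, u, v = tv_l2_split(noisy, np.linspace(0.01, 0.3, 10))
```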
We note that the definition of the functional to be minimized can also include an additional term to account for some properties of the unknown image, f, corresponding to the available image, g. This is a typical situation in which the physical imaging process is modeled as a linear operator A acting from a Hilbert space, X, to a Hilbert space, Y. In this approach, X contains all the functions characterizing the unknown objects and Y contains the functions describing the corresponding measurable images, such that

g = A f.

A typical choice is to take X = Y = L2(Ω), and the inverse problem then calls for minimizing the functional

‖A f − g‖²_Y

over f ∈ X. This problem is often ill-posed, so a popular Tikhonov regularization is obtained by adding another term R(f) to the functional to get

‖A f − g‖²_Y + μ R(f),

where μ is a positive parameter to be tuned carefully. The term R(f) can also be used to introduce a prior, for example, the regularity of f (based on Schwartz theorem) or the sparsity of f (choosing R(f) = ‖f‖1). Imposing a morphological prior on the shapes, such as penalizing shapes that are different from ellipses, would require an enormous number of parameters in the case of astronomical images, which usually include several sky objects; thus, it is a very challenging problem that goes beyond the scope of this paper. For the use of priors in other areas (e.g., in biomedical imaging), we refer interested readers to Bertero & Piana (2006), and for a general introduction to inverse problems in imaging, we refer to the book by Bertero & Boccacci (1998).
2.6. Wavelets
The wavelet transform is the counterpart for images of the Fourier transform: the wavelet domain is a sparse representation of the image that can be thought of similarly to the frequency domain of the Fourier transform (Valens 1999). A sparse representation means that most values are zero or near-zero, and truly random noise is represented by many small values in the wavelet domain. Setting all values below some threshold to 0 reduces the noise in the image, but at larger thresholds this also decreases the detail in the image. Here, we recall the relation introduced in Sect. 2.1,

I = Iclean + η,

where η is the noise and Iclean is the clean image (signal). The components of η are independent and identically distributed (iid) as 𝒩(0, σ²) and independent of Iclean. The goal is, again, to remove the noise to obtain an approximation, Î, of Iclean, minimizing the mean square error (MSE):

MSE(Î) = (1/N) Σ_{i=1}^{N} (Îi − Iclean, i)²,
where N is the number of pixels. We denote the matrix of wavelet coefficients of the image I by Y = 𝒲I, where 𝒲 is the orthogonal wavelet transform operator; similarly, F = 𝒲Iclean and E = 𝒲η (see Vetterli & Kovačevic 1995; Mallat 2001 for more details on 𝒲). The wavelet transform is organized in subbands (called details) at different scales, usually indexed by k ∈ 𝒦. The wavelet-thresholding method filters each coefficient Yj of the detail subbands k ∈ 𝒦 with a threshold function to obtain the estimates X̂j = ρT(Yj). The denoised approximation is Î = 𝒲−1X̂, where 𝒲−1 is the inverse wavelet transform. Two thresholding techniques are frequently used. The “soft-threshold” function,

ρT(x) = sign(x) max(|x| − T, 0),

shrinks the argument, x, toward 0 by the threshold, T. The “hard-threshold” function,

ρT(x) = x if |x| > T, and ρT(x) = 0 otherwise,

sets the input to 0 if its absolute value is below (or equal to) the threshold T. We note that the thresholding procedure removes noise by acting only on the wavelet coefficients of the detail subbands, keeping the low-resolution coefficients unaltered.
Code information. We consider the two thresholding methods defined in the Python routine “denoise_wavelet”2 (Chang et al. 2000; Donoho & Johnstone 1994). The first one applies BayesShrink, which is an adaptive thresholding method that computes separate thresholds for each wavelet subband as described in Chang et al. (2000). The second is “VisuShrink”, in which a single “universal threshold” is applied to all wavelet detail coefficients as described in Donoho & Johnstone (1994). This threshold is designed to remove all Gaussian noise at a given σ with high probability, but tends to produce images that appear overly smooth.
In this work we decided to test several methods based on wavelet transforms. We selected the Meyer wavelet described in Meyer (1990) with the VisuShrink thresholding method since, based on the analysis described in Sect. 4, it provides the best performance on our dataset. The list of wavelets from which we took the Meyer wavelet can be found in Lee et al. (2019). We refer to this method as “orthogonal wavelets”.
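For reference, a minimal scikit-image call for the two thresholding schemes is sketched below; 'dmey' is PyWavelets' discrete approximation of the Meyer wavelet and is used here under the assumption that it matches the choice described above, and the test image is synthetic.

```python
import numpy as np
from skimage.restoration import denoise_wavelet, estimate_sigma

rng = np.random.default_rng(3)
noisy = 0.5 + rng.normal(0.0, 0.05, size=(256, 256))

sigma_est = estimate_sigma(noisy)  # noise standard deviation used by VisuShrink

# BayesShrink: adaptive, per-subband thresholds.
bayes = denoise_wavelet(noisy, method="BayesShrink", mode="soft",
                        wavelet="dmey", rescale_sigma=True)

# VisuShrink: a single universal threshold derived from sigma.
visu = denoise_wavelet(noisy, method="VisuShrink", mode="soft",
                       sigma=sigma_est, wavelet="dmey", rescale_sigma=True)
```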
We also consider other methods, based on multiscale wavelet decomposition (implemented in the C++ library called SPARSE2D, available at the CosmoStat web page5). The first is an isotropic UWT based on the à trous algorithm, better known as the “starlet transform”, with five wavelet scales and an iterative hard-thresholding technique. We refer to this method as “starlet”. The other two both use a biorthogonal UWT with a set of filters called “7/9 filters” (introduced for the JPEG 2000 compression standard; Starck et al. 2010) and five wavelet scales. For the first of these, a 3σ threshold is set, whereas for the second one we perform a multi-resolution Wiener filter. We refer to these two methods as “b-UWT(7/9)” and “b-UWT(7/9)+Wiener”, respectively. The optimal configurations for the methods implemented in SPARSE2D have been suggested by the authors. We used the mr_filter program with the following options:
– Starlet: t = default, f = 3, n = 5
– b-UWT(7/9): t = 24, f = default, n = 5
– b-UWT(7/9)+Wiener: t = 24, f = 6, n = 5
Here, t is the type of multi-resolution transform, f is the type of filtering, and n is the number of scales. For further details about these three methods and the UWTs in general, see Starck et al. (2010).
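Assuming that the SPARSE2D mr_filter executable exposes the options above through -t, -f, and -n flags (an assumption on our part; the CosmoStat documentation should be checked), the b-UWT(7/9)+Wiener configuration could be launched from Python as:

```python
import subprocess

# Hypothetical invocation: the -t/-f/-n flags are assumed to correspond to
# the transform type, filtering type, and number of scales listed above;
# the file names are placeholders.
subprocess.run(
    ["mr_filter", "-t", "24", "-f", "6", "-n", "5",
     "input_image.fits", "denoised_image.fits"],
    check=True,
)
```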
2.7. Non-local means
The non-local means algorithm averages the value of a given pixel with values of other pixels in a limited proximity, under the condition that the patches centered on the other pixels are similar enough to the patch centered on the pixel of interest. This algorithm is defined by the formula (Buades et al. 2005)

I(x) = 1/C(x) ∫_Ω exp{ −[Gσ ∗ |I0(x + ·) − I0(y + ·)|²](0) / h² } I0(y) dy,

where

C(x) = ∫_Ω exp{ −[Gσ ∗ |I0(x + ·) − I0(y + ·)|²](0) / h² } dy,
I0 is the original image, x ∈ Ω, C(x) is a normalizing constant, Gσ is a Gaussian kernel with σ denoting the standard deviation, and h acts as a filtering parameter. The algorithm has been found to give excellent performance when used to denoise images with specific textures6.
We denote by sizeI the image size in pixels; by sizep, the size of the patch in pixels; by dp, the maximal distance in pixels for the patch search; and by n, the number of image dimensions (n = 2 or 3, depending on whether we consider 2D or 3D images). In its original version, the computational complexity of the algorithm is proportional to sizeI × (sizep × dp)ⁿ (Buades et al. 2005). A new “fast” version is now preferentially used, since its complexity is proportional to sizeI × dpⁿ (Darbon et al. 2008).
Compared to the classic algorithm, in the fast mode the distances are computed in a coarser way: all the pixels of a patch contribute to the distance to another patch with the same weight, regardless of their distance from the center of the patch. This approach can result in a slightly poorer denoising performance.
When the noise standard deviation σ is given, the method performs a more robust computation of the patch weights: subtracting the known noise variance from the computed patch distances improves the estimate of patch similarity, yielding a moderate improvement in denoising performance (Buades et al. 2011).
Code information. In this study, both the fast and slow versions of the algorithm were tested. After an initial selection of patch sizes and distances, through the analysis described in Sect. 4, we decided to set sizep = 5 and dp = 6. For our numerical tests, we used the routine “denoise_nl_means” implemented in the Python package SCIKIT-IMAGE4.
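A minimal sketch with scikit-image follows; patch_size = 5 and patch_distance = 6 are the values quoted above, whereas the filtering parameter h and the synthetic image are illustrative assumptions.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(4)
noisy = 0.5 + rng.normal(0.0, 0.05, size=(256, 256))

sigma_est = float(estimate_sigma(noisy))

# patch_size and patch_distance follow the text; passing sigma enables the
# more robust patch-weight computation mentioned above, and h (illustrative
# here) is the cut-off distance in grey levels.
slow = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                        h=0.8 * sigma_est, sigma=sigma_est, fast_mode=False)
fast = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                        h=0.8 * sigma_est, sigma=sigma_est, fast_mode=True)
```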
2.8. Block-matching and 3D filtering
Block-matching and 3D filtering (BM3D) is a three-dimensional block-matching algorithm that “groups” similar 2D fragments of the image through a matching method. The matching step finds fragments similar to a reference fragment, grouping those closer than a defined threshold. The matched fragments are then stacked in 3D arrays called “groups”. A “collaborative filtering” is performed on each group, consisting of a 3D linear transform, a shrinkage of the transform coefficients to reduce the noise, and an inverse linear transform, which produces 2D estimates of all the fragments. Once the estimates are obtained, they are aggregated to form an estimate of the whole image. For further details, see Dabov et al. (2007).
Code information. In this study, we tested a C++ implementation of BM3D available online7. For performance improvements and to limit memory issues, the input images are cut into smaller overlapping tiles, which are re-aggregated into a single output image at the end of the process, as suggested by the authors.
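The BM3D code itself is the C++ implementation referenced above; the sketch below only illustrates the overlapping-tile strategy described in this paragraph, with an arbitrary stand-in denoiser and illustrative tile sizes.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_in_tiles(image, denoiser, tile=256, overlap=32):
    """Denoise an image tile by tile and re-aggregate by averaging overlaps.

    `denoiser` is any function mapping a 2D array to a denoised 2D array
    (a stand-in for the actual BM3D call).
    """
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros_like(image, dtype=np.float64)
    step = tile - overlap
    ny, nx = image.shape
    for y0 in range(0, ny, step):
        for x0 in range(0, nx, step):
            y1, x1 = min(y0 + tile, ny), min(x0 + tile, nx)
            out[y0:y1, x0:x1] += denoiser(image[y0:y1, x0:x1])
            weight[y0:y1, x0:x1] += 1.0
    return out / weight

# Illustrative usage with a trivial stand-in denoiser:
result = denoise_in_tiles(np.random.rand(1000, 1000),
                          lambda a: median_filter(a, size=3))
```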
3. Test dataset
We first test the denoising algorithms on five different simulated images (Table 1), chosen with the aim of reproducing the properties of a wide range of typical cases in terms of resolution, depth, pixel scale, and wavelength:
– VIS: Euclid satellite visual band (wavelength: 550–900 nm);
– NIR H: Euclid satellite near-infrared H band (wavelength: 1372–2000 nm);
– EXT G: ground-based optical filter;
– H160: Hubble Space Telescope (HST) near-infrared F160W band (e.g., CANDELS-wide; Guo et al. 2013);
– IRAC: IRAC-Spitzer 3.6 μm channel.
Table 1. Test images.
Here, we refer to the simulated images, provided as input to the algorithms, as “original”, whereas we refer to the simulated images representing the true sky, without noise included, as “noiseless”.
The VIS and NIR H reproduce the expected features of the visual and near-infrared bands in the forthcoming ESA satellite, Euclid (Laureijs et al. 2011), and EXT G simulates a typical ground-based complementary optical observation for the Euclid Wide Survey. H160 is modeled after the detection band in recent deep surveys such as CANDELS (Grogin et al. 2011; Koekemoer et al. 2011) and 3D-HST (Skelton et al. 2014), whereas IRAC simulates the features of the Spitzer Channel 1 band in the CANDELS GOODS-South field (Guo et al. 2013).
The images were simulated with SkyMaker (Bertin 2009) on the basis of source catalogs generated by the Empirical Galaxy Generator (EGG; Schreiber et al. 2017) and they have been perturbed by Gaussian noise in order to reach the limiting magnitudes reported in Table 1. All the PSFs are Gaussian, except for the IRAC case where a real IRAC 3.6 μm channel PSF was used. The H160 and HAWK-I images are real observations whose tests are described in Sects. 6.1–6.2.
We can sort the simulated images in several different ways:
– Depth, from the deepest to the shallowest: H160 > EXT G > IRAC > VIS > NIR H;
– PSF, from the sharpest to the coarsest: H160 > VIS > NIR H > EXT G > IRAC;
– Pixel scale, from the smallest to the largest: H160 > VIS > EXT G > NIR H > IRAC.
For each simulated image, we cut three independent areas of the sky, which are the same for every band but differ in dimensions due to the different pixel scale. The regions are: BG – centered on a big elliptical galaxy (see Fig. 1); CL – centered on a cluster of galaxies (see Fig. 2); CM – an average portion of the sky (see Fig. 3). The three regions have a dimension of: VIS – 1000 × 1000 pixels; NIR_H – 333 × 333 pixels; EXT_G – 500 × 500 pixels; H160 – 1666 × 1666 pixels; IRAC1 – 166 × 166 pixels.
Fig. 1. From left to right: crops of the BG (Big Galaxy) image central area for VIS, H160, NIR H, EXT G, and IRAC.
Fig. 2. From left to right: crops of the CL (Cluster) image central area for VIS, H160, NIR H, EXT G, and IRAC.
Fig. 3. From left to right: crops of the CM (Average field) image central area for VIS, H160, NIR H, EXT G, and IRAC.
Following the analysis described in Sect. 4, whose results are discussed in Sect. 5, additional tests on real images (see Table 1 for details) from ground-based instruments (HAWK-I) and from space (HST) are reported and analyzed in Sect. 6.
4. Quality tests
The idea at the core of the analysis is to first evaluate the algorithms through different tests in order to apply only the most promising ones (with their best configurations) on real images. We organize our analysis of the five simulated images into successive levels of testing. A brief description of each step is given below:
Step 1. As a first step, we compare the performance of all the algorithms we chose based on three metrics: the mean square error (MSE), the structural similarity index (SSIM; Wang et al. 2004), and the CPU time. The MSE is defined as

MSE = (1/N) Σi (xi − yi)²,

where xi is the ith pixel in the denoised image, yi is the ith pixel in the noiseless image, and N is the number of pixels. The SSIM is defined as

SSIM(x, y) = [(2 μx μy + c1)(2 σxy + c2)] / [(μx² + μy² + c1)(σx² + σy² + c2)],

where μx is the average of x, μy is the average of y, σx² is the variance of x, σy² is the variance of y, σxy is the covariance of x and y, and c1 and c2 are constants proportional to the dynamic range of the pixel values. The CPU time is the computational time required by the algorithm to filter the image. Through these tests, we identify the main internal parameters of each algorithm and their ideal values (a minimal sketch of the metric computation is given after this list).

Step 2. We test the stability of the algorithms selected in the previous step against non-stationary Gaussian noise, and we test their performance as a function of the FWHM of the PSF and of the background noise level.

Step 3. We test the stability of the algorithms selected in the previous steps against variations of the value of the main internal parameter (identified in Step 1), measuring how the MSE varies as a function of the parameter value.

Step 4. We test how the shapes of the objects are affected by the selected denoising algorithms, checking whether they preserve the FWHM of point-like objects as well as the ellipticity and the FWHM of the galaxy profiles.

Step 5. We test the selected algorithms through two diagnostics, completeness and purity, which provide a quality estimate of the catalog produced after an ideal source detection, exploring a combination of SExtractor (Bertin & Arnouts 1996) detection parameters.

Step 6. We test whether the denoised images can also be used for photometric measurements, analyzing whether the object fluxes are preserved after denoising.

Step 7. Finally, we apply the best performing algorithms of our selection on real images acquired from space and ground-based telescopes, as described in Sect. 6.
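As a minimal sketch (assuming the scikit-image metrics module and synthetic arrays), the MSE and SSIM of Step 1 can be computed as follows:

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

rng = np.random.default_rng(5)
noiseless = np.zeros((256, 256))
denoised = noiseless + rng.normal(0.0, 0.01, size=noiseless.shape)

mse = mean_squared_error(noiseless, denoised)
# data_range must be given explicitly for floating-point images.
ssim = structural_similarity(noiseless, denoised,
                             data_range=float(denoised.max() - denoised.min()))
print(f"MSE = {mse:.3e}, SSIM = {ssim:.3f}")
```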
Implementation details. We compare the different images always following the same procedure: the original image is scaled to the range [0, 1] and then filtered by the denoising algorithm, providing the denoised image. At the end, the denoised image is scaled back to [originalmin, originalmax], where originalmin and originalmax are the minimum and maximum values in the original image, using the following equation:

denoisedi = denoised[0, 1], i (originalmax − originalmin) + originalmin,

where denoised[0, 1], i is the ith pixel in the denoised image scaled to [0, 1] and denoisedi is the corresponding rescaled pixel. MSE and SSIM are computed by comparing the denoised image to the noiseless one. In order to choose the best internal parameter for each denoising algorithm (a list of these parameters is given in Sect. 5.1), we used different stopping criteria:
ATVD. In this method, the internal parameters are automatically optimized by the algorithm and a stopping rule is already implemented through a minimization problem, as described in Sect. 2.5.
Perona-Malik. In the PM code, the stopping rule is composed of three conditions. First, at each time step we compare MSEn with MSEn−1, where MSEn is the MSE at the current time step and MSEn−1 is the MSE at the previous one: the code continues running as long as MSEn−1 − MSEn > 0. The second condition concerns the number of iterations n: the code continues running until the number of iterations exceeds the maximum number of iterations NMAX, which is set to NMAX = 500. The third condition is |MSEn−1 − MSEn| > ϵ, with ϵ = 10−10.
SPARSE2D. Starlet and the two b-UWTs automatically estimate the noise background. For the optimal configurations of these three methods, we used the setting provided by the authors and reported at the end of Sect. 2.6. Further information can be found in the documentation file available with the code on the CosmoStat webpage9.
Other denoising algorithms. For all the other denoising algorithms, we obtain the optimal values of the main parameter(s) by minimizing the MSE with an iterative process. The iteration stops when |MSEn−1 − MSEn| ≤ ϵ, with ϵ = 10−10, or when the number of iterations reaches the maximum number of iterations NMAX, which is, in this case, set to NMAX = 100.
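The following sketch illustrates the rescale-denoise-rescale procedure and the MSE-driven parameter search with the stopping rule above, using TV Chambolle's weight as the example parameter; the parameter grid and test images are assumptions, not the configurations used in the paper.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import mean_squared_error

def denoise_rescaled(original, denoiser):
    """Scale to [0, 1], denoise, and scale back to the original range."""
    omin, omax = float(original.min()), float(original.max())
    scaled = (original - omin) / (omax - omin)
    denoised01 = denoiser(scaled)
    return denoised01 * (omax - omin) + omin

def optimize_parameter(original, noiseless, values, eps=1e-10, n_max=100):
    """Sweep a parameter grid, keeping the value that minimizes the MSE.

    The loop stops when the MSE improvement between consecutive trials drops
    below eps or after n_max iterations (mirroring the stopping rule above;
    the grid of trial values is illustrative).
    """
    best_mse, best_value, prev_mse = np.inf, None, None
    for i, w in enumerate(values):
        denoised = denoise_rescaled(
            original, lambda im: denoise_tv_chambolle(im, weight=w))
        mse = mean_squared_error(noiseless, denoised)
        if mse < best_mse:
            best_mse, best_value = mse, w
        if i + 1 >= n_max or (prev_mse is not None
                              and abs(prev_mse - mse) <= eps):
            break
        prev_mse = mse
    return best_value, best_mse
```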
5. Results
In this section, we analyze and comment on the results of the quality tests in a separate and sequential way, following the same order as the steps given in Sect. 4.
5.1. Ranking with MSE and SSIM
In this test, we use three metrics to constrain the performance of denoising methods: MSE, SSIM, and CPU time (Sect. 4). We give priority to those algorithms that are able to minimize as much as possible the MSE, preferring the fastest method (in terms of CPU time) and the highest SSIM in the case of a comparable MSE. Following this criterion, in this step, we identify the best configuration and the main parameters for every algorithm. These results are taken into account separately for all the simulated images presented in Sect. 3. The main internal parameters identified for the different algorithms are:
– Orthogonal Wavelets: sigma – The noise standard deviation used to compute the threshold(s);
– NL-means: h – Cut-off distance in grey levels;
– TV Bregman: weight – Denoising weight, efficiency of denoising;
– TV Chambolle: weight – Denoising weight, efficiency of denoising;
– Gaussian: sigma – Standard deviation for Gaussian kernel;
– Bilateral: sigma_color – Standard deviation for gray value distance;
– Perona-Malik: T – Number of iterations of the anisotropic diffusion;
– ATVD (TVL1,TVL2,TVG): λ – Structural-Texture splitting parameter;
– BM3D: sigma – Noise standard deviation.
Further details on the algorithms implemented in Python and on the measurement of MSE and SSIM can be found in the scikit-image documentation2. The method used to identify the best internal parameter for each algorithm is described in Sect. 4. In Appendix A, we show the best MSE and CPU time values for every algorithm for the different crops. The tables are organized to record the best MSE and CPU time values obtained with the algorithms. The columns represent the different simulated image filters, and the value indicated in bold is the lowest of each column. Tables A.1–A.3 contain the MSE values for the crops BG, CM, and CL, respectively. Table A.4 contains the CPU time values for the crop CM after fixing the optimal internal parameters. We note that, in the following, PSF filtering amounts to filtering with a Gaussian whose FWHM is the same as the PSF FWHM, whereas in the case of the Gaussian filtering, the σ (and thus the FWHM) is a free internal parameter.
Here, we briefly summarize the main results: TVL2, BM3D, starlet, the two b-UWTs, PM, NL-means slow, and TV Chambolle always yield good performance, typically providing the lowest values of the MSE, and always perform better than Gaussian filtering, with the only exception of the IRAC image (we discuss the IRAC case below in Sect. 5.2).
The MSE of all the methods is proportional to the pixel scale of the image, so that low sampling implies worse results.
In most cases (with the exception of IRAC, which we discuss below), the PSF filtering provides a larger (i.e., worse) value for the MSE compared to the one provided by Gaussian filtering.
In some cases, the MSE of the denoised image is larger (i.e., worse) than the one measured without denoising the image at all. Indeed, some algorithms in the situations listed below tend to over-smooth the image, providing a worse MSE. This event occurs:
(a) in the VIS (CM) image, in the case of the PSF filtering;
(b) in all the H160 images, for both the PSF filtering and TV Bregman;
(c) 2–4 times in the NIR H images, for NL-means fast, orthogonal wavelets, TV Bregman, and PSF filtering, and only once for BM3D;
(d) only once in EXT G (CM), for the PSF filtering;
(e) 4–5 times in the IRAC images, for NL-means slow, NL-means fast, TV Bregman, orthogonal wavelets, and PSF filtering.
The SSIM ranking typically reflects the MSE ranking, pointing to the same group of best algorithms found in the MSE ranking; even if some positions are swapped in a few cases, the SSIM values provided by the best algorithms are comparable (ΔSSIM < 10−4).
The table related to the CPU time (Table A.4) shows that the Gaussian is the fastest algorithm among the ones we tested, followed by the PSF and TV Bregman. The CPU times of the other algorithms differ by one to four orders of magnitude with respect to the Gaussian algorithm. However, the computational times are always manageable, at least for the cases of optimal performance. If we focus on the algorithms belonging to the same classes of methods, we can note the following:
All the PM methods yield a similar level of performance (see Fig. A.1), and therefore we chose to keep only g = g1 with the parameter k set to k = 10−3 in the following steps.
TVL2 clearly performs better than TVG and TVL1. For example, in BG its SSIM value is always within 5% of the value measured on the noiseless image, with the exception of IRAC, where it drops to 0.2, which is still greater than the values provided by TVG and TVL1, as shown in Fig. A.2.
NL-means slow performs slightly better than NL-means fast for H160, VIS, and EXT G (the differences are within 5% in favor of NL-means slow) and much better for NIR H and IRAC (where NL-means fast performs worse than the original). See Fig. A.3.
TV Chambolle performs better than TV Bregman: in H160, NIR H, and IRAC, TV Bregman performs worse than the original, whereas for VIS and EXT G it performs 14% and 3% worse than TV Chambolle, respectively (see Fig. A.3).
Bilateral is always among the best performing techniques (see Fig. A.3).
BM3D, the two b-UWTs and starlet are always among the most efficient algorithms (see Fig. A.4 and Tables A.1–A.3).
Following this analysis, we decided to discard nine algorithms: the four other PM variants, TVG and TVL1, NL-means fast, TV Bregman, and orthogonal wavelets. Nevertheless, we keep Gaussian and PSF filtering for reference since they are widely used. Hence, at the end of this first step we are left with 11 methods: PM with edge-stopping function g1 and k = 10−3, TVL2, BM3D, starlet, b-UWT(7/9), b-UWT(7/9)+Wiener, Gaussian, PSF, NL-means slow, bilateral, and TV Chambolle.
5.2. The IRAC results
We note that the IRAC images do not follow the same trends as the other bands. In fact, whereas for all the other images there is always a small group of algorithms that perform better than all the others, for IRAC nearly all the denoising algorithms tend to exhibit similar performance. We investigated the possibility that the number of pixels (166 × 166) was not enough to extract significant conclusions from these images, and we tested the algorithms on an IRAC (CM) simulation with a pixel scale of 0.06 arcsec and a size of 1000 × 1000 pixels. We noticed that TV Chambolle, NL-means slow, and Gaussian provide the best performance (MSE ∝ 10−9), followed by BM3D, b-UWT(7/9), TV Bregman, TVL2, PM (g = g1, k = 0.01) (MSE ∝ 10−8), PSF (MSE ∝ 10−8), and then bilateral, PM (g = g1, k = 0.001), b-UWT(7/9)+Wiener, starlet, and orthogonal wavelets (MSE > 10−8). After this small test, we point out that the IRAC band again does not follow the trend defined in the other bands (even if the MSE decreases for all the methods and for the original image), but with the increased number of pixels TV Chambolle, NL-means slow, and Gaussian are the algorithms that provide the best performance. The low resolution of IRAC plays a fundamental role here, impacting the performance of most of the algorithms. This aspect is described later in this paper, in Sect. 5.4.
5.3. Stability against non-stationary Gaussian noise
In this section, we discuss the results of tests on images with varying depths, that is, those obtained by combining regions observed with different exposure times. To build the dataset, we used a noiseless simulated image, I2exp, which we mirrored along the x-axis to obtain a new image with two identical vertical halves; we then added Gaussian noise with σ = σVIS on the lower half Hl and with σ = 2σVIS on the upper half Hu. In this way, the two halves of the image contain the very same objects with a different amount of observational noise, as if they had been observed with different exposure times. We applied the algorithms in their optimal configuration (see Sect. 5.1) on the mirrored version of the crops VIS (CM) and VIS (CL), and we calculated the MSE and SSIM in Hl and Hu for both. We then compared these results with the ones obtained by applying the algorithms on the mirrored image with stationary Gaussian noise of σVIS and 2σVIS; we refer to these images as Iσ and I2σ, respectively. From the results reported in Tables B.1 and B.2, obtained on VIS (CM) and VIS (CL), respectively, we notice that:
All the algorithms applied on Hl produce MSE values of the same order of magnitude of the ones obtained with Iσ.
Nearly all of them applied on Hu produce MSE values of the same order of magnitude as the ones obtained with I2σ, with the exception of starlet and the two b-UWTs, for which the application on Hu produces an MSE that is around one order of magnitude larger than the respective MSE obtained with I2σ.
MSE relative variation for BM3D, PSF, Gaussian, PM and TVL2 is ∝10−3.
For bilateral, NL-means, and TV Chambolle the MSE relative variation is ∝10−2.
For starlet, and the two b-UWTs methods, the MSE relative variation is ∝10−1.
For all the algorithms, SSIM relative variation is < 10−2.
We want to point out that the MSE achieved by methods like starlet and the two b-UWTs on Hl is ∝10−6 − 10−7, meaning that the MSE relative variation (∼10−1) still implies small absolute variations (∝10−6 − 10−7). Finally, we can conclude that the tested algorithms are not influenced (or only negligibly influenced in a few cases) by non-stationary Gaussian noise.
5.4. Stability against FWHM and depth variations
In the second part of this test, we compare the performance of the 11 algorithms with respect to the variation of the FWHM and depth of the images. We consider two cases:
In the first, we use a 1000 × 1000-pixel crop of the simulated VIS image convolved with different kernels to degrade the resolution increasing the FWHM without changing the depth of the image (we considered the cases of FWHM = 0.5, 1, 1.5 and 2.0″, with the original FWHM being 0.2″).
In the second we decreased the depth of a 1000 × 1000 pixels crop of the simulated H160 image without changing the FWHM by adding Gaussian noise with increasing standard deviation σ (×1, 10, 20, 30, and 40 times the original one) to the noiseless image.
The plots summarizing the results are shown in Figs. C.1–C.6. We note that:
The MSE calculated on the original image alone decreases at increasing FWHM due to the loss of information (i.e., small objects and details). All the algorithms follow this trend while lowering the MSE even more due to the effect of filtering (see Fig. C.1).
The ratio between the MSE obtained by each algorithm and the MSE computed on the original image (MSEdenoised/MSEoriginal) increases with increasing FWHM, with the only exception being Gaussian filtering, which follows the opposite trend (see Fig. C.2).
The relative improvement with respect to PSF filtering is weakly affected by variations of the FWHM for most of the denoising methods, with the exception of Gaussian filtering (see Fig. C.3).
As expected, the MSE increases at increasing background level (due to the increasing of σ for the Gaussian noise) both in the original image and in the output denoised images for all the algorithms (see Fig. C.4).
The ratio MSEdenoised/MSEoriginal also increases at increasing background level for all the methods (see Fig. C.5).
The relative improvement with respect to PSF filtering instead decreases at increasing background level for all the methods (see Fig. C.6).
In summary, we conclude that the best performance by any denoising algorithm is obtained on images with low S/N and high resolution (narrow FWHM). The best performance with respect to the PSF method is obtained by applying the denoising methods on an image with a high S/N, regardless of the PSF FWHM. These results can be used to estimate the efficiency of the denoising algorithms in different situations: when applied to high-resolution images they provide the best improvements, whereas when applied to low S/N images (where there is a peak of absolute performance) the improvements compared to the PSF are slightly less significant. Based on these results, it would be very interesting to apply these methods as an alternative to PSF filtering on images with high resolution and high S/N.
5.5. Stability against variations of the parameters
In this test, we analyze the selected methods by varying the values of those internal parameters that had been kept fixed to the optimal ones in the previous tests. The goal is to understand whether the performance is stable against sub-optimal parameter settings. We exclude from this analysis the three denoising methods belonging to the SPARSE2D package, for which we simply used the configuration provided by the authors, reported at the end of Sect. 2.6, along with the PSF filtering since it is just a particular case of the Gaussian filtering method.
We perform the test on the VIS (CM) image and we change the main parameter value of each technique by ±10%, ±25%, ±50%, and ±75% with respect to the value used for the MSE analysis (see Sect. 5.1). The results are shown in Fig. 4. We notice that most of the techniques tend to exhibit similar performance when the parameter is over-estimated, remaining relatively stable; on the contrary, under-estimating it significantly worsens the performance. However, all the algorithms (with the exception of BM3D) have a lower dispersion in MSE compared to the Gaussian filtering (this is not evident in the plot because of the logarithmic y-axis scale, but we verified it numerically and we give the σ values in the upper panel of the plot), meaning that they are generally more stable against the variation of the parameters. In addition, they yield a mean MSE that is ∼1 order of magnitude lower when the parameters are below the optimal value, and ∼2 orders of magnitude lower when they are above it. BM3D, like the other algorithms, underperforms when its main parameter is underestimated, producing, in this case, a dispersion as large as the one reached by the Gaussian.
Fig. 4. Step 3: Stability against variations of the parameters. Each curve corresponds to a denoising algorithm. We plot the MSE against the relative variation of the parameters.
5.6. Conservation of the FWHM and ellipticity
The optimal denoising approach should not significantly alter the size and shape of the detected sources, so as to enable a meaningful scientific analysis. We thus tested the selected methods by measuring the FWHM of the detected sources with SExtractor and comparing the measured values to the ones obtained on the original, unfiltered images. We performed this test on the simulated VIS image described before, which is mainly populated by galaxies, and on a specific rendition of the simulated VIS image populated by stars distributed on a grid. The results are shown in Figs. 5 and 6. Whereas for the stars in Fig. 5 the PSF filtering tends to broaden all the detected objects by as much as ∼50% of the FWHM, the other algorithms have a much lower impact (the FWHM is degraded by less than 20% of the original value), even though BM3D shows a significant dispersion. Similarly, for the galaxies in Fig. 6, the PSF filtering again causes a small offset, whereas most of the other methods tend to better preserve the FWHM.
Fig. 5. Step 4: FWHM conservation test on stars. On the x-axis we plot FWHMdenoised/FWHMnoiseless, where FWHMnoiseless is the FWHM of the objects measured on the noiseless image. μ and σ are the mean and the standard deviation of the distribution of FWHMdenoised/FWHMnoiseless.
Fig. 6. Step 4: FWHM conservation test on galaxies. On the x-axis we plot the FWHM of the objects measured on the noiseless image, FWHMnoiseless, whereas on the y-axis we plot the FWHM measured on the original image after the application of the denoising algorithms, FWHMdenoised. μ and σ are the mean and the standard deviation of the distribution of FWHMdenoised − FWHMnoiseless.
For a comprehensive comparison, we also considered another quantity: the ellipticity of the galaxies, measured before and after the application of the denoising algorithms using the ELLIPTICITY parameter from SExtractor. Looking at Fig. 7, it is possible to notice that most of the algorithms do not modify the ellipticity at any alarming level (even if, in some cases, the dispersion is far from optimal, e.g., for bilateral and TV Chambolle). Nevertheless, the PSF filtering is one of the best performing methods in this test. From these tests, we can conclude that most of the tested algorithms preserve the shape of the sources similarly to (in the case of the ellipticity) or even better than (in the case of the FWHM) the PSF filtering.
Fig. 7. Step 4: Ellipticity conservation test on galaxies. On the x-axis we plot the ellipticity e of the objects measured on the noiseless image, enoiseless, whereas on the y-axis we plot the e measured on the original image after the application of the denoising algorithms, edenoised. μ and σ are the mean and the standard deviation of the distribution of edenoised − enoiseless.
5.7. Completeness and purity
In this test – perhaps the most crucial one – we checked the quality of the catalogs of sources extracted from the denoised images. We analyze two diagnostics, both relevant to assess the performance of the detection process: namely, the completeness and the purity as defined below. We extract the catalogs running SExtractor in the dual image mode using a denoised image as detection band and the original image as measurement band to perform a cross-correlation between the extracted and the true catalogs of sources both in terms of position and flux.
We used the simulated VIS 5000 × 5000-pixel image, searching for the best SExtractor parameter configuration for every denoised image. We thus tested a large number of possible combinations of the two parameters that control the detection, DETECT_THRESH (from a minimum value of 0.2 to a maximum of 6.0, in steps of 0.1) and DETECT_MINAREA (with values 3, 6, 9, 12, 15, 30), considering only the combinations for which DETECT_THRESH × √DETECT_MINAREA ≥ 1, which provides a selection of objects with a global significance of at least 1σ. The number of detection parameter combinations that fulfill this requirement is ∼350. We point out that the best algorithm configurations used for this test, obtained by MSE minimization, do not differ significantly from the best configurations found for the VIS (CM) image (Sect. 4).
We introduce some notations: (a) ndetected is the total number of detected objects, which includes both real and spurious detections indiscriminately; (b) nsimulated is the number of simulated objects in the image; (c) nspurious is the number of spurious detections, as identified by the “spurious sources identification approach” described in the following.
The spurious source identification approach that we define for this work relies on the SExtractor cross-correlation when an association catalog is provided: we denote by CRassoc the circle centered on the original position of the simulated object with radius Rassoc, which is the maximal distance allowed for the association made by SExtractor; we set it to 6 pixels (i.e., 3 × FWHM). We then tag a detected object as spurious if one of the following two conditions holds: (i) its measured position is outside CRassoc; (ii) its measured position is inside CRassoc AND |magmeasured − magsimulated| > 1.0 AND |magaperture − magsimulated| > 1.0, where magmeasured is the SExtractor MAG_AUTO (an estimate of the total magnitude of the source), magsimulated is the true magnitude of the simulated object, and magaperture is the SExtractor MAG_APER corresponding to the magnitude within a circular aperture with a diameter of 6 pixels.
Finally, we can now define the two diagnostics:

completeness = (ndetected − nspurious) / nsimulated,
purity = 1 − nspurious / ndetected,

where the spurious detections entering the purity (purity = purityassoc) are identified with the association approach defined above. We measure completeness and purity in 0.2 magnitude bins. In Fig. 8, we plot the magnitude at which the completeness drops below 50% against the one at which the purity drops below 90%. Each symbol corresponds to a different denoising technique, and repetitions of the same symbol correspond to different combinations of the detection parameters for the same algorithm. For readability, a maximum of five combinations per algorithm (corresponding to the best ones) is shown in the figure.
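A minimal sketch of these diagnostics (assuming hypothetical input arrays of simulated magnitudes, detected magnitudes, and spurious flags built with the association criteria above) could read:

```python
import numpy as np

def detection_limits(mag_sim, mag_det, is_spurious, bin_width=0.2):
    """Magnitudes at which completeness drops below 50% and purity below 90%.

    mag_sim: magnitudes of all simulated sources;
    mag_det: magnitudes of all detections;
    is_spurious: boolean flags for the detections (association criteria above).
    """
    edges = np.arange(mag_det.min(), mag_det.max() + bin_width, bin_width)
    mag_c50, mag_p90 = None, None
    for lo, hi in zip(edges[:-1], edges[1:]):
        det = (mag_det >= lo) & (mag_det < hi)
        sim = (mag_sim >= lo) & (mag_sim < hi)
        n_det, n_spur, n_sim = det.sum(), is_spurious[det].sum(), sim.sum()
        completeness = (n_det - n_spur) / n_sim if n_sim else np.nan
        purity = 1.0 - n_spur / n_det if n_det else np.nan
        if mag_c50 is None and completeness < 0.5:
            mag_c50 = lo
        if mag_p90 is None and purity < 0.9:
            mag_p90 = lo
    return mag_c50, mag_p90
```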
Fig. 8. Step 5: Completeness and purity test. We extracted catalogs on the VIS simulated image processed with the denoising algorithms, using different configurations of SExtractor. We plot the magnitude at which the completeness drops below 50% against the magnitude at which the purity drops below 90%. Each symbol corresponds to a different denoising method, which can be present multiple times in the plot due to different combinations of detection parameters. The positions of the symbols are slightly randomized to improve readability.
We note that all the methods improve the detection. BM3D performs like the PSF, whereas methods like starlet and b-UWT(7/9)+Wiener produce remarkable results, with a 0.2 mag increment in completeness with respect to the PSF, but the best performance is reached by TVL2, Perona-Malik, TV Chambolle, and bilateral. Indeed, these four methods reach the completeness threshold 0.6 mag deeper, and the purity threshold 0.8 mag deeper, than in the non-denoised run. Moreover, they improve the detection compared to the PSF smoothing, reaching 0.2 mag deeper in terms of both completeness and purity.
It is tempting to consider the MSE and SSIM measured on the VIS images used for the completeness and purity analysis, and to search for a possible correlation between the metrics and the diagnostics. The plots in Fig. 9 are produced using the results shown in Fig. 8. We find no or only a weak correlation between MSE (SSIM) and purity, whereas a stronger correlation exists between MSE (SSIM) and completeness.
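A hedged sketch of this correlation test: a linear best fit between a metric (MSE or SSIM) and the magnitude at which completeness (or purity) crosses its threshold, with one point per configuration shown in Fig. 8 (the fitting routine is a generic choice, not necessarily the one used for Fig. 9):

```python
from scipy import stats

def metric_vs_depth(metric_values, threshold_magnitudes):
    # Returns slope, intercept, and Pearson correlation coefficient of the fit.
    fit = stats.linregress(threshold_magnitudes, metric_values)
    return fit.slope, fit.intercept, fit.rvalue
```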
Fig. 9. Step 5: Correlation between MSE or SSIM and purity or completeness. On the x-axis, we plot the magnitudes at which completeness (purity) reaches 50% (90%), whereas on the y-axis, we plot the metrics, MSE or SSIM. Dashed lines are the linear best-fitting ones.
We show snapshots of a sample of objects detected by the different methods in the VIS image in Appendix D. These snapshots allow a visual comparison of the objects detected in the denoised images. We only show the results of the best-performing algorithms, compared to the original, PSF-filtered, and noiseless images. For VIS, the algorithms are: PM, TVL2, bilateral, and TV Chambolle, followed by BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener.
5.8. Conservation of the flux
In this final test, we compare the total fluxes (by using SExtractor MAG_AUTO) measured on the simulated denoised images with the true input fluxes for objects with magnitudes between 19 and 23. Here, we do not consider the Gaussian filtering but only the PSF filtering since the latter is the reference method for this analysis. The results are shown in Figs. 10–11. The standard deviation of the difference between measured and true magnitudes is ∼0.16 for PSF-filtered images. All denoising methods exhibit similar levels of performance, with the exception of b-UWT(7/9) (σmag = 0.36) and b-UWT(7/9)+Wiener (σmag = 0.30). We conclude that denoising algorithms preserve the overall calibration of the input images and they enable a photometric accuracy comparable to the one usually achieved on images filtered with the PSF.
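As a simple sketch of this check (variable names are illustrative), the mean and scatter of the magnitude difference for the bright sample can be computed as:

```python
import numpy as np

# Mean and standard deviation of MAG_AUTO - mag_true for matched objects
# with true magnitude between 19 and 23.
def flux_conservation_stats(mag_auto, mag_true):
    mag_auto, mag_true = np.asarray(mag_auto), np.asarray(mag_true)
    sel = (mag_true >= 19) & (mag_true <= 23)
    delta = mag_auto[sel] - mag_true[sel]
    return delta.mean(), delta.std()
```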
Fig. 10. Step 6: Flux conservation distribution for objects with magnitude between 19 and 23. On the x-axis, the real object magnitude magreal; on the y-axis, the difference between the measured magnitude MAG_AUTO and magreal. Only the detected objects within the purity and completeness thresholds (Sect. 5.7) are considered. μ and σ are the mean and standard deviation of the distribution, respectively.

Fig. 11. Step 6: Flux conservation distribution for objects with magnitude between 19 and 23. On the x-axis, the difference between the measured magnitude MAG_AUTO and the real object magnitude from the catalog (magreal); on the y-axis, the MAG_AUTO – magreal probability distribution function. Only the detected objects within the purity and completeness thresholds (Sect. 5.7) are considered. μ and σ are the mean and standard deviation of the distribution.
6. Testing on real images
Following the analysis of the performance of the denoising techniques on a series of simulated images with both stationary and non-stationary Gaussian noise, we tested the algorithms on real images, using the HST H160 observations of the GOODS-South field and a crop of the HAWK-I survey. Notably, basing the analysis on real observations provides a straightforward test of more realistic conditions, in particular the presence of non-Gaussian or correlated noise.
6.1. Space telescope images
We use two images of the area of the Hubble Ultra Deep Field: one at the full depth released with the official CANDELS mosaics, which includes all WFC3 observations of that region (HUDF09, reaching H160 = 29.74 at S/N = 5), and a second, shallower one at the depth obtained with the WFC3 observations of the CANDELS Treasury Program alone (GSDEEP, H160 = 28.16 at S/N = 5) (Koekemoer et al. 2011; Grogin et al. 2011). We use the former, deeper image as the “true sky” against which we compare the performance of the denoising techniques on the shallower image. Following an analysis similar to that in Sect. 5.7, we take as the reference catalog the one obtained by running SExtractor on HUDF09 with conservative detection parameters. The goal is, once again, to check the completeness and purity. We use an association radius Rassoc of 3 × FWHM, which now corresponds to 7.5 pixels, and we identify an object as spurious using the same criteria as in Sect. 5.7, with magaperture measured within a circular aperture with a diameter of 7.5 pixels. The resulting plot, shown in Fig. 12, is similar to the one obtained on the simulated image (Fig. 8). Clearly, TVL2 outperforms all the other algorithms, in particular exceeding the PSF by 0.2 mag in both completeness and purity (or, alternatively, producing a catalog that is 0.4 magnitudes more complete and 0.2 magnitudes less pure). Bilateral performs better than the PSF filtering by a total of 0.2 magnitudes in completeness. Perona-Malik provides a catalog that is 0.2 mag more complete and 0.2 mag less pure, making it an alternative to PSF filtering. Starlet and b-UWT(7/9) perform slightly worse than the PSF.
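An illustrative sketch (not SExtractor's own ASSOC mechanism) of how detections on the denoised GSDEEP image can be matched to the HUDF09 reference catalog within Rassoc = 3 × FWHM = 7.5 pixels:

```python
import numpy as np
from scipy.spatial import cKDTree

# xy_detected and xy_reference are (N, 2) arrays of pixel coordinates.
def match_to_reference(xy_detected, xy_reference, r_assoc=7.5):
    tree = cKDTree(xy_reference)
    dist, idx = tree.query(xy_detected, k=1)
    matched = dist <= r_assoc
    return matched, idx  # match flag and index of the nearest reference source
```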
Fig. 12. Space telescope – real images for completeness & purity (GSDEEP and HUDF09). On the x-axis, the magnitude at which the purity drops below 90%; on the y-axis, the magnitude at which the completeness drops below 50%. Each symbol refers to a different denoising method, which can be present multiple times in the plot due to different combinations of detection parameters (see text for details). The positions of the symbols are slightly randomized to improve readability.
As in Sect. 5.7, we show the snapshots of a sample of objects detected by the different methods in the GSDEEP image in Appendix E. The algorithms reported are: PM, TVL2, bilateral, NL-means, BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener.
6.2. Ground-based images
We repeated the same tests described above on two Ks-band observations of the GOODS-South field acquired with the HAWK-I imager at the VLT: a shallower observation of the field presented in Castellano et al. (2010) and the ∼2 magnitude deeper observation released by the HUGS Survey (Fontana et al. 2014, see Table 1). As done before, we use the deeper image as the “true sky” and we apply the algorithms to reduce the noise on the shallow image. We again use an association radius Rassoc of 3 × FWHM, now corresponding to 11.25 pixels, with magaperture measured within an aperture of the same diameter (11.25 pixels), and we identify an object as spurious using the same criteria applied in Sects. 5.7 and 6.1. The resulting plot (Fig. 13) shows that the algorithms improve the detection compared to the case with no denoising at all (the same result obtained in Sects. 5.7 and 6.1), whereas they do not provide significant improvements compared to the PSF. Indeed, only PM produces a catalog that is 0.2 magnitudes more pure at the same completeness. These results are in agreement with those presented in Sect. 5.4, where we noted that all these methods give their best performance on high-resolution images (see Fig. C.2), such as VIS and H160; the lower resolution of the ground-based images degrades the algorithms' performance. Similarly, the methods, and mainly the PSF, perform better on images with lower S/N (e.g., HAWK-I compared to VIS and H160), as shown in Figs. C.5 and C.6, resulting in less significant improvements of the denoising methods over the PSF.
Fig. 13. Ground-based real images – completeness & purity (HAWK-I and HAWK-I UDS). On the x-axis, the magnitude at which the purity drops below 90%; on the y-axis, the magnitude at which the completeness drops below 50%. Each symbol refers to a different denoising method, which can be present multiple times in the plot due to different combinations of detection parameters (see text for details). The positions of the symbols are slightly randomized to improve readability.
6.3. Artifacts and visual inspection
The analysis made in Sect. 5.7 only considers the objects' S/N and magnitude to classify the detections as correct or spurious. Since the algorithms perform the denoising step according to different strategies, artifacts typical of each family of methods may be created. Several crops denoised by the different tested methods are reported in Appendices D and E. By visually inspecting these snapshots, we can get an idea of the features and the artifacts produced. As a general comment, looking at the figures reported in Appendix D, TVL2 seems to be the best method, being closest to the noiseless image reported in the last column of all the figures, followed by starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener, which also produce good results. PM, TV Chambolle, and BM3D are often very similar from a qualitative point of view. Most of the time, the flux of the objects detected by TVL2 is highly concentrated (see, e.g., Figs. D.2 and D.5), but in some cases the object flux is distributed over a larger area, as visible, for example, in Fig. D.6. Starlet produces, in some cases, bright isotropic objects, which could lead to misleading results in terms of their morphological information (see, e.g., Figs. D.14 and D.16). Looking at Fig. D.10, the PSF seems to detect two objects incorrectly, whereas the other denoising methods detect them correctly. This qualitative analysis is confirmed by the corresponding quantitative analysis, as visible in Fig. 8.
When more than one object is present in the figures, the different schemes are sometimes not capable of recognizing all the objects (see, e.g., Figs. D.3, D.6, D.9, D.11, and D.13). Moreover, fluctuations of the background around the objects are not completely removed, and the smoothing can create artifacts (see, e.g., Fig. D.13, where an elliptical galaxy seems to appear as a spiral galaxy after denoising). b-UWT(7/9) and b-UWT(7/9)+Wiener are susceptible to background fluctuations and end up creating visual artifacts (see Figs. D.17 and D.18).
When inspecting the snapshots from real images, a qualitative analysis is inconclusive, so we have to rely on the quantitative analysis in Fig. 12. In fact, looking at the crops reported in Appendix E, PM and bilateral are often so similar that it is difficult to distinguish them, although, based on Fig. 12, the differences between them are clear (PM performs worse than bilateral). An analogous remark can be made for BM3D, which produces images similar to those of PM and bilateral, as visible, for example, in Fig. E.12. Here, TVL2 is again the most promising method, being visually close to HUDF09 also thanks to the automatic optimization of its internal parameters, as confirmed by the results reported in Fig. 12. Images from starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener are quite similar to each other, even though those from starlet present fewer visual artifacts (see Figs. E.10, E.12–E.14). Unlike the other methods, NL-means produces nearly square patches of uniform flux, which effectively reduce noise fluctuations (see Figs. E.8 and E.9) but can also lead to artifacts (see, e.g., Fig. E.5, where the galaxy in the bottom-right corner takes on a nearly boxy shape).
Even if artifacts are generated and the morphology of the detected objects could be altered, we tested the efficiency of the algorithms in preserving the FWHM and the ellipticity of the objects in Sect. 5.6, showing that on average the shapes are preserved. A more detailed assessment of the effect on galaxy morphology is beyond the scope of the present work. However, we note that a promising improvement in this direction can be obtained by classifying objects with a convolutional neural network (e.g., one of the classifiers proposed in Tuccillo et al. 2017; Khalifa et al. 2017; Barchi et al. 2020), in order to assess whether their morphological class is preserved after denoising.
7. Summary and conclusions
The goal of this work is to make an extensive comparison between a number of denoising algorithms, aimed at identifying the best choice of method for improving the detection of faint objects in extragalactic astronomical images (e.g., the typical cases of HST and Euclid). To this purpose, we performed a large set of tests on simulated images. We also tested the methods on real images from space- and ground-based instruments, collecting in the process useful indications for a variety of observational situations.
We chose to test a significant number of denoising algorithms based on traditional techniques (mainly PDEs and variational methods), leaving a more complete comparison, including machine-learning techniques, for future works. In particular, we point out that ATVD-TVL2, bilateral, Perona-Malik, TV Chambolle, starlet, and b-UWT(7/9)+Wiener are the most interesting to use among all the methods discussed here, since they provide a good level of performance in the different tests proposed. These are closely followed by BM3D. Even if most of these methods are quite unusual for the astronomical community, they are very well known in many other fields, and they are known to outperform a straightforward PSF/Gaussian filtering, which is the standard choice in astronomy. We therefore considered PSF and Gaussian filtering as the reference against which we tested all the other methods.
As a first test, we considered the two metrics MSE and SSIM (defined in Sect. 4) and checked which methods yield the best performance with respect to them. We then compared their performance, again through MSE and SSIM, as a function of depth, resolution, and type of image. We tested the algorithms' ability to preserve the FWHM, to understand whether they preserve the shape of the objects, which is useful in case photometric measurements on the denoised image are needed. We tested their stability using the MSE, varying the optimal parameter by a fixed percentage, in order to assess their reliability in case the best parameter is not chosen exactly. We also tested possible detection improvements through two diagnostics, completeness and purity, which quantify the fraction of real objects recovered and the fraction of genuine detections in the extracted catalog, respectively. Finally, we applied these methods to real images (CANDELS-GS-deep and a crop of HAWK-I). We summarize below the key points of the analysis performed in this paper:
– From MSE and SSIM we noticed that 11 algorithms are always at the top of the rankings, especially for the VIS and H160 images, which are of primary interest for detection in future surveys. These algorithms are: PM with edge-stopping function g1 and k = 10^-3, TVL2, BM3D, starlet, b-UWT(7/9), b-UWT(7/9)+Wiener, Gaussian, PSF, NL-means slow, bilateral, and TV Chambolle.
– From the stability test discussed in Sect. 5.3, the algorithms tested are not influenced (or only negligibly influenced in a few cases) by images with non-stationary Gaussian noise.
– From the PSF and depth variation test, we noticed that most of the methods show better performance with narrower FWHM. While all the methods perform better as the noise level increases (in terms of the Gaussian noise standard deviation), their improvements with respect to the PSF are more significant for images with higher S/N.
– From the FWHM conservation test, we noticed that most of the algorithms tend not to smooth the image, in terms of FWHM increase. On the contrary, PSF smoothing introduces an offset in the FWHM measurement, for both star-only and galaxy images. The ellipticity, on the other hand, is well preserved by the PSF and by most of the algorithms.
– From the completeness and purity test, we found a small number of algorithms that provide catalogs 0.2 magnitudes more pure and complete than PSF filtering, namely TVL2, Perona-Malik, TV Chambolle, and bilateral (whereas starlet and b-UWT(7/9)+Wiener provide a 0.2 magnitude gain in completeness only).
– From the Flux conservation test, we found that most of the algorithms exhibit similar performance to the PSF filtering, preserving the overall calibration of the input images.
– From the real image test (H160), we found that TVL2 outperforms all the other algorithms, and it is the only one that performs better than the PSF by 0.2 magnitudes in both completeness and purity, whereas the bilateral produces a catalog that is only 0.2 magnitudes more pure.
– From the real image test (HAWK-I), we found that only Perona-Malik outperforms the PSF filtering, by 0.2 magnitudes in purity. The other methods perform worse than or similarly to the PSF.
– From the visual inspection performed in Sect. 6.3 on simulated and real images, the best methods seem to be TVL2, PM, bilateral, and starlet, although they also generate artifacts, as do the other methods tested. The visual inspection confirms the results presented in Figs. 8 and 12.
The results we obtained demonstrate that denoising algorithms should be considered valuable tools in the presence of Gaussian noise, which is a good approximation of the noise in optical and near-infrared extragalactic surveys, as they clearly improve the detection of faint objects. The methods of structure-texture image decomposition, total variation denoising, Perona-Malik filtering, bilateral filtering, and the undecimated wavelet transform are of particular interest. While further specific tests are needed to define, for each survey, the optimal approach among these methods, along with its parameter set, our investigation on a small but representative reference set of simulated and real extragalactic images shows that the scientific return of ongoing and future surveys can be significantly enhanced by the adoption of these denoising methods in place of standard filtering approaches. Moreover, we find that the use of increasingly popular machine-learning techniques, possibly combined with the best methods resulting from our analysis, has the potential to further improve the performance of the “traditional” denoising techniques described here. In addition, learning approaches for adding morphological priors (see, e.g., Peyré et al. 2010) could be useful and are an interesting subject for future works.
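As a concrete, hedged illustration of how such a comparison can be set up with off-the-shelf tools, the sketch below applies a few of the denoiser families discussed above, as implemented in scikit-image (not necessarily the exact codes used in this paper), and scores them with MSE and SSIM against a noiseless reference; the parameter values are placeholders to be tuned per image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import (denoise_tv_chambolle, denoise_bilateral,
                                 denoise_nl_means, denoise_wavelet)
from skimage.metrics import mean_squared_error, structural_similarity

# `noisy` and `noiseless` are 2D float arrays scaled to [0, 1].
def score_denoisers(noisy, noiseless):
    candidates = {
        "gaussian":     gaussian_filter(noisy, sigma=1.0),
        "tv_chambolle": denoise_tv_chambolle(noisy, weight=0.1),
        "bilateral":    denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=2),
        "nl_means":     denoise_nl_means(noisy, h=0.05, fast_mode=True),
        "wavelet":      denoise_wavelet(noisy),
    }
    data_range = noiseless.max() - noiseless.min()
    return {name: (mean_squared_error(noiseless, den),
                   structural_similarity(noiseless, den, data_range=data_range))
            for name, den in candidates.items()}
```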
See also https://brainder.org/2011/08/20/gaussian-kernels-convert-fwhm-to-sigma/ for further details.
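For reference, the conversion mentioned in this note is, for a Gaussian kernel,

$$\sigma = \frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}} \approx \frac{\mathrm{FWHM}}{2.3548}.$$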
The scaling is required only by the PM methods, which need values between 0 and 1 to work, and by the bilateral filter, which requires non-negative values. The scaling step was introduced for these two methods and applied to all the methods only for comparison purposes. We verified that for the other algorithms the results do not change significantly if the scaling is not applied.
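A minimal sketch of this scaling step, assuming a non-constant image that is mapped linearly to [0, 1] before denoising and mapped back afterwards:

```python
import numpy as np

def to_unit_range(image):
    # Linear min-max scaling to [0, 1]; returns the offsets needed to invert it.
    lo, hi = float(image.min()), float(image.max())
    return (image - lo) / (hi - lo), lo, hi

def from_unit_range(scaled, lo, hi):
    # Invert the scaling to restore the original flux calibration.
    return scaled * (hi - lo) + lo
```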
Acknowledgments
We would like to thank the referee, Prof. Jean-Luc Starck, for the many useful comments and suggestions that helped us improve our work. This research has been carried out within the INdAM-INAF project FOE 2015 “OTTICA ADATTIVA”. The authors S. Tozza and M. Falcone are members of the INdAM Research Group GNCS and gratefully acknowledge its financial support of this research.
References
- Ackermann, M., Ajello, M., Allafort, A., et al. 2013, ApJS, 209, 34 [NASA ADS] [CrossRef] [Google Scholar]
- Aujol, J.-F., Gilboa, G., Chan, T., & Osher, S. 2006, Int. J. Comput. Vision, 67, 111 [CrossRef] [Google Scholar]
- Banterle, F., Corsini, M., Cignoni, P., & Scopigno, R. 2012, Comput. Graphics Forum, 31, 19 [CrossRef] [Google Scholar]
- Barchi, P., de Carvalho, R., Rosa, R., et al. 2020, Astron. Comput., 30, 100334 [CrossRef] [Google Scholar]
- Beckouche, S., Starck, J.-L., & Fadili, J. 2013, A&A, 556, A132 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Bertero, M., & Boccacci, P. 1998, Introduction to Inverse Problems in Imaging (CRC Press) [CrossRef] [Google Scholar]
- Bertero, M., & Piana, M. 2006 Inverse problems in biomedical imaging: modeling and methods of solution, eds. A. Quarteroni, L. Formaggia, & A. Veneziani (Milano: Springer Milan), 1 [Google Scholar]
- Bertin, E. 2009, Mem. Soc. Astron. It., 80, 422 [NASA ADS] [Google Scholar]
- Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Black, M. J., Sapiro, G., Marimont, D. H., & Heeger, D. 1998, IEEE Trans. Image Process., 7, 421 [CrossRef] [Google Scholar]
- Buades, A., Coll, B., & Morel, J. M. 2005, 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2, 60 [CrossRef] [Google Scholar]
- Buades, A., Coll, B., & Morel, J.-M. 2011, Image Process. Line, 1, 208 [Google Scholar]
- Bush, J. 2011, Master’s thesis, University of California, Santa Barbara, http://www.math.ucsb.edu/ cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf [Google Scholar]
- Castellano, M., Ottaviani, D., Fontana, A., et al. 2015, in Astronomical Data Analysis Software and Systems XXIV (ADASS XXIV), eds. A. R. Taylor, & E. Rosolowsky, ASP Conf. Ser., 495, 257 [Google Scholar]
- Castellano, M., Fontana, A., Boutsia, K., et al. 2010, A&A, 511, A20 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Chambolle, A. 2004, J. Math. Imaging Vision, 20, 89 [CrossRef] [Google Scholar]
- Chang, S. G., Yu, B., & Vetterli, M. 2000, IEEE Trans. Image Process., 9, 1532 [NASA ADS] [CrossRef] [Google Scholar]
- Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. 2007, IEEE Trans. Image Process., 16, 2080 [Google Scholar]
- Darbon, J., Cunha, A., Chan, T. F., Osher, S., & Jensen, G. J. 2008, 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 1331 [CrossRef] [Google Scholar]
- Dark Energy Survey Collaboration (Abbott, T., et al.), 2016, MNRAS, 460, 1270 [NASA ADS] [CrossRef] [Google Scholar]
- Donoho, D. L., & Johnstone, J. M. 1994, Biometrika, 81, 425 [CrossRef] [MathSciNet] [Google Scholar]
- Fontana, A., Dunlop, J. S., Paris, D., et al. 2014, A&A, 570, A11 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Frontera-Pons, J., Sureau, F., Bobin, J., & Le Floc’h, E. 2017, A&A, 603, A60 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Gabor, D. 1965, Lab. Invest., 14, 801 [Google Scholar]
- Getreuer, P. 2012, Image Process. Line, 2, 74 [CrossRef] [Google Scholar]
- Goldstein, T., & Osher, S. 2009, SIAM J. Imaging Sci., 2, 323 [CrossRef] [Google Scholar]
- Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, ApJS, 197, 35 [NASA ADS] [CrossRef] [Google Scholar]
- Guo, Z., Sun, J., Zhang, D., & Wu, B. 2012, IEEE Trans. Image Process., 21, 958 [CrossRef] [Google Scholar]
- Guo, Y., Ferguson, H. C., Giavalisco, M., et al. 2013, ApJS, 207, 24 [NASA ADS] [CrossRef] [Google Scholar]
- Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open source scientific tools for Python [Google Scholar]
- Karmakar, A., Mishra, D., & Tej, A., 2018, ArXiv e-prints [arXiv:1809.01434] [Google Scholar]
- Khalifa, N. E. M., Taha, M. H. N., Hassanien, A. E., & Selim, I. M. 2017, ArXiv e-prints [arXiv:1709.02245] [Google Scholar]
- Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, ApJS, 197, 36 [NASA ADS] [CrossRef] [Google Scholar]
- Kolmogorov, A. M., & Fomin, S. V. 1957, Elements of the Theory of Functions and Functional Analysis (Dover Publications) [Google Scholar]
- Laureijs, R., Amiaux, J., Arduini, S., et al. 2011, ArXiv e-prints [arXiv:1110.3193] [Google Scholar]
- Lee, G. R., Gommers, R., Waselewski, F., Wohlfahrt, K., & O’Leary, A. 2019, J. Open Sour. Softw., 4, 1237 [CrossRef] [Google Scholar]
- Lesser, M. 2015, PASP, 127, 1097 [CrossRef] [Google Scholar]
- Lindenbaum, M., Fischer, M., & Bruckstein, A. 1994, Pattern Recogn., 27, 1 [CrossRef] [Google Scholar]
- LSST Science Collaboration (Abell, P. A., et al.) 2009, ArXiv e-prints [arXiv:0912.0201] [Google Scholar]
- Mallat, S. 2001, IEEE Trans. Pattern Anal. Mach. Intell., 11, 674 [NASA ADS] [CrossRef] [Google Scholar]
- Meyer, Y. 1990, Ondelettes et opérateurs: Ondelettes (Hermann) [Google Scholar]
- Neyman, J., & Pearson, E. S. 1933, Phil. Trans. R. Soc. A: Math., Phys. Eng. Sci., 231, 289 [Google Scholar]
- Oke, J. B., & Gunn, J. E. 1983, ApJ, 266, 713 [NASA ADS] [CrossRef] [Google Scholar]
- Perona, P., & Malik, J. 1990, IEEE Trans. Pattern Anal. Mach. Intell., 12, 629 [CrossRef] [Google Scholar]
- Peyré, G., Fadili, J., & Starck, J.-L. 2010, SIAM J. Imaging Sci., 3, 646 [CrossRef] [Google Scholar]
- Pierre, M., Valtchanov, I., Altieri, B., et al. 2004, JCAP, 2004, 011 [CrossRef] [Google Scholar]
- Principe, G., Malyshev, D., Ballet, J., & Funk, S. 2018, A&A, 618, A22 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Romeo, A. B., Horellou, C., & Bergh, J. 2003, MNRAS, 342, 337 [NASA ADS] [CrossRef] [Google Scholar]
- Rudin, L. I., Osher, S., & Fatemi, E. 1992, Phys. D: Nonlinear Phenom., 60, 259 [CrossRef] [Google Scholar]
- Schreiber, C., Elbaz, D., Pannella, M., et al. 2017, A&A, 602, A96 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Shen, H., George, D., Huerta, E. A., & Zhao, Z. 2017, ArXiv e-prints [arXiv:1711.09919] [Google Scholar]
- Shensa, M. J. 1992, IEEE Trans. Signal Process., 40, 2464 [NASA ADS] [CrossRef] [Google Scholar]
- Skelton, R. E., Whitaker, K. E., Momcheva, I. G., et al. 2014, ApJS, 214, 24 [Google Scholar]
- Spergel, D., Gehrels, N., Baltay, C., et al. 2015, ArXiv e-prints [arXiv:1503.03757] [Google Scholar]
- Starck, J. L., Aghanim, N., & Forni, O. 2004, A&A, 416, 9 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Starck, J. L., Moudden, Y., Abrial, P., & Nguyen, M. 2006, A&A, 446, 1191 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
- Starck, J.-L., Murtagh, F., & Fadili, J. 2010, Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis (Cambridge University Press) [Google Scholar]
- Starck, J.-L., Murtagh, F., & Bertero, M. 2014, Starlet Transform Astron. Data Process., 1 [Google Scholar]
- Tomasi, C., & Manduchi, R. 1998, Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), 839 [CrossRef] [Google Scholar]
- Tuccillo, D., Huertas-Company, M., Decenciére, E., & Velasco-Forero, S. 2017 Proc. International Astronomical Union, IAU Symp., 325, 191 [Google Scholar]
- Valens, C. 1999, A Really Friendly Guide to Wavelets [Google Scholar]
- van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., et al. 2014, PeerJ, 2, e453 [Google Scholar]
- Vetterli, M., & Kovačevic, J. 1995, Wavelets and Subband Coding (Upper Saddle River, NJ, USA: Prentice-Hall, Inc.) [Google Scholar]
- Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A. 2008, Proceedings of the 25th International Conference on Machine Learning, ICML ’08 (New York, NY, USA: Association for Computing Machinery), 1096 [Google Scholar]
- Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., & Manzagol, P.-A. 2010, J. Mach. Learn. Res., 11, 3371 [Google Scholar]
- Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. 2004, IEEE Trans. Image Process., 13, 600 [NASA ADS] [CrossRef] [PubMed] [Google Scholar]
- Weickert, J. 1998, Anisotropic Diffusion in Image Processing (Teubner) [Google Scholar]
- Winkler, A. M. 2011, Gaussian kernels: convert FWHM to sigma [Google Scholar]
- Xie, J., Xu, L., & Chen, E. 2012, in Advances in Neural Information Processing Systems, eds. F. Pereira, C. J. C. Burges, L. Bottou, & K. Q. Weinberger (Curran Associates, Inc.), 25, 341 [Google Scholar]
Appendix A: MSE comparison tables and plots
Fig. A.1. Step 1: MSE comparison between Perona-Malik functions on CM. On the x-axis, all the simulated CM crops in the different bands, whereas on the y-axis the MSE.

Fig. A.2. Step 1: MSE comparison between ATVD algorithms on BG. On the x-axis, all the simulated BG crops in the different bands, whereas on the y-axis the MSE.

Fig. A.3. Step 1: MSE comparison between the other algorithms excluding ATVD and PM on CL. On the x-axis, all the simulated CL crops in the different bands, whereas on the y-axis the MSE.

Fig. A.4. Step 1: MSE comparison between BM3D, starlet and the two b-UWTs on CM. On the x-axis, all the simulated CM crops in the different bands, whereas on the y-axis the MSE.
MSE table of BG crops.
MSE table of CM crops.
MSE table of CL crops.
CPU time table of CM crops after fixing the optimal internal parameters for each method.
Appendix B: Non-stationary Gaussian noise MSE comparison table
MSE values for Hl, Iσ, Hu, I2σ related to the VIS (CM) mirrored crop.
MSE values for Hl, Iσ, Hu, I2σ related to the VIS (CL) mirrored crop.
Appendix C: PSF and depth comparison plots
Fig. C.1. VIS FWHM variation comparison plot. On the x-axis the VIS images with FWHM equal to the original value, 0.5, 1.0, 1.5, and 2.0 arcsecs, whereas on the y-axis mse.

Fig. C.2. VIS FWHM variation comparison plot. On the x-axis the VIS images with FWHM equal to the original value, 0.5, 1.0, 1.5, and 2.0 arcsecs.

Fig. C.3. VIS FWHM variation comparison plot. On the x-axis the VIS images with FWHM equal to the original value, 0.5, 1.0, 1.5, and 2.0 arcsecs.

Fig. C.4. H160 depth variation comparison plot. On the x-axis the H160 images with Gaussian noise standard deviation equal to 1, 10, 20, 30, and 40 times the original value, whereas on the y-axis mse.

Fig. C.5. H160 depth variation comparison plot. On the x-axis the H160 images with Gaussian noise standard deviation equal to 1, 10, 20, 30, and 40 times the original value.

Fig. C.6. H160 depth variation comparison plot. On the x-axis the H160 images with Gaussian noise standard deviation equal to 1, 10, 20, 30, and 40 times the original value.
Appendix D: VIS crops visual comparison
Fig. D.1.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 38.8 with magnitude of 25.79. |
Fig. D.2.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 48.2 with magnitude of 24.76. |
Fig. D.3.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 72.9 with magnitude of 23.82. |
Fig. D.4.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 47.5 with magnitude of 25.01. |
Fig. D.5.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 35.44 with magnitude of 25.39. |
Fig. D.6.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 21.26 with magnitude of 26.48. |
Fig. D.7.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 27.70 with magnitude of 25.72. |
Fig. D.8.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 45.57 with magnitude of 24.97. |
Fig. D.9.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 68.29 with magnitude of 24.14. |
Fig. D.10.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 26.74 with magnitude of 26.13. |
Fig. D.11.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 25.44 with magnitude of 26.19. |
Fig. D.12.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 36.99 with magnitude of 25.71. |
Fig. D.13.
VIS crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless. The green boxes are the detected objects regions. The central object has been detected with a S/N of 90.35 with magnitude of 23.34. |
Fig. D.14.
VIS crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless; on the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 7.99 with magnitude of 25.21. |
Fig. D.15.
VIS crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless; on the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 3.80 with magnitude of 26.01. |
Fig. D.16.
VIS crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless; on the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 2.23 with magnitude of 26.59. |
Fig. D.17.
VIS crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless; on the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 15.73 with magnitude of 24.48. |
Fig. D.18.
VIS crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, TV Chambolle, noiseless; on the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 56.23 with magnitude of 23.09. |
Appendix E: GSDEEP crops visual comparison
Fig. E.1.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 6.71 with magnitude of 27.47. |
Fig. E.2.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 7.37 with magnitude of 27.20. |
Fig. E.3.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 8.44 with magnitude of 26.80. |
Fig. E.4.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 6.88 with magnitude of 27.18. |
Fig. E.5.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 5.82 with magnitude of 27.48. |
Fig. E.6.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 5.08 with magnitude of 27.48. |
Fig. E.7.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 6.51 with magnitude of 27.58. |
Fig. E.8.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 12.48 with magnitude of 27.17. |
Fig. E.9.
GSDEEP crops visual comparison: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. The green boxes are the detected objects regions. The central object has been detected with a S/N of 9.77 with magnitude of 27.36. |
Fig. E.10.
GSDEEP crops visual comparison. In the first row: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. On the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 11.98 with magnitude of 28.34. |
Fig. E.11.
GSDEEP crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. On the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 9.85 with magnitude of 28.45. |
Fig. E.12.
GSDEEP crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. On the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 11.26 with magnitude of 28.37. |
Fig. E.13.
GSDEEP crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. On the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 17.70 with magnitude of 27.86. |
Fig. E.14.
GSDEEP crops visual comparison. On the first row: Original, PSF, Perona-Malik, TVL2, bilateral, NL-means, HUDF09. On the second row: BM3D, starlet, b-UWT(7/9), and b-UWT(7/9)+Wiener. The central object has been detected with a S/N of 10.40 with magnitude of 28.48. |