A&A, Volume 592, August 2016
Article Number: A128
Number of pages: 5
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201628596
Published online: 12 August 2016
Efficient implementation of the adaptive scale pixel decomposition algorithm
^{1} Xinjiang Astronomical Observatory, Chinese Academy of Sciences, 830011 Urumqi, PR China
email: zhangli@xao.ac.cn
^{2} National Radio Astronomy Observatory, Socorro 87801, NM, USA
email: sbhatnag@nrao.edu; rurvashi@nrao.edu
^{3} University of Chinese Academy of Sciences, 100080 Beijing, PR China
^{4} Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, 830011 Urumqi, PR China
Received: 26 March 2016
Accepted: 20 June 2016
Context. Most popular algorithms in use to remove the effects of a telescope’s point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (AspClean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission.
Aims. However, the runtime cost of AspClean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original AspClean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations that lower the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales.
Methods. As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost.
Results. The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original AspClean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.
Key words: methods: data analysis / techniques: image processing
© ESO, 2016
1. Introduction
Radio interferometers measure the spatial coherence of the electric field (Thompson et al. 2001) in the Fourier domain. Owing to incomplete sampling of the Fourier plane, the telescope’s point spread function (PSF) has widespread sidelobes. The true sky brightness is convolved with this PSF, which limits the imaging dynamic range to no better than a few hundred to one.
The CLEAN algorithm (Högbom 1974) and its variants (Clark 1980; Schwab & Cotton 1983) have so far been the most popular algorithms in use to remove the effects of the PSF. Most of these algorithms model the sky brightness as a collection of delta functions, with the implicit assumption that the sky is composed of several well-separated point sources. As such, this approach works well for fields dominated by unresolved sources. With the increase in sensitivity of modern radio telescopes, however, low-level extended emission is often detected in many fields, particularly at low frequencies, making the accurate deconvolution of extended emission an important problem. Recently, scale-sensitive methods have been developed to mitigate the artefacts induced by the original CLEAN algorithm when used to deconvolve extended emission (Bhatnagar & Cornwell 2004; Cornwell 2008; Rau & Cornwell 2011). These scale-sensitive algorithms have been shown to recover extended emission better than scale-insensitive algorithms.
The Multi-Scale CLEAN (MSClean) algorithm (Cornwell 2008) uses a matched-filtering technique to find the components; it decomposes a sky image into a set of fixed scales, such as tapered and truncated paraboloids. The AspClean algorithm was proposed by Bhatnagar & Cornwell (2004); here the scales are determined adaptively through a fitting procedure, and heuristics are developed to limit the set of scales (the active set) that needs to be fitted during any given iteration. Compared to MSClean, the AspClean algorithm gives better imaging performance, but at the cost of a significant increase in computing time.
In this paper, we propose an efficient implementation that significantly reduces the computational cost while keeping the imaging performance comparable to the original AspClean algorithm (referred to as AspClean2004 in the rest of the text). This is possible because the PSF sidelobes of modern wide-band telescopes, with their excellent uv-plane coverage for continuum imaging, are significantly reduced. This allows us to approximate the PSF as a single Gaussian, so that the convolution in the objective function of the iterative fitting (using Levenberg-Marquardt (LM) minimization; Marquardt 1963) can be removed. These convolutions are performed through an expensive fast Fourier transform (FFT)-based approach in the AspClean2004 algorithm; with the typically large number of minimization iterations required, FFT-based convolution was responsible for the high computing cost of the AspClean2004 implementation. Our current approach, referred to as AspClean2016 in the text below, can be thought of as an extreme case of the approximations used in the Clark CLEAN (ClarkClean) algorithm (Clark 1980), where the full PSF is approximated by a PSF patch that only includes the highest sidelobe. As the highest sidelobe becomes increasingly smaller with modern wide-band telescopes, AspClean2016 ignores the sidelobes of the PSF entirely. We show that this still leads to convergence and that the total runtime is significantly reduced, even if at the expense of a slightly larger number of iterations.
We recall the basics of the AspClean2004 algorithm in Sect. 2, while the AspClean2016 algorithm is described in Sect. 3. Numerical experiments and comparisons are presented in Sect. 4, and we summarize our results in Sect. 5.
2. AspClean2004 algorithm
In the image reconstruction problem, the dirty image I^{dirty} can be expressed as

I^{dirty} = B ⋆ I^{true} + I^{noise}, (1)

where B is the PSF, I^{true} is the true image, I^{noise} is the noise convolved with the PSF, and ⋆ is the convolution operator. One goal of imaging algorithms is to remove the effects of the PSF. In practice, a model with a limited number of components is used to approximate the true image,

I^{true} = I^{model} + ϵ = ∑_{i=1}^{L} I_{i}^{comp} + ϵ, (2)

where the model image is I^{model} = ∑_{i=1}^{L} I_{i}^{comp}, I_{i}^{comp} is the ith image component, ϵ is the error between the true image and the model image, and L is the number of components used to approximate the true image. In scale-insensitive algorithms, I_{i}^{comp} is a delta function. The true sky, including any extended emission, is thus modelled as a collection of delta functions, which leads to residuals at levels much higher than the thermal noise limit of modern telescopes, as shown by various authors (including Bhatnagar & Cornwell 2004). The primary limitation of scale-insensitive algorithms is that their models cannot represent the correlation between pixels containing extended emission: such correlation conflicts with the fundamental assumption underlying these algorithms that each pixel in the image is independent. The AspClean2004 algorithm tries to solve the deconvolution problem by adaptively determining the scales in various parts of the image, fitting the largest possible scale locally using the following procedure:

1. Determine the initial values of the scale by smoothing the residual image with a few Gaussian beams, then identify the peak among the smoothed images and use the width of the corresponding Gaussian as the initial guess.

2. Refine the initial values with the LM minimization algorithm to determine the best model, I_{i}^{comp}, that fits the data locally.

3. Update the residual image as I^{res} = I^{res} − B ⋆ I_{i}^{comp}.

4. Repeat steps 1–3 until one of the stopping criteria is satisfied. The stopping criteria can be the total number of iterations or an estimated noise threshold.
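As an illustration, the control flow of the four steps above can be sketched in Python. This is a simplified toy, not the paper's implementation: step 2's LM refinement is omitted (the initial guess is used directly), the beam is taken to be a Gaussian, and all function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian2d(shape, amp, x0, y0, width):
    """Circular 2D Gaussian: amp * exp(-r^2 / (2 width^2))."""
    yy, xx = np.indices(shape)
    return amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * width ** 2))

def asp_loop_sketch(dirty, beam_width, n_iter=50, gain=0.35, threshold=1e-3,
                    trial_widths=(1.0, 2.0, 4.0)):
    """Toy version of steps 1-4; shows only the loop structure."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    for _ in range(n_iter):
        # Step 1: smooth with a few Gaussian beams and keep the strongest peak.
        width0, smoothed = max(((w, gaussian_filter(residual, w))
                                for w in trial_widths),
                               key=lambda t: t[1].max())
        y0, x0 = np.unravel_index(np.argmax(smoothed), smoothed.shape)
        amp0 = residual[y0, x0]
        # Step 4: stop on an estimated noise threshold.
        if amp0 < threshold:
            break
        # Step 2 is skipped here: the initial guess stands in for the LM fit.
        comp = gain * gaussian2d(residual.shape, amp0, x0, y0, width0)
        model += comp
        # Step 3: subtract the component convolved with the (Gaussian) beam.
        residual -= gaussian_filter(comp, beam_width)
    return model, residual
```

A real implementation would replace the step-2 placeholder with the minimization of Eq. (3).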
As mentioned above, step 2 makes the component scales adaptive (i.e. they are not fixed and predetermined), whereas the component scales are fixed and predetermined in MSClean and its variants. The minimization in step 2 minimizes the objective function χ^{2}, given by

χ^{2} = ∥ I^{res} − B ⋆ I_{i}^{comp} ∥_{2}^{2}, (3)

where ∥ ∥_{2} is the Euclidean norm. In Eq. (3) there is a convolution operation between B and I_{i}^{comp}. In a typical function minimization, the objective function is evaluated many times for each fitted scale. The FFT-based convolution used in the AspClean2004 algorithm dominates the computational cost. For an N-pixel image, the fast convolution includes two FFTs, one inverse FFT, and N multiplications, where the computational complexity of an FFT is about N log_{2} N (Gonzalez & Woods 2010). As such, the computational complexity of a convolution is N(3 log_{2} N + 1) for an N-pixel image. For M evaluations of the objective function, the computational complexity is MN(3 log_{2} N + 1) when solving for a single component. While the AspClean2004 reconstruction gives good results, it is limited by this high computational cost.
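The cost structure described above can be made concrete with a toy evaluation of Eq. (3); `chi2_fft` is an illustrative helper of our own, with SciPy's `fftconvolve` standing in for the FFT-based convolution that is repeated at every objective evaluation.

```python
import numpy as np
from scipy.signal import fftconvolve

def chi2_fft(residual, psf, comp):
    """One evaluation of Eq. (3): chi^2 = || I_res - B * I_comp ||_2^2.
    The fftconvolve call is the N(3 log2 N + 1) step that dominates the
    cost, and it recurs for every one of the M objective evaluations."""
    return float(np.sum((residual - fftconvolve(comp, psf, mode="same")) ** 2))
```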
3. AspClean2016: An efficient implementation of AspClean2004
As is well known, a two-dimensional Gaussian function g_{1}(a_{1},x_{1},y_{1},ω_{1}) convolved with another two-dimensional Gaussian function g_{2}(a_{2},x_{2},y_{2},ω_{2}) results in a new two-dimensional Gaussian function g_{3}(a_{3},x_{3},y_{3},ω_{3}), where a_{i}, (x_{i},y_{i}), and ω_{i} are the amplitude, position, and width of the Gaussian function g_{i}, respectively. By the Fourier convolution theorem, if the parameters of the convolved result (e.g. g_{3}) and of one Gaussian function (e.g. g_{1}) are known, the parameters of the other Gaussian function (e.g. g_{2}) can be computed analytically.
We therefore use the objective function

χ^{2} = ∥ I^{res} − I_{ic}^{comp} ∥_{2}^{2}, (4)

where I_{ic}^{comp} is a Gaussian containing a model component whose parameters (amplitude α_{ic}, location (x_{ic},y_{ic}), and width ω_{ic}) need to be determined by optimization (using the LM minimization). After converging to the solution of Eq. (4), the underlying component I_{i}^{comp} that models the emission is determined by analytically deconvolving the PSF (approximated as a single Gaussian) as

ω_{i} = (ω_{ic}^{2} − ω_{b}^{2})^{1/2}, (5)

where ω_{i}, ω_{ic}, and ω_{b} are the widths of I_{i}^{comp}, I_{ic}^{comp}, and the main lobe of the PSF, respectively. The amplitude α_{i} that corresponds to I_{i}^{comp} follows from the Gaussian convolution identity as

α_{i} = α_{ic} ω_{ic}^{2} / (2π α_{b} ω_{i}^{2} ω_{b}^{2}), (6)

where α_{b} and α_{ic} are the amplitudes of the Gaussian beam approximated from the PSF and of I_{ic}^{comp}, respectively. As in other algorithms, a loop gain is applied before the component is added to the model image. In Eqs. (5) and (6), α_{b} and ω_{b} of the PSF are fixed, and changes of α_{i} and ω_{i} follow from any adjustments of α_{ic} and ω_{ic}. We would like to point out the similarity between our approximation of the PSF by a single Gaussian (effectively ignoring all PSF sidelobes) and the use in ClarkClean of an approximate PSF that includes only the highest near sidelobe. The difference is that our algorithm takes this approximation to the extreme in the estimation phase of the model components, while ClarkClean uses it in the subtraction step of a minor cycle to determine the next highest peak in the residual image.
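The analytic deconvolution of Eqs. (5) and (6) amounts to a few arithmetic operations, replacing the FFT convolutions of Eq. (3). The sketch below assumes the unnormalized 2D Gaussian convolution identity a_3 = 2π a_1 a_2 ω_1² ω_2² / ω_3²; a different beam normalization would change the constant factor in the amplitude relation.

```python
import numpy as np

def deconvolve_gaussian(a_ic, w_ic, a_b, w_b):
    """Recover (alpha_i, omega_i) of the underlying component from the
    fitted Gaussian (a_ic, w_ic) and the Gaussian beam (a_b, w_b),
    following Eqs. (5)-(6) under the convention stated above."""
    w_i = np.sqrt(w_ic ** 2 - w_b ** 2)                                  # Eq. (5)
    a_i = a_ic * w_ic ** 2 / (2.0 * np.pi * a_b * w_i ** 2 * w_b ** 2)   # Eq. (6)
    return a_i, w_i
```

A round trip through the forward convolution identity recovers the original component parameters exactly, which is what makes the FFT-free objective of Eq. (4) possible.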
Initial values are important for computing an optimal component. In many tests, we found that good initial values, closer to the optimal component, can significantly reduce the number of iterations the minimization algorithm needs to find it; in other words, good initial values significantly reduce the computational time. As in the AspClean2004 algorithm, good initial values can be obtained by first smoothing the current residual image and then finding the parameters corresponding to the global peak among the smoothed residual images. Smoothing is computed by a convolution, which increases the computational cost. However, when deconvolving PSF-sized features, the peak of the last residual image and the scale of the main lobe of the PSF can be used directly as the initial parameter values; this simple method can then be triggered to reduce the computational cost. In the AspClean2016 algorithm, the simple method is used only when the area of the optimal component in the last iteration is smaller than 5% of the area of the initial guess in the last iteration; otherwise the smoothing method is used, until the condition for the simple method is met again. In practice, there is a large number of small scales in an image decomposition, so this modified scheme not only retains the reconstruction quality, but also reduces the computational cost.
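The switching heuristic just described can be sketched as follows; function and argument names are illustrative only, not from the paper's code, and the smoothing search mirrors step 1 of Sect. 2.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def initial_guess(residual, psf_width, trial_widths,
                  last_comp_area=None, last_guess_area=None):
    """Return (amp, x0, y0, width) starting values for the LM fit.
    The cheap 'simple method' fires when the last fitted component's
    area fell below 5% of the last initial guess's area; otherwise the
    more expensive smoothing search is used."""
    if (last_comp_area is not None and last_guess_area is not None
            and last_comp_area < 0.05 * last_guess_area):
        # Simple method: residual peak pixel plus the PSF main-lobe width.
        y0, x0 = np.unravel_index(np.argmax(residual), residual.shape)
        return residual[y0, x0], x0, y0, psf_width
    # Smoothing method: convolve with a few Gaussians, take the global peak.
    width0, smoothed = max(((w, gaussian_filter(residual, w))
                            for w in trial_widths),
                           key=lambda t: t[1].max())
    y0, x0 = np.unravel_index(np.argmax(smoothed), smoothed.shape)
    return smoothed[y0, x0], x0, y0, width0
```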
4. Numerical experiment and comparison
We tested the AspClean2016 algorithm and compared it with the AspClean2004 algorithm using simulated L-band EVLA data in the B-configuration^{1}, with a bandwidth of 1 GHz and 32 channels. The observation time was six hours. All simulations were made using the CASA software^{2}. The robust-weighted PSF is displayed in Fig. 1. The maximum negative sidelobe of the PSF was −0.068 (the peak value of the PSF is normalized to 1). The full width at half maximum (FWHM) of the main lobe of the PSF was about 2′′, while the pixel size of the images was 1′′.
Fig. 1 Robust-weighted PSF in the range −0.068 to 1.0 with logarithmic scaling (CASA scaling power cycles = −1.4).

Fig. 2 Results of the M 31 image from the AspClean2016 algorithm: a) the original image; b) the dirty image with a robust weighting function; c) the reconstructed model image; and d) the corresponding residual image. 

We tested the AspClean2016 algorithm on the M 31 image with robust weighting applied. The reconstructed results are displayed in Fig. 2. The original image is shown in Fig. 2a, while the dirty image is displayed in Fig. 2b. Artificial Gaussian noise was added to the simulated visibilities, resulting in a noise RMS of 5×10^{−5} Jy in the dirty image. The loop gain used in the reconstruction was 0.35. Compared with the original image in Fig. 2a, the reconstructed model image, which contains about 900 components (Fig. 2c), recovers the various scales well.
Fig. 3 Models and residuals of the M 31 image from the AspClean2004 and AspClean2016 algorithms: a) the AspClean2004 model image; b) the AspClean2016 model image; c) the AspClean2004 residual image; and d) the AspClean2016 residual image. The model images and the residual images are displayed in the same respective data ranges.

We found that the reconstructed results of AspClean2016 and AspClean2004 are comparable (e.g. see Fig. 3). In Fig. 3, the fundamental reason for the difference between models a) and b) is that image reconstruction in interferometric imaging is non-unique: an infinite number of image models will fit the data equally well. Two different algorithms will rarely produce the same set of image model components, but both should fit the data well. Tables 1 and 2 show that the amplitude range and the total reconstructed model flux from the AspClean2016 algorithm are closer to the original image, whereas the AspClean2004 residual image is more noise-like and includes fewer correlated structures. The histogram of the residual images (Fig. 4) shows that the residual noise profiles of AspClean2004 and AspClean2016 are very close to each other and also very close to the input Gaussian noise. This shows that the new algorithm improves the computing efficiency without significant degradation in reconstruction quality. In all, the AspClean2016 results are comparable to those of the AspClean2004 algorithm, but the AspClean2016 algorithm converged faster: Table 3 shows that it was about 20 times faster in this test.
Table 1. Model comparison between AspClean2004 and AspClean2016.
Table 2. Residual comparison between AspClean2004 and AspClean2016.
Table 3. Numerical comparison between the AspClean2004 and AspClean2016 algorithms.
To compare the performance of the AspClean2004 and AspClean2016 algorithms with different weightings, we simulated the M 31 image with uniform, natural, and robust weightings. The image sizes were the same (256×256 pixels), the loop gain was 0.35, and the other parameters were identical. The numerical results are included in Table 3. Different weightings in the spatial frequency domain result in different PSFs in the spatial domain, which in turn affects the structures in the dirty image. However, AspClean2016 requires approximating the PSF as a Gaussian function, and this approximation is clearly more accurate when the PSF has weaker sidelobes. Thus, for the AspClean2016 algorithm, the convergence with uniform weighting was faster than with natural weighting under the same parameters. This is also partly because the performance of the minimization algorithm differs between dirty images. Table 3 shows that the total runtime of AspClean2004 is 20 or more times longer than that of AspClean2016. The mean one-iteration runtime, defined as the total runtime divided by the total number of iterations, is more than 40 times longer for AspClean2004 than for AspClean2016 in this test. When reconstructing the 512×512 M 31 image, the Python code for AspClean2016 took several minutes on a typical computer (Intel(R) Core(TM) i7-3770 CPU @ 3.4 GHz, 4.00 GB RAM). Once implemented in C++, the runtime is expected to be much shorter.
Fig. 4 Histogram of the residual images. “FittedOri” is a Gaussian fitted to the “Original” image noise. “Original” is the added image noise, which is the inverse Fourier transformation of the visibility noise. “Asp04” is the AspClean2004 residual image, and “Asp16” is the AspClean2016 residual image. 

5. Summary
In this paper, we approximated the PSF with a single Gaussian, which allowed us to analytically compute the optimal components after a Gaussian was fitted to the current residual image. This greatly reduced the total runtime compared to the 2004 implementation of the AspClean algorithm. We also showed through simulations that there was no significant degradation in the imaging performance when using such an approximate PSF. This approximation will become even less of a concern with data produced by future telescopes, which will have ten times more antennas than current telescopes and hence further reduced PSF sidelobes, further justifying the analytical approximation of the PSF. In addition, we used a different scheme to determine the initial values of the optimal Gaussian components, which were then passed to the minimization algorithm. In the 2004 implementation, initial values were found by smoothing the residual image with a few Gaussians of different sizes. The heuristic used in the 2016 implementation can switch between smoothing and the simple method to further reduce the computational cost. Tests showed that AspClean2016 converges faster than AspClean2004, especially for larger images. The current implementation of AspClean2016 is in Python and the CASA package. The Python source code is available on the internet^{3}. Its C++ implementation in the CASA package is currently underway.
Acknowledgments
We would like to thank the developers of Python and CASA, which provide an excellent development and simulation environment. L. Zhang thanks the NRAO Graduate Student Internship program for its support. The work was also supported by the National Basic Research Program of China (973 program: 2012CB821804 and 2015CB857100), the National Science Foundation of China (11103055), and the West Light Foundation of the Chinese Academy of Sciences (RCPY201105).
References
Bhatnagar, S., & Cornwell, T. J. 2004, A&A, 426, 747
Clark, B. G. 1980, A&A, 89, 377
Cornwell, T. J. 2008, IEEE J. Selected Topics in Signal Processing, 2, 793
Gonzalez, R. C., & Woods, R. E. 2010, Digital Image Processing, 3rd edn. (Pearson)
Högbom, J. A. 1974, A&AS, 15, 417
Marquardt, D. W. 1963, J. SIAM, 11, 431
Rau, U., & Cornwell, T. J. 2011, A&A, 532, A71
Schwab, F. R., & Cotton, W. D. 1983, AJ, 88, 688
Thompson, A. R., Moran, J. M., & Swenson, G. W. 2001, Interferometry and Synthesis in Radio Astronomy, 2nd edn. (Wiley-VCH)