A&A, Volume 646, February 2021
Article Number: A44
Number of pages: 6
Section: Astronomical instrumentation
DOI: https://doi.org/10.1051/0004-6361/202039275
Published online: 4 February 2021
Parameterized reconstruction with random scales for radio synthesis imaging
1. College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, PR China
   e-mail: lizhang.science@gmail.com
2. Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi 830011, PR China
3. Key Laboratory of Radio Astronomy, Chinese Academy of Sciences, Urumqi 830011, PR China
4. Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, PR China
5. Center for Astrophysics, Guangzhou University, Guangzhou 510006, PR China
Received: 27 August 2020
Accepted: 7 December 2020
Context. In radio interferometry, incomplete sampling results in a dirty beam with side lobes, which obscure the celestial structures. Before any astrophysical analysis, the effects of the dirty beam need to be removed, which can be done with various deconvolution methods.
Aims. Diffuse astronomical sources observed by modern high-sensitivity telescopes tend to have complex morphological structures, often accompanied by faint features that are submerged under the side lobes of the dirty beam. We propose a new deconvolution algorithm called the random multiscale estimator (RMS-Clean), which is mainly aimed at the difficult reconstruction of diffuse astronomical sources.
Methods. RMS-Clean models the sky brightness distribution as a linear combination of random multiscale basis functions whose scales are obtained by randomly perturbing a preset multiscale list. Random multiscale models are used to approximate the uncertain scale characteristics of complex astronomical sources.
Results. When RMS-Clean is applied to simulations of SKA observations with realistic diffuse structures, it reconstructs diffuse structures well and provides a competitive result compared to the commonly used deconvolution algorithms.
Key words: methods: data analysis / techniques: image processing
© ESO 2021
1. Introduction
The Square Kilometre Array (SKA; Dewdney et al. 2009; Wu 2019) is the world’s largest proposed radio telescope, with very high sensitivity and a very large field of view. This telescope is likely to bring revolutionary changes to many fields of astronomy. However, the limited number of antennas leads to incomplete sampling, which is non-uniformly distributed in the spatial frequency domain because of the non-uniform array configuration and the Earth-rotation aperture synthesis. This produces non-negligible side lobes in the point spread function (PSF), which degrade the details of the astronomical sources to various degrees. Reconstructing astronomical sources from a dirty image is one of the main challenges of radio synthesis imaging, and deconvolving the PSF effect is the key technology.
In order to reconstruct the details of astronomical sources from a dirty image, many methods have been proposed. Because of noise and limited spatial frequency sampling, some prior information is required in the solution process to obtain the best results. The CLEAN-based algorithms (Högbom 1974; Zhang et al. 2020) are a class of methods that construct parameterized models and introduce scale priors. Zero-scale basis functions (i.e., delta functions) are used to construct the model for compact emission (Clark 1980). Scale basis functions such as Gaussians and tapered truncated parabolas are used to represent diffuse emission, which reflects the correlation between pixels in an astronomical image (Cornwell 2008; Zhang et al. 2016).
The maximum entropy method (MEM; Narayan & Nityananda 1986) is an explicit optimization approach that uses an entropy function as a regularizer. The entropy function controls the model emission during reconstruction, and it is maximized to make the model more consistent with the measured data. The entropy regularization essentially yields a smoothed version of the underlying sky emission, because a smoothness prior is used (Bhatnagar & Cornwell 2004).
In addition, compressive sensing (CS) methods are another type of optimization method that often uses regularization. Unlike the MEM, a CS-based deconvolution method is based on a sparsity assumption. CS theory posits that when a signal is sparse or can be sparsely represented in a certain domain, a small number of samples can be used to recover the signal, which goes beyond the sampling requirements of the Nyquist-Shannon theorem. In radio astronomy, compact emission is sparse in the image domain, and diffuse emission can become a sparse signal in a transformed domain; the wavelet transform is a sparse representation of diffuse emission that is often used in the CS field. CS methods typically use the L1 norm to find a sparser solution and can apply many regularizations to obtain a model consistent with them, such as the sparsity averaging reweighted analysis (SARA; Carrillo et al. 2012), the sparse aperture synthesis interferometry reconstruction (SASIR; Girard et al. 2015), and model reconstruction by synthesis-analysis estimators (MORESANE; Dabbech et al. 2015).
Of these three types of deconvolution methods, MEM and CS are explicit optimization processes with regularization: MEM emphasizes smoothness, and CS emphasizes sparsity. Although the CLEAN-based method based on scale decomposition is simple, it can be combined well with other imaging procedures, such as wide-band and wide-field imaging. Among these methods, the CLEAN-based algorithms are the most commonly used (Dabbech et al. 2015; Offringa & Smirnov 2017).
For modern high-sensitivity telescopes, diffuse emission, especially weak diffuse emission, makes reconstructing the sky emission difficult. In this article, we propose a new parameterized deconvolution method with random scales, the random multiscale estimator (RMS-Clean). RMS-Clean aims to restore a better model of the diffuse emission.
To introduce RMS-Clean, radio synthesis imaging and the deconvolution problem are described in Sect. 2. The theory behind each part of RMS-Clean and its algorithmic procedure are presented in Sect. 3. The experimental results and discussion are presented in Sect. 4, and the last section summarizes this work.
2. Deconvolution problems in radio synthesis imaging
In radio interferometry, the relation between the sky brightness distribution I^{sky} and the visibility function V^{sky} in the measurement domain can be expressed as follows (Thompson et al. 2017):

V^{sky}(u, v, w) = ∬ [I^{sky}(l, m)/n] e^{−2πi[ul + vm + w(n−1)]} dl dm,   (1)

where the w axis points along the line of sight, (u, v) spans its tangent plane, (l, m, n) defines the coordinate position of the source on the celestial sphere, and n = √(1 − l² − m²). For small fields of view, the w-term can be ignored (Cornwell et al. 2008), which is also the case discussed in this paper.
In a real observation, factors such as the limited number of antennas, the configuration of the array, the observation time, and the frequency channels determine the observed (u, v) coverage, that is, the sampling function S(u, v), so that only part of the visibilities is measured. The measurement equation can be written as

V^{obs}(u, v) = S(u, v) [V^{sky}(u, v) + N(u, v)],   (2)

where N is the noise in the measurement domain. The sampling function S(u, v) takes the value 1 at sampled points and 0 at unsampled points. As a result of factors such as Earth-rotation synthesis, S(u, v) does not always fall on the grid points (i.e., it takes non-integer coordinate positions), and therefore V^{obs} is not always on the grid. In order to use techniques such as the fast Fourier transform to improve the computational efficiency, the measured data need to be mapped onto grid points (Taylor et al. 1999; Thompson et al. 2017).
In matrix form, the measurement equation can be written as

V^{obs} = [S][F] I^{sky} + [S] N,   (3)

where V^{obs} is the column vector of the measured visibilities, [S] and [F] are the sampling matrix and the Fourier transform matrix, respectively, I^{sky} is the column vector of the sky brightness distribution, and N is the noise vector. Operations such as gridding are ignored in this measurement equation. In order to reconstruct the sky brightness distribution, the observed image (also called the dirty image) I^{dirty} needs to be calculated,

I^{dirty} = [F^{†}] V^{obs} = [F^{†}SF] I^{sky} + n,   (4)

where [F^{†}SF] I^{sky} = [F^{†}]([S][F] I^{sky}) is a convolution of the PSF with the sky brightness distribution I^{sky}, and n = [F^{†}SF] N is the noise in the image plane.
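The convolution relation above can be illustrated with a small NumPy sketch (the grid size, mask density, and source positions are arbitrary illustrative choices, not the paper's setup): sampling the FFT of a model sky with a binary mask and inverse-transforming yields a dirty image identical to the sky circularly convolved with the PSF.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# A toy "sky": two point sources on an n x n grid.
sky = np.zeros((n, n))
sky[20, 20] = 1.0
sky[40, 44] = 0.5

# Incomplete sampling: a random binary mask standing in for the uv coverage S(u, v).
S = (rng.random((n, n)) < 0.3).astype(float)

# Measurement: sample the FFT of the sky; dirty image and PSF via inverse FFT.
V_obs = S * np.fft.fft2(sky)
dirty = np.real(np.fft.ifft2(V_obs))
psf = np.real(np.fft.ifft2(S))  # response to a unit point source at the origin

# The dirty image equals the sky circularly convolved with the PSF.
conv = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(psf)))
print(np.allclose(dirty, conv, atol=1e-10))
```

The check holds exactly because the FFT of the PSF is the sampling mask itself, so masking in the visibility domain and convolving with the PSF in the image domain are the same operation.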
The task now is to recover I^{sky} from the dirty image degraded by the PSF and noise. With modern high-sensitivity telescopes, the observed images tend to contain increasingly complex structures. Effectively representing these complex structures is the key to the parameterized modeling method.
3. Parameterized deconvolution with random multiple scales
3.1. Multiscale model
An image with extended features tends to contain information on different scales. The pixels corresponding to source emission are clearly correlated within a scale. When a point-source model is used to represent these pixels, the correlation of pixels within the scale is omitted; the model is then a linear combination of compact sources, and diffuse features cannot be reconstructed well. Scale priors should therefore be introduced into the representation of diffuse features. A multiscale model has been shown to represent diffuse emission better (Cornwell 2008; Rau 2010; Rau & Cornwell 2011).
In a multiscale model, the sky brightness distribution can be expressed as

I^{sky} = I^{model} + ϵ,   (5)

where ϵ is the error between the model image and the true sky brightness distribution, which becomes negligible only when the model image is consistent with the true sky brightness distribution. Such a model I^{model} represents an image as a linear combination of N_{s} different scales,

I^{model} = Σ_{s=0}^{N_{s}−1} I_{s}^{scale} * I_{s}^{loc},   (6)

where I_{s}^{scale} is a normalized scale basis function and I_{s}^{loc} is a location image composed of multiple δ functions, in which each non-zero position marks the center of a scale component and its amplitude gives the total flux of that component. For compact sources (the zero scale), the model component is the location image itself. The asterisk stands for the convolution operation, which is usually implemented by the fast Fourier transform,

[F] I^{model} = Σ_{s=0}^{N_{s}−1} [P_{s}] [F] I_{s}^{loc},   (7)
where [P_{s}] = diag([F] I_{s}^{scale}) is a diagonal matrix whose diagonal elements are given by [F] I_{s}^{scale}. Then the measurement equation can be written as

V^{obs} = [S][F] I^{model} + [SF] ϵ + [S] N.   (8)

Because the purpose of deconvolution is to model celestial sources, the noise term [S]N is ignored in the following discussion. In addition, the reconstruction error [SF]ϵ can be ignored in the absence of deconvolution errors. Combined with Eq. (7), the measurement equation can be expressed as follows:

V^{obs} ≈ Σ_{s=0}^{N_{s}−1} [S][P_{s}][F] I_{s}^{loc} = [S] V^{model},   (9)

where V^{model} is the visibility vector of the model.
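As a concrete illustration of the normalized scale basis functions described above, here is a minimal truncated-parabola implementation; the spheroidal taper applied in practice (Cornwell 2008) is omitted for brevity, and the function name and normalization are our own illustrative choices.

```python
import numpy as np

def scale_function(size, width):
    """Normalized truncated-parabola basis function of a given width (pixels).
    width == 0 returns a delta function (the zero scale). The spheroidal
    taper used in practice is omitted here."""
    img = np.zeros((size, size))
    c = size // 2
    if width == 0:
        img[c, c] = 1.0
        return img
    y, x = np.ogrid[:size, :size]
    r2 = ((x - c) ** 2 + (y - c) ** 2) / float(width) ** 2
    inside = r2 < 1.0
    img[inside] = 1.0 - r2[inside]       # parabola, truncated at radius `width`
    return img / img.sum()               # normalize to unit total flux

basis = scale_function(64, 8)
print(round(basis.sum(), 6))  # 1.0: each basis carries unit flux
```

With unit-flux bases, the amplitude in the location image I_{s}^{loc} directly gives the total flux of the corresponding component.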
3.2. Multiscale model with random scales
P_{s} is clearly similar to a uv-taper function for non-zero scale sizes (Thompson et al. 2017): it suppresses high spatial frequencies and gives higher weights to lower spatial frequencies. When the size of a scale is greater than zero, this is equivalent to adjusting the sensitivity of the instrument to the peak value (Rau 2010). In the image plane, it is equivalent to constructing feature images of different scales.
In the commonly used multiscale methods (Cornwell 2008; Rau & Cornwell 2011), the sizes of the scales are specified by the user, usually only a few. The sky brightness distribution then needs to be expressed as a linear combination of these prespecified scales, so these methods can only extract features of the prespecified scales. The scale content of a sky brightness distribution, however, cannot be known beforehand. If the scale of a feature in the sky brightness distribution is not consistent with the prespecified scales, it is represented by the closest scale, and the differences are left in the residual structures. In other words, any scale that is not among the prespecified scales will not be represented well.
In order to solve this problem, we applied a random multiscale mechanism. A random multiscale model can be written in the following form:

I^{model} = Σ_{r=0}^{N_{sr}−1} I_{s_{r}}^{scale} * I_{s_{r}}^{loc},   (10)

where N_{sr} is the number of random scales, and I_{s_{r}}^{scale} and I_{s_{r}}^{loc} are the scale basis function and the location image for the scale s_{r}, respectively. The random multiscale method no longer uses only the preset fixed scales; its scales change randomly, which can effectively model the scale uncertainty of the sky brightness distribution. We implemented the random multiscale method by adding random perturbations to the preset scale list.
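The random perturbation of a preset scale list can be sketched as follows. The uniform distribution, the `k_max` bound (standing in for the limit set by the maximum resolvable angular extent), and drawing an independent k per scale are our assumptions for illustration; the paper only requires k > 0 and that the zero scale be included (Sect. 3.4).

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_scales(preset, k_max=2.0):
    """Sketch of the scale update: s_r = k * s with a random k > 0 per scale.
    The zero scale stays zero under multiplication, so it is always kept."""
    k = rng.uniform(1e-3, k_max, size=len(preset))
    return np.asarray(preset, dtype=float) * k

scales = perturb_scales([0, 7, 15, 22, 30])
print(scales)
```

Re-seeding the generator (or storing the drawn coefficients) makes each deconvolution run reproducible, as discussed in Sect. 4.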
3.3. New algorithm
We now introduce the details of the algorithm based on the above ideas. The algorithm uses the common image reconstruction framework in radio astronomy (Rau 2010; Zhang et al. 2020). Overall, the sky brightness distribution is solved for in the spatial frequency domain; the solution needs to make the model consistent with the spatial frequency samples and estimate the unmeasured points well. The estimation of the model is performed in the image domain, which is the minor cycle. Error correction in the major cycle is performed in the spatial frequency domain. The specific process is as follows:
1. Update the scales and scale basis functions. The scale s_{r} = k s is calculated from the scales s ∈ {0, …, N} specified by the user, where k is a random perturbation coefficient, a random number greater than zero. The maximum angular extent of emission resolvable by the interferometer provides a natural upper bound on k. Then s_{r} is used to calculate a scale basis function I_{s_{r}}^{scale}, which is a tapered, truncated parabola of width s_{r}.
2. Update the Hessian matrix. Calculate a Hessian matrix whose elements are given by

[H]_{s_{r}, q_{r}} = max(I^{psf} * I_{s_{r}}^{scale} * I_{q_{r}}^{scale}),   (11)

where s_{r} ∈ {0, …, N} and q_{r} ∈ {0, …, N}. This step evaluates all possible pairs of scale basis functions and stores them first; they are then used to update the multiscale residual images for the model components, which avoids repeating time-consuming convolutions and reduces the computational complexity.
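Step 2 can be sketched as precomputing all pairwise smoothed beams once per scale update; the `hessian` helper, the circular FFT convolution, and the idealized delta-function PSF below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def convolve(a, b):
    # circular convolution via FFT (arrays share the same shape)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def hessian(psf, bases):
    """Precompute psf * basis_p * basis_q for every scale pair and store the
    smoothed beams, so the minor cycle does not repeat these convolutions."""
    n_s = len(bases)
    beams = [[convolve(convolve(psf, bases[p]), bases[q]) for q in range(n_s)]
             for p in range(n_s)]
    H = np.array([[beams[p][q].max() for q in range(n_s)] for p in range(n_s)])
    return H, beams

# Toy check with an idealized delta-function PSF and two bases.
n = 32
delta = np.zeros((n, n)); delta[0, 0] = 1.0
box = np.zeros((n, n)); box[:3, :3] = 1.0 / 9.0   # crude stand-in for a parabola basis
H, beams = hessian(delta, [delta, box])
print(H.shape)  # (2, 2); H is symmetric because convolution commutes
```

For N scales this costs O(N²) convolutions, which is why the implementation described in Sect. 3.4 only refreshes the scales a few times per minor cycle.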
3. Update the multiscale residual images before model component estimation. Calculate the multiscale residual image for each current scale updated in step 1,

I_{s_{r}}^{res} = I^{res} * I_{s_{r}}^{scale},   (12)

where I^{res} is the current residual image corresponding to the zero scale and is initialized to the dirty image. After the scales are updated, the multiscale residual images need to be updated once from the current zero-scale residual image.
4. Estimate model components. Find the global peak value across these multiscale residual images, and obtain the principal solution by dividing the peak value by the corresponding diagonal element of the Hessian matrix. This yields the model component I_{s_{r}}^{scale} * I_{s_{r}}^{loc}, whose scale is centered at the non-zero position of the δ function and whose amplitude is the principal solution.
5. Update the model. Update the current multiscale model image with the model component found in step 4,

I^{model} ← I^{model} + g · I_{s_{r}}^{scale} * I_{s_{r}}^{loc},   (13)

where g is the loop gain, which ranges from 0 to 1 and is used to suppress overestimation.
6. Update the multiscale residual images. The smoothed residual image for each scale q_{r} is updated,

I_{q_{r}}^{res} ← I_{q_{r}}^{res} − g (I^{psf} * I_{s_{r}}^{scale} * I_{q_{r}}^{scale}) * I_{s_{r}}^{loc},   (14)

using the smoothed beams stored in step 2.
Repeat the above steps 1–6 until the stop condition is met. Steps 1–6 contain an inner loop: steps 4–6 are repeated a certain number of times (e.g., five times) before the scales are updated again.
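A stripped-down version of the minor cycle (steps 4–6, with the Hessian diagonal of step 2 precomputed) might look like the following sketch. The toy PSF, the delta-plus-box bases standing in for tapered parabolas, the fixed scale list, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def convolve(a, b):
    # circular convolution via FFT (arrays share the same shape)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Toy setup: random uv mask -> PSF, extended block source -> dirty image.
n = 64
rng = np.random.default_rng(1)
S = (rng.random((n, n)) < 0.4).astype(float)
psf = np.real(np.fft.ifft2(S))
psf /= psf.max()
sky = np.zeros((n, n))
sky[30:34, 30:34] = 1.0
dirty = convolve(sky, psf)

# Two bases centered at the origin: a delta (zero scale) and a 3x3 box blur.
delta = np.zeros((n, n)); delta[0, 0] = 1.0
blur = np.zeros((n, n))
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blur[dy % n, dx % n] = 1.0 / 9.0
bases = [delta, blur]

# Hessian diagonal (step 2): peak of psf * basis * basis for each scale.
hess = [convolve(convolve(psf, b), b).max() for b in bases]

model = np.zeros((n, n))
residual = dirty.copy()
gain = 0.1
for _ in range(50):                        # steps 4-6
    smoothed = [convolve(residual, b) for b in bases]
    vals = [s.max() / h for s, h in zip(smoothed, hess)]
    s_best = int(np.argmax(vals))          # scale with the best principal solution
    iy, ix = np.unravel_index(np.argmax(smoothed[s_best]), (n, n))
    amp = gain * smoothed[s_best][iy, ix] / hess[s_best]
    comp = np.roll(bases[s_best], (iy, ix), axis=(0, 1)) * amp
    model += comp                          # step 5: update the model
    residual -= convolve(comp, psf)        # step 6: update the residual

print(residual.max() < dirty.max())  # flux has moved from the residual to the model
```

The loop gain keeps each subtraction partial, so the peak residual decays gradually rather than overshooting.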
7. Predict the model visibilities. When the root mean square (rms) limit or the flux limit of the residual in the minor cycle is reached, the current model is predicted into the measurement domain,

V^{model} = [A^{†}] I^{model},   (15)

where [A^{†}] is the transform matrix from the image domain to the visibility domain, the inverse process of calculating the dirty image from the measured data.
8. Update the residuals from the spatial frequency domain. Calculate the visibility residuals and update the image-domain residuals,

V^{res} = V^{obs} − V^{model},  I^{res} = [A] V^{res}.   (16)
Repeat the above steps until the sky emission is fully extracted.
Steps 1–6 are called the minor cycle, and steps 7 and 8 are called the major cycle. Steps 1–6 construct the random multiscale model, while steps 7 and 8 correct errors made during model reconstruction. The final residual image is often added to the final reconstructed image, which ensures that any unreconstructed signal is not neglected; this is useful for data with very sparse sampling (e.g., very long baseline interferometry).
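The major cycle (steps 7 and 8) can be sketched in the same toy FFT setting, ignoring gridding and the w-term as in Sect. 2; the source positions, mask density, and variable names are arbitrary illustrative choices.

```python
import numpy as np

# Major-cycle sketch: predict the current model to the visibility domain,
# subtract it from the measured visibilities, and image the residual.
n = 64
rng = np.random.default_rng(2)
S = (rng.random((n, n)) < 0.4).astype(float)   # toy uv sampling mask

sky = np.zeros((n, n)); sky[10, 12] = 2.0; sky[40, 30] = 1.0
V_obs = S * np.fft.fft2(sky)                   # measured visibilities

model = np.zeros((n, n)); model[10, 12] = 2.0  # suppose the minor cycle found one source

V_model = S * np.fft.fft2(model)               # step 7: predict model visibilities
V_res = V_obs - V_model                        # step 8: visibility residuals
I_res = np.real(np.fft.ifft2(V_res))           # image-domain residual for the next minor cycle

# By linearity, the residual is exactly the dirty image of the unmodeled emission.
expected = np.real(np.fft.ifft2(S * np.fft.fft2(sky - model)))
print(np.allclose(I_res, expected))
```

Subtracting in the visibility domain rather than the image domain is what lets the major cycle correct minor-cycle approximation errors against the actual measurements.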
3.4. Implementation
When the scales are updated, the Hessian matrix needs to be recalculated, which involves multiple convolution operations. In our implementation, the scales and the Hessian matrix are randomly updated only five times per minor cycle to balance the computational load and the quality of the reconstructed image.
In this implementation, the zero scale must be included. It not only models compact emission; the residual corresponding to the zero scale is also used to recalculate the multiscale residual images after the scales are updated. The latter prevents the smoothing from a previous scale from accumulating in the next scale smoothing across major cycles.
In each component search, both MS-Clean (Cornwell 2008) and RMS-Clean find the best value over multiple scales. The difference is that RMS-Clean does not only use the preset scales, but also perturbs the scales randomly. Like the Asp-Clean algorithm (Bhatnagar & Cornwell 2004), unfixed scales are used to implement scale-adaptive modeling, but the implementations differ: Asp-Clean uses explicit fitting to determine the largest scale in the current residuals, while the method in this paper works in a random way, each time determining the most suitable of the current multiple scales.
4. Results and discussion
In this section, SKA observations are simulated with the Radio Astronomy Simulation, Calibration and Imaging Library (RASCIL) software^{2} to verify the performance and the motivation of the proposed algorithm, RMS-Clean. The simulated observations were made with the core configuration of the SKA-Low array at a frequency of 100 MHz with an observation bandwidth of 1 MHz. In the first experiment, a realistic sky brightness distribution, Cassiopeia A^{3}, with complex morphological structures (i.e., the reference model image; Fig. 1 left) was observed. The PSF formed by this observation is shown in the middle panel of Fig. 1. Its side lobes are caused in the image domain by the spatial frequencies that are not measured. This non-ideal PSF obscures the details of the source in the observed Stokes I image (called the dirty image; Fig. 1 right). At the same time, the side lobes of the PSF produce obvious side-lobe structures in the non-source regions of the dirty image.
Fig. 1.
Simulation results of SKA low-frequency observations. Left: reference model image of Cassiopeia A. Middle: PSF displayed with logarithmic scaling (CASA parameter “scaling power cycles” = −1.2) to show the details of its side lobes. Right: dirty image corrupted by the PSF.

The results reconstructed by the proposed algorithm^{4} are shown in Fig. 2. The details of the source in the reconstructed model image (Fig. 2 top left) are significantly richer than those in the dirty image. The restored image (Fig. 2 bottom right) no longer contains the significant side-lobe structures seen in the non-source area of the dirty image. The model error image (Fig. 2 top right), which is the difference between the reconstructed model image and the reference model image, contains only weak compact structures. This indicates that the signal extraction from the dirty image is relatively complete, which is also confirmed by the weak structures in the residual image (Fig. 2 bottom left).
Fig. 2.
Reconstruction results for the source Cassiopeia A from the RMS-Clean algorithm. Top left: reconstructed model image. Top right: model error image, which is the difference between the reference model image and the reconstructed model image. Bottom left: residual image. Bottom right: restored image, which is the sum of the reconstructed model image convolved with the CLEAN beam and the final residuals. The specified scale list is [0, 7, 15, 22, 30] pixels, which is randomly perturbed during the RMS-Clean deconvolution. The scale list can also be written in terms of the PSF main lobe, which has more physical meaning: for this experiment, [0, 7, 15, 22, 30] pixels = [0, 3.3, 7.1, 10.5, 14.3] × 2.1 pixels. The two representations are completely equivalent, because the PSF main lobe is constant for a given experiment and only a common factor is extracted in the second representation.

To further verify the RMS-Clean algorithm, a similar experiment was performed with the source g21^{5} (Fig. 3 top), and we compared it with the commonly used diffuse emission reconstruction method MS-Clean. The reconstruction results are shown in Fig. 4. Both methods reconstruct the diffuse source well (Fig. 4 top), but the RMS-Clean algorithm extracts the signal more completely, so that the residual image contains fewer structures. The off-source rms and the full rms of the RMS-Clean residual image are significantly smaller than those of the MS-Clean algorithm (Table 1), and the dynamic range of the RMS-Clean restored image is four times higher than that of MS-Clean (Table 1). A source with a different structure, M87lo^{6} (Fig. 3 bottom), was simulated to further verify the performance of RMS-Clean. The result is consistent with the previous conclusion: the structures in the residual image are weaker and the dynamic range of the restored image is higher (Fig. 5 and Table 1). These experiments show that RMS-Clean performs better than the commonly used MS-Clean (Tables 2 and 3).
Fig. 3.
Simulation results of sources g21 and M87lo. Top left: g21 reference model image. Top right: g21 dirty image. Bottom left: M87lo reference model image. Bottom right: M87lo dirty image. 

Numerical comparison of different algorithms for the g21 and M87lo simulations.
Numerical comparison of different specified scale lists for the g21 simulation.
Numerical comparison of different tests for reconstruction stability of random scales.
For the RMS-Clean deconvolution, we list some noteworthy features below.
1. The scale sizes in the RMS-Clean algorithm are randomly perturbed during the deconvolution process. Experiments show that the RMS-Clean algorithm finds a better solution more easily than the MS-Clean algorithm with a fixed scale list; the fixed scale list of MS-Clean is more likely to fall into a local optimum in some cases. For example, a ring structure in the residuals during deconvolution may have a small amplitude but a large scale size; the MS-Clean algorithm cannot easily escape this situation to reach a global optimum, whereas RMS-Clean with random scales can quickly escape this point and find a new update direction, obtaining a better reconstruction. Table 2 shows that the performance improvement of the RMS-Clean algorithm is visible for different specified scale lists, even when the scale list of the MS-Clean algorithm is “optimal”.
2. Compared with MS-Clean, the deconvolution error ϵ of RMS-Clean is smaller. This is verified by the fewer structures in the residual images from a morphological point of view (see Figs. 4 and 5) and by a lower rms (see Tables 1 and 2).
Fig. 4.
Reconstruction results for the source g21 from different algorithms. Top left: reconstructed model image from the MS-Clean algorithm. Top right: reconstructed model image from the RMS-Clean algorithm. Bottom left: residual image from the MS-Clean algorithm. Bottom right: residual image from the RMS-Clean algorithm. The specified scale list is [0, 7, 15, 25, 40] pixels.

Fig. 5.
Reconstruction results for the source M87lo from different algorithms. Top left: reconstructed model image from the MS-Clean algorithm. Top right: reconstructed model image from the RMS-Clean algorithm. Bottom left: residual image from the MS-Clean algorithm. Bottom right: residual image from the RMS-Clean algorithm. The specified scale list is the same as in Fig. 4.

3. In repeated experiments that do not reuse stored random perturbation coefficients, we have observed some oscillation of the results caused by the random scaling mechanism in the RMS-Clean algorithm (Table 3). However, each deconvolution can be reproduced exactly from the stored random perturbation coefficients. At the same time, this oscillation can be exploited to find a better solution by running multiple tests and selecting with the criteria in Table 3 combined with morphological features, as in other CLEAN algorithms.
4. RMS-Clean uses more scales, which allows it to handle the scale uncertainty of the sky brightness distribution and to obtain a better representation, whereas MS-Clean can only represent the sky brightness distribution with a few fixed scales.
5. The scale randomness of the RMS-Clean algorithm is obtained by adopting a user-specified scale list and randomly perturbing it during the deconvolution process. RMS-Clean is therefore very similar to the general MS-Clean in its use, which helps MS-Clean users migrate to the RMS-Clean algorithm.
5. Summary
Modern high-sensitivity telescopes can observe weaker structures, and the observed sky brightness distribution tends to be more complicated. These dim and complex structures pose great challenges to the restoration of celestial brightness distributions. The introduction of a multiscale parameterization of the sky brightness distribution can effectively represent the correlation between pixels within a scale, which improves the reconstruction quality of extended sources.
The existing multiscale methods model the sky brightness distribution as a linear combination of multiscale basis functions, and the multiscale list needs to be specified by the user. Because of factors such as computational overhead and memory limits, only a few scale sizes are usually employed, and the sky brightness distribution is forcibly represented by these specified scales. Owing to the scale uncertainty of the sky brightness distribution, such a representation leaves much unmodeled emission, which requires many compact components to represent.
In this paper, a random multiscale method was introduced to address the reconstruction problem posed by the scale uncertainty of the sky brightness distribution. The scales in RMS-Clean vary randomly within a certain range by means of random perturbations. This not only deals with the scale uncertainty of the sky brightness distribution, but also increases the number of scales, which improves the reconstruction quality. At the same time, the experiments show that the random multiscale mechanism helps the deconvolution escape local optima, which leads to a better solution.
RASCIL can be found at https://developer.skatelescope.org/projects/simtools/en/latest/ or https://github.com/SKAScienceDataProcessor/rascil; it is officially developed for radio interferometry imaging by the SKA organization.
https://public.nrao.edu/gallery/cassiopeiaa/, Credit: L. Rudnick, T. Delaney, J. Keohane & B. Koralesky.
https://public.nrao.edu/gallery/pulsarwindnebula/, Credit: NRAO /AUI/NSF and NASA/CXC.
https://www.nrao.edu/pr/1999/m87big/, Credit: F.N. Owen, J.A. Eliek and N.E. Kassim.
Acknowledgments
The authors sincerely thank the anonymous referee for constructive advice. We would like to thank the people who have worked and are working on the Python, RASCIL, and CASA projects, which together built an excellent development and simulation environment. This work was partially supported by the National Natural Science Foundation of China (NSFC, 11963003, 61572461, 11790305, U1831204), the National SKA Program of China (2020SKA0110300), the National Key R&D Program of China (2018YFA0404602, 2018YFA0404603), the Guizhou Science & Technology Plan Project (Platform Talent No.[2017]5788), the Youth Science & Technology Talents Development Project of Guizhou Education Department (No.KY[2018]119,[2018]433]), the Guizhou University Talent Research Fund (No.(2018)60), and the “Light of West China” Programme (2017XBQNXZA008).
References
Bhatnagar, S., & Cornwell, T. J. 2004, A&A, 426, 747
Clark, B. G. 1980, A&A, 89, 377
Cornwell, T. J. 2008, IEEE J. Sel. Top. Signal Process., 2, 793
Cornwell, T. J., Golap, K., & Bhatnagar, S. 2008, IEEE J. Sel. Top. Signal Process., 2, 647
Carrillo, R. E., McEwen, J. D., & Wiaux, Y. 2012, MNRAS, 426, 1223
Dabbech, A., Ferrari, C., Mary, D., et al. 2015, A&A, 576, A7
Dewdney, P., Hall, P., Schilizzi, R., & Lazio, T. 2009, Proc. IEEE, 97, 1482
Högbom, J. A. 1974, A&AS, 15, 417
Girard, J. N., Garsden, H., Starck, J. L., et al. 2015, A&A, 575, A90
Narayan, R., & Nityananda, R. 1986, ARA&A, 24, 127
Offringa, A. R., & Smirnov, O. 2017, MNRAS, 471, 301
Rau, U. 2010, PhD Thesis, New Mexico Institute of Mining and Technology, Socorro, NM, USA
Rau, U., & Cornwell, T. J. 2011, A&A, 532, A71
Taylor, G. B., Carilli, C. L., & Perley, R. A. 1999, Synthesis Imaging in Radio Astronomy II, ASP Conf. Ser., 180
Thompson, A. R., Moran, J. M., & Swenson, G. W. 2017, Interferometry and Synthesis in Radio Astronomy, 3rd edn. (Springer)
Wu, X. P. 2019, China SKA Science Report (Science Press), ISBN: 9787030629791
Zhang, L., Bhatnagar, S., Rau, U., & Zhang, M. 2016, A&A, 592, A128
Zhang, L., Xu, L., & Zhang, M. 2020, PASP, 132, 041001