Open Access
Issue
A&A
Volume 673, May 2023
Article Number A151
Number of page(s) 18
Section Numerical methods and codes
DOI https://doi.org/10.1051/0004-6361/202245393
Published online 24 May 2023

© The Authors 2023

Licence: Creative Commons. Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This article is published in open access under the Subscribe to Open model.

Open Access funding provided by Max Planck Society.

1 Introduction

In very long baseline interferometry (VLBI) the signals recorded at single antennas are correlated to achieve spatial resolution that would not be achievable with single-dish instruments. The correlation product of every antenna pair at a fixed time is the Fourier coefficient (visibility) of the true sky brightness distribution with a Fourier frequency determined by the projected spatial vector joining two antennas (baseline). As the Earth rotates during the observing run, baselines rotate on elliptical tracks in the Fourier domain, hence filling up the Fourier plane (uv-plane) continuously. However, due to the limited number of antennas and the limited amount of observing time, the coverage of Fourier coefficients (uv-coverage) is sparse. The procedure to recover the true sky brightness distribution from these sparsely covered Fourier coefficients is called VLBI imaging.

It is a long-standing frontline goal in astronomy to recover images of the shadow of a supermassive black hole. The Event Horizon Telescope (EHT) is a globally spanning VLBI array that observes at 230 GHz (with a recent upgrade to 345 GHz). With the combination of global baselines and short baselines, the EHT achieves the angular resolution that is needed to capture the first image of the black hole shadow in M87 (Event Horizon Telescope Collaboration 2019a) and in the Milky Way (Event Horizon Telescope Collaboration 2022a). The next-generation Event Horizon Telescope (ngEHT) is a planned extension of the EHT (Doeleman et al. 2019; Johnson et al. 2023). It will produce movies of the accretion onto the central black hole Sgr A* at the scales of the event horizon (Roelofs et al. 2023; Emami et al. 2023). The dynamic timescales for these observations are very short. Observations of Sgr A* in the submillimeter (Bower et al. 2015; Wielgus et al. 2022) and near-infrared regime (GRAVITY Collaboration 2018a,b) confirm that Sgr A* is time-varying on timescales as short as 30 min. The predicted innermost stable circular orbit (ISCO) period varies between 4 min and roughly 30 min, depending on the spin of the black hole. Palumbo et al. (2019) concluded that a well-sampled baseline coverage on timescales of ~30 min is needed to recover the source dynamics.

CLEAN (Högbom 1974) and its many variants (Clark 1980; Schwab 1984; Wakker & Schwarz 1988; Bhatnagar & Cornwell 2004; Cornwell 2008; Rau & Cornwell 2011; Müller & Lobanov 2023) have served the community well for decades, but are recently being challenged by forward-imaging approaches in the spirit of regularized maximum likelihood (RML) methods (Narayan & Nityananda 1986; Wiaux et al. 2009; Garsden et al. 2015; Ikeda et al. 2016; Chael et al. 2016, 2018; Akiyama et al. 2017b,a; Event Horizon Telescope Collaboration 2019b; Müller & Lobanov 2022) and Bayesian approaches (Arras et al. 2019, 2021; Broderick et al. 2020a,b). Recently we developed new multiresolution tools for performing VLBI imaging (Müller & Lobanov 2022, 2023). For these multiscalar approaches we designed special wavelet-based basis functions (difference of Gaussian and difference of spherical Bessel functions) and fitted the basis functions to the uv-coverage. In this way we define smooth basis functions that are well suited to describe (compress) the recovered image features by encoding information about the uv-coverage itself. Some wavelets are most sensitive to gaps in the uv-coverage, while others are most sensitive to covered Fourier coefficients. While the signal from the latter should be recovered, the signal from the former is suppressed (effectively avoiding overfitting).

As a byproduct of these multiscalar imaging algorithms, we compute the multiresolution support (mr-support; Müller & Lobanov 2022): a set of wavelet parameters that are deemed statistically significant to represent the recovered image features. The multiresolution support encodes various information about the recovered image. First, it implements a support constraint (to determine where the emission is located in the image). Second, it encodes a spatial constraint (to understand which spatial scales are needed to represent the image features at these locations). In particular, the second prior information is determined by the spatial scales that are present in the data (i.e., covered by baselines in the observation). We demonstrated in Müller & Lobanov (2022) that the multiresolution support is a powerful prior information very well suited to refining the imaging procedure. In Müller & Lobanov (2022) we proposed the addition of amplitudes and phases to the data terms and the removal of any regularizer term, in order to solve the resulting optimization problem by only updating the coefficients in the multiresolution support. The fit to the observed visibilities improved, but without the addition of spurious artifacts that are typical of overfitting.

Besides Stokes I imaging, full polarimetric imaging is of interest to the VLBI community, both theoretically (Blandford & Znajek 1977; Hardee et al. 2007; Kramer & MacDonald 2021) and observationally (e.g., Gómez et al. 2011, 2016; Hovatta et al. 2012; Zamaninasab et al. 2014; Pötzl et al. 2021; Ricci et al. 2022, among many others), in particular at event horizon scales (Event Horizon Telescope Collaboration 2021a,b). In polarimetric imaging the recorded data are separated into several polarized subbands and recombined into the four Stokes parameters. Essentially, we have four Stokes parameters (I, Q, U, V) and corresponding polarized visibilities. Hence, the problem that we aim to solve for the other three Stokes parameters is the same as for Stokes I: recovering a signal from a sparse measurement of the Fourier coefficients. However, there are some slight differences. While the Stokes I image is necessarily nonnegative (a fact used during imaging as a prior), this does not have to be true for Stokes Q, U, and V. Moreover, the constraint $I^2 \geq Q^2 + U^2 + V^2$ applies.

The multiresolution support is a well-suited prior for polarimetric imaging once the Stokes I image has been obtained. The support constraint of the multiresolution support encodes the information that linearly and circularly polarized emission theoretically can only appear at locations where total intensity (Stokes I) is greater than zero. This might not reflect the observational situation in every case. Sometimes the Stokes I signature cannot be retrieved with the spatial sensitivity of the interferometer, while a more localized (e.g., due to Faraday rotation) polarized structural pattern is visible. However, in most VLBI studies this pathological situation does not appear and the support constraint is a good approximation. Moreover, the spatial constraint adheres to the fact that the polarimetric visibilities have the same uv-coverage as the total intensity visibilities (i.e., the same spatial scales, those covered by the uv-coverage, are present in the polarized images).

Another domain of current research is the study of dynamic sources, such as Sgr A*. Two problems arise in this case: the static imaging of a dynamically evolving source, as in Event Horizon Telescope Collaboration (2022b), and the dynamic reconstruction of a movie (Roelofs et al. 2023). In this work we focus on the latter problem. Data sets of dynamic sources pose additional challenges. Due to the short variability timescale, the effective uv-coverage in every frame is not sufficient for snapshot imaging. Modern approaches utilize a temporal correlation instead, in a Bayesian framework (Bouman et al. 2018; Broderick et al. 2022; Roelofs et al. 2023) or as a temporal regularizer in the RML framework (Johnson et al. 2017; Bouman et al. 2018; Chael et al. 2022; Roelofs et al. 2023). Moreover, source variability could be misinterpreted as gain variations during calibration (Event Horizon Telescope Collaboration 2022b).

Again, the multiresolution support (computed for the time-averaged image) encodes prior information that is very desirable for dynamic imaging. The support constraint encodes the information that every location of an emission spike appearing during the observation is also present in the mean image. The uv-coverage of the full observation run is the sum of the uv-coverages of the single frames. Hence, the spatial constraint also provides a powerful image prior for dynamic imaging; the multiresolution support only allows spatial scales that are present in the mean image (in the full observation run), which means that the fit in the gaps of the uv-coverage remains under control. On the other hand, the spatial constraint allows for the addition of spatial scales to single frames that might not be represented in the uv-coverage of the single frame, but are in earlier or later snapshots. However, we note that there may be a bias toward larger scales since the mean image suppresses small-scale structures present in only part of the individual frames.

Based on the success of the approach presented in Müller & Lobanov (2022) of only changing the coefficients in the multiresolution support to introduce effective regularization, we propose the same approach for static polarimetric imaging and dynamic imaging. As outlined above, the multiresolution support is well suited to be used as a regularizer in these problems as it exactly encodes the prior information that is needed. As we solve two quite different extensions to the standard VLBI imaging with the same approach, it is natural to use the same approach for the combined problem: a dynamic polarimetric reconstruction.

2 Theory

2.1 VLBI

As described by the van Cittert–Zernike theorem, the visibilities $\mathcal{V}$ are related to the true sky brightness distribution $I(x, y)$ by a two-dimensional Fourier transform under reasonable assumptions (Thompson et al. 2017):
$$\mathcal{V}(u, v) = \iint I(x, y)\, e^{-2\pi i (ux + vy)}\, \mathrm{d}x\, \mathrm{d}y. \quad (1)$$

From a full coverage of the Fourier coefficients (visibilities), the true sky brightness distribution could be computed by an inverse Fourier transform. However, in VLBI the uv-coverage is very sparse, with significant gaps. This makes the recovery of the image an ill-posed inverse problem. The polarized quantities are measured at every antenna with orthogonal polarimetric filters (linear or circular). The cross-correlation of these signals gives rise to the four Stokes parameters and their respective polarimetric visibilities:
$$\mathcal{V}_I(u, v) = \iint I(x, y)\, e^{-2\pi i (ux + vy)}\, \mathrm{d}x\, \mathrm{d}y, \quad (2)$$
$$\mathcal{V}_Q(u, v) = \iint Q(x, y)\, e^{-2\pi i (ux + vy)}\, \mathrm{d}x\, \mathrm{d}y, \quad (3)$$
$$\mathcal{V}_U(u, v) = \iint U(x, y)\, e^{-2\pi i (ux + vy)}\, \mathrm{d}x\, \mathrm{d}y, \quad (4)$$
$$\mathcal{V}_V(u, v) = \iint V(x, y)\, e^{-2\pi i (ux + vy)}\, \mathrm{d}x\, \mathrm{d}y. \quad (5)$$

Here I is the total brightness, Q and U describe the linear polarization, and V the circular polarization. By construction,
$$I^2(x, y) \geq Q^2(x, y) + U^2(x, y) + V^2(x, y). \quad (6)$$
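As a concrete illustration of Eqs. (1)-(6), the sketch below samples polarized visibilities of pixelized Stokes maps by a direct (slow) Fourier sum. This is a minimal illustration, not the authors' pipeline; the function name `stokes_visibilities` and the array conventions are our own assumptions.

```python
import numpy as np

def stokes_visibilities(images, u, v, x, y):
    """Sample polarized visibilities of Stokes maps at (u, v) points by a
    direct evaluation of the van Cittert-Zernike integral (slow but explicit).

    images : dict of 2D arrays, e.g. {'I': ..., 'Q': ...}, indexed as [y, x]
    u, v   : 1D arrays of spatial frequencies (units of 1/x)
    x, y   : 1D coordinate grids of the image axes
    """
    X, Y = np.meshgrid(x, y, indexing="xy")
    vis = {}
    for stokes, img in images.items():
        # phase matrix: one row per (u, v) point, one column per pixel
        phase = np.exp(-2j * np.pi * (u[:, None] * X.ravel()[None, :]
                                      + v[:, None] * Y.ravel()[None, :]))
        vis[stokes] = phase @ img.ravel()
    return vis
```

For a point source at the phase center, the visibility amplitude is flat across the uv-plane, and the zero-spacing visibility always equals the total flux.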

2.2 Imaging

Imaging with the CLEAN algorithm and its variants (Högbom 1974; Schwab 1984; Wakker & Schwarz 1988) has been the standard in VLBI imaging for decades. In CLEAN the imaging problem is equivalently reformulated as a deconvolution problem,
$$I^D = B^D * I, \quad (7)$$

where ID is called the dirty map (the inverse Fourier transform of all measured, and probably reweighted, Fourier coefficients) and BD is called the dirty beam (the dirty map of a synthetic delta source). The astronomer using CLEAN determines some search windows for components; CLEAN looks for the maximum peak in the residual in this window (minor loop) and subtracts the shifted and rescaled dirty beam from the residual (major loop). This procedure is iterated until the residual is noise-like. In this way, CLEAN models the image as a set of delta functions. Finally, these components are restored with a restoring beam (clean beam) that fits the central peak of the dirty beam. CLEAN is an inverse modeling approach to the imaging problem.
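The iteration described above can be sketched as a minimal Högbom-style loop. This is for illustration only; the function name, the loop-control parameters (`gain`, `niter`, `threshold`), and the convention that the dirty beam array is twice the map size are our own assumptions.

```python
import numpy as np

def hogbom_clean(dirty_map, dirty_beam, gain=0.1, niter=200, threshold=1e-4):
    """Minimal Hogbom CLEAN: repeatedly subtract the shifted, rescaled dirty
    beam at the current residual peak; model the image as delta components."""
    residual = dirty_map.copy()
    components = np.zeros_like(dirty_map)
    n = dirty_map.shape[0]
    c = dirty_beam.shape[0] // 2          # beam centre (beam is 2n-1 wide)
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[iy, ix]
        if np.abs(peak) < threshold:      # stop when residual is noise-like
            break
        components[iy, ix] += gain * peak
        # subtract the dirty beam shifted so its centre lands on (iy, ix)
        beam_patch = dirty_beam[c - iy:c - iy + n, c - ix:c - ix + n]
        residual -= gain * peak * beam_patch
    return components, residual
```

In a real implementation the components would finally be convolved with the restoring (clean) beam and added back to the residual.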

Recently, forward-modeling approaches have gained interest in the community in the framework of RML (Akiyama et al. 2017a; Chael et al. 2018; Müller & Lobanov 2022) and Bayesian methods (Arras et al. 2019; Broderick et al. 2020a,b). These methods seem to outperform classical CLEAN in terms of speed, spatial resolution, sensitivity, and precision, in particular when the uv-coverage is sparse (e.g., Event Horizon Telescope Collaboration 2019b; Arras et al. 2021; Müller & Lobanov 2022; Roelofs et al. 2023). On the other hand, these forward-modeling methods require the fine-tuning of some hyperparameters and regularization parameters, despite the recent effort to reduce this dependence (Müller & Lobanov 2022). For the remainder of this manuscript we focus on RML methods, and ignore Bayesian approaches for now.

In RML a weighted sum of data fidelity terms and penalty terms is minimized:
$$\hat{I} = \operatorname{argmin}_{I} \Big[ \sum_i \alpha_i S_i(I) + \sum_j \beta_j R_j(I) \Big], \quad (8)$$

where the data fidelity terms $S_i$ measure the fidelity of the recovered solution I to the observed data (i.e., polarized visibilities) and the regularization terms $R_j$ measure the plausibility of the guess image I. The regularization parameters $\alpha_i$ and $\beta_j$ are manually set weights that balance data fidelity and regularization terms. Typical choices for the data terms are chi-square values for the observed (polarimetric) visibilities and related calibration-independent quantities such as closure phases and closure amplitudes. For the regularization terms a wide range of regularizers has been applied in the past, for example sparsity-promoting regularization ($l_1$, $l_2$), smoothness constraints (total variation, total squared variation), hard constraints (total flux, non-negativity), entropy maximization (MEM), and multiscale decompositions (hard thresholding on scales). The regularization terms introduce regularization to the ill-posed imaging problem. By balancing the data terms and the regularization terms, we select a possible guess solution that fits the data (small data terms) and is robust against noise and artifacts (small penalty terms). We demonstrated in previous works (Müller & Lobanov 2022) that a support constraint has the same regularization effect. By constraining the space of free parameters to the multiresolution support, we were able to refine the fit to the observed data in later imaging rounds.
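A minimal RML-style solver illustrating the balance in Eq. (8) might look as follows, here with a single chi-square data term and an $l_1$ sparsity regularizer, solved by proximal gradient descent (ISTA). Real-valued data are assumed for brevity; the function name and defaults are illustrative, not from any published code.

```python
import numpy as np

def rml_solve(A, d, beta=0.01, step=None, niter=500):
    """Sketch of an RML reconstruction: minimize the chi-square data term
    ||A x - d||^2 plus a sparsity-promoting penalty beta * ||x||_1 by
    proximal gradient descent (ISTA)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(niter):
        grad = A.T @ (A @ x - d)                 # gradient of the data term
        x = x - step * grad                      # data-fidelity (forward) step
        # proximal step for beta * ||x||_1: soft thresholding
        x = np.sign(x) * np.maximum(np.abs(x) - step * beta, 0.0)
    return x
```

Raising `beta` shifts the balance from data fidelity toward the penalty term, exactly the trade-off controlled by the weights in Eq. (8).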

2.3 Wavelets

Multiscalar dictionaries form the basis of multiscalar approaches. In Müller & Lobanov (2022) we proposed the use of radially symmetric difference of Gaussian (DoG) wavelets, and extended them to direction-dependent basis functions in Müller & Lobanov (2023). Moreover, we introduced in Müller & Lobanov (2023) steep quasi-orthogonal basis functions, differences of Bessel functions (DoB), to study the Fourier domain. Both dictionaries (DoG and DoB) are related to each other: the DoG wavelets approximate the central peak of the DoB wavelets, but do not contain the wider sidelobes of the latter. In what follows we briefly summarize these wavelet dictionaries. For more detailed information we refer to Müller & Lobanov (2022, 2023).

Wavelets have a wide range of applications in image compression. The most widely used continuous wavelet is the Mexican-hat wavelet, a rescaled second-order derivative of a Gaussian (Laplacian of Gaussians; Starck et al. 2015). The difference of Gaussians offers a viable approximation to the Mexican-hat wavelet. A DoG wavelet is described by two width parameters $\sigma_1, \sigma_2$:
$$\Psi^{\mathrm{DoG}}_{\sigma_1, \sigma_2}(r) = \frac{1}{2\pi\sigma_1^2} \exp\left(-\frac{r^2}{2\sigma_1^2}\right) - \frac{1}{2\pi\sigma_2^2} \exp\left(-\frac{r^2}{2\sigma_2^2}\right). \quad (9)$$

The Fourier transforms of these DoG wavelets define ring-like filters in the Fourier domain:
$$\hat{\Psi}^{\mathrm{DoG}}_{\sigma_1, \sigma_2}(q) = \exp\left(-2\pi^2\sigma_1^2 q^2\right) - \exp\left(-2\pi^2\sigma_2^2 q^2\right). \quad (10)$$
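Eqs. (9) and (10) translate directly into code; the sketch below evaluates a DoG wavelet and its ring-like Fourier filter (function names are ours). The filter vanishes at the origin, reflecting the zero mean of the wavelet, and is positive on a band of spatial frequencies between the two Gaussian cutoffs.

```python
import numpy as np

def dog_kernel(r, s1, s2):
    """Difference-of-Gaussians wavelet of widths s1 < s2 (cf. Eq. 9), with
    each Gaussian normalized to unit integral in 2D."""
    g1 = np.exp(-r**2 / (2 * s1**2)) / (2 * np.pi * s1**2)
    g2 = np.exp(-r**2 / (2 * s2**2)) / (2 * np.pi * s2**2)
    return g1 - g2

def dog_filter_fourier(q, s1, s2):
    """Fourier transform of the DoG wavelet: a ring-like bandpass filter
    (cf. Eq. 10), zero at q = 0 and positive on intermediate frequencies."""
    return np.exp(-2 * np.pi**2 * s1**2 * q**2) - np.exp(-2 * np.pi**2 * s2**2 * q**2)
```

Choosing the pair (s1, s2) so that the positive band of the filter covers a group of measured baselines makes the wavelet "sensitive" to exactly those Fourier coefficients.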

The extension to DoB wavelets is natural. We replace the Gaussians by spherical Bessel functions $B_\sigma$ of corresponding widths:
$$\Psi^{\mathrm{DoB}}_{\sigma_1, \sigma_2}(r) = B_{\sigma_1}(r) - B_{\sigma_2}(r). \quad (11)$$

Moreover, the extension of both wavelet families to direction-dependent basis functions is straightforward as well: the radial coordinates just need to be replaced by elliptical ones.

The wavelet decomposition is composed of the wavelet basis functions for a sequence of increasing widths $\sigma_0 \leq \sigma_1 \leq \ldots \leq \sigma_J$:
$$\Psi^{\mathrm{DoG}} I = \left[ \Psi^{\mathrm{DoG}}_{\sigma_0, \sigma_1} * I,\; \Psi^{\mathrm{DoG}}_{\sigma_1, \sigma_2} * I,\; \ldots,\; \Psi^{\mathrm{DoG}}_{\sigma_{J-1}, \sigma_J} * I,\; G_{\sigma_J} * I \right] \quad (12)$$
$$=: \left[ I_0, I_1, \ldots, I_J \right], \quad (13)$$
where $G_{\sigma_J}$ denotes a Gaussian of width $\sigma_J$ representing the coarsest (smoothed residual) scale.

For direction-dependent dictionaries we use elliptical Gaussians and Bessel functions instead. For more details we refer to our discussion in Müller & Lobanov (2023). The multiscale dictionary is the adjoint of the multiscale decomposition (in what follows called $\Gamma$):
$$\Gamma \left[ I_0, I_1, \ldots, I_J \right] = \sum_{j=0}^{J} \Psi_j * I_j, \quad (14)$$
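A simple way to mimic this decomposition numerically is to difference increasingly smoothed copies of the image and keep the coarsest smoothing as the residual scale. Summing all scales then telescopes exactly back to the input (when the first width is zero). Note that this plain summation stands in for the paper's adjoint operator Γ purely for illustration; the function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_decompose(image, sigmas):
    """DoG-style wavelet decomposition (cf. Eqs. 12-13): differences of
    increasingly smoothed copies of the image plus a final smooth residual."""
    smoothed = [gaussian_filter(image, s) if s > 0 else image.copy()
                for s in sigmas]
    scales = [smoothed[j] - smoothed[j + 1] for j in range(len(sigmas) - 1)]
    scales.append(smoothed[-1])           # coarsest (residual) scale
    return scales

def dog_reconstruct(scales):
    """Synthesis by summation: the differences telescope, so the sum recovers
    the sigma_0-smoothed image (the image itself when sigma_0 = 0)."""
    return np.sum(scales, axis=0)
```

This telescoping property is what makes the decomposition lossless: no information is created or destroyed by splitting the image into scales.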

with an analogous action for DoB wavelets and multidirectional wavelets. The complete action of the multiscalar and multidirectional wavelet decomposition is presented in the appendix.

Fig. 1. Illustration of wavelet decomposition and multiresolution support of an astronomical image. Left panels: true image and true image with additional Gaussian noise. Middle panels: wavelet decomposition of the noised image with the DoG wavelet dictionary, computed with filter sizes σ0 = 1, σ1 = 2, σ2 = 4, …, σ5 = 32 pixels. Right panels: multiresolution support computed by thresholding the wavelet scales at the scale-dependent noise level, plotted as a mask with value 1 (coefficient in the multiresolution support) or 0 (coefficient not in the multiresolution support).

2.4 DoG-HiT

Our novel algorithm for dynamic polarimetric reconstructions is an extension of the DoG-HiT algorithm (Müller & Lobanov 2022). We summarize this algorithmic framework in this section. DoG-HiT models the image by a radially symmetric wavelet dictionary $\Psi^{\mathrm{DoG}}$. The Fourier transforms of the basis functions of the dictionary (atoms) are sensitivity filters in the Fourier domain. Hence, by fitting the widths of the Gaussians to the uv-coverage, we define wavelets that are most sensitive to measured Fourier coefficients and wavelets that are most sensitive to gaps in the uv-coverage. The signal of the former should be kept, while the lack of measurements for the latter atoms causes sidelobes in the image. In this way the dictionary allows for a better separation between measured features (covered by baselines) and uncovered artifacts. We interpolate the signal in the gaps by the smooth nature of the basis functions, but suppress the signal in the gaps to a level where overfitting is prohibited. All in all, we solve the minimization problem (Müller & Lobanov 2022):
$$\hat{\lambda} = \operatorname{argmin}_{\lambda} \left[ S_{\mathrm{cph}}(\Gamma\lambda) + S_{\mathrm{lca}}(\Gamma\lambda) + \beta \left\| \lambda \right\|_0 + R_{\mathrm{flux}}(\Gamma\lambda) \right]. \quad (15)$$

Here $S_{\mathrm{cph}}$ and $S_{\mathrm{lca}}$ denote the $\chi^2$ fits to the closure phases and closure amplitudes, respectively, and $R_{\mathrm{flux}}$ denotes a characteristic function on the total flux of the guess solution. We use the $l_0$ pseudo-norm (i.e., the number of non-zero coefficients) as a sparsity-promoting regularization term, weighted with a regularization parameter β. Equation (15) is solved by a forward–backward splitting algorithm, alternated with rescaling the emission to a predefined total flux (Müller & Lobanov 2022). The final recovered solution is
$$\hat{I} = \Gamma \hat{\lambda}. \quad (16)$$
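The forward–backward iteration can be sketched generically: a gradient (forward) step on the differentiable data terms followed by the proximal (backward) step of the $l_0$ pseudo-norm, which is a hard thresholding. The threshold level sqrt(2βt) for step size t is the standard $l_0$ proximal operator; the function interface below is our own illustration, not the DoG-HiT code.

```python
import numpy as np

def forward_backward_l0(grad_data, lam0, beta, step, niter=200):
    """Forward-backward splitting for (data term) + beta * ||lam||_0:
    gradient step on the data term, then the l0 proximal step, which zeroes
    all coefficients below a hard threshold."""
    lam = lam0.copy()
    thresh = np.sqrt(2 * beta * step)     # hard-threshold level of the l0 prox
    for _ in range(niter):
        lam = lam - step * grad_data(lam)         # forward (gradient) step
        lam[np.abs(lam) < thresh] = 0.0           # backward (hard threshold)
    return lam
```

The surviving non-zero coefficients after convergence are precisely the statistically significant ones, which is why the multiresolution support falls out of this iteration as a byproduct.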

The regularization parameter β is the only free parameter that needs to be chosen manually by the user. The number of free parameters is therefore much smaller than for RML methods such as ehtim (Chael et al. 2016, 2018) or SMILI (Akiyama et al. 2017b,a), since the penalty term is chosen in a data-driven way. We demonstrated in Müller & Lobanov (2022) that, although the optimization landscape is much simpler, the reconstructions obtained by DoG-HiT are competitive with RML reconstructions. Moreover, DoG-HiT only fits closure phases and closure amplitudes in Eq. (15) (i.e., the reconstruction is robust against instrumental gain corruptions). Next, we use the model computed by DoG-HiT for self-calibration (i.e., we determine the gains).

2.5 Multiresolution support

A specific property of the multiscalar decompositions is the multiresolution support. Mertens & Lobanov (2015) paved the way for the application of the multiresolution support in the analysis of active galactic nucleus (AGN) jets. The multiresolution support is a set of wavelet components that are statistically significant (Starck et al. 2015). We decompose a noisy image by a wavelet dictionary: $[I_0, I_1, I_2, \ldots, I_J] = \Psi I$. Moreover, we compute the scale-dependent noise level $s_j$ by decomposing a Gaussian white noise field with the same wavelet dictionary. Given a threshold $k$, we can define a set of statistically significant wavelet coefficients with the criterion $|I_j(x, y)| \geq k s_j$, where the noise level is approximated by the variance in an emission-free region of the image scale $I_j$ (i.e., far away from the center). The multiresolution support for a celestial ground truth image from the EHT imaging challenges1 is illustrated in Fig. 1.
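The construction of the multiresolution support can be sketched as follows, assuming a DoG-style decomposition built from Gaussian smoothings; the scale-dependent noise level is estimated here by decomposing a white-noise field with the same dictionary, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiresolution_support(image, sigmas, k=3.0, rng=None):
    """Multiresolution support: decompose the image into DoG-style scales and
    keep the coefficients exceeding k times the scale-dependent noise level
    s_j, estimated from a white-noise field decomposed the same way."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(image.shape)

    def scales(im):
        sm = [gaussian_filter(im, s) if s > 0 else im for s in sigmas]
        return [sm[j] - sm[j + 1] for j in range(len(sigmas) - 1)] + [sm[-1]]

    support = []
    for I_j, N_j in zip(scales(image), scales(noise)):
        s_j = N_j.std()                   # scale-dependent noise level
        support.append(np.abs(I_j) >= k * s_j)
    return support
```

The result is one boolean mask per scale, exactly the 0/1 masks shown in the right panels of Fig. 1.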

The multiresolution support encodes two different types of prior information about the model. First, it encodes a support constraint (i.e., it defines the position of significant emission spikes in the field of view). Second, the multiresolution support contains information about the spatial scales that are present in the observation. In sparse VLBI arrays, this is dominated by the uv-coverage (i.e., by which spatial scales are covered by observed baselines in the Fourier domain). As various wavelet basis functions are most sensitive to various baselines or gaps in the uv-coverage, the information about which spatial scales are covered by observations is directly imprinted in the multiresolution support. This is especially true for the direction-dependent DoG and DoB wavelets used for DoG-HiT that were fitted to the uv-coverage (i.e., that were developed to allow an optimal separation between covered features and gaps in the uv-coverage).

DoG-HiT solves the minimization of Eq. (15) with a forward–backward splitting algorithm. The backward projection step is the application of the proximal-point operator of the $l_0$ penalization function, which is a hard thresholding (Müller & Lobanov 2022). Hence, all insignificant wavelet coefficients are set to zero. DoG-HiT therefore computes an approximation of the multiresolution support as a byproduct. This support was used to refine later imaging rounds (Müller & Lobanov 2022).

The computation of the multiresolution support as a byproduct of DoG-HiT highlights an essential improvement of DoG-HiT over CLEAN regarding supervision. The support of significant emission is found by DoG-HiT automatically, while in CLEAN it has to be selected through user-defined CLEAN windows. DoG-HiT is therefore less user-biased and provides (compared to standard RML frameworks and CLEAN) an essential step toward unsupervised VLBI imaging.

3 Algorithms

We outline in this section the algorithms used for static polarimetry, dynamic Stokes I imaging, and dynamic polarimetry. In what follows we call these algorithms mr-support imaging.

3.1 Stokes I

Static Stokes I images are constructed with DoG-HiT using the five-round pipeline presented in Müller & Lobanov (2022). However, in that work we used only radially symmetric wavelets. As an extension, we used the multidirectional dictionaries developed in Müller & Lobanov (2023) for this work (i.e., we replaced the circularly symmetric Gaussians by elliptical Gaussians). Moreover, we used a grid search in Müller & Lobanov (2022) to find a proper starting point for the forward–backward splitting minimization iterations of DoG-HiT. Since the backward step in the minimization is essentially a hard thresholding, we tried different scale-dependent thresholds on an equidistant grid to minimize Eq. (15), and used the setting of the minimum as the starting point for the forward–backward iterations. For this manuscript, we used the same grid search, but applied the orthogonal DoB wavelets in the grid search, while still using the DoG wavelets in the imaging rounds of the pipeline. We do not focus on the Stokes I reconstruction in this work, as these extensions are rather straightforward and minor, and the focus of the manuscript is on an extension of DoG-HiT to polarimetry. We recall one of the main advantages of DoG-HiT: the algorithm works mainly unsupervised with a minimal set of free parameters, hence adding minimal human bias to the imaging procedure.

3.2 Polarimetry

For polarimetric reconstructions we first reconstruct a Stokes I image with DoG-HiT and solve for the gains by self-calibrating to the final output (DoG-HiT relies on calibration-independent closure quantities). As a second step, we solve for the polarimetric Stokes parameters Q, U, and V. We take the multiresolution support computed by DoG-HiT for the Stokes I imaging and constrain the space of free parameters to all wavelet coefficients in the multiresolution support. We then solve for Q, U, and V by minimizing the fit to $\mathcal{V}_Q$, $\mathcal{V}_U$, $\mathcal{V}_V$ with a gradient descent algorithm, but only allow coefficients in the multiresolution support to vary. In summary, we solve the following problems:
$$\hat{\lambda}^Q = \operatorname{argmin}_{\lambda^Q} S_Q(\Gamma\lambda^Q) \quad \text{s.t.} \quad \lambda^Q_j = 0 \text{ whenever } \hat{\lambda}_j \text{ is not in the multiresolution support}, \quad (17)$$
and analogously for U and V.

Here $\hat{\lambda}$ are the recovered wavelet coefficients for the Stokes I image as in Sect. 2.4, and $S_Q$ and $S_U$ are the $\chi^2$ fit qualities to the Stokes Q and U visibilities. The side condition $\lambda^Q_j = 0$ whenever $\hat{\lambda}_j$ is not in the multiresolution support denotes the constraint that we only vary coefficients in the multiresolution support.

The multiresolution support is a well-suited regularizer here; the support constraint encodes the side condition Eq. (6) effectively, which means that polarized emission is only allowed to appear at locations in the image where we found relevant emission in total intensity. While the inequality Eq. (6) holds true theoretically in any case, in practice the pathological situation could occur that, due to instrumental effects, a non-detection of Stokes I does not rule out polarimetric structures. With this caveat in mind, we assume for the rest of the manuscript that inequality Eq. (6) holds true in observations as well. Moreover, the polarimetric visibilities have the same uv-coverage as the Stokes I visibilities. The spatial constraint of the multiresolution support describes which spatial scales are statistically significant to describe the emission in the image, which in the case of sparse VLBI arrays is dominated by the uv-coverage (i.e., which spatial scales are compressed by which baselines and whether these baselines are measured). Hence, we already computed the multiresolution support as a byproduct of DoG-HiT to study the uv-coverage of the observation, and get control over overfitting in the gaps of the uv-coverage by suppressing the respective atoms of the dictionary. This effective regularization can be copied to the polarized visibilities as the uv-coverage is the same.

Moreover, we would like to note once again that the multiresolution support is a completely data-driven property computed as a byproduct by DoG-HiT. Hence, the reconstruction of polarimetric properties still relies on a minimal set of hyperparameters and remains largely unsupervised.

We fitted complex polarimetric visibilities directly here, which requires that a good polarization calibration is already available. However, the method is easy to adapt to more realistic situations since it is (as opposed to CLEAN) a forward-modeling technique. First, instead of a constrained χ² minimization to the complex visibilities, one could just optimize the fit to the visibility-domain polarization fraction, as in Johnson et al. (2015). Second, the minimization in Eq. (17) is done iteratively, where the most important features are recovered first and gradually more detailed features are recovered in later iterations. Hence, with a similar philosophy to how self-calibration interacts with CLEAN, we could run the minimization for some iterations and do the calibration on the current model, then continue the minimization and calibration in an alternating manner.

3.3 Dynamic Stokes I

For dynamic Stokes I imaging, we first reconstruct a static image with DoG-HiT. For this work we assume that the static image of a dynamically evolving source might be a good approximation to the mean image during the time of observation. This might be true in particular if the source contains some persistent structure during the complete observing run, as could be expected for Sgr A* in EHT observations with a persistent shadow in rotating hotspot models (Tiede et al. 2020). However, depending on the dynamics of the target, it may be difficult to recover a decent fit to the data with a static image. In this work we applied a procedure inspired by the strategy in Event Horizon Telescope Collaboration (2022b); we added a systematic noise floor on every baseline to account for variability. However, we did not repeat the sophisticated noise modeling applied in Event Horizon Telescope Collaboration (2022b).

We computed the multiresolution support from the static mean image. Then we cut the observation into single frames and reconstructed images for every frame independently. All frames together make up the dynamic movie reconstruction. However, because single frames are short, the uv-coverage per frame is too sparse for snapshot imaging. Again, we propose using the multiresolution support instead. We minimize the χ² for every single-frame observation independently with a gradient descent algorithm (using the mean image as an initial guess), but only allow coefficients in the multiresolution support to vary.

The multiresolution support is a well-suited regularizer here as well; if the static image is a good approximation to the mean image, the static image contains all the locations of emission in the field of view. If at some time an emission spike occurs at a specific location, this emission spike should be visible in the mean as well. Hence, the support constraint encodes information about the location of emission in single frames. This assumption comes with the caveat that short-lived small-scale features may not be strong enough in the mean image and may later be excluded from the dynamic reconstructions due to the multiresolution support. However, we also doubt that such a feature would be visible with the much sparser uv-coverage of single scans, and therefore it would not be recovered anyway. Moreover, the uv-coverage of the complete observation is the sum of the coverages of the single frames. In single-frame observations there are three different categories of Fourier coefficients and/or baselines: the ones measured by observations in this single frame (very sparse), the ones that are not measured during the time of the single frame but will be measured at later (or earlier) times in the observation, and the baselines that are not measured at all due to the sparsity of the array. By doing constrained optimization (constrained by the multiresolution support) on the single-frame observation, we fit the first class of baselines, copy the solution over from the initial guess (mean image) for the second class of baselines, and suppress the last class of baselines by the multiresolution support. Hence, the spatial constraint implemented by the multiresolution support is a well-suited prior for dynamic imaging.
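Per-frame mr-support imaging can be sketched in the same masked-gradient style: every snapshot starts from the mean-image coefficients and refits only the supported coefficients against its own (sparse) measurement operator. All names here are illustrative; `A_t` stands for the forward operator of one frame's uv-coverage.

```python
import numpy as np

def fit_snapshot(A, vis, lam_init, support, step=0.5, niter=200):
    """Constrained chi-square fit of one snapshot: gradient descent on
    ||A lam - vis||^2 starting from the mean-image coefficients, varying
    only the coefficients inside the multiresolution support."""
    lam = lam_init.copy()
    for _ in range(niter):
        grad = (A.conj().T @ (A @ lam - vis)).real
        grad[~support] = 0.0              # freeze unsupported coefficients
        lam -= step * grad
    return lam

def reconstruct_movie(frame_obs, lam_mean, support):
    """Independent per-frame fits (parallelizable): structure unmeasured in a
    frame is inherited from the mean image, overfitting outside the support
    is blocked."""
    return [fit_snapshot(A_t, vis_t, lam_mean, support)
            for A_t, vis_t in frame_obs]
```

Because the frames do not couple, the loop in `reconstruct_movie` can be distributed across processes, which matches the parallelism noted below.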

The reasonable assumption of temporal correlation between scans (e.g., by a regularizer term favoring temporal smoothness) is not used explicitly for mr-support imaging. However, this assumption can be included in the dynamic reconstruction straightforwardly. Instead of fitting the visibilities with a constrained minimization approach, we can minimize the sum of a quality metric for the fit to the visibilities and a temporal regularization term, but only vary the coefficients in the multiresolution support. However, for this work we restricted ourselves to reconstructions without penalization of the temporal evolution, such that no new regularization parameters are introduced and the reconstruction remains automatic and completely data-driven. Moreover, this means that all scans can be computed in parallel, allowing for fast computations.

3.4 Dynamic polarimetry

We proposed the same procedure for polarized imaging and for dynamic Stokes I imaging: fitting the respective visibilities with a gradient descent approach, while only varying coefficients in the multiresolution support computed by DoG-HiT. It is therefore natural to utilize this approach for dynamic polarimetry as well. We propose the following strategy. First, we reconstruct a static Stokes I image by DoG-HiT and compute the multiresolution support. Then we cut the observation into single frames and solve for dynamics and polarimetry together by fitting $\mathcal{V}_I$, $\mathcal{V}_Q$, $\mathcal{V}_U$, $\mathcal{V}_V$ in single frames independently, but only vary coefficients in the multiresolution support.

4 Synthetic data tests

4.1 Synthetic observations

We tested the capabilities of mr-support imaging for polarimetric image reconstructions. We tested three different source models (a static polarized Sgr A* model, a slowly rotating crescent, and a rapidly rotating crescent) with two different arrays (the EHT and a possible ngEHT configuration). A thorough comparison of existing imaging approaches for dynamic polarimetry is deferred to a later work. For more details we also refer to the ngEHT Analysis challenges (Roelofs et al. 2023), and in particular the upcoming third challenge, in which we participated with mr-support imaging. We review our submission to the third challenge in Sect. 4.5.

We observed the synthetic ground truth images and movies with the array of the EHT 2022 observations and added thermal noise according to the SEFDs measured in the 2017 observation campaign (Event Horizon Telescope Collaboration 2019b). We used ten-minute cycles consisting of five minutes of continuous observation with an integration time of ten seconds, followed by a gap of five minutes off-source (mimicking calibration and pointing scans). This cycle time is of special interest when discussing dynamic reconstructions, as the five-minute gaps essentially limit the temporal resolution. The data sets were scan-averaged prior to the imaging procedure.
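The observing cadence described above can be encoded as follows (a small helper of ours, not part of any published pipeline; all times are in seconds):

```python
def scan_times(t_start, t_end, t_on=300.0, t_cycle=600.0, t_int=10.0):
    """Sample times for an on/off observing cycle: t_on seconds
    on-source out of every t_cycle seconds, sampled every t_int
    seconds (defaults: 5 min on, 10 min cycle, 10 s integration)."""
    times = []
    t = t_start
    while t < t_end:
        n = int(t_on / t_int)  # samples per on-source block
        times.extend(t + i * t_int for i in range(n))
        t += t_cycle
    return times

# One hour of the EHT-style cadence: 6 cycles of 30 samples each.
ts = scan_times(0.0, 3600.0)
```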

As the ngEHT configuration we took the EHT 2022 array (i.e., ALMA, APEX, GLT, IRAM-30 m, JCMT, KP, LMT, NOEMA, SMA, SMT, SPT) and added ten antennas from the list of Raymond et al. (2021), as was done for the ngEHT Analysis challenges (Roelofs et al. 2023): HAY (34 m), OVRO (10.4 m), GAM (15 m), BAR, BAJA, NZ, SGO, CAT, GARS, and CNI (all 6 m). We added instrumental noise according to the size of the telescopes, but did not add further calibration errors. As a ground truth we took the slowly rotating crescent model with a rotation period of one hour. As for the EHT 2022 coverage, the ground truth movie was observed with a cycle of five minutes on-source and a five-minute gap, with an integration time of ten seconds (ten minutes on-source with a two-minute gap in the rapidly rotating crescent example).

As a static synthetic test image we took a synthetic Sgr A* image from the ehtim software package (Chael et al. 2018). The true image model is presented in Fig. 2. For the dynamic Stokes I imaging we used a crescent model (Tiede et al. 2022), an asymmetric ring whose azimuthal brightness is modulated by an asymmetry parameter, with the following parameters: I0 = 0.6 Jy, s = 0.46, and r0 = 22 μas. To account for dynamics roughly similar to rotating hotspot models (Tiede et al. 2020), we let the crescent rotate clockwise with a period of one hour, roughly comparable to the flux variability timescale of the Sgr A* light curve (Wielgus et al. 2022). The synthetic ground truth movie is presented in Fig. 3. To illustrate the orientation of the crescent, we also show in Fig. 3 a green arrow from the image center to the location of the brightest pixel. For polarized movies we have to add polarization. For the sake of simplicity, we used a simple model to test the capabilities of dynamic polarimetry: we added a constant linearly polarized structure at a 10% fraction (no circular polarization) with a rotating EVPA. To separate the dynamic polarimetric reconstruction from effects of the Stokes I imaging, the EVPA rotates counter-clockwise (the Stokes I crescent rotates clockwise) with a period of two hours instead of the one hour of the Stokes I rotation.
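The time dependence of this geometric model can be sketched as follows. This is a toy parameterization of ours, not the model of Tiede et al. (2022); angles are in radians, time in hours:

```python
import math

def crescent_orientation(t_hours, period_h=1.0):
    """Position angle of the crescent's bright side: clockwise
    (negative) rotation with a one-hour period."""
    return -2.0 * math.pi * (t_hours % period_h) / period_h

def evpa(t_hours, period_h=2.0):
    """EVPA rotating counter-clockwise with a two-hour period,
    decoupled from the Stokes I rotation."""
    return 2.0 * math.pi * (t_hours % period_h) / period_h

def crescent_brightness(phi, t_hours, s=0.46):
    """Azimuthal brightness modulation on the ring,
    1 + s * cos(phi - eta(t)), peaking at the bright side."""
    return 1.0 + s * math.cos(phi - crescent_orientation(t_hours))
```

After half an hour the bright side has swept half a clockwise turn, while the EVPA has advanced only a quarter of a counter-clockwise turn.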

As an additional model we also tested a rapidly rotating crescent with an orbital period of twenty minutes. We show the ground truth movie in Fig. 4. The constant EVPA pattern rotates counter-clockwise in one hour. The gap time between scans that is used for pointing and calibration limits the temporal resolution. For an array as sensitive as the ngEHT, a smaller gap time might be possible. We therefore synthetically observed the rapidly rotating movie with a cycle of ten minutes of scientific observation (ten seconds of integration time) followed by a two-minute gap.

thumbnail Fig. 2

Static, polarimetric reconstruction with mr-support imaging on synthetic data. Left panel: static polarization ground truth; middle panel: static reconstruction with mr-support imaging; right panel: uv-coverage of the synthetic observation (EHT 2022 array).

thumbnail Fig. 3

Synthetic ground truth dynamic movie (slowly rotating crescent) in the time interval between 10 UT and 14 UT. The green arrow points from the image center to the position of the brightest pixel in the frame, illustrating the orientation of the crescent.

thumbnail Fig. 4

Ground truth movie for the rapidly rotating crescent.

4.2 Static polarization with EHT coverage

We first fitted the scales to the uv-coverage with the procedure outlined in Müller & Lobanov (2022) and Müller & Lobanov (2023): we searched for jumps in the sorted distribution of uv-distances that exceed a threshold and selected the radial scales accordingly. We defined nine radial scales and used four different angles, resulting in 36 scales to represent the uv-coverage. The Stokes I image was recovered with DoG-HiT (Müller & Lobanov 2022) using the multidirectional dictionaries introduced in Müller & Lobanov (2023), as described in Sect. 3.1. As presented in Sect. 3.2, we then computed the multiresolution support, shown in Fig. 5. Some scales that are most sensitive to gaps in the uv-coverage are suppressed completely, while other scales encode various parts of the emission structure, for example the ring-like emission (scales 34 and 35), the extended emission structure (scales 30 and 32), the fine crescent structure (e.g., scales 4, 7, 9, 14, and 24), or the bright spot to the left of the crescent (e.g., scales 0, 2, and 10). The minimization of the polarized visibilities was done with the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Byrd et al. 1995), as implemented in SciPy (Jones et al. 2001). To aid global convergence, we blurred the Stokes Q and U images of the reconstruction to the nominal resolution and redid the minimization with a gradient descent procedure.
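Restricting the minimization to the supported coefficients can also be implemented by handing the optimizer only the free subvector. A minimal sketch with SciPy's L-BFGS-B and a hypothetical quadratic stand-in for the polarized chi-squared (not the actual data term):

```python
import numpy as np
from scipy.optimize import minimize

def fit_support_lbfgs(coeffs0, chi2, support):
    """Minimize chi2 over the supported coefficients only; the
    remaining coefficients stay fixed at their initial values."""
    free = np.where(support)[0]

    def embed(sub):
        # Re-assemble the full coefficient vector from the free subvector.
        full = coeffs0.copy()
        full[free] = sub
        return full

    res = minimize(lambda sub: chi2(embed(sub)), coeffs0[free],
                   method="L-BFGS-B")
    return embed(res.x)

# Toy data term: chi2(c) = ||c - target||^2 with half the scales frozen.
target = np.array([1.0, -2.0, 0.5, 3.0])
support = np.array([True, False, True, False])
x = fit_support_lbfgs(np.zeros(4),
                      lambda c: np.sum((c - target) ** 2), support)
```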

We show the final reconstruction result in Fig. 2. The reconstruction of the Stokes I image is relatively successful: the crescent-like shadow image is overall well recovered. However, some finer structures are not recovered by DoG-HiT: the closing of the ring by a thin line toward the right and the fainter structure inside the ring. The linearly polarized emission is overall very well recovered. The total fraction of linearly polarized light and the overall north-south direction of the electric vector position angles (EVPAs) are well recovered. The synthetic ground truth image contains more complex local structures, for example a rotation of the EVPA in the bottom left of the image toward the east-west direction. This shift is partly visible in the recovered image as well, although the amount of rotation is smaller.

All in all, this example demonstrates that even for a very challenging and sparse array such as the EHT 2022 array, polarimetric reconstruction with support imaging is quite successful, both in the overall structure and in the reconstruction of more localized polarimetric structures with a size of ≈5 μas. Thus, similar to the DoG-HiT reconstruction of the Stokes I image, mr-support polarimetry seems to offer mild super-resolution. Interestingly, super-resolution and a good fit to the polarized visibilities are achieved without introducing artifacts in the image. This demonstrates the power of the regularization approach.

thumbnail Fig. 5

Multiresolution support for the reconstruction of the static polarization example with EHT coverage.

thumbnail Fig. 6

Recovered solution (recovered with mr-support imaging) for slowly rotating crescent observed with the EHT.

thumbnail Fig. 7

Same as Fig. 6, but for slowly rotating crescent observed with the ngEHT.

4.3 Dynamic Stokes I

The synthetic slowly rotating crescent movie was observed as described in Sect. 4.1 with a ten-minute cycle with EHT coverage. According to this temporal resolution, we cut the observation into frames with a length of ten minutes for the dynamic reconstruction. The reconstruction was then done with the mr-support approach, as outlined in Sect. 3.3, in the best time window t ∈ [10 UT, 14 UT] (Farah et al. 2022). As a first step we fitted a symmetric ring model to the data and created a mean image with DoG-HiT using the fitted ring model as an initial guess; we then solved sequentially for every frame by mr-support imaging with the support calculated from the mean. As an initial guess for each single frame we used the reconstruction of the preceding frame (or the mean image in the case of the first frame).
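The sequential warm-start strategy (mean image initializes the first frame, each later frame starts from its predecessor's solution) can be sketched as follows; `toy_fit` is a placeholder of ours for the constrained per-frame fit:

```python
def image_frames(frame_data, mean_coeffs, support, fit):
    """Sequential frame-by-frame imaging: each frame is initialized
    with the previous frame's solution (the mean image for the
    first frame)."""
    movie, init = [], mean_coeffs
    for data in frame_data:
        sol = fit(init, data, support)  # constrained fit to this frame
        movie.append(sol)
        init = sol                      # warm start for the next frame
    return movie

def toy_fit(init, target, support):
    """Placeholder fit: supported entries follow the frame data,
    unsupported entries keep the initial guess."""
    return [t if s else i for i, t, s in zip(init, target, support)]

movie = image_frames([[1.0, 7.0], [2.0, 7.0]], [0.0, 0.0],
                     [True, False], toy_fit)
```

Note that this warm start couples the frames: a false recovery in one frame can propagate to its successors, as discussed below for the EHT reconstructions.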

We present the reconstruction results in Fig. 6. The single frames all show a circular structure with a radius of ≈22 μas. Moreover, nearly all frames show a crescent-like asymmetry, although it is less prominent than in the true image. As for the true dynamic movie, we illustrate the orientation of the crescent by an arrow from the center to the brightest pixel in the reconstruction. Following the orientations of the recovered crescents in Fig. 6, a clear rotation with an orbital period of one hour is visible. The orientations of the recovered crescents match the synthetic ground truth in most frames, with some notable exceptions at 11 UT (no asymmetry recovered at all) and at 13.16 UT-13.5 UT (incorrect orientations). In particular the latter could be a consequence of taking the reconstruction of the preceding frame as an initial guess for the next frame, such that the false recovery at 13.16 UT propagates to all the following frames.

We present in Fig. 7 the reconstruction result for the slowly rotating crescent with ngEHT coverage. The reconstruction of the crescent is excellent in every frame, with high-contrast images. The single-frame images do not show additional image artifacts. Although the additional ngEHT antennas have rather high thermal noise levels, the much improved density of the array effectively stabilizes the reconstruction against thermal noise. Strikingly, the orientation of the crescents matches the ground truth very well. We present in Fig. 8 a comparison between the true position angles and the recovered ones, with error bars reflecting the temporal smearing over the scan length.

The ngEHT array is much denser than the EHT configuration of 2022, which improves the achievable temporal resolution. We therefore also studied the possibility of observing faster rotating structures at the event horizon with the rapidly rotating crescent model. The dynamic reconstruction was done in this case with frames of three minutes in length. The faster orbital period and the shorter frame length complicate the reconstruction procedure: there are fewer observation points per single frame, which aggravates the problem of sparsity. Moreover, due to the shorter dynamical timescale and the smaller number of observing points per frame, the scan-averaged visibility points have a worse signal-to-noise ratio than in the slowly rotating crescent case. The reconstruction results for dynamic Stokes I imaging with mr-support imaging are shown in Fig. 9. The crescent is recovered in every frame. Additionally, the overall orientation matches quite well. However, the quality of the reconstruction decreases compared to the slowly rotating crescent, as can be expected: the asymmetry of the crescents is less clear and the orientation is off by roughly 15 degrees in some frames.
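The signal-to-noise penalty of shorter averaging intervals follows from the standard radiometer equation for a baseline, σ = √(SEFD₁·SEFD₂ / (2 Δν τ)) / η, so the noise grows as 1/√τ. A small sketch with illustrative SEFD and bandwidth values (not those of any specific station):

```python
import math

def baseline_noise(sefd1, sefd2, bandwidth_hz, t_int_s, eta=0.88):
    """Thermal noise (Jy) on a baseline from the radiometer equation,
    with quantization efficiency eta."""
    return math.sqrt(sefd1 * sefd2 / (2.0 * bandwidth_hz * t_int_s)) / eta

# Noise penalty of averaging over 3 minutes instead of 10 minutes
# (hypothetical SEFDs of 5000 Jy and a 2 GHz bandwidth).
ratio = (baseline_noise(5000.0, 5000.0, 2e9, 180.0)
         / baseline_noise(5000.0, 5000.0, 2e9, 600.0))
```

The ratio is √(600/180) ≈ 1.8, independent of the assumed SEFDs and bandwidth.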

All in all, we observe that mr-support imaging recovers the correct image structure very well, including the overall shadow feature, crescent asymmetry, and orientation, for most frames in the observation. Again, we note that these successful reconstructions do not introduce image artifacts despite the sparsity of the uv-coverage, especially in single-frame observations. This, once again, demonstrates the regularizing property of the mr-support approach.

thumbnail Fig. 8

True position angle (blue) and the recovered position angle recovered with mr-support imaging for an EHT configuration (red) and ngEHT configuration (green) for the slowly rotating crescent model. The error bars reflect the change in position angle in the true source model within a ten-minute scan (cycle length of synthetic observation).

thumbnail Fig. 9

Reconstruction of rapidly rotating crescent observed with the ngEHT.

4.4 Dynamic polarimetry

As outlined in Sect. 3.4, we did the dynamic reconstruction of the Stokes I channel first. Hence, we copied the reconstructions from Sect. 4.3 and then added polarization frame by frame by mr-support imaging. Similar to our procedure presented in Sect. 4.2, we first minimized the data terms (the fit to the polarized visibilities) with a BFGS minimization procedure, blurred the reconstructed polarized images to the nominal resolution, and minimized the fit again with a gradient descent procedure starting from the blurred image as an initial guess.
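The blur step in this minimize-blur-reminimize polish amounts to a convolution with the nominal beam. A toy 1D stand-in for that blurring (the kernel width is illustrative; real pipelines convolve 2D images with the elliptical restoring beam):

```python
import numpy as np

def gaussian_blur_1d(img, sigma_pix):
    """Blur a 1D profile with a normalized Gaussian kernel,
    mimicking a convolution to the nominal resolution."""
    n = int(4 * sigma_pix)                      # kernel half-width
    x = np.arange(-n, n + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()                      # preserve total flux
    return np.convolve(img, kernel, mode="same")

# A point source is spread out but its position and flux are kept.
img = np.zeros(50)
img[25] = 1.0
blurred = gaussian_blur_1d(img, 2.0)
```

Restarting the gradient descent from such a smoothed image discards spurious high-frequency structure before the final refit.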

The reconstruction results in the time window t ∈ [10 UT, 11 UT] are presented in Fig. 10 for the slowly rotating crescent model with EHT coverage. The relatively simple polarized structure is well recovered in each frame. While the recovered images show some local variation from the overall orientation, the larger scale EVPA orientation matches for all frames. The fraction of linearly polarized light is surprisingly well recovered. Again, despite some local variations in the recovered EVPA, the challenging reconstruction does not show image artifacts.

In Fig. 11 we present the reconstruction of the slowly rotating crescent observed with the ngEHT. The quality of the reconstruction improved compared to the reconstructions presented in Fig. 10. The global orientation of the EVPAs is well recovered for every frame. In the reconstructions with the EHT configuration we also observed some local variations from the overall polarimetric structure. These can no longer be observed in the reconstructions with ngEHT coverage.

We present the dynamic polarimetry reconstruction with mr-support imaging of the rapidly rotating crescent in Fig. 12. The reconstruction of the polarimetric structure (i.e., the rotation of the EVPAs) remains excellent. These results suggest that mr-support imaging could handle dynamic polarimetric structural features at the event horizon with realistic dynamic timescales.

thumbnail Fig. 10

True (upper panels) and recovered (lower panels) test images with full Stokes polarization for the slowly rotating crescent. The mr-support imaging approach recovers the true large-scale orientation of the EVPA.

4.5 ngEHT analysis challenge

In addition to the rather simple synthetic data tests presented in the previous subsections, we show here the reconstructions by mr-support imaging for the third ngEHT Analysis challenge. The ngEHT Analysis challenges are a series of semi-blind data challenges to evaluate the performance of algorithms for the planned ngEHT instrument (Roelofs et al. 2023). The ngEHT is a planned instrument to recover (polarimetric) movies at event horizon scales (Doeleman et al. 2019).

The ground truth movies produced for the ngEHT Analysis challenge resemble the current theoretical state of the art in simulations (Roelofs et al. 2023; Chatterjee et al. 2023). Here we present the reconstructions of a RIAF model of Sgr A* (Broderick et al. 2016) with a shearing hotspot (Tiede et al. 2020), with hotspot parameters inspired by GRAVITY Collaboration (2018a). The data sets were observed with the EHT array and with the ngEHT array that we also used for the geometric data sets. In contrast to the proof of concept with geometric models, the ngEHT challenge data contain the full set of data corruptions that may be expected from real observations (Roelofs et al. 2023), simulated with the SYMBA package (Roelofs et al. 2020), including atmospheric turbulence, atmospheric opacity, pointing offsets, a scattering screen, and thermal noise specific to each antenna. However, no polarization leakage was added to the data. For more information we refer to Roelofs et al. (2023) and the challenge website. The data sets were network calibrated, as is standard in EHT data processing (Event Horizon Telescope Collaboration 2022b). The ngEHT Analysis challenge is particularly well suited as a verification data set since the challenge was done blindly; neither the source files nor the specific data corruptions were made public to the analysis teams.

We show the ground truth movie in Fig. 13. A static (but not descattered) image was recovered by DoG-HiT with a systematic error budget of 2%. The static image is computed by DoG-HiT in a completely unsupervised way from closure quantities. We used this calibration-independent model to calibrate the data set on long time intervals (1 h). Next we calculated the multiresolution support and cut the observation into frames of six minutes. The dynamic reconstruction was done with mr-support imaging. We self-calibrated the data set in every single observing frame during the procedure. Then we added polarization in every frame.

The recovered movie is presented in Fig. 14. Moreover, we show magnified panels of selected frames in Fig. 15. The single frames all show a ring-like structure with a central depression. Compared to the ground truth frames, the reconstructed images are of lower quality due to the rapid variability, systematics, and sparse coverage. Moreover, an interstellar scattering screen was added to the data that was not removed during the imaging procedure. The reconstruction of the shearing hotspot motion is more challenging. We recover an approaching hotspot to the right of the ring at UT 11.3 (upper panels in Fig. 15), an extended (polarized) tail to the northwest (top right) from UT 11.3 until UT 11.6 (middle panels in Fig. 15), and a clearly visible arc of larger intensity within the ring to the southeast (bottom left) from UT 11.7 to UT 11.9 (bottom panels in Fig. 15). These features are consistent with the hotspot motion of the ground truth movie. While we recover some motion related to the hotspot, a continuously evolving movie was not recovered. This is a result of the rather bad simulated weather conditions and the observation cadence of the third challenge: the source was (synthetically) observed for ten minutes followed by a gap of ten minutes. While mr-support imaging sufficiently recovers some (scattered) hotspot-related features in the frames that have observed visibilities, the algorithm does not contain an interpolation scheme for the scans without observations; it simply retains the starting point (i.e., the preceding frame). Hence, we do not recover a continuously evolving movie, and several frames (e.g., UT 11.5 and UT 11.6, or UT 11.7 until the end) show the same image structure.

The synthetic ground truth polarization is less dynamic and hence easier to recover. We recover the overall radially conic EVPA pattern in every frame, with minor small-scale perturbations from the ground truth (which may also be related to the differences in the Stokes I images). Moreover, the recovered polarization fraction matches the true one. As a more detailed feature, we successfully recover a larger fractional polarization for the shearing hotspot that follows the hotspot motion.

The presented data set mimics one of the most challenging VLBI data analysis problems so far, with various data corruptions, high frequencies (i.e., phase instabilities), fast dynamics and polarimetric structures, the need for super-resolution, and a sparse VLBI uv-coverage. As expected, the reconstruction quality with mr-support imaging is degraded compared to the rather simple geometric data tests discussed above. However, the application already highlights the potential of mr-support imaging for unsupervised, super-resolving, dynamic, and polarimetric imaging together. This is currently a unique capability in the landscape of existing imaging algorithms, and in particular a domain of research in which CLEAN remains limited due to its lack of resolution, its high demand for human supervision and calibration, and its lack of support for dynamic reconstructions.

thumbnail Fig. 11

Same as Fig. 10, but for slowly rotating crescent observed with the ngEHT.

thumbnail Fig. 12

Polarimetric reconstruction of fast rotating crescent with ngEHT coverage.

thumbnail Fig. 13

Synthetic ground truth movie of Sgr A* used for the third ngEHT Analysis challenge. The model is a RIAF model with a semianalytic shearing hotspot.

thumbnail Fig. 14

Reconstruction of the movie plotted in Fig. 13 with mr-support imaging for the third ngEHT Analysis challenge.

thumbnail Fig. 15

Selected frames of reconstructions shown in Figs. 13 and 14 at times UT 11.3 (upper panels), UT 11.5 (middle panels), and UT 11.7 (lower panels).

5 Conclusions and outlook

We presented in this manuscript a novel algorithmic approach for static polarimetry, dynamic imaging, and finally dynamic polarimetry. The approach is based on our previous works on multiscalar imaging (Müller & Lobanov 2022, 2023), and on the multiresolution support in particular. The multiresolution support encodes important information about the emission structure on the one hand (which spatial scales are present in the image, and where) and about the uv-coverage on the other hand (which of these spatial scales are measured by baselines). Hence, the multiresolution support is well suited to introduce regularization for challenging extensions of the standard VLBI imaging problem in the spirit of constrained minimization: we optimize the fit to the respective data terms (chi-squared to frame-by-frame visibilities or to polarized visibilities), but vary the wavelet coefficients only within the multiresolution support.
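Schematically, the constrained problem described above can be written as follows. The notation here is our own shorthand rather than that of Müller & Lobanov (2022, 2023): Γ† denotes the wavelet synthesis operator, F the Fourier sampling of one frame, 𝒱 the measured visibilities, and ℳ the multiresolution support:

```latex
\hat{x} \;=\; \underset{x}{\operatorname{argmin}}\;
  \chi^{2}\!\left(F\,\Gamma^{\dagger}x,\;\mathcal{V}\right)
  \quad \text{subject to} \quad
  x_{i} = x_{i}^{\mathrm{mean}} \;\; \text{for all}\;\; i \notin \mathcal{M},
```

where the wavelet coefficients x are initialized at the coefficients of the mean image, x^mean, and only the coefficients inside ℳ are varied. The same constrained problem is solved frame by frame and, for polarimetry, per Stokes visibility.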

We demonstrated with applications to simple geometric synthetic observations the power of this approach. The mr-support constraint suppressed the introduction of image artifacts, and hence provided ample regularization. Moreover, the approach is flexible enough to allow for the reconstruction of both dynamically evolving structures and polarimetric structures. Moreover, the blind application to more complex movies of the third ngEHT Analysis challenges demonstrated that the algorithm may also provide reasonable reconstructions with real data corruptions in one of the most challenging VLBI imaging problems, although the quality of the reconstruction is degraded.

Mr-support imaging shares the basic advantage of multiscalar approaches that are fitted to the uv-coverage. The static reconstructions are done with DoG-HiT, which is completely data-driven and largely automatic without many hyperparameters (Müller & Lobanov 2022). The same applies to the extension to dynamic and polarimetric quantities. No further specific regularization terms (with corresponding weights) were introduced; rather, the reconstruction was again regularized by the data-driven multiresolution support, which is determined by the uv-coverage and the baseline noise. Hence, mr-support imaging is blind and unbiased as well. However, we recognized an important bottleneck for dynamic reconstructions with mr-support imaging: the static average image needs to approximate the true time-averaged image quite well.

An extension to RML approaches for dynamic imaging (i.e., the addition of temporal regularizers) is straightforward as well. We note that, due to the lack of regularization parameters controlling the temporal correlation, mr-support imaging basically calculates images with rich structures from the extreme sparsity of a single scan, independently of preceding and subsequent scans. This indicates that the multiresolution support is rather strong prior information that, once a reasonable static model is established, allows for the handling of extreme sparsity in the data.

The geometric test observations used throughout this study are rather simple. First, we neglected circular polarization for the purpose of simplicity. We note that we only added thermal noise to the observations, and no phase and amplitude errors. This does not affect the reconstruction of the static Stokes I image (neither for a static source nor for a dynamically evolving source) since DoG-HiT uses closure quantities as data terms only (Müller & Lobanov 2022). However, phase and amplitude calibration errors could affect the subsequent mr-support imaging rounds, since for every frame the (polarized) visibilities are used instead of the closure quantities. We therefore assume that it is possible to solve for the (polarized) self-calibration with the time-averaged mean image. This does not necessarily have to be true, but it might be a good approximation when the dynamic timescale of the source and the dynamic timescale of the gain variability are different, allowing a gain self-calibration with the mean image (compare, e.g., Wielgus et al. 2022; Event Horizon Telescope Collaboration 2022b).

Moreover, while a rotating crescent movie might be a good first approximation to a rotating hotspot model, it only roughly represents the range of models for the dynamics at the horizon scale. The same applies to the rather simple polarization model used. We therefore also tested the algorithm in the blind third ngEHT Analysis challenge. Due to the systematic errors added to the synthetic data, the reconstructions are worse than in the previous data tests; however, mr-support imaging, for the first time, is able to recover super-resolved polarized movies in an unsupervised way. This is a unique capability among all currently existing VLBI imaging algorithms. Furthermore, we expect significant improvements from including a temporal regularizer in the dynamic imaging and from more sophisticated strategies for the static image reconstruction, in particular from frameworks that have already been demonstrated to recover fast dynamics, such as ehtim or StarWarps.

Finally, observing the same ground truth movie with a possible ngEHT array configuration demonstrates the improvements that the ngEHT project will bring to dynamic reconstructions. The quality of the fits to Stokes I and to the polarimetric properties improves. With an ngEHT configuration it is even possible to recover structural patterns on dynamic timescales of ~10-20 min, and therefore on the timescales that can be expected from real observations.

Acknowledgements

We thank the team of the ngEHT Analysis challenges, led by Freek Roelofs, Lindy Blackburn, and Greg Lindahl, for the chance to use and publish their synthetic data set for this work. Special thanks go to Paul Tiede for providing the RIAFSPOT model of Sgr A*. HM received financial support for this research from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. This work was partially supported by the M2FINDERS project funded by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 101018682). Our imaging pipeline and our software are available online as the MrBeam software tool (https://github.com/hmuellergoe/mrbeam). Our software makes use of the publicly available ehtim (Chael et al. 2018), regpy (https://github.com/regpy/regpy), and WISE (Mertens & Lobanov 2015) software packages.

Appendix A Wavelet dictionaries

This section is adapted from Müller & Lobanov (2023). The dictionaries used in this paper are as follows:

Here denotes a radial Gaussian function with a standard deviation σ and an elliptical Gaussian with major semiaxis σ1, minor semiaxis σ2, and angle α. The DoB dictionary is composed in the same way by replacing Gaussians with spherical Bessel functions,

with radial spherical Bessel function and elliptical Bessel function .

References

  1. Akiyama, K., Ikeda, S., Pleau, M., et al. 2017a, AJ, 153, 159 [NASA ADS] [CrossRef] [Google Scholar]
  2. Akiyama, K., Kuramochi, K., Ikeda, S., et al. 2017b, ApJ, 838, 1 [NASA ADS] [CrossRef] [Google Scholar]
  3. Arras, P., Frank, P., Leike, R., Westermann, R., & Enßlin, T. A. 2019, A&A, 627, A134 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  4. Arras, P., Bester, H. L., Perley, R. A., et al. 2021, A&A, 646, A84 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  5. Bhatnagar, S., & Cornwell, T. J. 2004, A&A, 426, 747 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  6. Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433 [NASA ADS] [CrossRef] [Google Scholar]
  7. Bouman, K. L., Johnson, M. D., Dalca, A. V., et al. 2018, IEEE Transac. Comput. Imaging, 4, 512 [CrossRef] [Google Scholar]
  8. Bower, G. C., Markoff, S., Dexter, J., et al. 2015, ApJ, 802, 69 [Google Scholar]
  9. Broderick, A. E., Fish, V. L., Johnson, M. D., et al. 2016, ApJ, 820, 137 [Google Scholar]
  10. Broderick, A. E., Gold, R., Karami, M., et al. 2020a, ApJ, 897, 139 [Google Scholar]
  11. Broderick, A. E., Pesce, D.W., Tiede, P., Pu, H.-Y., & Gold, R. 2020b, ApJ, 898, 9 [NASA ADS] [CrossRef] [Google Scholar]
  12. Broderick, A. E., Gold, R., Georgiev, B., et al. 2022, ApJ, 930, 1 [Google Scholar]
  13. Byrd, R. H., Lu, P., & Nocedal, J. 1995, SIAM J. Sci. Stat. Comput., 16, 1190 [CrossRef] [Google Scholar]
  14. Chael, A. A., Johnson, M. D., Narayan, R., et al. 2016, ApJ, 829, 11 [Google Scholar]
  15. Chael, A. A., Johnson, M. D., Bouman, K. L., et al. 2018, ApJ, 857, 23 [Google Scholar]
  16. Chael, A., Chan, C.-K., Klbouman, et al. 2022, https://doi.org/10.5281/zenodo.6519440 [Google Scholar]
  17. Chatterjee, K., Chael, A., Tiede, P., et al. 2023, Galaxies, 11, 12
  18. Clark, B. G. 1980, A&A, 89, 377
  19. Cornwell, T. J. 2008, IEEE J. Selected Topics Signal Process., 2, 793
  20. Doeleman, S., Blackburn, L., Dexter, J., et al. 2019, BAAS, 51, 256
  21. Emami, R., Tiede, P., Doeleman, S. S., et al. 2023, Galaxies, 11, 12
  22. Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2019a, ApJ, 875, L1
  23. Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2019b, ApJ, 875, L4
  24. Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2021a, ApJ, 910, 43
  25. Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2021b, ApJ, 910, 48
  26. Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2022a, ApJ, 930, L12
  27. Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2022b, ApJ, 930, L14
  28. Farah, J., Galison, P., Akiyama, K., et al. 2022, ApJ, 930, 1
  29. Garsden, H., Girard, J. N., Starck, J. L., et al. 2015, A&A, 575, A90
  30. Gómez, J. L., Roca-Sogorb, M., Agudo, I., Marscher, A. P., & Jorstad, S. G. 2011, ApJ, 733, 11
  31. Gómez, J. L., Lobanov, A. P., Bruni, G., et al. 2016, ApJ, 817, 96
  32. GRAVITY Collaboration (Abuter, R., et al.) 2018a, A&A, 615, L15
  33. GRAVITY Collaboration (Abuter, R., et al.) 2018b, A&A, 618, L10
  34. Hardee, P., Mizuno, Y., & Nishikawa, K.-I. 2007, Ap&SS, 311, 281
  35. Högbom, J. A. 1974, A&AS, 15, 417
  36. Hovatta, T., Lister, M. L., Aller, M. F., et al. 2012, AJ, 144, 105
  37. Ikeda, S., Tazaki, F., Akiyama, K., Hada, K., & Honma, M. 2016, PASJ, 68, 45
  38. Johnson, M. D., Fish, V. L., Doeleman, S. S., et al. 2015, Science, 350, 1242
  39. Johnson, M. D., Bouman, K. L., Blackburn, L., et al. 2017, ApJ, 850, 172
  40. Johnson, M. D., Akiyama, K., Blackburn, L., et al. 2023, Galaxies, 11, 61
  41. Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open source scientific tools for Python [Online; accessed 2015-08-25]
  42. Kramer, J. A., & MacDonald, N. R. 2021, A&A, 656, A143
  43. Mertens, F., & Lobanov, A. 2015, A&A, 574, A67
  44. Müller, H., & Lobanov, A. P. 2022, A&A, 666, A137
  45. Müller, H., & Lobanov, A. 2023, A&A, 672, A26
  46. Narayan, R., & Nityananda, R. 1986, ARA&A, 24, 127
  47. Palumbo, D. C. M., Doeleman, S. S., Johnson, M. D., Bouman, K. L., & Chael, A. A. 2019, ApJ, 881, 62
  48. Pötzl, F. M., Lobanov, A. P., Ros, E., et al. 2021, A&A, 648, A82
  49. Rau, U., & Cornwell, T. J. 2011, A&A, 532, A71
  50. Raymond, A. W., Palumbo, D., Paine, S. N., et al. 2021, ApJS, 253, 5
  51. Ricci, L., Boccardi, B., Nokhrina, E., et al. 2022, A&A, 664, A166
  52. Roelofs, F., Janssen, M., Natarajan, I., et al. 2020, A&A, 636, A5
  53. Roelofs, F., Blackburn, L., Lindahl, G., et al. 2023, Galaxies, 11, 12
  54. Schwab, F. R. 1984, AJ, 89, 1076
  55. Starck, J.-L., Murtagh, F., & Fadili, J. 2015, Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis, 2nd edn. (Cambridge: Cambridge University Press)
  56. Thompson, A. R., Moran, J. M., & Swenson, G. W., Jr. 2017, Interferometry and Synthesis in Radio Astronomy, 3rd edn. (Berlin: Springer)
  57. Tiede, P., Pu, H.-Y., Broderick, A. E., et al. 2020, ApJ, 892, 132
  58. Tiede, P., Broderick, A. E., & Palumbo, D. C. M. 2022, ApJ, 925, 122
  59. Wakker, B. P., & Schwarz, U. J. 1988, A&A, 200, 312
  60. Wiaux, Y., Jacques, L., Puy, G., Scaife, A. M. M., & Vandergheynst, P. 2009, MNRAS, 395, 1733
  61. Wielgus, M., Marchili, N., Martí-Vidal, I., et al. 2022, ApJ, 930, L19
  62. Zamaninasab, M., Clausen-Brown, E., Savolainen, T., & Tchekhovskoy, A. 2014, Nature, 510, 126

All Figures

Fig. 1

Illustration of the wavelet decomposition and multiresolution support of an astronomical image. Left panels: true image and true image with additive Gaussian noise; middle panels: wavelet decomposition of the noisy image with the DoG wavelet dictionary computed with filter sizes σ0 = 1, σ1 = 2, σ2 = 4, …, σ5 = 32 pixels; right panels: multiresolution support computed by thresholding the wavelet scales against the scale-dependent noise, plotted as a mask with value 1 (coefficient in the support) or 0 (coefficient not in the support).
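The construction described in the caption of Fig. 1 can be sketched in a few lines. This is a minimal illustration, not the authors' code: the DoG (difference-of-Gaussians) dictionary is built from successively smoothed copies of the image, and the scale-dependent noise level is estimated here by propagating a pure-noise realization through the same filters (the paper may use a different, analytic noise propagation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scales(img, sigmas=(1, 2, 4, 8, 16, 32)):
    # Difference-of-Gaussians wavelet decomposition: each scale is the
    # difference of two successively smoothed copies of the image,
    # plus the coarsest smoothed image as the residual scale.
    smooth = [gaussian_filter(img, s) for s in sigmas]
    return [a - b for a, b in zip(smooth[:-1], smooth[1:])] + [smooth[-1]]

def multiresolution_support(img, sigma_noise, sigmas=(1, 2, 4, 8, 16, 32),
                            k=3.0, seed=0):
    # Scale-dependent noise level: pass a pure-noise realization through
    # the same wavelet filters and measure its standard deviation per scale.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma_noise, img.shape)
    noise_levels = [n.std() for n in dog_scales(noise, sigmas)]
    # A coefficient is in the support if it exceeds k times the noise level;
    # the result is a binary mask per scale (1: in support, 0: not).
    return [(np.abs(c) > k * nl).astype(np.uint8)
            for c, nl in zip(dog_scales(img, sigmas), noise_levels)]
```

With the six filter sizes of Fig. 1 this yields six masks (five difference scales and one residual scale), matching the middle and right panels of the figure.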
Fig. 2

Static, polarimetric reconstruction with mr-support imaging on synthetic data. Left panel: static polarization ground truth; middle panel: static reconstruction with mr-support imaging; right panel: uv-coverage of the synthetic observation (EHT 2022 array).
Fig. 3

Synthetic ground truth dynamic movie (slowly rotating crescent) in the time interval between 10 UT and 14 UT. The green arrow extends from the image center to the position of the brightest pixel in each frame, illustrating the orientation of the crescent.
Fig. 4

True movie for the fast-rotating crescent.
Fig. 5

Multiresolution support for the reconstruction of the static polarization example with EHT coverage.
Fig. 6

Solution recovered with mr-support imaging for the slowly rotating crescent observed with the EHT.
Fig. 7

Same as Fig. 6, but for the slowly rotating crescent observed with the ngEHT.
Fig. 8

True position angle (blue) and position angle recovered with mr-support imaging for the EHT configuration (red) and the ngEHT configuration (green) for the slowly rotating crescent model. The error bars reflect the change in position angle of the true source model within a ten-minute scan (the cycle length of the synthetic observation).
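The position angles compared in Fig. 8 follow the measurement illustrated in Fig. 3: the angle of the brightest pixel relative to the image center. A minimal version of this measurement could look as follows; the mapping of array axes to sky coordinates, and hence the sign convention of the angle, is an assumption here and may differ from the authors' exact definition.

```python
import numpy as np

def position_angle(frame):
    # Position angle (degrees, in [0, 360)) of the brightest pixel with
    # respect to the image center, measured from the +row axis toward
    # the +column axis. The correspondence of these axes to the
    # astronomical north-through-east convention is assumed, not known.
    cy, cx = (np.asarray(frame.shape, dtype=float) - 1.0) / 2.0
    iy, ix = np.unravel_index(np.argmax(frame), frame.shape)
    return np.degrees(np.arctan2(ix - cx, iy - cy)) % 360.0
```

Applied frame by frame to a movie, this traces the crescent orientation over time, as in the blue, red, and green curves of Fig. 8.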
Fig. 9

Reconstruction of the rapidly rotating crescent observed with the ngEHT.
Fig. 10

True (upper panels) and recovered (lower panels) test images with full Stokes polarization for the slowly rotating crescent. The mr-support imaging approach recovers the true large-scale orientation of the EVPA.
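The EVPA compared in Figs. 10-12 follows from the linear Stokes parameters Q and U via the standard relation EVPA = ½ arctan2(U, Q). A minimal sketch (not the authors' code):

```python
import numpy as np

def evpa(Q, U):
    # Electric-vector position angle (radians) from the linear Stokes
    # parameters Q and U, using the standard definition
    # EVPA = 0.5 * arctan2(U, Q). Works elementwise on arrays as well
    # as on scalars, so it can be applied to whole Stokes maps.
    return 0.5 * np.arctan2(U, Q)
```

Note the 180° ambiguity inherent to the electric-vector position angle: values separated by π/2 in arctan2 space, i.e. π in (Q, U) space, describe the same polarization direction.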
Fig. 11

Same as Fig. 10, but for the slowly rotating crescent observed with the ngEHT.
Fig. 12

Polarimetric reconstruction of the fast-rotating crescent with ngEHT coverage.
Fig. 13

Synthetic ground truth movie of Sgr A* used for the third ngEHT Analysis challenge. The model is a radiatively inefficient accretion flow (RIAF) with a semianalytic shearing hotspot.
Fig. 14

Reconstruction of the movie shown in Fig. 13 with mr-support imaging for the third ngEHT Analysis challenge.
Fig. 15

Selected frames of the reconstructions shown in Figs. 13 and 14 at 11.3 UT (upper panels), 11.5 UT (middle panels), and 11.7 UT (lower panels).
