A&A, Volume 673, May 2023
Article Number: A151
Number of pages: 18
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/202245393
Published online: 24 May 2023
Dynamic and Polarimetric VLBI imaging with a multiscalar approach
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
e-mail: hmueller@mpifr-bonn.mpg.de; alobanov@mpifr-bonn.mpg.de
Received: 7 November 2022
Accepted: 20 March 2023
Context. Due to the limited number of antennas and the limited observation time, an array of antennas in very long baseline interferometry (VLBI) often samples the Fourier domain only very sparsely. Powerful deconvolution algorithms are needed to compute a final image. Multiscale imaging approaches such as DoG-HiT have recently been developed to solve the VLBI imaging problem and show promising performance: they are fast, accurate, unbiased, and automatic.
Aims. We extend the multiscalar imaging approach to polarimetric imaging, to reconstructions of dynamically evolving sources, and finally to dynamic polarimetric reconstructions.
Methods. These extensions (mr-support imaging) utilize a multiscalar approach. The time-averaged Stokes I image was decomposed by a wavelet transform into single sub-bands. We used the set of statistically significant wavelet coefficients, the multiresolution support (mr-support), computed by DoG-HiT as a prior in a constrained minimization manner; we fitted the single-frame (polarimetric) observables by only varying the coefficients in the multiresolution support.
Results. The Event Horizon Telescope (EHT) is a VLBI array imaging supermassive black holes. We demonstrate on synthetic data that mr-support imaging offers ample regularization and is able to recover simple geometric dynamics at the horizon scale in a typical EHT setup. The approach is relatively lightweight, fast, and largely automatic and data driven. The ngEHT is a planned extension of the EHT designed to recover movies at the event horizon scales of a supermassive black hole. We benchmark the performance of mr-support imaging for the denser ngEHT configuration, demonstrating the major improvements the additional ngEHT antennas will bring to dynamic polarimetric reconstructions.
Conclusions. Current and upcoming instruments offer the observational possibility to do polarimetric imaging of dynamically evolving structural patterns with the highest spatial and temporal resolution. State-of-the-art dynamic reconstruction methods can capture this motion with a range of temporal regularizers and priors. With this work, we add an additional, simpler regularizer to the list: constraining the reconstruction to the multiresolution support.
Key words: techniques: interferometric / techniques: image processing / techniques: high angular resolution / methods: numerical / galaxies: nuclei / galaxies: jets
© The Authors 2023
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model.
Open Access funding provided by Max Planck Society.
1 Introduction
In very long baseline interferometry (VLBI) the signals recorded at single antennas are correlated to achieve spatial resolution that would not be achievable with single-dish instruments. The correlation product of every antenna pair at a fixed time is the Fourier coefficient (visibility) of the true sky brightness distribution, with a Fourier frequency determined by the projected spatial vector joining two antennas (baseline). As the Earth rotates during the observing run, baselines rotate on elliptical tracks in the Fourier domain, hence filling up the Fourier plane (uv-plane) continuously. However, due to the limited number of antennas and the limited amount of observing time, the coverage of Fourier coefficients (uv-coverage) is sparse. The procedure to recover the true sky brightness distribution from these sparsely covered Fourier coefficients is called VLBI imaging.
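To make the measurement process concrete, the following minimal numerical sketch (illustrative names only, not part of any VLBI software) samples the Fourier transform of a toy sky at a handful of uv-points:

```python
import numpy as np

def sample_visibilities(image, u, v):
    """Evaluate the van Cittert-Zernike relation directly: the visibility at
    spatial frequency (u, v) is the Fourier transform of the sky brightness,
    sampled only where a baseline happens to measure it."""
    n = image.shape[0]
    x = np.arange(n) / n                     # sky coordinates in units of the field of view
    X, Y = np.meshgrid(x, x, indexing="ij")
    phase = -2j * np.pi * (u[:, None, None] * X + v[:, None, None] * Y)
    return (image[None] * np.exp(phase)).sum(axis=(1, 2))

# A toy 32x32 sky with a single unit point source and a (very) sparse uv-coverage.
n = 32
sky = np.zeros((n, n))
sky[8, 20] = 1.0
u = np.array([0.0, 3.0, 7.0])
v = np.array([0.0, 1.0, 5.0])
vis = sample_visibilities(sky, u, v)
```

For a unit point source every sampled visibility has unit amplitude, and the zero-spacing visibility equals the total flux; imaging amounts to inverting this map from only a few such samples.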
It is a long-standing frontline goal in astronomy to recover images of the shadow of a supermassive black hole. The Event Horizon Telescope (EHT) is a globe-spanning VLBI array that observes at 230 GHz (with a recent upgrade to 345 GHz). With the combination of global baselines and short baselines, the EHT achieves the angular resolution needed to capture the first image of the black hole shadow in M87 (Event Horizon Telescope Collaboration 2019a) and in the Milky Way (Event Horizon Telescope Collaboration 2022a). The next-generation Event Horizon Telescope (ngEHT) is a planned extension of the EHT (Doeleman et al. 2019; Johnson et al. 2023). It will produce movies of the accretion onto the central black hole Sgr A* at the scales of the event horizon (Roelofs et al. 2023; Emami et al. 2023). The dynamic timescales for these observations are very short. Observations of Sgr A* in the submillimeter (Bower et al. 2015; Wielgus et al. 2022) and near-infrared regimes (GRAVITY Collaboration 2018a,b) confirm that Sgr A* is time-varying on timescales as short as 30 min. The predicted innermost stable circular orbit (ISCO) period varies between 4 min and roughly 30 min, depending on the spin of the black hole. Palumbo et al. (2019) concluded that a well-sampled baseline coverage on timescales of ~30 min is needed to recover the source dynamics.
CLEAN (Högbom 1974) and its many variants (Clark 1980; Schwab 1984; Wakker & Schwarz 1988; Bhatnagar & Cornwell 2004; Cornwell 2008; Rau & Cornwell 2011; Müller & Lobanov 2023) have served the community well for decades, but are recently being challenged by forward-imaging approaches in the spirit of regularized maximum likelihood (RML) methods (Narayan & Nityananda 1986; Wiaux et al. 2009; Garsden et al. 2015; Ikeda et al. 2016; Chael et al. 2016, 2018; Akiyama et al. 2017b,a; Event Horizon Telescope Collaboration 2019b; Müller & Lobanov 2022) and Bayesian approaches (Arras et al. 2019, 2021; Broderick et al. 2020a,b). Recently we developed new multiresolution tools for performing VLBI imaging (Müller & Lobanov 2022, 2023). For these multiscalar approaches we designed special wavelet-based basis functions (differences of Gaussians and differences of spherical Bessel functions) and fitted the basis functions to the uv-coverage. In this way we define smooth basis functions that are well suited to describe (compress) the recovered image features by encoding information about the uv-coverage itself. Some wavelets are most sensitive to gaps in the uv-coverage, while others are most sensitive to covered Fourier coefficients. While the signal from the latter should be recovered, the signal from the former is suppressed (effectively avoiding overfitting).
As a byproduct of these multiscalar imaging algorithms, we compute the multiresolution support (mr-support; Müller & Lobanov 2022): a set of wavelet parameters that are deemed statistically significant to represent the recovered image features. The multiresolution support encodes various pieces of information about the recovered image. First, it implements a support constraint (determining where the emission is located in the image). Second, it encodes a spatial constraint (identifying which spatial scales are needed to represent the image features at these locations). In particular, the second piece of prior information is determined by the spatial scales that are present in the data (i.e., covered by baselines in the observation). We demonstrated in Müller & Lobanov (2022) that the multiresolution support is powerful prior information, very well suited to refining the imaging procedure. There we proposed the addition of amplitudes and phases to the data terms and the removal of any regularizer term, solving the resulting optimization problem by only updating the coefficients in the multiresolution support. The fit to the observed visibilities improved, but without the addition of spurious artifacts that are typical of overfitting.
Beyond Stokes I imaging, full polarimetric imaging is of interest to the VLBI community, both theoretically (Blandford & Znajek 1977; Hardee et al. 2007; Kramer & MacDonald 2021) and observationally (e.g., Gómez et al. 2011, 2016; Hovatta et al. 2012; Zamaninasab et al. 2014; Pötzl et al. 2021; Ricci et al. 2022, among many others), in particular at event horizon scales (Event Horizon Telescope Collaboration 2021a,b). In polarimetric imaging the recorded data are separated into several polarized sub-bands and recombined into the four Stokes parameters. Essentially, we have four Stokes parameters (I, Q, U, V) and corresponding polarized visibilities. Hence, the problem that we aim to solve for the other three Stokes parameters is the same as for Stokes I: recovering a signal from a sparse measurement of the Fourier coefficients. However, there are some slight differences. While the Stokes I image is necessarily non-negative (a fact used during imaging as a prior), this does not have to be true for Stokes Q, U, and V. Moreover, I^{2} ≥ Q^{2} + U^{2} + V^{2} applies.
The multiresolution support is a well-suited prior for polarimetric imaging once the Stokes I image is complete. The support constraint of the multiresolution support encodes the information that linearly and circularly polarized emission can theoretically only appear at locations where the total intensity (Stokes I) is greater than zero. This might not reflect the observational situation in every case. Sometimes the Stokes I signature cannot be retrieved with the spatial sensitivity of the interferometer, while a more localized (e.g., due to Faraday rotation) polarized structural pattern is visible. However, in most VLBI studies this pathological situation does not appear, and the support constraint is a good approximation. Moreover, the spatial constraint adheres to the fact that the polarimetric visibilities have the same uv-coverage as the total intensity visibilities (i.e., the same spatial scales, those covered by the uv-coverage, are present in the polarized images).
Another domain of current research is the study of dynamic sources, such as Sgr A*. Two problems arise in this case: the static imaging of a dynamically evolving source, as in Event Horizon Telescope Collaboration (2022b), and the dynamic reconstruction of a movie (Roelofs et al. 2023). In this work we focus on the latter problem. Data sets of dynamic sources pose additional challenges. Due to the short variability timescale, the effective uv-coverage in every frame is not sufficient for efficient snapshot imaging. Modern approaches utilize a temporal correlation instead, in a Bayesian framework (Bouman et al. 2018; Broderick et al. 2022; Roelofs et al. 2023) or as a temporal regularizer in the RML framework (Johnson et al. 2017; Bouman et al. 2018; Chael et al. 2022; Roelofs et al. 2023). Moreover, the variability of the source could be misattributed to the calibration of the gains (Event Horizon Telescope Collaboration 2022b).
Again, the multiresolution support (computed for the time-averaged image) encodes prior information that is very desirable for dynamic imaging. The support constraint encodes the information that every location of an emission spike appearing during the observation is also present in the mean image. The uv-coverage of the full observation run is the sum of the uv-coverages of the single frames. Hence, the spatial constraint also provides a powerful image prior for dynamic imaging; the multiresolution support only allows spatial scales that are present in the mean image (in the full observation run), which means that the fit in the gaps of the uv-coverage remains under control. On the other hand, the spatial constraint allows for the addition of spatial scales to single frames that might not be represented in the uv-coverage of that single frame, but are in earlier or later snapshots. However, we note that there may be a bias toward larger scales, since the mean image suppresses small-scale structures present in only part of the individual frames.
Based on the success of the approach presented in Müller & Lobanov (2022) of only changing the coefficients in the multiresolution support to introduce effective regularization, we propose the same approach for static polarimetric imaging and dynamic imaging. As outlined above, the multiresolution support is well suited as a regularizer in these problems, as it encodes exactly the prior information that is needed. As we solve two quite different extensions of standard VLBI imaging with the same approach, it is natural to apply it to the combined problem as well: a dynamic polarimetric reconstruction.
2 Theory
2.1 VLBI
As described by the van Cittert–Zernike theorem, the visibilities 𝒱 are related to the true sky brightness distribution I(x, y) by a two-dimensional Fourier transform under reasonable assumptions (Thompson et al. 2017):
𝒱(u, v) = ∬ I(x, y) e^{−2πi(ux + vy)} dx dy. (1)
From a full coverage of the Fourier coefficients (visibilities) the true sky brightness distribution could be computed by an inverse Fourier transform. However, in VLBI the uv-coverage is very sparse, with significant gaps. This makes the recovery of the image an ill-posed inverse problem. The polarized quantities are measured at every antenna with orthogonal polarimetric filters (linear or circular). The cross-correlation of these signals gives rise to the four Stokes parameters and their respective polarimetric visibilities:
𝒱_{I}(u, v) = ∬ I(x, y) e^{−2πi(ux + vy)} dx dy, (2)
𝒱_{Q}(u, v) = ∬ Q(x, y) e^{−2πi(ux + vy)} dx dy, (3)
𝒱_{U}(u, v) = ∬ U(x, y) e^{−2πi(ux + vy)} dx dy, (4)
𝒱_{V}(u, v) = ∬ V(x, y) e^{−2πi(ux + vy)} dx dy. (5)
Here I is the total brightness, Q and U describe the linear polarization, and V the circular polarization. By construction,
I^{2}(x, y) ≥ Q^{2}(x, y) + U^{2}(x, y) + V^{2}(x, y). (6)
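The polarimetric constraint above is easy to verify pixel-wise; a small sketch with hypothetical helper names (not from the paper's software):

```python
import numpy as np

def physically_consistent(I, Q, U, V):
    """Check the pixel-wise Stokes constraint I^2 >= Q^2 + U^2 + V^2."""
    return bool(np.all(I ** 2 >= Q ** 2 + U ** 2 + V ** 2))

def lin_pol_fraction(I, Q, U, eps=1e-12):
    """Fractional linear polarization m = sqrt(Q^2 + U^2) / I."""
    return np.sqrt(Q ** 2 + U ** 2) / np.maximum(I, eps)

# Toy 2x2 Stokes maps; note Q, U, V may be negative, unlike I.
I = np.array([[1.0, 0.5], [0.2, 0.0]])
Q = np.array([[0.3, 0.1], [0.0, 0.0]])
U = np.array([[0.2, 0.1], [0.1, 0.0]])
V = np.array([[0.1, 0.0], [0.0, 0.0]])
ok = physically_consistent(I, Q, U, V)
```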
2.2 Imaging
Imaging with the CLEAN algorithm and its variants (Högbom 1974; Schwab 1984; Wakker & Schwarz 1988) has been the standard in VLBI imaging for decades. In CLEAN the imaging problem is equivalently reformulated as a deconvolution problem,
I^{D} = B^{D} ∗ I, (7)
where I^{D} is called the dirty map (the inverse Fourier transform of all measured, and probably reweighted, Fourier coefficients) and B^{D} is called the dirty beam (the dirty map of a synthetic delta source). The astronomer using CLEAN determines some search windows for components; CLEAN looks for the maximum peak in the residual in this window (minor loop) and subtracts the shifted and rescaled dirty beam from the residual (major loop). This procedure is iterated until the residual is noiselike. In this way, CLEAN models the image as a set of delta functions. Finally, these components are restored with a restoring beam (clean beam) that fits the central peak of the dirty beam. CLEAN is an inverse modeling approach to the imaging problem.
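The CLEAN loop just described can be condensed into a toy implementation (a bare Högbom loop under simplifying assumptions; real implementations add weighting, windows, major cycles, and the final restoring beam):

```python
import numpy as np

def hogbom_clean(dirty_map, dirty_beam, gain=0.1, niter=500, threshold=1e-3):
    """Minimal Hogbom CLEAN loop: repeatedly locate the residual peak and
    subtract a shifted, rescaled copy of the dirty beam. The dirty beam is
    assumed to be twice the map size and peak-centred, so every shift stays
    in bounds. Returns the delta components and the final residual."""
    residual = dirty_map.copy()
    components = np.zeros_like(dirty_map)
    n = dirty_map.shape[0]
    c = dirty_beam.shape[0] // 2             # peak position of the dirty beam
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[iy, ix]
        if peak < threshold:                 # residual is noise-like: stop
            break
        components[iy, ix] += gain * peak
        residual -= gain * peak * dirty_beam[c - iy:c - iy + n, c - ix:c - ix + n]
    return components, residual

# Toy data: a Gaussian "dirty beam" and a single point source at (5, 9).
n = 16
yy, xx = np.mgrid[0:2 * n, 0:2 * n]
beam = np.exp(-((yy - n) ** 2 + (xx - n) ** 2) / (2 * 1.5 ** 2))
dirty = beam[n - 5:2 * n - 5, n - 9:2 * n - 9]   # beam copy centred on the source
comps, resid = hogbom_clean(dirty, beam, gain=0.2)
```

The loop recovers essentially the full flux in a single delta component at the true source position.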
Recently, forward-modeling approaches have gained interest in the community in the framework of RML (Akiyama et al. 2017a; Chael et al. 2018; Müller & Lobanov 2022) and Bayesian methods (Arras et al. 2019; Broderick et al. 2020a,b). These methods seem to outperform classical CLEAN in terms of speed, spatial resolution, sensitivity, and precision, in particular when the uv-coverage is sparse (e.g., Event Horizon Telescope Collaboration 2019b; Arras et al. 2021; Müller & Lobanov 2022; Roelofs et al. 2023). On the other hand, these forward-modeling methods require the fine-tuning of some hyperparameters and regularization parameters, despite recent efforts to reduce this dependence (Müller & Lobanov 2022). For the remainder of this manuscript we focus on RML methods and ignore Bayesian approaches.
In RML a sum of data fidelity terms and penalty terms is minimized:
Î = argmin_{I} [ Σ_{i} α_{i} S_{i}(I) + Σ_{j} β_{j} R_{j}(I) ], (8)
where the data fidelity terms S_{i} measure the fidelity of the recovered solution I to the observed data (i.e., polarized visibilities) and the regularization terms R_{j} measure the plausibility of the guess image I. The regularization parameters α_{i} and β_{j} are manually set weights that balance data fidelity and regularization terms. Typical choices for the data terms are chi-square values to the observed (polarimetric) visibilities, and related calibration-independent quantities such as closure phases and closure amplitudes. For the regularization terms a wide range of regularizers has been applied in the past, for example sparsity-promoting regularization (l1, l2), smoothness constraints (total variation, total squared variation), hard constraints (total flux, non-negativity), entropy maximization (MEM), and multiscale decompositions (hard thresholding on scales). The regularization terms introduce regularization to the ill-posed imaging problem. By balancing the data terms and the regularization terms, we select a possible guess solution that fits the data (small data terms) and is robust against noise and artifacts (small penalty terms). We demonstrated in previous works (Müller & Lobanov 2022) that a support constraint has the same regularization effect. By constraining the space of free parameters to the multiresolution support, we were able to refine the fit to the observed data in later imaging rounds.
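As an illustration of the RML idea, the sketch below minimizes a toy functional, a χ² data term plus a quadratic (l2) penalty, by plain gradient descent; the operator, image size, and parameter values are arbitrary choices for the example, not those used in the paper:

```python
import numpy as np

def rml_objective(x, A, y, alpha):
    """Toy RML functional: chi^2 data fidelity plus a quadratic penalty.
    A is a complex forward operator mapping image pixels to visibilities."""
    r = A @ x - y
    return np.vdot(r, r).real + alpha * np.dot(x, x)

def rml_solve(A, y, alpha, lr=0.05, niter=2000):
    """Plain gradient descent on the RML functional for a real-valued image."""
    x = np.zeros(A.shape[1])
    for _ in range(niter):
        grad = 2 * (A.conj().T @ (A @ x - y)).real + 2 * alpha * x
        x -= lr * grad
    return x

# Toy problem: 4 visibilities of an 8-pixel "image" (underdetermined fit).
n, m = 8, 4
F = np.exp(-2j * np.pi * np.outer(np.arange(m), np.arange(n)) / n) / np.sqrt(n)
sky = np.zeros(n)
sky[2], sky[6] = 1.0, 0.5
y = F @ sky
x_hat = rml_solve(F, y, alpha=0.1)
```

The penalty selects one plausible image out of the many that fit the sparse data; the balance between the two terms is set by the weight `alpha`.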
2.3 Wavelets
The basis behind multiscalar approaches is a multiscalar dictionary. In Müller & Lobanov (2022) we proposed the use of radially symmetric difference of Gaussian (DoG) wavelets, and extended them to direction-dependent basis functions in Müller & Lobanov (2023). Moreover, in Müller & Lobanov (2023) we introduced steep, quasi-orthogonal basis functions to study the Fourier domain by differences of Bessel functions (DoB). Both dictionaries (DoG and DoB) are related to each other: the DoG wavelets approximate the central peak of the DoB wavelets, but do not contain the wider sidelobes of the latter. In what follows we briefly summarize these wavelet dictionaries. For more detailed information we refer to Müller & Lobanov (2022, 2023).
Wavelets have a wide range of applications in image compression. The most widely used continuous wavelet is the Mexican-hat wavelet, which is a rescaled second-order derivative of a Gaussian (Laplacian of Gaussians; Starck et al. 2015). The difference of Gaussians offers a viable approximation to the Mexican-hat wavelet. A DoG wavelet is described by two width parameters σ_{1}, σ_{2}:
Ψ^{DoG}_{σ_{1},σ_{2}}(r) = (1 / 2πσ_{1}^{2}) e^{−r^{2}/(2σ_{1}^{2})} − (1 / 2πσ_{2}^{2}) e^{−r^{2}/(2σ_{2}^{2})}. (9)
The Fourier transforms of these DoG wavelets define ring-like filters in the Fourier domain:
Ψ̂^{DoG}_{σ_{1},σ_{2}}(q) = e^{−2π^{2}σ_{1}^{2}q^{2}} − e^{−2π^{2}σ_{2}^{2}q^{2}}, with q^{2} = u^{2} + v^{2}. (10)
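A DoG atom and its ring-like transfer function can be computed in a few lines (illustrative grid size and widths; a numerical sketch, not the paper's implementation):

```python
import numpy as np

def gaussian2d(n, sigma):
    """Unit-flux 2D Gaussian kernel on an n x n grid, centred at n // 2."""
    ax = np.arange(n) - n // 2
    X, Y = np.meshgrid(ax, ax, indexing="ij")
    g = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def dog_wavelet(n, sigma1, sigma2):
    """Difference-of-Gaussians atom: a zero-mean band-pass kernel whose
    Fourier transform is a smooth ring selecting intermediate scales."""
    return gaussian2d(n, sigma1) - gaussian2d(n, sigma2)

w = dog_wavelet(64, 2.0, 4.0)
W = np.abs(np.fft.fftshift(np.fft.fft2(w)))   # ring-shaped transfer function
```

The kernel sums to zero, so its response vanishes at zero spatial frequency and peaks on a ring between the two cutoff scales.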
The extension to DoB wavelets is natural. We replace the Gaussians by spherical Bessel kernels B_{σ} of width σ:
Ψ^{DoB}_{σ_{1},σ_{2}}(r) = B_{σ_{1}}(r) − B_{σ_{2}}(r). (11)
Moreover, the extension of both wavelets to direction-dependent basis functions is straightforward as well. The radial coordinates just need to be replaced by elliptical ones.
The wavelet decomposition is composed of the wavelet basis functions for a sequence of increasing widths σ_{0} ≤ σ_{1} ≤ … ≤ σ_{J}:
I_{j} = Ψ^{DoG}_{σ_{j},σ_{j+1}} ∗ I for j = 0, …, J − 1, and I_{J} = G_{σ_{J}} ∗ I, (12)
ΨI = [I_{0}, I_{1}, …, I_{J}], (13)
where G_{σ} denotes a Gaussian of width σ.
For direction-dependent dictionaries we use elliptical Gaussians and Bessel functions instead. For more details we refer to our discussion in Müller & Lobanov (2023). The multiscale dictionary is the adjoint of the multiscale decomposition (in what follows called Γ):
Γ[a_{0}, a_{1}, …, a_{J}] = Σ_{j=0}^{J−1} Ψ^{DoG}_{σ_{j},σ_{j+1}} ∗ a_{j} + G_{σ_{J}} ∗ a_{J}, (14)
with an analogous action for DoB wavelets and multidirectional wavelets. The complete action of the multiscalar and multidirectional wavelet decomposition is presented in the appendix.
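The telescoping structure of such a decomposition can be sketched as follows (FFT-based convolution with assumed periodic boundaries; a simplified illustration, not the DoG-HiT implementation):

```python
import numpy as np

def smooth(img, sigma):
    """Convolve with a unit-flux Gaussian via the FFT (periodic boundaries)."""
    n = img.shape[0]
    ax = np.arange(n) - n // 2
    X, Y = np.meshgrid(ax, ax, indexing="ij")
    g = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma ** 2))
    g = np.fft.ifftshift(g / g.sum())
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g)).real

def dog_decompose(img, sigmas):
    """Multiscale decomposition: band-pass scales I_j = G_{s_j}*I - G_{s_j+1}*I
    plus the coarsest smoothed image. The sum over all scales telescopes back
    to G_{s_0} * I, so the decomposition is trivially invertible."""
    smoothed = [smooth(img, s) for s in sigmas]
    scales = [smoothed[j] - smoothed[j + 1] for j in range(len(sigmas) - 1)]
    scales.append(smoothed[-1])
    return scales

rng = np.random.default_rng(0)
img = rng.random((32, 32))
scales = dog_decompose(img, [0.5, 1.0, 2.0, 4.0])
recon = np.sum(scales, axis=0)               # equals smooth(img, 0.5)
```

Summing the scales reproduces the image smoothed to the finest width, which is why varying the wavelet coefficients scale by scale leaves the total image under control.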
Fig. 1 Illustration of the wavelet decomposition and multiresolution support of an astronomical image. Left panels: true image and true image with additional Gaussian noise; middle panels: wavelet decomposition of the noisy image with the DoG wavelet dictionary computed with filter sizes σ_{0} = 1, σ_{1} = 2, σ_{2} = 4, …, σ_{5} = 32 pixels; right panels: multiresolution support computed by thresholding the wavelet scales at the scale-dependent noise, plotted as a mask with value 1 (coefficient in the multiresolution support) or 0 (coefficient not in the multiresolution support).
2.4 DoGHiT
Our novel algorithm for dynamic polarimetric reconstructions is an extension of the DoG-HiT algorithm (Müller & Lobanov 2023). We summarize this algorithmic framework in this section. DoG-HiT models the image by a radially symmetric wavelet dictionary Ψ^{DoG}. The Fourier transforms of the basis functions of the dictionary (atoms) are sensitivity filters in the Fourier domain. Hence, by fitting the widths of the Gaussians to the uv-coverage, we define wavelets that are most sensitive to measured Fourier coefficients and wavelets that are most sensitive to gaps in the uv-coverage. The signal of the former should be kept, while the lack of measurements for the latter atoms causes sidelobes in the image. In this way the dictionary allows for a better separation between measured features (covered by baselines) and uncovered artifacts. We interpolate the signal in the gaps by the smooth nature of the basis functions, but suppress the signal in the gaps to a level where overfitting is prohibited. All in all, we solve the minimization problem (Müller & Lobanov 2022):
â = argmin_{a} [ S_{cph}(Γa) + S_{lca}(Γa) + β ‖a‖_{0} + R_{flux}(Γa) ]. (15)
Here S_{cph} and S_{lca} denote the χ^{2} fits to the closure phases and closure amplitudes, respectively, and R_{flux} denotes a characteristic function on the total flux of the guess solution. We use the l^{0} pseudo-norm ‖a‖_{0} (i.e., the number of nonzero coefficients) as a sparsity-promoting regularization term, weighted with a regularization parameter β. Equation (15) is solved by a forward–backward splitting algorithm, alternated with rescaling the emission to a predefined total flux (Müller & Lobanov 2022). The final recovered solution is
Î = Γâ. (16)
The regularization parameter β is the only free parameter that needs to be chosen manually by the user. The number of free parameters is therefore much smaller than for RML methods such as ehtim (Chael et al. 2016, 2018) or SMILI (Akiyama et al. 2017b,a), since the penalty term is chosen in a data-driven way. We demonstrated in Müller & Lobanov (2022) that, although the optimization landscape is much simpler, the reconstructions obtained by DoG-HiT are competitive with RML reconstructions. Moreover, in Eq. (15) DoG-HiT only fits closure phases and closure amplitudes (i.e., the reconstruction is robust against instrumental gain corruptions). Next, we use the model computed by DoG-HiT for self-calibration (i.e., we determine the gains).
2.5 Multiresolution support
A specific property of the multiscalar decompositions is the multiresolution support. Mertens & Lobanov (2015) paved the way for the application of the multiresolution support in the analysis of active galactic nucleus (AGN) jets. The multiresolution support is a set of wavelet components that are statistically significant (Starck et al. 2015). We decompose a noisy image by a wavelet dictionary: [I_{0}, I_{1}, I_{2}, …, I_{J}] = ΨI. Moreover, we compute the scale-dependent noise level s_{j} by decomposing a Gaussian white noise field with the same wavelet dictionary. Given a threshold k_{s}, we can define a set of statistically significant wavelet coefficients with the criterion that |I_{j}(x, y)| ≥ k_{s} s_{j}, where the noise level is approximated by the variance in an emission-free region of the image scale I_{j} (i.e., far away from the center). The multiresolution support for a celestial ground truth image from the EHT imaging challenges^{1} is illustrated in Fig. 1.
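A minimal sketch of this thresholding step follows; as a simplifying assumption, the per-scale noise level is estimated here by the median absolute deviation of each scale rather than by decomposing a separate noise field:

```python
import numpy as np

def multiresolution_support(scales, k=5.0):
    """Mask of statistically significant wavelet coefficients: keep |I_j|
    above k times the per-scale noise level s_j (here a robust MAD estimate)."""
    support = []
    for s in scales:
        s_j = 1.4826 * np.median(np.abs(s - np.median(s)))   # robust noise level
        support.append(np.abs(s) > k * s_j)
    return support

# One noisy 32x32 wavelet scale with a single strongly significant coefficient.
rng = np.random.default_rng(1)
scale0 = rng.normal(0.0, 1.0, (32, 32))
scale0[5, 5] = 50.0
support = multiresolution_support([scale0], k=5.0)
```

The mask retains the genuine feature while discarding essentially all pure-noise coefficients.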
The multiresolution support encodes two different types of prior information about the model. First, it encodes a support constraint (i.e., it defines the position of significant emission spikes in the field of view). Second, the multiresolution support contains information about the spatial scales that are present in the observation. In sparse VLBI arrays, this is dominated by the uv-coverage (i.e., by which spatial scales are covered by observed baselines in the Fourier domain). As the various wavelet basis functions are most sensitive to various baselines or gaps in the uv-coverage, the information about which spatial scales are covered by observations is directly imprinted in the multiresolution support. This is especially true for the direction-dependent DoG and DoB wavelets used for DoG-HiT that were fitted to the uv-coverage (i.e., that were developed to allow an optimal separation between covered features and gaps in the uv-coverage).
DoG-HiT solves the minimization of Eq. (15) with a forward–backward splitting algorithm. The backward projection step is the application of the proximal-point operator of the l^{0} penalization function, which is a hard thresholding (Müller & Lobanov 2022). Hence, all insignificant wavelet coefficients are set to zero. DoG-HiT therefore computes an approximation of the multiresolution support as a byproduct. This support was used to refine the fit in later imaging rounds (Müller & Lobanov 2022).
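The backward step can be sketched as follows (toy quadratic data term with illustrative values; the threshold √(2·lr·β) is the standard proximal threshold for the scaled l0 penalty):

```python
import numpy as np

def hard_threshold(a, t):
    """Proximal map of the (scaled) l0 pseudo-norm: zero out small coefficients."""
    return np.where(np.abs(a) > t, a, 0.0)

def forward_backward_step(a, grad_f, lr, beta):
    """One forward-backward iteration: a gradient step on the smooth data term
    followed by hard thresholding, the backward (proximal) step for
    beta * ||a||_0 with step size lr."""
    return hard_threshold(a - lr * grad_f(a), np.sqrt(2 * lr * beta))

# Toy smooth term f(a) = 0.5 * ||a - b||^2 with a sparse-ish target b.
b = np.array([3.0, 0.1, -2.0, 0.05])
step = forward_backward_step(np.zeros(4), lambda a: a - b, lr=1.0, beta=0.5)
```

With these values the threshold is 1.0, so only the two significant coefficients survive the backward step; the surviving set is exactly an approximation of the multiresolution support.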
The computation of the multiresolution support as a byproduct of DoG-HiT highlights an essential improvement of DoG-HiT over CLEAN regarding supervision. The support of significant emission is found by DoG-HiT automatically, while in CLEAN it has to be selected through the user-defined CLEAN windows. DoG-HiT is therefore less user-biased and provides (compared to standard RML frameworks and CLEAN) an essential step toward unsupervised VLBI imaging.
3 Algorithms
We outline in this section the algorithms used for static polarimetry, dynamic Stokes I imaging, and dynamic polarimetry. In what follows we call these algorithms mr-support imaging.
3.1 Stokes I
Static Stokes I images are constructed with DoG-HiT using the five-round pipeline presented in Müller & Lobanov (2022). However, in that work we used only radially symmetric wavelets. As an extension, for this work we used the multidirectional dictionaries developed in Müller & Lobanov (2023) (i.e., we replaced the circularly symmetric Gaussians by elliptical Gaussians). Moreover, in Müller & Lobanov (2022) we used a grid search to find a proper starting point for the forward–backward splitting minimization iterations of DoG-HiT. Since the backward step in the minimization is essentially a hard thresholding, we tried different scale-dependent thresholds on an equidistant grid to minimize Eq. (15), and used the setting of the minimum as the starting point for the forward–backward iterations. For this manuscript, we used the same grid search, but applied the orthogonal DoB wavelets in the grid search, while still using the DoG wavelets in the imaging rounds of the pipeline. We do not focus on the Stokes I reconstruction in this work, as these extensions are rather straightforward and minor, and the focus of the manuscript is on an extension of DoG-HiT to polarimetry. We recall one of the main advantages of DoG-HiT: the algorithm works mainly unsupervised with a minimal set of free parameters, hence adding minimal human bias to the imaging procedure.
3.2 Polarimetry
For polarimetric reconstructions we first reconstruct a Stokes I image with DoG-HiT and solve for the gains by self-calibrating to the final output (DoG-HiT relies on calibration-independent closure quantities). As a second step, we solve for the polarimetric Stokes parameters Q, U, and V. We take the multiresolution support computed by DoG-HiT for the Stokes I imaging and constrain the space of free parameters to all wavelet coefficients in the multiresolution support. We then solve for Q, U, and V by minimizing the fit to 𝒱_{Q}, 𝒱_{U}, 𝒱_{V} with a gradient descent algorithm, but only allow coefficients in the multiresolution support to vary. In summary, we solve the following problems:
Q̂ = Γ argmin_{[Q_{j}]} S_{Q}(Γ[Q_{j}]), Û = Γ argmin_{[U_{j}]} S_{U}(Γ[U_{j}]), subject to Q_{j}(x, y) = U_{j}(x, y) = 0 whenever â_{j}(x, y) = 0, (17)
and analogously for Stokes V.
Here â_{j} are the recovered wavelet coefficients for the Stokes I image as in Sect. 2.4, and S_{Q} and S_{U} are the χ^{2} fit qualities to the Stokes Q and U visibilities. The side condition Q_{j}(x, y) = 0 whenever â_{j}(x, y) = 0 denotes the constraint that we only vary coefficients in the multiresolution support.
The multiresolution support is a well-suited regularizer here; the support constraint effectively encodes the side condition Eq. (6), which means that polarized emission is only allowed to appear at locations in the images where we found relevant emission in total intensity. While the inequality Eq. (6) holds true theoretically in any case, in practice the pathological situation could occur that, due to instrumental effects, a non-detection of Stokes I does not rule out polarimetric structures. With this caveat in mind, we assume for the rest of the manuscript that the inequality Eq. (6) holds true in observations as well. Moreover, the polarimetric visibilities have the same uv-coverage as the Stokes I visibilities. The spatial constraint of the multiresolution support describes which spatial scales are statistically significant to describe the emission in the image, which in the case of sparse VLBI arrays is dominated by the uv-coverage (i.e., which spatial scales are compressed by which baselines, and whether these baselines are measured). Hence, we already computed the multiresolution support as a byproduct of DoG-HiT to study the uv-coverage of the observation, and we get control over overfitting in the gaps of the uv-coverage by suppressing the respective atoms of the dictionary. This effective regularization can be carried over to the polarized visibilities, as the uv-coverage is the same.
Moreover, we would like to note once again that the multiresolution support is a completely data-driven property, computed as a byproduct by DoG-HiT. Hence, the reconstruction of polarimetric properties still relies on a minimal set of hyperparameters and remains largely unsupervised.
We fitted complex polarimetric visibilities directly here, which requires that a good polarization calibration is already available. However, the method is easy to adapt to more realistic situations, since it is (as opposed to CLEAN) a forward-modeling technique. First, instead of a constrained χ^{2} minimization to the complex visibilities, one could optimize the fit to the visibility-domain polarization fraction, as in Johnson et al. (2015). Second, the minimization in Eq. (17) is done iteratively, where the most important features are recovered first and gradually more detailed features are recovered at later iterations. Hence, with a philosophy similar to the way self-calibration interacts with CLEAN, we could run the minimization for some iterations and do the calibration on the current model, then continue the minimization and calibration in an alternating manner.
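A toy version of this constrained minimization, with the wavelet dictionary replaced by the identity (pixels as coefficients) for brevity, so the support mask acts directly on the image:

```python
import numpy as np

def fit_stokes_q(A, vis_q, support, lr=0.1, niter=400):
    """Constrained chi^2 minimization for a polarimetric Stokes map: gradient
    descent that only updates coefficients inside the Stokes I multiresolution
    support. Unlike Stokes I, the solution is allowed to be negative."""
    q = np.zeros(support.size)
    m = support.astype(float)
    for _ in range(niter):
        grad = 2 * (A.conj().T @ (A @ q - vis_q)).real
        q -= lr * grad * m               # projected gradient step
    return q

# Toy 1D example with a full (unitary) Fourier matrix.
n = 8
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
q_true = np.zeros(n)
q_true[2], q_true[5] = 0.6, -0.4         # signed polarized emission
support = q_true != 0                    # support inherited from Stokes I
q_hat = fit_stokes_q(F, F @ q_true, support)
```

Coefficients outside the support never move, so no spurious polarized emission appears away from the total-intensity structure.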
3.3 Dynamic Stokes I
For dynamic Stokes I imaging, we first reconstruct a static image with DoG-HiT. For this work we assume that the static image of a dynamically evolving source is a good approximation to the mean image during the time of observation. This should be true in particular if the source contains some persistent structure during the complete observing run, as could be expected for Sgr A* in EHT observations with a persistent shadow in rotating hotspot models (Tiede et al. 2020). However, depending on the dynamics of the target, it may be difficult to recover a decent fit to the data with a static image. In this work we applied a procedure inspired by the strategy in Event Horizon Telescope Collaboration (2022b); we added a systematic noise floor on every baseline to account for variability. However, we did not repeat the sophisticated noise modeling applied in Event Horizon Telescope Collaboration (2022b).
We computed the multiresolution support from the static mean image. Then we cut the observation into single frames and reconstruct an image at every frame independently. All frames together make up the dynamic movie reconstruction. However, due to the shortness of the single frames, the uv-coverage per frame is too sparse for snapshot imaging. Again, we propose using the multiresolution support instead. We minimize the χ^{2} to every single-frame observation independently with a gradient descent algorithm (using the mean image as an initial guess), but only allow coefficients in the multiresolution support to vary.
The multiresolution support is a well-suited regularizer here as well; if the static image is a good approximation to the mean image, the static image contains all the locations of emission in the field of view. If at some time an emission spike occurs at a specific location, this emission spike should be visible in the mean as well. Hence, the support constraint encodes information about the location of emission in single frames. This assumption comes with the caveat that short-lived small-scale features may not be strong enough in the mean image and may be excluded from the dynamic reconstructions due to the multiresolution support. However, we doubt that such a feature would be visible with the much sparser uv-coverage of single scans, and it would therefore not be recovered in any case. Moreover, the uv-coverage of the complete observation is the sum of the uv-coverages of the single frames. In single-frame observations there are three different categories of Fourier coefficients and/or baselines: the ones measured by observations in this single frame (very sparse), the ones that are not measured during the time of the single frame but will be measured at later (or earlier) times in the observation, and the baselines that are not measured at all due to the sparsity of the array. By doing constrained optimization (constrained by the multiresolution support) on the single-frame observation, we fit the first class of baselines, copy the solution over from the initial guess (mean image) for the second class of baselines, and suppress the last class of baselines by the multiresolution support. Hence, the spatial constraint implemented by the multiresolution support is a well-suited prior for dynamic imaging.
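The per-frame procedure can be sketched in a toy setting (identity dictionary, 1D sky, hypothetical names; real frames would use closure quantities and the wavelet dictionary):

```python
import numpy as np

def fit_frame(A_frame, vis_frame, init, mask, lr=0.1, niter=300):
    """Snapshot reconstruction sketch: start from the mean image and refit
    the (very sparse) single-frame visibilities by gradient descent on chi^2,
    varying only coefficients inside the multiresolution support mask."""
    x = init.copy()
    m = mask.astype(float)
    for _ in range(niter):
        grad = 2 * (A_frame.conj().T @ (A_frame @ x - vis_frame)).real
        x -= lr * grad * m          # coefficients outside the support stay frozen
    return x

# Toy setup: an 8-pixel 1D "sky"; this frame measures only 4 visibilities.
n = 8
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
mean_img = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.5, 0.0, 0.0])
mask = mean_img > 0                 # support taken from the mean image
truth = mean_img.copy()
truth[2] += 0.5                     # the source brightens in this frame
A_frame = F[:4]                     # sparse single-frame uv-coverage
vis = A_frame @ truth
frame = fit_frame(A_frame, vis, mean_img, mask)
```

Unmeasured structure is inherited from the mean-image initialization, while the few measured visibilities pull the supported coefficients toward the frame's true state.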
The reasonable assumption of temporal correlation between scans (e.g., through a regularizer term favoring temporal smoothness) is not used explicitly in mr-support imaging. However, this assumption can be included in the dynamic reconstruction straightforwardly: instead of fitting the visibilities with a constrained minimization approach alone, we can minimize the sum of a quality metric for the fit to the visibilities and a temporal regularization term, again varying only the coefficients in the multiresolution support. For this work, however, we restricted ourselves to reconstructions without a penalty on the temporal evolution, such that no new regularization parameters are introduced and the reconstruction remains automatic and completely data-driven. Moreover, this means that all scans can be computed in parallel, allowing for fast computations.
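The per-frame constrained minimization described above can be sketched as a masked gradient descent over wavelet coefficients. The following is a minimal illustration, assuming the forward operator (wavelet coefficients to model visibilities) is available as a matrix; `fit_frame`, its arguments, and the plain fixed-step descent are hypothetical simplifications of the actual pipeline, which uses more elaborate minimizers.

```python
import numpy as np

def fit_frame(coeffs0, support_mask, forward, vis_obs, sigma,
              n_iter=200, lr=0.05):
    """Fit one frame's visibilities by gradient descent on the wavelet
    coefficients, varying only those inside the multiresolution support.

    coeffs0      : initial wavelet coefficients (mean-image solution)
    support_mask : boolean array, True where coefficients may vary
    forward      : linear map, wavelet coefficients -> model visibilities
    vis_obs      : observed complex visibilities of this frame
    sigma        : thermal-noise standard deviations per visibility
    """
    coeffs = coeffs0.copy()
    for _ in range(n_iter):
        resid = (forward @ coeffs - vis_obs) / sigma**2
        grad = (forward.conj().T @ resid).real  # gradient of chi^2 / 2
        grad[~support_mask] = 0.0               # freeze coefficients outside support
        coeffs -= lr * grad
    return coeffs
```

Coefficients outside the support simply keep their mean-image values, which is exactly the behavior described for the unmeasured baselines.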
3.4 Dynamic polarimetry
We proposed the same procedure for polarized imaging and for dynamic Stokes I imaging: fitting the respective visibilities with a gradient descent approach while only varying coefficients in the multiresolution support computed by DoG-HiT. It is therefore natural to utilize this approach for dynamic polarimetry as well. We propose the following strategy. First, reconstruct a static Stokes I image with DoG-HiT and compute the multiresolution support. Then cut the observation into single frames and solve for dynamics and polarimetry together by fitting 𝒱_{I}, 𝒱_{Q}, 𝒱_{U}, and 𝒱_{V} jointly in every frame independently, again varying only coefficients in the multiresolution support.
4 Synthetic data tests
4.1 Synthetic observations
We tested the capabilities of mr-support imaging for polarimetric image reconstructions. We tested three different source models (a static polarized Sgr A* model, a slowly rotating crescent, and a rapidly rotating crescent) with two different arrays (the EHT and a possible ngEHT configuration). A thorough comparison of existing imaging approaches for dynamic polarimetry is in preparation and deferred to a later work. For more details we also refer to the ngEHT Analysis challenges (Roelofs et al. 2023), and in particular the upcoming third challenge^{2}, in which we competed with mr-support imaging. We review our submission to the third challenge in Sect. 4.5.
We observed the synthetic ground truth images and movies with the array of the EHT 2022 observations and added thermal noise according to the measured SEFDs of the 2017 observation campaign (Event Horizon Telescope Collaboration 2019b). We used ten-minute cycles consisting of five minutes of continuous observation with an integration time of ten seconds, followed by a gap of five minutes off-source (mimicking calibration and pointing scans). This cycle time is of special interest when discussing dynamic reconstructions, as the five-minute gaps essentially limit the temporal resolution. The data sets were scan-averaged prior to the imaging procedure.
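The observing pattern above (five minutes on-source sampled every ten seconds, then a five-minute gap) can be generated with a short helper. This is an illustrative sketch; `scan_times` and its arguments are hypothetical and not part of any package used in the paper.

```python
import numpy as np

def scan_times(t_start_h, t_end_h, on_min=5.0, gap_min=5.0, t_int_s=10.0):
    """Time stamps (in hours) for a cycle of `on_min` minutes on-source
    followed by `gap_min` minutes off-source (pointing/calibration),
    sampled every `t_int_s` seconds. The duration is assumed to contain
    a whole number of cycles (rounded otherwise)."""
    cycle_h = (on_min + gap_min) / 60.0
    n_cycles = int(round((t_end_h - t_start_h) / cycle_h))
    n_samp = int(on_min * 60.0 / t_int_s)            # samples per on-source scan
    starts = t_start_h + np.arange(n_cycles) * cycle_h
    offsets = np.arange(n_samp) * t_int_s / 3600.0
    return (starts[:, None] + offsets[None, :]).ravel()
```

For the EHT setup of this section, one hour of observation yields six cycles of 30 ten-second samples each.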
As the ngEHT configuration we took the EHT 2022 array (i.e., ALMA, APEX, GLT, IRAM 30 m, JCMT, KP, LMT, NOEMA, SMA, SMT, SPT) and added ten additional antennas from the list of Raymond et al. (2021), as was done for the ngEHT Analysis challenges (Roelofs et al. 2023): HAY (34 m), OVRO (10.4 m), GAM (15 m), BAR, BAJA, NZ, SGO, CAT, GARS, and CNI (all 6 m). We added instrumental noise according to the size of the telescopes, but did not add further calibration errors. As the ground truth we took the slowly rotating crescent model with a rotation period of one hour. As for the EHT 2022 coverage, the ground truth movie was observed with a cycle of five minutes on-source and a five-minute gap at an integration time of ten seconds (ten minutes on-source with a two-minute gap for the rapidly rotating crescent example).
As a static synthetic test image we took a synthetic Sgr A* image from the ehtim software package (Chael et al. 2018). The true image model is presented in Fig. 2. For the dynamic Stokes I imaging we used a crescent model (Tiede et al. 2022), (18)
with the following parameters: I_{0} = 0.6 Jy, s = 0.46, and r_{0} = 22 μas. To account for dynamics roughly similar to rotating hotspot models (Tiede et al. 2020), we let the crescent rotate clockwise. One rotation period takes one hour, which is roughly comparable to the flux variability timescale of the Sgr A* light curve (Wielgus et al. 2022). The synthetic ground truth movie is presented in Fig. 3. To illustrate the orientation of the crescent, we also show in Fig. 3 a green arrow from the image center to the location of the brightest pixel. For polarized movies we have to add polarization. For the sake of simplicity, we used a simpler model to test the capabilities of dynamic polarimetry: we added a constant linearly polarized structure at a 10% fraction (no circular polarization) with a rotating EVPA. To separate the dynamic polarimetric reconstruction from effects of the Stokes I imaging, the EVPA rotation is counterclockwise (the Stokes I rotation is clockwise) and has a rotation period of two hours instead of the one hour of the Stokes I structure.
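A frame of this test movie can be sketched as follows. Since Eq. (18) is not reproduced in the text above, the ring parameterization here (a Gaussian ring of radius r_0 with a cosine brightness modulation of strength s, and an assumed ring width) is a hypothetical stand-in, not the exact model of Tiede et al. (2022); the rotation senses and periods follow the description above.

```python
import numpy as np

def crescent_frame(t_h, npix=64, fov_uas=128.0, I0=0.6, s=0.46, r0=22.0,
                   width=5.0, p_frac=0.10, T_I=1.0, T_pol=2.0):
    """One frame (Stokes I, Q, U) of the rotating-crescent test movie.
    Stokes I rotates clockwise with period T_I (hours); the constant 10%
    linear polarization has an EVPA rotating counterclockwise with T_pol.
    `width` is an assumed ring width, not a parameter from the paper."""
    x = np.linspace(-fov_uas / 2, fov_uas / 2, npix)
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    pa = -2 * np.pi * t_h / T_I                 # clockwise rotation of the crescent
    ring = np.exp(-0.5 * ((r - r0) / width) ** 2)
    I = ring * (1 + s * np.cos(phi - pa))
    I *= I0 / I.sum()                           # normalize total flux to I0
    evpa = 2 * np.pi * t_h / T_pol              # counterclockwise EVPA rotation
    Q = p_frac * I * np.cos(2 * evpa)
    U = p_frac * I * np.sin(2 * evpa)
    return I, Q, U
```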
As an additional model, we also tested a rapidly rotating crescent with an orbital period of twenty minutes. We show the ground truth movie in Fig. 4. The constant EVPA pattern rotates counterclockwise in one hour. The gap between scans used for pointing and calibration limits the temporal resolution. For an array as sensitive as the ngEHT, a shorter gap might be possible. We therefore synthetically observed the rapidly rotating movie with a cycle of ten minutes of scientific observation (ten seconds of integration time) followed by a two-minute gap.
Fig. 2 Static polarimetric reconstruction with mr-support imaging on synthetic data. Left panel: static polarization ground truth; middle panel: static reconstruction with mr-support imaging; right panel: uv-coverage of the synthetic observation (EHT 2022 array). 
Fig. 3 Synthetic ground truth dynamic movie (slowly rotating crescent) in the time interval between 10 UT and 14 UT. The green arrow points from the image center to the position of the brightest pixel in the frame, illustrating the orientation of the crescent. 
Fig. 4 Ground truth movie for the rapidly rotating crescent. 
4.2 Static polarization with EHT coverage
We first fitted the scales to the uv-coverage with the procedure outlined in Müller & Lobanov (2022) and Müller & Lobanov (2023): we searched for jumps in the sorted distribution of uv-distances that exceed a threshold and selected the radial scales accordingly. We defined nine radial scales and used four different angles, resulting in 36 scales representing the uv-coverage. The Stokes I image was recovered with DoG-HiT (Müller & Lobanov 2022) using the multidirectional dictionaries introduced in Müller & Lobanov (2023), as described in Sect. 3.1. We then computed the multiresolution support, as presented in Sect. 3.2; it is shown in Fig. 5. Some scales that are most sensitive to gaps in the uv-coverage are suppressed completely, while other scales encode various parts of the emission structure, for example the ring-like emission (scales 34 and 35), the extended emission structure (scales 30 and 32), the fine crescent structure (e.g., scales 4, 7, 9, 14, and 24), or the bright spot to the left of the crescent (e.g., scales 0, 2, and 10). The minimization of the fit to the polarized visibilities was done with the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Byrd et al. 1995), as implemented in SciPy (Jones et al. 2001). To help achieve global convergence, we blurred the Stokes Q and U images of the reconstruction to the nominal resolution and redid the minimization with a gradient descent procedure.
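The jump-search over sorted uv-distances can be illustrated with a few lines of code. This is a hypothetical re-implementation of the heuristic; the exact threshold criterion of Müller & Lobanov (2022, 2023) may differ in detail, and `radial_scales` is not a function from their pipeline.

```python
import numpy as np

def radial_scales(u, v, jump_factor=2.0):
    """Pick radial wavelet scales from gaps in the sampled uv-radii:
    sort the uv-distances and start a new scale wherever consecutive
    radii jump by more than `jump_factor` (an assumed threshold)."""
    q = np.sort(np.hypot(u, v))
    q = q[q > 0]
    scales = [q[0]]
    for a, b in zip(q[:-1], q[1:]):
        if b / a > jump_factor:   # gap in the coverage -> new scale
            scales.append(b)
    return np.asarray(scales)
```

Scales chosen this way are matched to the actually sampled spatial frequencies, which is what makes the subsequent support-based regularization data-driven.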
We show the final reconstruction result in Fig. 2. The reconstruction of the Stokes I image is relatively successful. The crescent-like shadow image is overall well recovered. However, some finer structures are not recovered by DoG-HiT: the closing of the ring by a thin line toward the right and the fainter structure inside the ring. The linearly polarized emission is overall very well recovered. The total fraction of linearly polarized light and the overall north-south direction of the electric vector position angles (EVPAs) are well recovered. The synthetic ground truth image contains more complex local structures, for example a rotation of the EVPA toward the east-west direction in the bottom left of the image. This shift is partly visible in the recovered image as well, although the amount of rotation is smaller.
All in all, this example demonstrates that even for a very challenging and sparse array such as the EHT 2022 array, the polarimetric reconstruction with mr-support imaging is quite successful, both in the overall structure and in the reconstruction of more localized polarimetric structures with a size of ≈5 μas. Thus, similar to the DoG-HiT reconstruction of the Stokes I image, mr-support polarimetry seems to offer mild superresolution. Interestingly, superresolution and a good fit to the polarized visibilities are achieved without introducing artifacts in the image. This demonstrates the power of the regularization approach.
Fig. 5 Multiresolution support for the reconstruction of the static polarization example with EHT coverage. 
Fig. 6 Solution recovered with mr-support imaging for the slowly rotating crescent observed with the EHT. 
4.3 Dynamic Stokes I
The synthetic slowly rotating crescent movie was observed as described in Sect. 4.1 with a ten-minute cycle and EHT coverage. In line with this temporal resolution, we cut the observation into frames of ten minutes in length for the dynamic reconstruction. The reconstruction was then done with the mr-support approach in the best time window t ∈ [10 UT, 14 UT] (Farah et al. 2022), as outlined in Sect. 3.3. As a first step we fitted a symmetric ring model to the data and created a mean image with DoG-HiT using the fitted ring model as an initial guess; we then solved sequentially for every frame by mr-support imaging with the support calculated from the mean. As an initial guess for the single-frame imaging we used the reconstruction of the preceding frame (or the mean image in the case of the first frame).
We present the reconstruction results in Fig. 6. The single frames all show a circular structure with a radius of ≈22 μas. Moreover, nearly all frames show the asymmetry of a crescent, although it is less prominent than in the true image. As for the true dynamic movie, we illustrate the orientation of the crescent by an arrow from the center to the brightest pixel in the reconstruction. Following the orientations of the recovered crescents in Fig. 6, a clear rotation with an orbital period of one hour is visible. The orientation of the recovered crescent matches the synthetic ground truth in most frames, with some notable exceptions at 11 UT (no asymmetry recovered at all) and from 13.16 UT to 13.5 UT (incorrect orientations). The latter in particular could be a consequence of taking the reconstruction of the preceding frame as the initial guess for the next frame; the false recovery at 13.16 UT then also affects all following frames.
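The orientation measure used in these comparisons (the vector from the image center to the brightest pixel) is simple to compute. The sketch below assumes a square image with the origin at the grid center; `crescent_position_angle` is an illustrative helper, not part of the actual pipeline.

```python
import numpy as np

def crescent_position_angle(image, fov_uas):
    """Orientation of a crescent, measured as in Figs. 3 and 6: the angle
    (degrees, counterclockwise from the +x axis) of the vector from the
    image center to the brightest pixel. Rows are treated as the y axis."""
    npix = image.shape[0]
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    pix = fov_uas / npix                    # pixel size in microarcseconds
    dx = (ix - (npix - 1) / 2) * pix
    dy = (iy - (npix - 1) / 2) * pix
    return np.degrees(np.arctan2(dy, dx))
```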
We present in Fig. 7 the reconstruction result for the slowly rotating crescent with ngEHT coverage. The reconstruction of the crescent is excellent at every frame, with high-contrast images. The single-frame images do not show additional image artifacts. Although the additional ngEHT antennas have rather high thermal noise levels, the much improved density of the array effectively stabilizes the reconstruction against thermal noise. Strikingly, the orientation of the crescents matches the ground truth very well. We present in Fig. 8 a comparison between the true position angles and the recovered ones, with error bars reflecting the temporal smearing due to the scan length.
The ngEHT array is much denser than the EHT configuration of 2022, which improves the achievable temporal resolution. We therefore also studied the possibility of observing faster rotating structures at the event horizon with the rapidly rotating crescent model. The dynamic reconstruction was done in this case with frames of three minutes in length. The faster orbital period and the shorter frame length complicate the reconstruction procedure: there are fewer observation points per frame, which aggravates the problem of sparsity. Moreover, due to the shorter dynamical timescale and the smaller number of observing points per frame, the scan-averaged visibility points have a worse signal-to-noise ratio than for the more slowly rotating crescent. The reconstruction results for dynamic Stokes I imaging with mr-support imaging are shown in Fig. 9. The crescent is recovered at every frame. Additionally, the overall orientation matches quite well. However, the quality of the reconstruction decreases compared to the slowly rotating crescent, as can be expected: the asymmetry of the crescents is less clear, and the orientation is off by roughly 15 degrees in some frames.
All in all, we observe that mr-support imaging recovers the correct image structure very well, including the overall shadow feature, crescent asymmetry, and orientation, for most frames in the observation. Again, we note that these reconstructions do not introduce image artifacts despite the sparsity of the uv-coverage, especially in single-frame observations. This, once again, demonstrates the regularizing property of the mr-support approach.
Fig. 8 True position angle (blue) and position angles recovered with mr-support imaging for the EHT configuration (red) and the ngEHT configuration (green) for the slowly rotating crescent model. The error bars reflect the change in position angle of the true source model within a ten-minute scan (the cycle length of the synthetic observation). 
Fig. 9 Reconstruction of the rapidly rotating crescent observed with the ngEHT. 
4.4 Dynamic polarimetry
As outlined in Sect. 3.4, we did the dynamic reconstruction of the Stokes I channel first. Hence, we copied the reconstructions from Sect. 4.3 and then added polarization frame by frame by mr-support imaging. Similar to the procedure presented in Sect. 4.2, we first minimized the data terms (the fit to the polarized visibilities) with an L-BFGS minimization, blurred the reconstructed polarized images to the nominal resolution, and re-minimized the fit with a gradient descent procedure starting from the blurred image as an initial guess.
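The blur-then-refit step can be sketched generically: restart the minimization from a beam-smoothed image to escape spurious small-scale local minima, then descend again on the data term. The function below is a hypothetical simplification (fixed-step descent, an abstract `grad_chi2` callable), not the actual MrBeam implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refit_after_blur(img, grad_chi2, beam_sigma_pix, n_iter=100, lr=0.1):
    """Second stage of the polarimetric fit: blur the current Stokes Q/U
    solution to the nominal resolution, then re-minimize the data term by
    plain gradient descent. `grad_chi2` maps an image to the gradient of
    the chi^2 data term with respect to the image."""
    smooth = gaussian_filter(img, beam_sigma_pix)  # restart from blurred image
    for _ in range(n_iter):
        smooth -= lr * grad_chi2(smooth)
    return smooth
```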
The reconstruction results in the time window t ∈ [10 UT, 11 UT] are presented in Fig. 10 for the slowly rotating crescent model with EHT coverage. The relatively simple polarized structure is well recovered in each frame. While the recovered images show some local variation from the overall orientation, the larger-scale EVPA orientation matches for all frames. The fraction of linearly polarized light is recovered surprisingly well. Again, despite some local variations in the recovered EVPA, the challenging reconstruction does not show image artifacts.
In Fig. 11 we present the reconstruction of the slowly rotating crescent observed with the ngEHT. The quality of the reconstruction improved compared to the reconstructions presented in Fig. 10. The global orientation of the EVPAs is well recovered for every frame. In the reconstructions with the EHT configuration we also observed some local variations from the overall polarimetric structure. These can no longer be observed in the reconstructions with ngEHT coverage.
We present the dynamic polarimetric reconstruction with mr-support imaging of the rapidly rotating crescent in Fig. 12. The reconstruction of the polarimetric structure (i.e., the rotation of the EVPAs) remains excellent. These results suggest that mr-support imaging can handle dynamic polarimetric structural features at the event horizon on realistic dynamic timescales.
Fig. 10 True (upper panels) and recovered (lower panels) test images with full Stokes polarization for the slowly rotating crescent. The mr-support imaging approach recovers the true large-scale orientation of the EVPA. 
4.5 ngEHT analysis challenge
In addition to the rather simple synthetic data tests presented in the previous subsections, we show here the reconstructions by mr-support imaging for the third ngEHT Analysis challenge^{3}. The ngEHT Analysis challenges are a series of semi-blind data challenges to evaluate the performance of algorithms for the planned ngEHT instrument (Roelofs et al. 2023). The ngEHT is a planned instrument to recover (polarimetric) movies at event horizon scales (Doeleman et al. 2019).
The ground truth movies produced for the ngEHT Analysis challenge represent the current theoretical state of the art in simulations (Roelofs et al. 2023; Chatterjee et al. 2023). Here we present the reconstructions of a RIAF model of Sgr A* (Broderick et al. 2016) with a shearing hotspot (Tiede et al. 2020), with hotspot parameters inspired by GRAVITY Collaboration (2018a). The data sets were observed with the EHT array and with the same ngEHT array that we used for the geometric data sets. In contrast to the proof of concept with geometric models, the ngEHT challenge data contain the full set of data corruptions that may be expected from real observations (Roelofs et al. 2023), simulated with the SYMBA package (Roelofs et al. 2020), including atmospheric turbulence, atmospheric opacity, pointing offsets, a scattering screen, and thermal noise specific to each antenna. However, no polarization leakage was added to the data. For more information we refer to Roelofs et al. (2023) and the challenge website^{4}. The data sets were network-calibrated, as is standard in EHT data processing (Event Horizon Telescope Collaboration 2022b). The ngEHT Analysis challenge is particularly well suited as a verification data set since the challenge was done blindly; neither the source files nor the specific data corruptions were made public to the analysis teams.
We show the ground truth movie in Fig. 13. A static (but not descattered) image was recovered by DoG-HiT with a systematic error budget of 2%. The static image is computed by DoG-HiT in a completely unsupervised way from closure quantities. We used this calibration-independent model to calibrate the data set on long time intervals (1 h). Next, we calculated the multiresolution support and cut the observation into frames of six minutes. The dynamic reconstruction was done with mr-support imaging; we self-calibrated the data set in every single observing frame during the procedure. Then we added polarization in every frame.
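The sequence of steps above can be summarized in a high-level sketch. Every callable here is a hypothetical stand-in for the corresponding pipeline stage (DoG-HiT imaging from closures, support computation, self-calibration, frame splitting, mr-support fitting, polarimetric fitting), not the actual MrBeam API.

```python
def challenge_pipeline(obs, dog_hit, mr_support, split, selfcal,
                       fit_frame, fit_pol):
    """Sketch of the challenge reconstruction: static image -> support ->
    long-interval calibration -> per-frame self-calibration and
    mr-support fitting -> per-frame polarization."""
    mean_img = dog_hit(obs)                          # static image from closures
    support = mr_support(mean_img)                   # multiresolution support
    obs = selfcal(obs, mean_img, solint_h=1.0)       # calibrate on 1 h intervals
    movie = []
    for frame_obs in split(obs, frame_min=6.0):      # six-minute frames
        frame_obs = selfcal(frame_obs, mean_img)     # per-frame self-calibration
        img = fit_frame(frame_obs, support)          # mr-support Stokes I fit
        movie.append(fit_pol(frame_obs, img, support))  # add polarization
    return movie
```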
The recovered movie is presented in Fig. 14. Moreover, we show magnified panels of selected frames in Fig. 15. The single frames all show a ring-like structure with a central depression. Compared to the ground truth frames, the reconstructed images have lower quality due to the rapid variability, the systematics, and the sparse coverage. Moreover, an interstellar scattering screen was added to the data that was not removed during the imaging procedure. The reconstruction of the shearing hotspot motion is more challenging. We recover an approaching hotspot to the right of the ring at UT 11.3 (upper panels in Fig. 15), an extended (polarized) tail to the northwest (top right) from UT 11.3 until UT 11.6 (middle panels in Fig. 15), and a clearly visible arc of larger intensity within the ring to the southeast (bottom left) from UT 11.7 until UT 11.9 (bottom panels in Fig. 15). These features are consistent with the hotspot motion of the ground truth movie. While we recover some motion related to the hotspot, a continuously evolving movie was not recovered. This is a result of the rather poor simulated weather conditions and the observation cadence of the third challenge: the source was (synthetically) observed for ten minutes followed by a gap of ten minutes. While mr-support imaging sufficiently recovers some (scattered) hotspot-related features in the frames that have observed visibilities, the algorithm does not contain an interpolation scheme for the scans without observations; it simply keeps the starting point (i.e., the preceding frame). Hence, we do not recover a continuously evolving movie, and several frames (e.g., UT 11.5 and UT 11.6, or UT 11.7 until the end) show the same image structure.
The synthetic ground truth polarization is less dynamic and hence easier to recover. We recover the overall radially conic EVPA pattern in every frame with minor small-scale perturbations from the ground truth (which may also be related to the differences in the Stokes I images). Moreover, the recovered polarization fraction matches the true one. As a more detailed feature, we successfully recover a larger fractional polarization for the shearing hotspot that follows the hotspot motion.
The presented data set mimics one of the most challenging VLBI data analysis problems so far, with various data corruptions, high frequencies (i.e., phase instabilities), fast dynamics and polarimetric structures, the need for superresolution, and a sparse VLBI uv-coverage. As expected, the reconstruction quality with mr-support imaging is degraded compared to the rather simple geometric data tests discussed above. However, the application already highlights the potential of mr-support imaging to do unsupervised, superresolving, dynamic, and polarimetric imaging together. This is currently a unique capability in the landscape of existing imaging algorithms, and in particular a domain in which CLEAN remains limited due to its lack of resolution, its heavy demand for human supervision and calibration, and its lack of support for dynamic reconstructions.
Fig. 12 Polarimetric reconstruction of the rapidly rotating crescent with ngEHT coverage. 
Fig. 13 Synthetic ground truth movie of Sgr A* used for the third ngEHT Analysis challenge. The model is a RIAF model with a semi-analytic shearing hotspot. 
Fig. 14 Reconstruction of the movie plotted in Fig. 13 with mrsupport imaging for the third ngEHT Analysis challenge. 
Fig. 15 Selected frames of reconstructions shown in Figs. 13 and 14 at times UT 11.3 (upper panels), UT 11.5 (middle panels), and UT 11.7 (lower panels). 
5 Conclusions and outlook
We presented in this manuscript a novel algorithmic approach to static polarimetry, dynamic imaging, and finally dynamic polarimetry. The approach is based on our previous works on multiscalar imaging (Müller & Lobanov 2022, 2023), and on the multiresolution support in particular. The multiresolution support encodes important information about the emission structure on the one hand (which spatial scales are present in the image and where) and about the uv-coverage on the other hand (which of these spatial scales are measured by baselines). Hence, the multiresolution support is well suited to introduce regularization for challenging extensions of the standard VLBI imaging problem in the spirit of constrained minimization: we optimize the fit to the respective data terms (the chi-squared to frame-by-frame visibilities or to polarized visibilities), but vary only the wavelet coefficients in the multiresolution support.
We demonstrated the power of this approach with applications to simple geometric synthetic observations. The mr-support constraint suppressed the introduction of image artifacts, and hence provided ample regularization. Moreover, the approach is flexible enough to allow for the reconstruction of both dynamically evolving structures and polarimetric structures. Furthermore, the blind application to the more complex movies of the third ngEHT Analysis challenge demonstrated that the algorithm also provides reasonable reconstructions in the presence of realistic data corruptions in one of the most challenging VLBI imaging problems, although the quality of the reconstruction is degraded.
Mr-support imaging shares the basic advantage of multiscalar approaches that are fitted to the uv-coverage. The static reconstructions are done with DoG-HiT, which is completely data-driven and largely automatic without many hyperparameters (Müller & Lobanov 2022). The same applies to the extension to dynamic and polarimetric quantities. No further specific regularization terms (with corresponding weights) were introduced; rather, the reconstruction was again regularized by the data-driven multiresolution support, which is determined by the uv-coverage and the baseline noise. Hence, mr-support imaging is blind and unbiased as well. However, we recognized an important bottleneck for the dynamic reconstructions with mr-support imaging: the static average image needs to approximate the true time-averaged image quite well.
An extension to RML approaches to dynamic imaging (i.e., the addition of temporal regularizers) is straightforward as well. We note that, due to the lack of regularization parameters controlling the temporal correlation, mr-support imaging essentially calculates images with rich structures from the extreme sparsity of a single scan, independently of preceding and subsequent scans. This indicates that the multiresolution support is rather strong prior information that, once a reasonable static model is established, allows for the handling of extreme sparsity in the data.
The geometric test observations used throughout this study are rather simple. First, we neglected circular polarization for the sake of simplicity. We note that we only added thermal noise to the observations, and no phase or amplitude errors. This does not affect the reconstruction of the static Stokes I image (neither for a static source nor for a dynamically evolving source) since DoG-HiT uses only closure quantities as data terms (Müller & Lobanov 2022). However, phase and amplitude calibration errors could affect the subsequent mr-support imaging rounds, since for every frame the (polarized) visibilities are used instead of the closure quantities. Hence, we assume that it is possible to solve for the (polarized) self-calibration with the time-averaged mean image. This does not necessarily have to be true, but it might be a good approximation when the dynamic timescale of the source and the timescale of the gain variability are different, allowing a gain self-calibration with the mean image (compare, e.g., Wielgus et al. 2022; Event Horizon Telescope Collaboration 2022b).
Moreover, while a rotating crescent movie might be a good first approximation to a rotating hotspot model, it is only a rough approximation to the range of models for the dynamics at the horizon scale. The same applies to the rather simple polarization model used. We therefore also tested the algorithm in the blind third ngEHT Analysis challenge. Due to the systematic errors added to the synthetic data, the reconstructions are worse than in the previous data tests; however, mr-support imaging is, for the first time, able to recover superresolved polarized movies in an unsupervised way. This is a unique capability among all currently existing VLBI imaging algorithms. Furthermore, we expect significant improvements from including a temporal regularizer in the dynamic imaging and from more sophisticated strategies for the static image reconstruction, in particular from frameworks that have already demonstrated the ability to recover fast dynamics, such as ehtim or StarWarps.
Finally, the application to the same ground truth movie observed with a possible ngEHT array configuration demonstrates the improvements that the ngEHT project will bring to dynamic reconstructions. The quality of the fits to the Stokes I and polarimetric properties improves. With an ngEHT configuration it is even possible to recover structural patterns on dynamic timescales of ~10-20 min, and therefore on timescales that can be expected from real observations.
Acknowledgements
We thank the team of the ngEHT Analysis challenges, led by Freek Roelofs, Lindy Blackburn, and Greg Lindahl, for the chance to use and publish their synthetic data set in this work. Special thanks go in particular to Paul Tiede for providing the RIAF-spot model of Sgr A*. HM received financial support for this research from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. This work was partially supported by the M2FINDERS project funded by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 101018682). Our imaging pipeline and our software are available online as the MrBeam software tool (https://github.com/hmuellergoe/mrbeam). Our software makes use of the publicly available ehtim (Chael et al. 2018), regpy (https://github.com/regpy/regpy), and WISE (Mertens & Lobanov 2015) software packages.
Appendix A Wavelet dictionaries
This section is adapted from Müller & Lobanov (2023). The dictionaries used in this paper are as follows:
Here denotes a radial Gaussian function with a standard deviation σ and an elliptical Gaussian with major semiaxis σ_{1}, minor semiaxis σ_{2}, and angle α. The DoB dictionary is composed in the same way by replacing Gaussians with spherical Bessel functions,
with radial spherical Bessel function and elliptical Bessel function .
References
 Akiyama, K., Ikeda, S., Pleau, M., et al. 2017a, AJ, 153, 159
 Akiyama, K., Kuramochi, K., Ikeda, S., et al. 2017b, ApJ, 838, 1
 Arras, P., Frank, P., Leike, R., Westermann, R., & Enßlin, T. A. 2019, A&A, 627, A134
 Arras, P., Bester, H. L., Perley, R. A., et al. 2021, A&A, 646, A84
 Bhatnagar, S., & Cornwell, T. J. 2004, A&A, 426, 747
 Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433
 Bouman, K. L., Johnson, M. D., Dalca, A. V., et al. 2018, IEEE Trans. Comput. Imaging, 4, 512
 Bower, G. C., Markoff, S., Dexter, J., et al. 2015, ApJ, 802, 69
 Broderick, A. E., Fish, V. L., Johnson, M. D., et al. 2016, ApJ, 820, 137
 Broderick, A. E., Gold, R., Karami, M., et al. 2020a, ApJ, 897, 139
 Broderick, A. E., Pesce, D. W., Tiede, P., Pu, H.-Y., & Gold, R. 2020b, ApJ, 898, 9
 Broderick, A. E., Gold, R., Georgiev, B., et al. 2022, ApJ, 930, 1
 Byrd, R. H., Lu, P., & Nocedal, J. 1995, SIAM J. Sci. Stat. Comput., 16, 1190
 Chael, A. A., Johnson, M. D., Narayan, R., et al. 2016, ApJ, 829, 11
 Chael, A. A., Johnson, M. D., Bouman, K. L., et al. 2018, ApJ, 857, 23
 Chael, A., Chan, C.-K., Klbouman, et al. 2022, https://doi.org/10.5281/zenodo.6519440
 Chatterjee, K., Chael, A., Tiede, P., et al. 2023, Galaxies, 11, 12
 Clark, B. G. 1980, A&A, 89, 377
 Cornwell, T. J. 2008, IEEE J. Sel. Topics Signal Process., 2, 793
 Doeleman, S., Blackburn, L., Dexter, J., et al. 2019, BAAS, 51, 256
 Emami, R., Tiede, P., Doeleman, S. S., et al. 2023, Galaxies, 11, 12
 Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2019a, ApJ, 875, L1
 Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2019b, ApJ, 875, L4
 Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2021a, ApJ, 910, 43
 Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2021b, ApJ, 910, 48
 Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2022a, ApJ, 930, L12
 Event Horizon Telescope Collaboration (Akiyama, K., et al.) 2022b, ApJ, 930, L14
 Farah, J., Galison, P., Akiyama, K., et al. 2022, ApJ, 930, 1
 Garsden, H., Girard, J. N., Starck, J. L., et al. 2015, A&A, 575, A90
 Gómez, J. L., Roca-Sogorb, M., Agudo, I., Marscher, A. P., & Jorstad, S. G. 2011, ApJ, 733, 11
 Gómez, J. L., Lobanov, A. P., Bruni, G., et al. 2016, ApJ, 817, 96
 GRAVITY Collaboration (Abuter, R., et al.) 2018a, A&A, 615, L15
 GRAVITY Collaboration (Abuter, R., et al.) 2018b, A&A, 618, L10
 Hardee, P., Mizuno, Y., & Nishikawa, K.-I. 2007, Ap&SS, 311, 281
 Högbom, J. A. 1974, A&AS, 15, 417
 Hovatta, T., Lister, M. L., Aller, M. F., et al. 2012, AJ, 144, 105
 Ikeda, S., Tazaki, F., Akiyama, K., Hada, K., & Honma, M. 2016, PASJ, 68, 45
 Johnson, M. D., Fish, V. L., Doeleman, S. S., et al. 2015, Science, 350, 1242
 Johnson, M. D., Bouman, K. L., Blackburn, L., et al. 2017, ApJ, 850, 172
 Johnson, M. D., Akiyama, K., Blackburn, L., et al. 2023, Galaxies, 11, 61
 Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open Source Scientific Tools for Python [Online; accessed 2015-08-25]
 Kramer, J. A., & MacDonald, N. R. 2021, A&A, 656, A143
 Mertens, F., & Lobanov, A. 2015, A&A, 574, A67
 Müller, H., & Lobanov, A. P. 2022, A&A, 666, A137
 Müller, H., & Lobanov, A. 2023, A&A, 672, A26
 Narayan, R., & Nityananda, R. 1986, ARA&A, 24, 127
 Palumbo, D. C. M., Doeleman, S. S., Johnson, M. D., Bouman, K. L., & Chael, A. A. 2019, ApJ, 881, 62
 Pötzl, F. M., Lobanov, A. P., Ros, E., et al. 2021, A&A, 648, A82 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Rau, U., & Cornwell, T. J. 2011, A&A, 532, A71 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Raymond, A. W., Palumbo, D., Paine, S. N., et al. 2021, ApJS, 253, 5 [Google Scholar]
 Ricci, L., Boccardi, B., Nokhrina, E., et al. 2022, A&A, 664, A166 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Roelofs, F., Janssen, M., Natarajan, I., et al. 2020, A&A, 636, A5 [EDP Sciences] [Google Scholar]
 Roelofs, F., Blackburn, L., Lindahl, G., et al. 2023, Galaxies, 11, 12 [NASA ADS] [CrossRef] [Google Scholar]
 Schwab, F. R. 1984, AJ, 89, 1076 [NASA ADS] [CrossRef] [Google Scholar]
 Starck, J.L., Murtagh, F., & Fadili, J. 2015, Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis, 2nd edn (Cambridge: Cambridge University Press), 1 [Google Scholar]
 Thompson, A. R., Moran, J. M., & Swenson, J. George, W., 2017, Interferometry and Synthesis in Radio Astronomy, 3rd edn. (Berlin: Springer) [CrossRef] [Google Scholar]
 Tiede, P., Pu, H.Y., Broderick, A. E., et al. 2020, ApJ, 892, 132 [NASA ADS] [CrossRef] [Google Scholar]
 Tiede, P., Broderick, A. E., & Palumbo, D. C. M. 2022, ApJ, 925, 122 [CrossRef] [Google Scholar]
 Wakker, B. P., & Schwarz, U. J. 1988, A&A, 200, 312 [NASA ADS] [Google Scholar]
 Wiaux, Y., Jacques, L., Puy, G., Scaife, A. M. M., & Vandergheynst, P. 2009, MNRAS, 395, 1733 [NASA ADS] [CrossRef] [Google Scholar]
 Wielgus, M., Marchili, N., MartíVidal, I., et al. 2022, ApJ, 930, L19 [NASA ADS] [CrossRef] [Google Scholar]
 Zamaninasab, M., ClausenBrown, E., Savolainen, T., & Tchekhovskoy, A. 2014, Nature, 510, 126 [Google Scholar]
All Figures
Fig. 1 Illustration of the wavelet decomposition and multiresolution support of an astronomical image. Left panels: true image and true image with additional Gaussian noise; middle panels: wavelet decomposition of the noisy image with the DoG wavelet dictionary computed with filter sizes σ_{0} = 1, σ_{1} = 2, σ_{2} = 4, …, σ_{5} = 32 pixels; right panels: multiresolution support computed by thresholding the wavelet scales against the scale-dependent noise, plotted as a mask with value 1 (coefficient in the support) or 0 (coefficient not in the support).
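The decomposition and thresholding described in the caption can be sketched as follows. This is a minimal illustration, not the DoG-HiT implementation: the function names and the MAD-based estimate of the scale-dependent noise are our own illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_decompose(img, sigmas=(1, 2, 4, 8, 16, 32)):
    """Split an image into difference-of-Gaussians (DoG) sub-bands.

    Returns one band per scale plus the coarsest smoothed residual;
    summing all bands recovers the input image exactly (telescoping sum).
    """
    smoothed = [gaussian_filter(img, s) for s in sigmas]
    bands = [img - smoothed[0]]
    bands += [smoothed[i] - smoothed[i + 1] for i in range(len(sigmas) - 1)]
    bands.append(smoothed[-1])
    return bands

def mr_support(bands, k=3.0):
    """Binary multiresolution support: flag coefficients whose magnitude
    exceeds k times a per-band noise level (here estimated from the
    median absolute deviation of each sub-band, an illustrative choice)."""
    masks = []
    for band in bands:
        noise = 1.4826 * np.median(np.abs(band - np.median(band)))
        masks.append(np.abs(band) >= k * noise)
    return masks
```

The exact reconstruction property (the bands sum back to the image) is what allows the support to act as a mask on the coefficients without losing flux.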
Fig. 2 Static polarimetric reconstruction with mr-support imaging on synthetic data. Left panel: static polarization ground truth; middle panel: static reconstruction with mr-support imaging; right panel: uv-coverage of the synthetic observation (EHT 2022 array).
Fig. 3 Synthetic ground truth movie (slowly rotating crescent) in the time interval between 10 UT and 14 UT. The green arrow extends from the image center to the position of the brightest pixel in the frame, illustrating the orientation of the crescent.
Fig. 4 Ground truth movie for the fast-rotating crescent.
Fig. 5 Multiresolution support for the reconstruction of the static polarimetric example with EHT coverage.
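The abstract describes how a support such as the one in Fig. 5 is used: the single-frame observables are fitted by varying only the wavelet coefficients inside the multiresolution support, with all other coefficients held fixed. A schematic sketch of such a constrained minimization is given below; the chi-square callable, array layout, and use of SciPy's L-BFGS-B are illustrative assumptions, not the actual DoG-HiT solver.

```python
import numpy as np
from scipy.optimize import minimize

def fit_in_support(coeffs0, masks, chi2_of_coeffs):
    """Minimise a data-fidelity term by varying only the wavelet
    coefficients flagged by the multiresolution support (masks).

    coeffs0        : list of per-scale coefficient arrays (initial guess)
    masks          : list of boolean arrays of the same shapes
    chi2_of_coeffs : callable taking the flattened coefficient vector
                     and returning the (hypothetical) chi-square value
    Returns the flattened coefficient vector after the fit; entries
    outside the support keep their initial values.
    """
    flat_masks = np.concatenate([m.ravel() for m in masks])
    flat0 = np.concatenate([c.ravel() for c in coeffs0])
    free0 = flat0[flat_masks]           # only these entries are varied

    def objective(free):
        full = flat0.copy()
        full[flat_masks] = free
        return chi2_of_coeffs(full)

    res = minimize(objective, free0, method="L-BFGS-B")
    full = flat0.copy()
    full[flat_masks] = res.x
    return full
```

Freezing the coefficients outside the support is what provides the regularization: the number of free parameters is set by the data-driven support rather than by the full pixel grid.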
Fig. 6 Solution recovered with mr-support imaging for the slowly rotating crescent observed with the EHT.
Fig. 7 Same as Fig. 6, but for the slowly rotating crescent observed with the ngEHT.
Fig. 8 True position angle (blue) and position angles recovered with mr-support imaging for the EHT (red) and ngEHT (green) configurations for the slowly rotating crescent model. The error bars reflect the change in position angle of the true source model within a ten-minute scan (the cycle length of the synthetic observation).
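The position angles compared in Fig. 8 track the direction from the image center to the brightest pixel of each frame, as indicated by the arrow in Fig. 3. A minimal illustrative computation is shown below; the axis and sign conventions are assumptions for the sketch, not taken from the paper's analysis code.

```python
import numpy as np

def crescent_position_angle(frame):
    """Position angle (degrees) of the brightest pixel measured from the
    frame center, a simple proxy for the crescent orientation.

    Conventions here are illustrative: rows increase 'downward' from the
    reference direction (angle 0), columns increase toward angle 90.
    """
    ny, nx = frame.shape
    iy, ix = np.unravel_index(np.argmax(frame), frame.shape)
    dx = ix - (nx - 1) / 2.0    # column offset from center
    dy = iy - (ny - 1) / 2.0    # row offset from center
    return np.degrees(np.arctan2(dx, dy)) % 360.0
```

Comparing this angle frame by frame against the ground truth gives the curves of Fig. 8, with the scan length setting the intrinsic uncertainty of the comparison.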
Fig. 9 Reconstruction of the rapidly rotating crescent observed with the ngEHT.
Fig. 10 True (upper panels) and recovered (lower panels) test images with full Stokes polarization for the slowly rotating crescent. The mr-support imaging approach recovers the true large-scale orientation of the EVPA.
Fig. 11 Same as Fig. 10, but for the slowly rotating crescent observed with the ngEHT.
Fig. 12 Polarimetric reconstruction of the fast-rotating crescent with ngEHT coverage.
Fig. 13 Synthetic ground truth movie of Sgr A* used for the third ngEHT Analysis challenge. The model is a RIAF model with a semi-analytic shearing hotspot.
Fig. 14 Reconstruction of the movie shown in Fig. 13 with mr-support imaging for the third ngEHT Analysis challenge.
Fig. 15 Selected frames of the movies shown in Figs. 13 and 14 at times UT 11.3 (upper panels), UT 11.5 (middle panels), and UT 11.7 (lower panels).