Using multiobjective optimization to reconstruct interferometric data (II): polarimetry and time dynamics

Context. In Very Long Baseline Interferometry (VLBI), signals from multiple antennas combine to create a sparsely sampled virtual aperture, its effective diameter determined by the largest antenna separation. The inherent sparsity makes VLBI imaging an ill-posed inverse problem, prompting the use of algorithms such as the Multiobjective Evolutionary Algorithm by Decomposition (MOEA/D), as proposed in the first paper of this series.
Aims. This study focuses on extending MOEA/D to polarimetric and time-dynamic reconstructions, particularly relevant for the VLBI community and the Event Horizon Telescope Collaboration (EHTC). MOEA/D's success in providing a unique, fast, and largely unsupervised representation of image structure serves as the basis for exploring these extensions.
Methods. The extension involves incorporating penalty terms specific to total intensity imaging, time-variable, and polarimetric variants within MOEA/D's multiobjective, evolutionary framework. The Pareto front, representing the non-dominated solutions, is computed, revealing clusters of proximities. Testing MOEA/D with synthetic datasets representative of EHTC's main targets demonstrates successful recovery of polarimetric and time-dynamic signatures despite sparsity and realistic data corruptions.
Results. MOEA/D's extension proves effective in the anticipated EHTC setting, offering an alternative and independent claim to existing methods. It not only explores the problem globally but also eliminates the need for parameter surveys, distinguishing it from Regularized Maximum Likelihood (RML) methods. MOEA/D emerges as a novel and useful tool for robustly characterizing polarimetric and dynamic signatures in VLBI datasets with minimal user-based choices.
Conclusions. Future work aims to address the last remaining limitation of MOEA/D, specifically regarding the number of pixels and numerical performance, to firmly establish it within the VLBI data reduction pipeline.


Introduction
For Very Long Baseline Interferometry (VLBI), multiple antennas in a radio-interferometric array are combined to achieve unmatched astronomical resolutions. The source is simultaneously observed by an array of radio antennas. As described by the van Cittert–Zernike theorem (Thompson et al. 2017), the correlated products of the signals recorded by pairs of antennas are approximately the Fourier transform of the true sky brightness distribution, with a Fourier frequency determined by the baseline separating the two antennas on the sky plane.
Image reconstruction deals with the problem of recovering the sky brightness distribution from these measurements. With a full sampling of the Fourier domain (commonly referred to as the u, v plane), the original image could be retrieved by an inverse Fourier transform. However, due to the limited number of antennas and the limited amount of observing time, only a sparse subset of all Fourier coefficients is measured (the subset of all observed spatial Fourier frequencies is commonly referred to as the u, v-coverage). Moreover, the observations are corrupted by additional thermal noise and instrumental calibration effects. Hence, the imaging problem is an ill-posed inverse problem.
⋆ Both first authors have contributed equally to this work.
Global VLBI at mm-wavelengths in particular poses a number of additional challenges: the number of antennas is small, the visibilities are less well calibrated (in fact, the phases are typically poorly constrained), and the signal-to-noise ratio is worse compared to denser arrays operating at longer wavelengths. All in all, the reconstruction problem is under-constrained and, due to missing data and the need for self-calibration, nonconvex and possibly multimodal. Thus, the imaging relies on strong prior information, and a different selection of the regularization terms or prior distributions may produce significantly different image features. Imaging in VLBI has been dominated by the CLEAN algorithm for decades. However, recent years saw the ongoing development of novel imaging algorithms for the global VLBI data regime, particularly inspired by the needs of the Event Horizon Telescope (EHT), in three main families: superresolving CLEAN-based algorithms (Müller & Lobanov 2023b), Bayesian methods (Arras et al. 2021; Broderick et al. 2020b; Tiede 2022), and regularized maximum likelihood (RML) methods (Honma et al. 2014; Chael et al. 2016, 2018; Akiyama et al. 2017b,a; Müller & Lobanov 2022). These methods have been proven successful on synthetic data (Event Horizon Telescope Collaboration et al. 2019, 2022a) and in a wide range of frontline observations with the EHT (Event Horizon Telescope Collaboration et al. 2019; Kim et al. 2020; Janssen et al. 2021; Event Horizon Telescope Collaboration et al. 2022a; Issaoun et al. 2022; Jorstad et al. 2023), Global Millimetre VLBI Array (GMVA) observations (Zhao et al. 2022), as well as with space-VLBI (Fuentes et al. 2022; Müller & Lobanov 2023a; Kim et al. 2023). The issue that the image structure is only weakly constrained by the data, and that multiple solutions may fit the observed data, is handled in these frameworks either by manual interaction in CLEAN (CLEAN windows, hybrid imaging and self-calibration), by a global posterior evaluation for Bayesian methods, or by the combination of various data terms and regularization terms for RML methods. In particular, to overcome the limitation imposed by the sparse uv-coverage and challenging calibration when observing at 230 GHz, the EHT performed extensive and successful surveys exploring different parameter combinations. These were utilized to select a top set of hyper-parameters, but also served the purpose of a synthetic data verification with a variety of source morphologies to build confidence in the reconstructed images (in particular for RML methods; see Event Horizon Telescope Collaboration et al. 2019, 2022a). In this work, we focus on improving the top-set selection strategy.
Recently, we proposed a method, MOEA/D, that presents an alternative claim (Paper I). MOEA/D approaches the image reconstruction by a multiobjective formulation similar in philosophy to the parameter surveys that are common for RML methods (Paper I; Zhang & Li 2007). The nondominated, locally optimal solutions to the image reconstruction problem are identified by Pareto optimality. In this way, MOEA/D directly fits into the framework of a multimodal, nonconvex imaging problem, since it investigates all clusters of optimal features simultaneously (Zhang & Li 2007). As a particular benefit, MOEA/D is relatively fast and largely unsupervised. Compared to CLEAN, no human supervision is needed anymore. Compared to Bayesian methods with manually chosen prior distributions and to RML methods, MOEA/D has a small number of hyperparameters, explicitly explores inherent degeneracies, and may not require parameter surveys (Paper I). An alternative strategy (to mapping the impact of various regularization assumptions by the Pareto front, and thus reducing the number of critical free hyperparameters) would be to infer the priors or regularization terms together with the image structure, as realized by resolve in a Bayesian framework (Arras et al. 2019), or by particle swarm optimization in an RML setting (Mus et al. 2024).
The VLBI community shows great interest in two extensions of the classical imaging problem: polarimetric reconstructions (e.g. Lister et al. 2018; Kramer & MacDonald 2021; Event Horizon Telescope Collaboration et al. 2021a,b; Ricci et al. 2022; Johnson et al. 2023) and time-dynamic reconstructions (e.g. Johnson et al. 2017; Bouman et al. 2017; Lister et al. 2018, 2021; Satapathy et al. 2022; Wielgus et al. 2022; Farah et al. 2022; Johnson et al. 2023). The option to recover polarimetric movies is of particular importance for the EHT and its possible successors (Johnson et al. 2023), but poses significant challenges (Roelofs et al. 2023). The imaging needs to deal with the relative sparsity of the array, various calibration issues at mm-wavelengths, superresolution, polarimetry, and time dynamics simultaneously. Several algorithms and software packages have been proposed that can deal specifically with time-dynamic reconstructions at event horizon scales (Bouman et al. 2017; Johnson et al. 2017; Arras et al. 2022; Müller & Lobanov 2023a; Mus & Martí-Vidal 2024) or with static polarimetry (Chael et al. 2016; Broderick et al. 2020b,a; Martí-Vidal et al. 2021; Pesce 2021; Müller & Lobanov 2023a). Recently, the potential of these approaches was demonstrated by Roelofs et al. (2023). However, the combined capability, i.e. the reconstruction of polarimetric movies within one framework, remains a unique imaging capability of only a few algorithms, such as DoG-HiT (Müller & Lobanov 2023a).
In this manuscript, we build on the success of the recently proposed MOEA/D algorithm (Paper I) in presenting a fast, self-calibration independent (as long as self-calibration agnostic data functionals are used), and unsupervised alternative to the already established imaging methods in total intensity, and we extend it to both polarimetric and time-dynamic reconstructions in the same framework. In this spirit, this work contributes towards an automatic, unbiased reconstruction of polarimetric movies, particularly for the needs of the EHT and its planned successors.

VLBI measurements and polarimetry
The correlated signal of an antenna pair (the visibility V(u, v)) is described by the van Cittert–Zernike theorem (Thompson et al. 2017): the true sky brightness distribution I(l, m) and the visibilities form approximately a Fourier pair. The harmonic coordinates (u, v) are determined by the baseline separating each antenna pair relative to the sky plane. Not all Fourier frequencies are measured, i.e. the u, v-coverage is sparse. Hence, imaging (i.e. recovering the sky brightness distribution from the observed visibilities) is an ill-posed inverse problem, since the codomain is only sparsely sampled. Furthermore, the visibilities are corrupted by calibration issues and thermal noise. We have to correct for these effects during imaging as well. The visibilities at a time t observed along one pair of antennas i, j are multiplied by station-based gain factors $g_{i,t}$, $g_{j,t}$ at each instant t to correct direction-independent calibration effects, mathematically expressed as:
$\mathcal{V}_{i,j,t} = g_{i,t} \, g_{j,t}^{*} \, V_{i,j,t} + N_{i,j,t},$
with a baseline-dependent random noise term $N_{i,j,t}$. Hereafter, we consider $\mathcal{V}$, V, I, g and N to be time-dependent, unless stated otherwise. To simplify the notation, we omit the index t.
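The gain corruption above can be sketched numerically. The following toy snippet (hypothetical helper `observe` on synthetic arrays; not the MOEA/D implementation) applies the station-based gain product g_i g_j* and optional thermal noise to true visibilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(V_true, g, pairs, sigma=0.0):
    """Apply station-based gains and thermal noise to true visibilities.

    V_true : complex array of true visibilities, one entry per baseline
    g      : complex array of station gains g_i at one time stamp
    pairs  : list of (i, j) antenna index pairs, one per baseline
    sigma  : standard deviation of the complex thermal noise
    """
    V_obs = np.empty_like(V_true)
    for b, (i, j) in enumerate(pairs):
        noise = sigma * (rng.standard_normal() + 1j * rng.standard_normal())
        V_obs[b] = g[i] * np.conj(g[j]) * V_true[b] + noise
    return V_obs

# With unit gains and no noise, the observation equals the truth.
V = np.array([1 + 2j, 0.5 - 1j])
assert np.allclose(observe(V, np.ones(3, complex), [(0, 1), (1, 2)]), V)
```

Self-calibration amounts to estimating the g_i from such corrupted data, which is why closure quantities (insensitive to the g_i) are attractive data terms.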
The polarized fraction of the light recorded at each antenna is split by polarization filters (either orthogonal linear, or right-handed/left-handed circular). These four channels per baseline (two per antenna) are combined into the four Stokes parameters I, Q, U, V. I describes the non-negative total intensity, Q, U the linear polarization, and V the circular polarization. The Stokes parameters satisfy the inequality
$I \geq \sqrt{Q^2 + U^2 + V^2},$
i.e. the total intensity of polarized and unpolarized emission is always greater than the polarized intensity. Similar to Stokes I imaging, we identify Stokes visibilities for the various Stokes parameters (Thompson et al. 2017):
$\mathcal{V}_I = \mathcal{F} I, \quad \mathcal{V}_Q = \mathcal{F} Q, \quad \mathcal{V}_U = \mathcal{F} U, \quad \mathcal{V}_V = \mathcal{F} V,$
where $\mathcal{F}$ denotes the Fourier transform. We treat the Stokes visibilities as the observables for the remainder of this manuscript.
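The Stokes inequality and the Stokes visibilities can be illustrated with a short sketch (hypothetical helpers `physical` and `stokes_visibilities`; the discrete FFT stands in for the continuous Fourier transform):

```python
import numpy as np

def physical(I, Q, U, V):
    """Check the polarimetric inequality I >= sqrt(Q^2 + U^2 + V^2) per pixel."""
    return bool(np.all(I >= np.sqrt(Q**2 + U**2 + V**2)))

def stokes_visibilities(I, Q, U, V):
    """Stokes visibilities as 2D Fourier transforms of the Stokes maps."""
    F = np.fft.fft2
    return F(I), F(Q), F(U), F(V)

I = np.full((4, 4), 2.0)
Q = np.full((4, 4), 1.0)
U = np.zeros((4, 4))
V = np.zeros((4, 4))

assert physical(I, Q, U, V)
# Polarized intensity exceeding Stokes I is unphysical:
assert not physical(Q, I, U, V)
```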
Preparing the discussion of leakage corruptions, and the corresponding calibration within MOEA/D, we also introduce the formulation by visibility matrices here. Note, however, that we will fit the Stokes visibilities directly with MOEA/D, and not the coherency matrices. For circular feeds, we arrange the polarized visibilities for an antenna pair i, j in the visibility matrix (Hamaker et al. 1996):
$\mathcal{V}_{i,j} = \begin{pmatrix} \mathcal{V}_I + \mathcal{V}_V & \mathcal{V}_Q + i\mathcal{V}_U \\ \mathcal{V}_Q - i\mathcal{V}_U & \mathcal{V}_I - \mathcal{V}_V \end{pmatrix}.$
For polarimetry, we have to deal with station-based gains and thermal noise in all bands. Additionally, we have to deal with leakages and feed rotations. Gains are most easily represented by the Jones matrix (Thompson et al. 2017) for antenna i:
$J_i^{\mathrm{gain}} = \begin{pmatrix} g_i^R & 0 \\ 0 & g_i^L \end{pmatrix},$
and feed rotations by the rotation matrix with field-rotation angle $\theta_i$ (Thompson et al. 2017):
$J_i^{\mathrm{rotation}} = \begin{pmatrix} e^{-i\theta_i} & 0 \\ 0 & e^{i\theta_i} \end{pmatrix}.$
The leakage between the perpendicular polarization filters introduces cross-terms (Thompson et al. 2017):
$J_i^{\mathrm{leakage}} = \begin{pmatrix} 1 & d_i^R \\ d_i^L & 1 \end{pmatrix}.$
Let us denote the complete corruption by $J_i = J_i^{\mathrm{gain}} J_i^{\mathrm{leakage}} J_i^{\mathrm{rotation}}$. Then the observed visibility matrix is:
$\mathcal{V}_{i,j}^{\mathrm{obs}} = J_i \, \mathcal{V}_{i,j} \, J_j^{H} + N_{i,j},$
with thermal noise $N_{i,j}$ and $(\cdot)^H$ the conjugate transpose.
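As a sketch of this coherency-matrix formalism (hypothetical helper names, simplified conventions, noise omitted), the Jones chain acting as J_i V J_j^H can be written as:

```python
import numpy as np

def jones_gain(gR, gL):
    """Diagonal gain Jones matrix for circular feeds."""
    return np.array([[gR, 0], [0, gL]], complex)

def jones_rotation(theta):
    """Feed/field rotation Jones matrix (sign convention illustrative)."""
    return np.array([[np.exp(-1j * theta), 0], [0, np.exp(1j * theta)]])

def jones_leakage(dR, dL):
    """Leakage (d-term) Jones matrix with off-diagonal cross-terms."""
    return np.array([[1, dR], [dL, 1]], complex)

def corrupt(Vmat, Ji, Jj):
    """Observed coherency matrix: V_obs = J_i V J_j^H (thermal noise omitted)."""
    return Ji @ Vmat @ Jj.conj().T

# With identity Jones chains, the coherency matrix is unchanged.
Vmat = np.array([[3 + 0j, 1 + 1j], [1 - 1j, 2 + 0j]])
Ji = jones_gain(1, 1) @ jones_leakage(0, 0) @ jones_rotation(0.0)
assert np.allclose(corrupt(Vmat, Ji, Ji), Vmat)
```

Nonzero d-terms mix the diagonal (I, V) and off-diagonal (Q, U) entries, which is why leakage calibration matters for polarimetry.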

MOEA/D
In Paper I, the imaging problem was formulated as the multiobjective problem
$\min_{x} \; F(x) = \left( f_1(x), f_2(x), \ldots, f_m(x) \right),$
where the objectives $f_i$ are drawn from the data fidelity and penalty terms
$\{ S_{vis}, S_{amp}, S_{clp}, S_{cla}, R_{l1}, R_{l2}, R_{tv}, R_{tsv}, R_{flux}, R_{entr} \},$
where $S_{vis}$, $S_{amp}$, $S_{clp}$, $S_{cla}$ are the reduced $\chi^2$-metrics with respect to the visibilities, amplitudes, closure phases and closure amplitudes, respectively, and $R_{l1}$, $R_{l2}$, $R_{tv}$, $R_{tsv}$, $R_{flux}$, $R_{entr}$ denote the $l_1$-, $l_2$-, total variation, total squared variation, compact flux and entropy penalty terms. For more details on these terms we refer to the more detailed discussion in Paper I.
The output of MOEA/D is a sample of potential image features covering the decision space among the axes of all data terms and regularizers, rather than a single image. Hence, the problem is high-dimensional, which limits the number of pixels that can be used while maintaining sufficient numerical performance.
Furthermore, the genetic algorithm, while exploring the parameter space globally, does not take gradient or Hessian information into account and thus converges rather slowly. These points are tackled in a companion effort (Mus et al. 2024) that replaces the genetic evolution with a hybrid approach and waives the limitation to a small number of pixels.
Some of these objectives are in conflict. For instance, sparsity in the pixel basis (promoted by $R_{l1}$) and smoothness (promoted by $R_{tsv}$) are contradicting assumptions. In essence, this means that a single image minimizing all objective functionals $f_i$ typically does not exist (Pardalos et al. 2017). For RML methods, we achieve a potential, regularized reconstruction by balancing these terms with the proper, but initially unknown, weighting parameters α, β, ... in a weighted sum minimization. For MOEA/D, all the different weighting combinations are explored simultaneously and are approximated by the Pareto front (Zhang & Li 2007). Due to the aforementioned discrepancies between data terms and penalty terms, and between different penalty terms, the Pareto front divides into several clusters of solutions (Paper I). The number of disconnected clusters is larger the weaker the image features are constrained, i.e. for enhanced sparsity of the array and for self-calibration independent closure-only imaging (Paper I). The recovered image morphologies within one cluster vary only marginally. However, the recovered image features among several disconnected clusters vary significantly. We identify the independent clusters in the Pareto front by a friends-of-friends algorithm (for more details we refer to Paper I).
While all the nondominated solutions are explored simultaneously within the Pareto front, we still have to find the cluster of solutions that is objectively best. In Paper I we argued that this could be done without the need for lengthy parameter surveys, by a more automatized, data-driven choice. We proposed to use either the solution that has the highest number of close neighbors (i.e. the dominating cluster of solutions), or the cluster that minimizes the (Euclidean) distance to the ideal point (mimicking a least-squares principle). In applications to synthetic and real data, the choice with the largest number of close neighbors performed best (Paper I).
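Both selection heuristics can be sketched on a toy set of objective scores (hypothetical helpers; `eps` is an illustrative neighborhood radius, not a value from Paper I):

```python
import numpy as np

def closest_to_ideal(scores):
    """Index of the solution minimizing the Euclidean distance to the ideal
    point, i.e. to the componentwise minimum of all objective scores."""
    scores = np.asarray(scores, float)
    ideal = scores.min(axis=0)
    return int(np.argmin(np.linalg.norm(scores - ideal, axis=1)))

def most_neighbors(scores, eps):
    """Index of the solution with the largest number of close neighbors,
    i.e. a representative of the dominating (accumulation) cluster."""
    scores = np.asarray(scores, float)
    d = np.linalg.norm(scores[:, None, :] - scores[None, :, :], axis=-1)
    return int(np.argmax((d < eps).sum(axis=1)))

# Three tightly packed points and two isolated outliers in objective space.
pts = [(0.1, 0.1), (0.12, 0.11), (0.11, 0.12), (2.0, 0.0), (0.0, 2.0)]
assert closest_to_ideal(pts) == 0
assert most_neighbors(pts, eps=0.1) in (0, 1, 2)
```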
Generally, it is challenging to compute the Pareto front analytically. Hence, the Pareto front is typically approximated by a sample of characteristic members (Pardalos et al. 2017). Several strategies exist to approximate the Pareto front. Most of these algorithms decompose the multiobjective optimization problem into a sample of single-objective problems, either by the Tchebycheff approach, the boundary intersection method, or the weighted sum approach (Tsurkov 2001; Zhang & Li 2007; Xin-She & Xing-Shi 2019; Sharma & Chahar 2022). We applied the weighted sum approach in Paper I due to its philosophical similarity to the parameter surveys that are common in VLBI for RML methods (e.g. compare Event Horizon Telescope Collaboration et al. 2019; Kim et al. 2020; Janssen et al. 2021; Event Horizon Telescope Collaboration et al. 2022a; Zhao et al. 2022; Fuentes et al. 2022). In this way, the Pareto front approximates the set of images explored by parameter surveys. In Paper I, we introduced normalized weight arrays $\{\lambda_1, \lambda_2, \ldots, \lambda_m\}$, related to the objective functionals $f_1, f_2, \ldots, f_m$, to rewrite the multiobjective formulation of Prob. (MOP) as a weighted sum, i.e. as a sequence of single-objective optimization problems (Paper I):
$\min_{x} \; \sum_{i=1}^{m} \lambda_i f_i(x).$
Note that there are two different kinds of weights: the weights α, β, ... (which remain fixed during the running time of MOEA/D) and the weights $\lambda_i$ (over which we search for Pareto optimal solutions). The weights α, β, ... are used to renormalize the regularization terms and data terms to a similar order of magnitude to help the convergence procedure. However, they do not affect the estimation of the Pareto front as long as the grid of weights $\lambda_i$ is large and dense enough (for more details see Appendix A in Paper I).
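The weighted-sum decomposition can be illustrated on a toy pair of conflicting one-dimensional objectives (an l1-like and a smoothness-like term; all names are hypothetical):

```python
import numpy as np

def weighted_sum(objectives, lam):
    """Scalarize a multiobjective problem with one weight vector lambda."""
    return lambda x: sum(l * f(x) for l, f in zip(lam, objectives))

# Two conflicting objectives: sparsity-like |x| versus smoothness-like (x-1)^2.
f1 = lambda x: abs(x)
f2 = lambda x: (x - 1.0) ** 2

xs = np.linspace(-1, 2, 3001)
front = []
for lam1 in np.linspace(0, 1, 11):
    g = weighted_sum([f1, f2], [lam1, 1 - lam1])
    x_best = xs[np.argmin([g(x) for x in xs])]
    front.append((f1(x_best), f2(x_best)))

# Sweeping the weight grid traces different trade-offs along the Pareto front.
assert len({tuple(np.round(p, 3)) for p in front}) > 1
```

Each weight vector yields one single-objective problem; the collection of their minimizers samples the front, mirroring an RML parameter survey.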
One particularly successful strategy to approximate the Pareto front by decomposition is the multiobjective evolutionary algorithm MOEA/D proposed by Zhang & Li (2007); Li & Zhang (2009). MOEA/D computes the Pareto front with a genetic algorithm. We start with a random initial population. The subsequent population is obtained by evolutionary operations, i.e. by random mutation and genetic mixing. MOEA/D fits specifically well into the framework of VLBI imaging, since only parent solutions with similar weight combinations $\{\lambda_0^j, \lambda_1^j, \ldots, \lambda_m^j\}$ are genetically mixed, i.e. the algorithm keeps the diversity within the population (Zhang & Li 2007; Paper I).
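A deliberately minimal caricature of this scheme (not the actual MOEA/D implementation; toy objectives, naive crossover and mutation, assumed population sizes) illustrates how only neighboring weight vectors share genetic material while the weight grid spans the front:

```python
import numpy as np

rng = np.random.default_rng(1)

f1 = lambda x: np.sum(np.abs(x))          # sparsity-like objective, optimum x = 0
f2 = lambda x: np.sum((x - 1.0) ** 2)     # smoothness-like objective, optimum x = 1

n_pop, n_dim, T = 20, 4, 3                # population size, dimension, neighborhood
lams = np.linspace(0, 1, n_pop)           # one weight vector per population member
pop = rng.random((n_pop, n_dim))

def scalar(x, lam):
    """Weighted-sum scalarization assigned to one population member."""
    return lam * f1(x) + (1 - lam) * f2(x)

for _ in range(200):
    for i in range(n_pop):
        # Mix only with parents of *similar* weight vectors (MOEA/D neighborhood).
        j = int(np.clip(i + rng.integers(-T, T + 1), 0, n_pop - 1))
        child = 0.5 * (pop[i] + pop[j]) + 0.05 * rng.standard_normal(n_dim)
        if scalar(child, lams[i]) < scalar(pop[i], lams[i]):
            pop[i] = child                # keep the child only if it scores better

# End-points of the weight grid approach the single-objective optima.
assert f1(pop[-1]) < 1.0                  # lam = 1 emphasizes f1
assert f2(pop[0]) < 1.0                   # lam = 0 emphasizes f2
```

The neighborhood restriction (index offset at most `T`) is what preserves diversity: members optimizing very different scalarizations never exchange genes directly.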
MOEA/D for VLBI combines several advantages. It is faster than complete parameter surveys for RML methods or exact Bayesian sampling schemes, provides an alternative claim on the image structure in an automatized (unsupervised) way, and explores the image features with a more global search technique, i.e. by a randomized evolution in a multiobjective framework (Paper I). In conclusion, the dependence on the selection of regularization terms inherent to all RML imaging procedures is addressed effectively by a multiobjective, more global exploration of the possible solution space.

Polarimetry
For polarimetry, we introduce the $\chi^2$-fit to the linearly polarized visibilities $\mathcal{V}_Q$, $\mathcal{V}_U$ as an additional data term: $S_{pvis}$. Although it would be a small extension to also include circular polarization in the set of objectives and recover circular polarization alongside the linear polarization, we omit this extension. Recovering circular polarization requires a range of additional calibration steps, particularly of the R/L gain ratio, that are typically not performed in self-calibration, but across multiple sources (Homan & Wardle 1999; Homan & Lister 2006).
Moreover, we use specifically designed polarimetric regularization terms inspired by the selection in Chael et al. (2016) and subsequently used by the EHT collaboration (e.g. Event Horizon Telescope Collaboration et al. 2021a). In particular, we use the functionals $R_{ms}$, $R_{hw}$, $R_{ptv}$ that are presented in the following paragraphs.
First, we use the conventional KL polarimetric entropy (Ponsonby 1973; Narayan & Nityananda 1986; Holdaway & Wardle 1988; Chael et al. 2016) of an image I with respect to a prior image $M_i$ (chosen to be a Gaussian with roughly the size of the compact flux region):
$R_{hw} = -\sum_{i=1}^{N} I_i \left[ \frac{1+m_i}{2} \ln\left(\frac{1+m_i}{2}\right) + \frac{1-m_i}{2} \ln\left(\frac{1-m_i}{2}\right) \right].$
Here, $m_i$ denotes the fraction of linear polarization in pixel i. $R_{hw}$ favors images of N pixels with a fractional polarization m less than one.
The second regularizer, $R_{ms}$, extends the concept of the entropy to the polarized image:
$R_{ms} = \sum_{i=1}^{N} I_i \ln(m_i).$
We note that $R_{ms}$ diverges to negative infinity if the polarization fraction approaches zero. To avoid this, we actually use $\max(R_{ms}, -z)$ as the regularization term. Here, z is a real number chosen as the smallest non-diverged score $R_{ms}(x)$ in the Pareto front. Finally, we utilize the polarimetric counterpart of the total variation regularizer (Chael et al. 2016):
$R_{ptv} = \sum_{i,j} \sqrt{ \left| P_{i+1,j} - P_{i,j} \right|^2 + \left| P_{i,j+1} - P_{i,j} \right|^2 },$
where we used the complex representation of the linearly polarized emission, $P = Q + iU$, to write the total variation for Q and U in a compact form.
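The polarimetric total variation of P = Q + iU can be sketched as follows (hypothetical helper `r_ptv`, forward finite differences, simplified boundary handling):

```python
import numpy as np

def r_ptv(Q, U):
    """Isotropic total variation of the complex polarization P = Q + iU."""
    P = Q + 1j * U
    dx = np.diff(P, axis=0)[:, :-1]   # finite differences along one image axis
    dy = np.diff(P, axis=1)[:-1, :]   # finite differences along the other axis
    return float(np.sum(np.sqrt(np.abs(dx) ** 2 + np.abs(dy) ** 2)))

# A flat polarization map has zero total variation.
Q = np.ones((8, 8))
U = np.zeros((8, 8))
assert r_ptv(Q, U) == 0.0

# A sharp edge in Q raises the penalty.
Q_edge = Q.copy()
Q_edge[4:, :] = 0.0
assert r_ptv(Q_edge, U) > 0.0
```

Because the differences act on the complex P, jumps in either Q or U (and hence in EVPA) are penalized jointly rather than separately.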

Dynamics
The philosophy behind time-dynamic imaging is as follows: we cut the full observation into r keyframes such that, for a given time window, each frame contains at least one observed visibility. The visibilities of one frame are:
$\mathcal{V}_l = \{ \mathcal{V}(t) : t_l \leq t < t_l + \Delta t_l \},$
where $t_l$ is the observing time and $\Delta t_l$ is a frame-dependent scalar that determines its duration. For each l, the associated data set $\mathcal{V}_l$ produces an "image keyframe". The model will have a total of $r \times N^2$ parameters (i.e., r images of $N^2$ pixels each). We then naturally extend the data terms and penalty terms to a time-series, e.g. by:
$S_{vis}(p) = \sum_{l=1}^{r} S_{vis}(p_l; \mathcal{V}_l),$
where p is a time-series of images (i.e. a movie) with frames $p_l$. We proceed analogously for all the other data terms and regularization terms.
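The keyframe construction can be sketched as follows (hypothetical helper `split_keyframes`; for simplicity a fixed window length stands in for the frame-dependent duration Δt_l):

```python
def split_keyframes(times, dt):
    """Group observation time stamps into keyframes of duration dt;
    only windows containing at least one visibility become a frame."""
    t0, t1 = min(times), max(times)
    frames = []
    t = t0
    while t <= t1:
        members = [x for x in times if t <= x < t + dt]
        if members:                   # keep only non-empty windows
            frames.append(members)
        t += dt
    return frames

# A gap in the observation simply produces no keyframe for that window.
times = [0.0, 0.1, 0.2, 1.1, 1.2, 3.4]
frames = split_keyframes(times, dt=1.0)
assert len(frames) == 3
assert frames[0] == [0.0, 0.1, 0.2]
```

Each returned group would then be imaged as one frame $p_l$, and the data terms summed over frames as above.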
In Paper I, we constructed Prob. (MOP) using seven functionals, including the $l_2$-norm ($R_{l2}$), $l_1$-norm ($R_{l1}$), total variation (TV) and total squared variation (TSV) ($R_{tv}$, $R_{tsv}$), flux modeling ($R_{flux}$), the entropy ($R_{entr}$), and standard visibility and closure data terms ($S_{vis}$, $S_{amp}$, $S_{clp}$, $S_{cla}$); detailed descriptions of each regularizer can be found in the referenced paper.
To extend this problem to include dynamics, we consider a new objective $f_{\mathrm{ngMEM}}$, the ngMEM regularization term defined by Mus & Martí-Vidal (2024), acting on a time-series of images $p \in \mathbb{R}_{+}^{r \times N^2}$:
$f_{\mathrm{ngMEM}}(p) = \mu_{\mathrm{ngMEM}} \sum_{j,k=1}^{r} \omega_{jk} \sum_{i=1}^{N^2} p_{j,i} \ln\left( p_{k,i} + C \right),$
where $j, k \in \{1, 2, \ldots, r\}$ denote the frames in the time-series, $\mu_{\mathrm{ngMEM}}$ is a hyper-parameter, and
$\omega_{jk} = e^{-|t_j - t_k| / \pi}$
is the temporal kernel with "time memory" π. Here, $t_j$ denotes the time of the associated keyframe j. This functional seeks the most constant dynamic reconstruction (or movie, hereafter) that still fits the data (Mus & Martí-Vidal 2024). The ngMEM uses the Shannon entropy (Shannon 1949) without imposing any a priori model, to favor a uniform distribution in both brightness and time for a time-series of such frames. It is therefore a conservative approach, since any change in contrast contributes to an increase of the entropy term.
The two parameters $\mu_{\mathrm{ngMEM}}$ and π act as regularization hyper-parameters and must be explored. π, the "time memory", has a minor effect, while $\mu_{\mathrm{ngMEM}}$, the "time weight", must be carefully selected. Therefore, $\mu_{\mathrm{ngMEM}}$ should be investigated using MOEA/D. Finally, C is just a floor value that should be small, introduced to avoid extremely large values or not-a-number errors in the logarithm.
We can solve this problem of dynamic imaging (i.e., find the set of image keyframes that optimally fit the data) with the following formalism:
$\min_{p \in \mathbb{R}_{+}^{r \times N^2}} \; \left( f_1(p), f_2(p), \ldots, f_m(p), f_{\mathrm{ngMEM}}(p) \right).$ (38)

Dynamic polarimetry
Ultimately, the fusion of the polarimetry regularizers with ngMEM offers a potent approach for the retrieval of polarimetric evolution. Given the inherent intricacies of this challenge, only few algorithms possess the capability to address it. In particular, the only RML-like algorithm currently able to do so is DoG-HiT (Müller & Lobanov 2023a). The Bayesian algorithm resolve (Arras et al. 2019, 2021) has the potential to also recover dynamic polarimetry, although no publication to date has been made in that direction.
The modelization of the problem within the framework of MOEA/D is notably straightforward. In principle, this could be achieved in the most consistent way by combining Eq. (24)-(27) with Eq. (31)-(37), i.e. by adding the polarimetric objectives to every single scan and solving for total intensity and polarization at the same time. However, the problem rapidly becomes high-dimensional and complex to solve, particularly when exploring the range of possible solutions along the Pareto front. In particular, the complexity of MOEA strategies ranges between $\mathcal{O}(n_o n)$ and $\mathcal{O}(n_o n^2)$, with $n_o$ the number of objectives and $n$ the population size. Moreover, since gains and d-terms are not part of the forward model, an application in practice is naturally limited to a hierarchical approach: solve for the d-terms and gains from the static image (with subsequent refinement with the movie in total intensity first) before proceeding to a full polarimetric movie reconstruction.
Therefore, we split the strategy into several steps. First, we solve for the dynamics in total intensity as described above (step 1). Second, we calculate a static polarimetric image as described above (step 2). Third, we cut the observation into keyframes and solve the polarimetric imaging at every keyframe independently with MOEA/D, initializing the population with the final population of the static polarization step (step 2) and assuming the Stokes I image from the time-dynamic exploration (step 1). This resembles the strategy that was also applied for DoG-HiT (Müller & Lobanov 2023a), one of the few algorithms that already has the capability for dynamic polarimetry.

Pareto Fronts
In the case of polarimetry, the Pareto front is a four-dimensional hypersurface (four objective terms). As for total intensity, the Pareto front divides into several disjoint clusters due to conflicting assumptions of several regularization terms. We examine the Pareto front in the same way as described in Paper I, i.e. we identify the clusters of solutions in the scoring with a friends-of-friends algorithm. Afterwards, we evaluate the representative solution of every cluster by its number of close neighbors and by its distance to the ideal point; for more details see Paper I. For special clusters, i.e. for the clusters that present overfitted data, the polarization fraction accumulates at the edges of the interval [0, 1]. In this case, the regularizer $R_{ms}$ approaches negative infinity. We explicitly exclude these clusters when identifying the objectively best solution.
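A minimal friends-of-friends clustering over the scoring space might look as follows (hypothetical helper; the linking length is an illustrative free parameter, not a value from Paper I):

```python
import numpy as np

def friends_of_friends(points, link):
    """Cluster points transitively: two points are 'friends' when closer than
    the linking length, and clusters are the connected components."""
    points = np.asarray(points, float)
    n = len(points)
    labels = -np.ones(n, int)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster
        while stack:
            p = stack.pop()
            d = np.linalg.norm(points - points[p], axis=1)
            for q in np.where((d < link) & (labels == -1))[0]:
                labels[q] = cluster
                stack.append(q)
        cluster += 1
    return labels

# Two well-separated groups of Pareto-front scores form two clusters.
pts = [(0, 0), (0.1, 0), (0.2, 0), (5, 5), (5.1, 5)]
labels = friends_of_friends(pts, link=0.5)
assert len(set(labels)) == 2
assert labels[0] == labels[1] == labels[2]
```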
In the dynamic case, the dimensionality of the Pareto front is eight. The resulting "Pareto movies" are defined by the action of the different objectives on the recovered movies, i.e. on the obtained frames. Finally, the Pareto front obtained by solving the dynamic polarimetric problem is composed of the two disjoint Pareto fronts defined above.

Array Configurations and Problem Framework
The self-consistent reconstruction of polarimetric dynamics at event horizon scales is one of the major goals of the EHT and its planned successors (Johnson et al. 2023; Roelofs et al. 2023). Hence, in consistency with our analysis presented in Müller et al. (2023), we study two array configurations for the remainder of this manuscript. On the one hand, we study the configuration of the EHT observations of Sgr A* in 2017 at 230 GHz (Event Horizon Telescope Collaboration et al. 2022b). On the other hand, we study a possible EHT + ngEHT configuration that was also used within the ngEHT Analysis Challenges (Roelofs et al. 2023), including ten additional stations at 230 GHz, a quadrupled bandwidth and an enhanced frequency coverage. The corresponding uv-coverages are shown in Fig. 1. For consistency with Paper I, we have solved the different MOPs using 256 pixels.

Polarimetry
To demonstrate the capabilities of MOEA/D in the polarimetric domain, we use a synthetic data source from the ehtim software package (Chael et al. 2016, 2018) that was previously used for synthetic data tests in Müller & Lobanov (2023a) as well. The ground truth image is plotted in the upper left panel of Fig. 2. The model mimics the typical crescent-like black hole shadow of Sgr A* in total intensity with a highly linearly polarized structure.
We show our reconstructions in the same way as in Paper I. The non-dominated solutions in the final, evolved population of the Pareto front form disconnected clusters of solutions. The variety of the recovered structure within one cluster is small; the variety in the recovered image structures between different clusters of reconstructions, however, is significant. We present in Fig. 2 one representative of each cluster of solutions (the accumulation point) in comparison to the true image structure. The existence of multiple clusters from a single starting point already hints that MOEA/D at least partially overcame local convexity and searched the solution space globally, independent of the starting point.
For the polarimetric reconstruction, we fixed the Stokes I reconstruction, since we suppose the datasets need to be d-term calibrated (note, however, that if the d-terms are lower than 10%, imaging algorithms have stronger effects on the reconstruction than the d-terms; Martí-Vidal & Marcaide 2008), and only solved for linear polarization with MOEA/D. Moreover, this strategy mimics the strategy of splitting the RML parameter surveys into a total intensity and a polarimetry part, as was applied by the EHT. We initialized the population with images with a constant electric vector position angle (EVPA) at a constant polarization fraction of 1% across the whole field of view. Rather than with the Stokes parameters Q, U, we equivalently model the linear polarization by the linear polarization fraction m and the mixing angle χ, i.e.:
$Q = m I \cos(2\chi), \quad U = m I \sin(2\chi).$
This turns out to be beneficial, since inequality (3) is automatically satisfied by imposing 0 ≤ m ≤ 1 (assuming small Stokes V) during the genetic evolution. We recover some undesired solutions when the weighting favors $R_{ms}$, as this term promotes a small polarization fraction approaching m = 0 (as shown in Fig. 2). However, these solutions are disfavored, as they lie at the edge of the Pareto front, and are not selected by either of the selection heuristics. Specifically, we cannot create spurious polarimetric signals outside of the total intensity contours. MOEA/D evolves the population of solutions by genetic operations, i.e. by genetic mixing and random mutation. For the latter step, it is beneficial for the numerical performance to rescale all guess arrays (i.e. m and χ) such that the resulting values are of order 1; we refer to Paper I for more details on the numerical performance.
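The (m, χ) parameterization and the automatic satisfaction of the polarimetric inequality can be sketched as follows (hypothetical helper `pol_from_m_chi` on synthetic maps):

```python
import numpy as np

def pol_from_m_chi(I, m, chi):
    """Map (m, chi) to Stokes (Q, U); imposing 0 <= m <= 1 keeps the linear
    polarized intensity |P| <= I automatically (assuming small Stokes V)."""
    Q = m * I * np.cos(2 * chi)
    U = m * I * np.sin(2 * chi)
    return Q, U

I = np.full((4, 4), 2.0)
m = np.full((4, 4), 0.3)
chi = np.full((4, 4), np.pi / 8)
Q, U = pol_from_m_chi(I, m, chi)

# The linear polarized intensity never exceeds Stokes I by construction.
assert np.all(np.hypot(Q, U) <= I + 1e-12)
assert np.allclose(np.hypot(Q, U), m * I)
```

With this parameterization the genetic operators only need box constraints on m, instead of a coupled nonlinear constraint on (Q, U).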
Of the clusters of solutions presented in Fig. 2, one (cluster 0) recovers the polarization fraction well. The EVPA pattern, particularly the small-scale perturbation towards the south-east of the ring, is recovered well. Cluster 0 is strongly preferred both by the absolute χ² (i.e. it is the cluster that fits the data best) and by the accumulation point criterion that we proposed in Paper I (i.e. it is the cluster with the largest number of neighbors, hence the most 'common' one).
To verify that MOEA/D performs well for a wider range of EVPA structures on such a challenging data set as the EHT observations, we performed a number of additional tests with an artificial ring image and a constant EVPA pattern, a toroidal magnetic field, a radial magnetic field, and a vertical magnetic field, see Fig. 3, Fig. 4, Fig. 5, and Fig. 6, respectively. For each case, we show the respective best solution selected by the accumulation point criterion (right columns) and the true model (left columns). The bottom rows show the model and the solution convolved with a 20 µas beam (represented as a white circle in the plot). The reconstruction of the overall pattern is successful in all four convolved cases, while we observe how super-resolution scrambles the pattern. In conclusion, we can differentiate various magnetic field configurations by the MOEA/D reconstructions of EHT-like data.

Now we study how the reconstruction improves with an improved array. We present the same GRMHD-simulated reconstruction observed with an ngEHT configuration in Fig. 7. Due to the larger number of visibilities, the observation is more constraining. The reconstruction within the preferred cluster (cluster 0) improves, particularly with respect to the total polarization fraction. Interestingly, the non-preferred clusters (at the edge of the parameter space) still exist, resembling the weight combinations that overweight $R_{ms}$, but they are strongly discouraged by investigating the properties of the Pareto front. For more details regarding tools to investigate the Pareto front, e.g. by looking for disjoint clusters, accumulation points, and the solution closest to the optimum, we refer to Paper I.
Finally, we study the reconstruction performance in the presence of more realistic data corruptions, as may be expected in real data sets. For this we introduced gain errors (i.e. the need for self-calibration) and leakage errors (i.e. the need to calibrate d-terms) into the synthetic observation with the ngEHT coverage. We add d-terms randomly at all sites with an error of roughly 5% for d_r and d_l independently, and gains with a standard deviation of 10%. First we performed a Stokes I reconstruction with MOEA/D. As described in Paper I, this reconstruction uses the closure quantities only, hence it is less prone to the gain uncertainties. We select the best image based on the accumulation point criterion and self-calibrate the data set with this Stokes I image. In the next step, we hold the total Stokes I structure constant and recover the linear polarization. While a joint reconstruction (and calibration) of total intensity and polarized structures is desirable, and is realized for instance in state-of-the-art Bayesian frameworks (Broderick et al. 2020b; Tiede 2022), we skip this approach to avoid an excessively high dimensionality due to the large number of multiobjective functionals that we would need to combine. We calculate an initial guess for the polarimetric structure by ehtim with an unpenalized reconstruction, similar in philosophy to the initial guesses that are typically used for DoG-HiT (Müller & Lobanov 2022). This initial guess is used to perform an a priori d-term calibration and to initialize the population for MOEA/D. We used ehtim for the final d-term calibration and then performed the imaging with MOEA/D.
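The corruption model described above (≈10% gain scatter, ≈5% d-terms) can be sketched per station as follows; the function name, the per-scan gain draws, and the exact noise model are illustrative assumptions, not the paper's generator:

```python
import numpy as np

rng = np.random.default_rng(42)

def corrupt_station(n_scans, gain_std=0.10, dterm_amp=0.05):
    """Draw per-scan complex gains with ~10% scatter around unity and
    one pair of constant d-terms with ~5% real/imaginary amplitude
    for a single station (a sketch of the synthetic corruptions)."""
    noise = (rng.standard_normal(n_scans)
             + 1j * rng.standard_normal(n_scans)) / np.sqrt(2.0)
    gains = 1.0 + gain_std * noise
    d_r = dterm_amp * (rng.standard_normal() + 1j * rng.standard_normal())
    d_l = dterm_amp * (rng.standard_normal() + 1j * rng.standard_normal())
    return gains, d_r, d_l
```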
In Fig. 8 we show the correlation of the MOEA/D-obtained d-terms with the ground truth. We can qualitatively see that we recover a close correlation (dashed blue line), implying that the MOEA/D solution has d-terms similar to the ground truth. In Appendix B the specific d-term values at every site can be found, for both the ground truth and the solution.
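A comparison of this kind can be quantified by correlating the real and imaginary parts of the true and recovered d-terms; the function name and the choice to pool real and imaginary parts are assumptions for illustration:

```python
import numpy as np

def dterm_agreement(true_d, reco_d):
    """Pearson correlation between true and recovered d-terms,
    treating real and imaginary parts as separate samples (as in a
    scatter plot of recovered vs. true d-terms)."""
    t = np.concatenate([np.real(true_d), np.imag(true_d)])
    r = np.concatenate([np.real(reco_d), np.imag(reco_d)])
    return float(np.corrcoef(t, r)[0, 1])
```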
MOEA/D approximates the overall structure during the first few iterations and evolves the front towards more fine structure in later iterations. Hence, it is a natural approach to use imaging and calibration in an alternating mode: we evolve the initial population by MOEA/D for some iterations, use the current best-guess model to calibrate the d-terms, and rerun MOEA/D with the old population as an initial guess (i.e. we let the population adapt to slightly varied objectives in an evolutionary way). We iterate this procedure three times. The corresponding reconstruction result is shown in Fig. 9. The overall structure of the Pareto front is similar to the fronts presented before in Fig. 2 and Fig. 7, with one dominating cluster. We recover the structure in the EVPAs successfully; however, we would like to highlight that the linear polarization fraction is underestimated. This may be a consequence of the self-calibration procedure, since an initial guess with a smaller polarization fraction than the true one was used.
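The alternating scheme can be sketched as a simple loop; `evolve` and `calibrate` stand in for the MOEA/D step and the d-term solver, and the (model, score) population representation is an assumption for illustration:

```python
def alternating_reconstruction(population, data, evolve, calibrate, n_rounds=3):
    """Alternate MOEA/D evolution with d-term calibration, as
    described above (three rounds in the paper)."""
    for _ in range(n_rounds):
        # Evolve the current population for some iterations.
        population = evolve(population, data)
        # Pick the current best-guess model (highest score).
        best_model, _ = max(population, key=lambda s: s[1])
        # Recalibrate d-terms against the best model; the old
        # population then adapts to the slightly varied objectives.
        data = calibrate(data, best_model)
    return population, data
```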

Dynamics
The capability of recovering dynamic reconstructions using MOEA/D has been tested using two independent synthetic data sets simulating a SMBH with an accretion disk based on General Relativity Magneto-Hydrodynamics (GRMHD) models. The first data set, a simpler one, is based on the one presented in Johnson et al. (2017); Shiokawa (2013), and the data was shared by private communication. This data set only contains Stokes I information. We call it MJ2017. Similarly to Appendix A of Paper I, we performed a small survey to find an initial value of µ_ngMEM with good numerical performance, and found that one is a good starting value.
The second synthetic data set, called CH3, is the one given for Challenge 3 presented in Roelofs et al. (2023). The synthetic data of this last data set is dynamically polarized, i.e., the intrinsic polarization structure of the source varies along the observation. Therefore, we show an example of a joint polarimetric and dynamic reconstruction, demonstrating that MOEA/D solves for polarimetric movies and that the dynamics are not limited to Stokes I. To assess the performance of MOEA/D in dynamic reconstruction, we replicated the experiment conducted by Johnson et al. (2017), based on a GRMHD simulation of a black hole. More details can be found in the cited paper and in Shiokawa (2013). Specifically, we reconstructed keyframes similar to those presented in Figure 4 of their work, which correspond to time instances at 16:00 UTC, 00:43:26 UTC, and 06:29:27 UTC. These keyframes represent subsets of antennas observing within the EHT 2017 array configuration. The first keyframe involves ∼4 antennas, the second ∼5 antennas, and the final one only ∼2 antennas. The differing numbers of baselines between these keyframes render the reconstruction of a continuous movie through snapshot imaging unfeasible. A comprehensive comparison between the performance of ngMEM and the regularizers proposed by Johnson et al. (2017) in this specific scenario is detailed in Mus & Martí-Vidal (2024).
To generate a comparable movie, we initially constructed a tentative movie using the constrained ngMEM. Subsequently, we flagged all visibilities corresponding to the aforementioned timestamps. Employing this "guess movie" as the initial point of our algorithm, we executed MOEA/D to explore the neighborhood of the movie.
We present our reconstruction results in Fig. 10. Despite the extreme sparsity of the observation, MOEA/D successfully captures the fine crescent structure for all three snapshots. The first column of the figure represents the simulation at infinite resolution; the middle column, the simulation convolved with the ∼20 µas EHT beam (indicated as a white circle); and the third column, one representative MOEA/D solution convolved with the same beam. The obtained results exhibit a performance deficit compared to those demonstrated in Mus & Martí-Vidal (2024). This discrepancy can be attributed to the inherent loss of cross-talk information between frames in our methodology. The cited paper uses a Hessian modeling: the Hessian encodes correlation information between the frames, and therefore a better reconstruction can be achieved. However, it is important to acknowledge that this choice restricts the optimization process to a local scope, potentially losing the possibly inherent multimodal nature of the problem.

Dynamic Polarimetry
We test the capability to do dynamic polarimetry with a synthetic data set that is based on the ngEHT Analysis Challenges (Roelofs et al. 2023). In particular, we use the SGRA_RIAFSPOT model from the third ngEHT Analysis Challenge. The ground truth movie of Sgr A* is a RIAF model (Broderick et al. 2016) with a shearing hotspot (Tiede et al. 2020) inspired by the observations of GRAVITY Collaboration et al. (2018). For more details on the simulation we refer to Roelofs et al. (2023); Chatterjee et al. (2023). We show the ground truth movie in Fig. 11. For the plotting, we regridded every frame to the field of view and pixel size adopted for MOEA/D, i.e. we represent the image structure by 10 µas pixels. We observed the movie synthetically with the possible EHT+ngEHT configuration specified in Roelofs et al. (2023). This configuration consists of all current EHT antennas and ten additional antennas from the list of proposed sites by Raymond et al. (2021). We synthetically observe the source in a cadence with scans of ten minutes of observation and two-minute gaps between integration times, as was done already in Müller & Lobanov (2023a). The respective uv-coverage for the whole observation is presented in Fig. 1. We add thermal noise, but assume a proper gain and d-term calibration.
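The observing cadence described above (ten-minute scans separated by two-minute gaps) can be sketched as follows; the scan and gap lengths come from the text, while the function name, start time, and total duration are assumed placeholders:

```python
def scan_times(start_h=0.0, duration_h=8.0, scan_min=10.0, gap_min=2.0):
    """Return (start, stop) times in hours of a scan cadence with
    fixed-length scans and fixed gaps between them."""
    scans = []
    t = start_h
    # Append scans as long as a full scan still fits in the track.
    while t + scan_min / 60.0 <= start_h + duration_h:
        scans.append((t, t + scan_min / 60.0))
        t += (scan_min + gap_min) / 60.0
    return scans
```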
We show the recovered reconstruction in Fig. 12. The polarimetric movie was recovered with snapshots of six minutes in the time window with the best uv-coverage. For better comparison we show single snapshots in Figs. 13, 14, and 15. The ring-like image of the black hole shadow is successfully recovered at every keyframe with varying brightness asymmetry. Moreover, we successfully recover some hints of the dynamics at the event horizon in the form of a shearing hotspot, albeit the large pixel size limits the quality of the reconstruction. The overall EVPA pattern (orientation and strength) is very well recovered in all keyframes. The dynamics of the EVPA pattern are well recovered too, as demonstrated for example by the well-recovered enhanced polarization fraction towards the shearing hotspot in the top-right of Fig. 14 and the top-left of Fig. 15.

Quantitative evaluation
To evaluate quantitatively the fidelity of the MOEA/D polarimetric dynamic reconstruction, we consider the following figures of merit: the angular brightness distribution of the reconstruction, the normalized cross-correlation (nxcorr) in total intensity (see, for instance, Farah et al. 2022), and β2 to compare the EVPA pattern. First, it is worth noticing that since ngMEM allows capturing the evolution at every integration time, the comparison can be done at every frame (integration time).
Regarding the Stokes I evolution, we present Figs. 16 and 17. The first plot shows the angular brightness distribution at each integration time for both the model and the reconstruction. The second plot compares the brightness peak of the ground truth and the model. This plot also shows the frame-wise nxcorr, which is worst at the beginning of the experiment (as also seen with β2) and improves during the observation. Because of the use of closure quantities during the optimization process, the absolute position of the source is lost. Therefore, in order to compare with the true source, we aligned each keyframe of the reconstruction with the model by shifting the barycenter of the image to the North, East, South, and West by one pixel. Error bars in the β2 reconstruction are the standard deviations over these shifts, representing the possible errors introduced by image alignment. In both amplitude and phase we see a similar and congruent trend between the true and the recovered solution. However, the first keyframes recover β2 worst, in consistency with the findings in total intensity. Note that the first scans are also the scans in which the source is evolving the fastest, and thus they are more challenging to recover.
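The nxcorr metric and the one-pixel-shift error bars can be sketched as follows; the function names and the use of periodic shifts (`np.roll`) are illustrative assumptions:

```python
import numpy as np

def nxcorr(a, b):
    """Normalized cross-correlation between two images: zero-mean,
    unit-norm inner product (cf. Farah et al. 2022)."""
    a = (a - a.mean()).ravel()
    b = (b - b.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def shift_scatter(metric, model, reco):
    """Evaluate `metric` for one-pixel shifts of the reconstruction
    to the North, East, South and West; the standard deviation over
    the shifts serves as the alignment error bar described above."""
    vals = [metric(model, np.roll(reco, s, axis=ax))
            for ax, s in ((0, 1), (0, -1), (1, 1), (1, -1))]
    return float(np.mean(vals)), float(np.std(vals))
```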
We show in Fig. 20 the net polarization (see, for instance, Event Horizon Telescope Collaboration et al. 2021c) evolving in time. MOEA/D performs quite well in recovering the correct polarization fraction at all times, even in the rapidly evolving first part of the evolution, at which MOEA/D had more issues with finding the twistiness of the pattern (as indicated by β2). MOEA/D's potential to recover the correct polarization fraction with the EHT+ngEHT coverage has already been shown in Fig. 7. It is noteworthy that while we spotted an underestimated polarization fraction for leakage-corrupted data, the time-dynamic (but fully calibrated) reconstruction with the same array does not show this tendency.
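The image-integrated (net) linear polarization fraction follows directly from the Stokes maps; this sketch assumes the standard definition |m|_net = |Σ Q + i Σ U| / Σ I (cf. Event Horizon Telescope Collaboration et al. 2021c), with an illustrative function name:

```python
import numpy as np

def net_polarization(I, Q, U):
    """Net linear polarization fraction of an image:
    |sum(Q) + i sum(U)| / sum(I)."""
    return float(abs(Q.sum() + 1j * U.sum()) / I.sum())
```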

Summary and Conclusions
Image reconstruction from interferometric data is a challenging ill-posed problem, particularly when dealing with very sparse uv-coverage, as is often the case in global VLBI observations. These challenges are amplified when working with polarimetric data or attempting to capture the dynamic behavior of the source, and become even more complex when reconstructing evolving polarimetric data. On the one hand, while static polarimetric imaging has been extensively studied in the past (Ponsonby 1973; Narayan & Nityananda 1986; Holdaway & Wardle 1988; Coughlan & Gabuzda 2013; Chael et al. 2016), solving the problem of polarimetric multiobjective imaging remains an open challenge.
On the other hand, the intricacies presented by rapidly evolving sources, such as Sgr A* (Event Horizon Telescope Collaboration et al. 2022a), have prompted the VLBI community to develop innovative algorithms capable of effectively addressing variability concerns. The inherent limitations of snapshot imaging due to restricted coverage, and the potential loss of information resulting from temporal averaging, highlight the necessity of formulating functionals that can efficiently mitigate such information loss.
In this study, we have extended the MOEA/D algorithm, initially introduced in Paper I, to tackle the challenges associated with imaging the Stokes I, Q, U parameters, the fractional polarization (m), and the dynamics. This enhanced iteration of MOEA/D showcases its efficiency in reconstructing both static and dynamic interferometric data.
In this work, we first introduced the formulation of MOEA/D for static polarimetry and dynamic Stokes I by incorporating the relevant functionals (24), (25), (26), and (27) (for polarimetry) and ngMEM (Mus & Martí-Vidal 2024), Eq. (37), for dynamic recovery. Subsequently, we tested our algorithm using synthetic data for both scenarios: polarimetry and dynamics. Finally, utilizing synthetic data from EHT+ngEHT, we demonstrated how MOEA/D excels in recovering polarimetric sequences, a capability possessed by only a select few algorithms.
The main benefits of MOEA/D for VLBI imaging are as follows. As opposed to classical RML and Bayesian techniques, MOEA/D self-consistently explores the imaging problem globally with multiple image modes, thus addressing the issue of missing data for sparsely sampled VLBI observations. In this way, MOEA/D presents a robust, alternative claim of the image structure. Moreover, due to the full exploration of the Pareto front, parameter surveys are not needed to establish the estimate of the image; thus MOEA/D is a significant step towards an effectively unsupervised imaging procedure. It is noteworthy that MOEA/D handles the non-negativity by imposing bounds on the image structure during the genetic operations, rather than using the log-normal transform for the functional evaluations or adding lower-bound constraints. In the same way, m is constrained to be in [0, 1] during the genetic evolution process. However, solutions accumulating at the edges of this interval are common, due to the polarimetric entropy and the random nature of the genetic algorithm.
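The bound handling during the genetic operations can be sketched as a simple repair step applied after crossover and mutation; clipping is one common repair strategy, shown here as an assumption since the paper does not specify the exact operator:

```python
import numpy as np

def repair(image, m):
    """Enforce the physical bounds used during the genetic
    operations: non-negative Stokes I brightness and polarization
    fraction m constrained to [0, 1]."""
    return np.clip(image, 0.0, None), np.clip(m, 0.0, 1.0)
```

Note that such clipping naturally produces solutions sitting exactly on the interval edges, consistent with the accumulation at the boundaries mentioned above.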
In conclusion, in this set of two papers, we have shown how MOEA/D is able to perform static and dynamic imaging, for Stokes I and polarimetry, effectively mitigating some of the limitations associated with the current state-of-the-art RML methods. Its flexibility and proficiency in recovering Pareto-optimal solutions render this algorithm an invaluable tool for the image reconstruction of interferometric data, catering to both current and next-generation telescopes.
One remaining weakness of MOEA/D is its current limitation to a small number of pixels (256 in this set of two papers). To overcome this problem, an alternative formulation is presented in Mus et al. (2024), in which the evolutionary search algorithm is only used to find the optimal combination of weights. The authors develop a novel strategy based on swarm intelligence to find the optimal weights associated with the RML problem, regardless of the quality of the obtained image. Once the optimal weights are found, the corresponding image is, by definition, a member of the Pareto front, and thus a valid and desirable solution. Then, an optimal image can be found using a local search algorithm such as L-BFGS (Liu & Nocedal 1989), imposing a correlation between close pixels (via a quasi-Hessian). The resulting image is the so-called "Shapley reconstruction".

The computational cost of MOEA/D scales as O(n_o n_p^3) (Curry & Dagli 2014), with O denoting the asymptotic upper bound. The population size is the number of weight combinations drawn from an equidistant grid with overall sum 1, i.e. n_p = binom(n_o + n_g - 1, n_g), where n_g is the grid size. Current work is ongoing to achieve a better scaling by a grid-free evolution of the weights (Mus et al. 2024).
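The population size n_p = binom(n_o + n_g - 1, n_g) follows directly from counting the weight combinations on the equidistant simplex grid; a minimal sketch (the function name is illustrative):

```python
from math import comb

def population_size(n_objectives, grid_size):
    """Number of weight vectors on an equidistant grid over the
    simplex (components summing to 1):
    n_p = C(n_o + n_g - 1, n_g)."""
    return comb(n_objectives + grid_size - 1, grid_size)
```

For example, with two objectives the population grows only linearly with the grid size, while each additional objective increases it combinatorially, which motivates the grid-free evolution of the weights pursued in Mus et al. (2024).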

2 https://challenge.ngeht.org/

In Fig. A.1 of Appendix A, the Pareto front for each of the keyframes is presented.

Fig. 2: Clustered images for the simulated Sgr A* polarimetric model using the EHT 2017 array at 230 GHz. The cluster preferred by the largest number of representatives is indicated by a red box; the one preferred by the closest distance to the ideal point, by a blue box. Vertical lines represent the EVPA.

Figs. 18 and 19 show the β2 amplitude (first figure) and phase (second figure) evolution during the experiment. The parameter β2, used to describe the polarimetric signature in the EHT observations, was introduced by Palumbo et al. (2020).
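A discrete estimate of the Palumbo et al. (2020) coefficient can be sketched as the m-th azimuthal Fourier mode of the linear polarization P = Q + iU, normalized by the total Stokes I flux; the pixel-grid azimuth convention below is an illustrative assumption (observational definitions measure the angle east of north):

```python
import numpy as np

def beta_m(Q, U, I, m=2):
    """Discrete azimuthal decomposition coefficient beta_m
    (Palumbo et al. 2020): sum of P * exp(-i m phi) over pixels,
    normalized by the total Stokes I flux."""
    ny, nx = Q.shape
    y, x = np.indices((ny, nx))
    # Azimuth about the image center (grid convention, a sketch).
    phi = np.arctan2(y - (ny - 1) / 2.0, x - (nx - 1) / 2.0)
    P = Q + 1j * U
    return (P * np.exp(-1j * m * phi)).sum() / I.sum()
```

For a purely rotationally symmetric EVPA pattern with twist e^{i2φ} and 30% polarization, |β2| recovers 0.3 with zero phase.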

Fig. 3: Simulated ring-like source with constant vertical EVPA using the EHT 2017 array. Left panel: ground truth. Right panel: preferred image (solution) recovered with MOEA/D. Top row: super-resolved structures. Bottom row: images convolved with the 20 µas beam shown in white. Vertical lines represent the EVPA and their color the polarization fraction. The color map for the brightness of the source is different from the one of the EVPAs.

Fig. 4: Same as Fig. 3, but the EVPAs follow a toroidal magnetic field configuration.