A&A, Volume 672, April 2023
Article Number: A120
Number of pages: 16
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/202345883
Published online: 12 April 2023
Robust construction of differential emission measure profiles using a regularized maximum likelihood method
^{1} Department of Physics & Astronomy, Western Kentucky University, 1906 College Heights Blvd., Bowling Green, KY 42101, USA
e-mail: paolo.massa@wku.edu; gordon.emslie@wku.edu
^{2} SUPA School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
Received: 10 January 2023
Accepted: 2 March 2023
Context. Extreme-ultraviolet (EUV) observations provide considerable insight into evolving physical conditions in the active solar atmosphere. For a prescribed density and temperature structure, it is straightforward to construct the corresponding differential emission measure profile ξ(T), such that ξ(T) dT is proportional to the emissivity from plasma in the temperature range [T, T + dT]. Here we study the inverse problem of obtaining a valid ξ(T) profile from a set of EUV spectral line intensities observed at a pixel within a solar image.
Aims. Our goal is to introduce and develop a regularized maximum likelihood (RML) algorithm designed to address the mathematically ill-posed problem of constructing differential emission measure profiles from a discrete set of EUV intensities in specified wavelength bands, specifically those observed by the Atmospheric Imaging Assembly (AIA) on the NASA Solar Dynamics Observatory.
Methods. The RML method combines features of maximum likelihood and regularized approaches used by other authors. It is also guaranteed to produce a positive definite differential emission measure profile.
Results. We evaluate and compare the effectiveness of the method against other published algorithms, using both simulated data generated from parametric differential emission measure profile forms, and AIA data from a solar eruptive event on 2010 November 3. Similarities and differences between the differential emission measure profiles and maps reconstructed by the various algorithms are discussed.
Conclusions. The RML inversion method is mathematically rigorous, computationally efficient, and performs acceptably in three key areas: fidelity to the data, accuracy of the reconstruction, and robustness in the presence of data noise. As such, it shows considerable promise for computing differential emission measure profiles from datasets of discrete spectral lines.
Key words: Sun: corona / Sun: UV radiation / methods: numerical
© The Authors 2023
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model.
1 Introduction
Extreme-ultraviolet (EUV) lines emitted by atomic species with a variety of characteristic formation temperatures provide a powerful probe of evolving physical conditions in the active solar atmosphere. However, it has not been a customary practice to use a physical source model to predict a complete set of EUV line intensities and then directly compare these predicted intensities with those observed (the “full forward” approach). Neither is it feasible (for several extremely good reasons, some of which are addressed below) to use a set of observed line intensities to infer the temperature and density structure of the source (the “full inverse” approach). Instead, it has been a common practice in the literature to “meet in the middle,” using the construct of a differential emission measure (DEM) profile.
From the physical (“forward”) perspective, the formal definition of the (volumetric) DEM (cm^{−3} K^{−1}) is (cf. Eq. (10) of Craig & Brown 1976)

ξ(T) = Σ_{i} ∫_{S_{T,i}} [n^{2}(r) / |∇T|] dS, (1)

where n(r) (cm^{−3}) is the number density of electrons at position r, ∇T is the gradient of the temperature distribution T(r), and the sum is over the (one or more) constant-temperature surfaces S_{T,i} within the source. For a one-dimensional source (e.g., a loop with a constant cross-sectional area) that has a monotonic temperature variation T(s) along it, this reduces to the considerably simpler expression for the DEM per unit cross-sectional area:

ξ(T) = n^{2}(T) (ds/dT). (2)
This quantity, with units in cm^{−5} K^{−1}, is the product of the square of the electron density in, and the thickness of, a layer that corresponds to a prescribed elemental range of temperatures, and it can readily be determined from a prescribed density and temperature structure.
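As a concrete illustration of this forward calculation, the following sketch evaluates Eq. (2) for a hypothetical one-dimensional loop; the parabolic temperature profile, loop length, and constant density below are illustrative assumptions, not values from the paper.

```python
import math

L = 1e9                       # loop half-length (cm), assumed
T_base, T_apex = 1e4, 2e6     # footpoint and apex temperatures (K), assumed

def T_of_s(s):
    # assumed monotonic temperature profile along the loop
    return T_base + (T_apex - T_base) * (s / L) ** 2

def n_of_s(s):
    # assumed constant electron density (cm^-3)
    return 1e9

def dem(s, ds=1e4):
    # xi(T) = n^2 ds/dT, with dT/ds estimated by a central finite difference
    dT_ds = (T_of_s(s + ds) - T_of_s(s - ds)) / (2 * ds)
    return n_of_s(s) ** 2 / dT_ds

# the DEM is largest where the temperature gradient is shallowest
print(dem(0.9 * L) < dem(0.1 * L))  # True
```

Note how, for fixed density, the DEM is inversely proportional to the local temperature gradient, which is why a conductively steepened gradient reduces the DEM (cf. the discussion of Emslie & Noyes 1978 below).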
Coming from the other (“inverse”) direction, the observed count rates (per pixel per second) from a specified pixel in an image, measured in a set of m wavelength channels, are given in terms of ξ(T) by

I_{i} = ∫_{0}^{∞} K_{i}(T) ξ(T) dT, i = 1,…, m. (3)
Here K_{i}(T) (cm^{5} pixel^{−1} s^{−1}) is the emissivity function corresponding to the one or more spectral lines that fall within the bounds of the ith wavelength channel and ξ(T) = n^{2} ds/dT is the emission measure per unit area on the sky, with s being the distance along the line of sight. The observational inverse problem is to deduce the form of ξ(T), given a set of observed count rates I_{i} and knowledge of the emissivity functions K_{i}(T).
Early EUV flare observations made with the Skylab Apollo Telescope Mount (ATM) captured the emission in a set of spectral lines formed at temperatures ranging from a few ×10^{4} K to more than 10^{6} K, within a set of 5.5″ × 5.5″ pixels, revealing (Emslie & Noyes 1978) significant temporal correlations between the emissions corresponding to different temperature ranges during several small flares. Emslie & Noyes (1978) also computed the weighted DEM-related quantity (μ(T), in their notation)

〈ξ(T)〉_{i} = ∫ K_{i}(T) ξ(T) dT / ∫ K_{i}(T) dT = I_{i} / ∫ K_{i}(T) dT (4)
for each observed spectral line i, thus allowing a relatively straightforward estimate of the evolution of the ξ(T) profiles throughout the events studied. Relatively modest increases in 〈ξ(T)〉 over the durations of the various flares were found, which were interpreted as being due to a combination of a significant increase in density in the emitting volume, coupled with a steepening of the temperature gradient dT/ds (e.g., from an enhanced thermal conductive flux) and hence reduced thickness of the emitting volume.
Subsequent measurements by the Hinode X-Ray Telescope (XRT; Golub et al. 2007) and EUV Imaging Spectrometer (EIS; Culhane et al. 2007), and by the Solar Dynamics Observatory’s Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) and EUV Variability Experiment (EVE; Woods et al. 2012), have provided a large amount of flare-event data from atomic species formed over a broad range of temperatures. Given the superior nature of these data, it is manifestly appropriate to go beyond the construction of average 〈ξ(T)〉 values corresponding to each wavelength channel using the prescription (4), and instead treat Eq. (3) as an integral equation to be solved for the desired source function ξ(T), given the set of observed intensities I_{i} and the prescribed kernel functions K_{i}(T) (shown in Fig. 1 for the six EUV channels of the SDO/AIA instrument).
In principle, this appears straightforward: Eq. (3) can be discretized as an m × n matrix equation

I = K ξ, (5)

where the ξ_{j} are components of a vector of ξ(T) values at a set of chosen temperatures T_{j}, j = 1,…, n, and the m × n matrix K has coefficients K_{ij} that represent the emissivity function for the ith channel integrated over the temperature bin [T_{j}, T_{j} + ΔT_{j}]. Naively, this has a straightforward solution in terms of the generalized inverse of the matrix K:

ξ = K^{†} I. (6)
However, it is well understood (e.g., Craig & Brown 1986) that, due to a significant interdependence between the rows of the matrix K, this is an ill-posed inverse problem (Tikhonov 1963; Bertero et al. 1985; Schmitt et al. 1996). Because of the ill-conditioning of K, the solution given by Eq. (6) is usually corrupted by oscillations due to noise amplification effects, and it often yields (unphysical) negative values at one or more points. Hence straightforward application of Eq. (6) typically does not yield physically useful solutions. In practice, viable solutions for ξ(T) must instead be found by imposing additional “reasonableness” requirements (e.g., smoothness, boundedness, positivity) on the solution.
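This noise amplification can be seen even in a two-channel toy model; the 2 × 2 matrix below is an illustrative stand-in for K (chosen to have nearly dependent rows, mimicking overlapping temperature responses), not an AIA kernel.

```python
K = [[1.00, 0.99],
     [0.99, 1.00]]   # nearly dependent rows -> ill-conditioned

def solve2(K, I):
    # exact 2x2 inverse, the analogue of Eq. (6)
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [( K[1][1] * I[0] - K[0][1] * I[1]) / det,
            (-K[1][0] * I[0] + K[0][0] * I[1]) / det]

xi_true = [1.0, 1.0]
I_exact = [sum(K[i][j] * xi_true[j] for j in range(2)) for i in range(2)]

# a 2% perturbation of one channel is amplified ~100-fold by the inversion,
# driving one component of the "recovered" DEM unphysically negative
I_noisy = [I_exact[0] * 0.98, I_exact[1]]
xi_rec = solve2(K, I_noisy)
print(xi_rec[0] < 0)  # True
```

With noise-free data the inverse recovers ξ exactly; the instability is entirely a property of the near-singular kernel, which is what the regularization methods discussed below are designed to tame.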
Given the considerable importance of the recovered ξ(T) profile in probing the physics of the emitting plasma (see, e.g., Emslie & Bradshaw 2022, and references therein), numerous authors have proposed a variety of methods for generating physically acceptable ξ(T) profiles. For a general overview of such methods, we refer the reader to, for instance, Chapter 5 of Phillips et al. (2008). Whatever the method chosen, quantitative uncertainties in the ξ(T) profiles derived can straightforwardly be determined by constructing multiple realizations of the data, with each realization drawn from a range of values consistent with the level of noise in the data; synthesis of the ξ(T) profiles obtained from multiple realizations of the data produces a “confidence strip” of ξ(T) profiles (cf. Brown et al. 2006).
Perhaps the most basic approach to solving Eq. (3) is the so-called “EM Loci” method, originally applied to observations of UV lines observed in stellar spectra by Jordan et al. (1987) and subsequently applied to solar observations from the SoHO Coronal Diagnostic Spectrometer (CDS; Harrison et al. 1995) by Cirtain et al. (2007) and Schmelz et al. (2007), and to data from the Hinode EUV Imaging Spectrometer (EIS; Culhane et al. 2007) by Tripathi et al. (2009). The basis of this method is to initially assume that the emitting plasma is isothermal with temperature T_{o}, so that the DEM can be approximated by a Dirac delta-function ξ(T) = EM δ(T − T_{o}), where EM is the total source emission measure (cm^{−5}). The problem then reduces to determining the location and strength of the Dirac delta-function, i.e., the correct temperature T_{o} and corresponding emission measure EM. For such a delta-function form of ξ(T), Eq. (3) reduces to I_{i} = EM K_{i}(T_{o}), so that dividing an observed intensity I_{i} by the corresponding response function value K_{i}(T_{o}) gives the emission measure EM. Using different assumed temperatures T_{o} yields a (generally, concave upward) set of EM(T_{o}) points corresponding to that single observation, and repeating this for different observing channels gives a set of such “EM Loci.” If the source were truly isothermal, then all the EM Loci would intersect at a single point, which would then inform both the source temperature T_{o} and the emission measure EM. However, for a non-isothermal emitting region, the various EM Loci intersect at different (T_{o}, EM) points. Further, since the emission in a given observing channel has contributions from material at temperatures other than the assumed T_{o}, the actual value of ξ(T) at any temperature is necessarily less than the value deduced from the assumption that the plasma is isothermal at that temperature.
It follows that the set of local minima of the overlapping EM Loci curves forms an upper bound (“curtain”) to the ξ(T) profile (see, e.g., Fig. 17 of Hannah & Kontar 2012). This can be useful in determining a starting point for iterative methods.
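The EM Loci construction can be sketched as follows; the Gaussian response functions, temperature grid, and source parameters are toy assumptions rather than AIA calibrations.

```python
import math

def K(i, logT):
    # toy Gaussian responses in log10 T, peaking at different temperatures
    peaks = [6.0, 6.3, 6.6]
    return 1e-27 * math.exp(-((logT - peaks[i]) / 0.15) ** 2)

logT_grid = [5.8 + k / 20 for k in range(25)]   # log10 T from 5.8 to 7.0

# intensities from an assumed isothermal source: xi(T) = EM * delta(T - T_o)
EM_true, logT0 = 1e28, 6.3
I = [EM_true * K(i, logT0) for i in range(3)]

# one EM locus per channel, EM_i(T_o) = I_i / K_i(T_o), and their pointwise
# minimum, which forms the upper-bound "curtain"
loci = [[I[i] / K(i, lt) for lt in logT_grid] for i in range(3)]
curtain = [min(l[k] for l in loci) for k in range(len(logT_grid))]

# for a truly isothermal source the loci all pass through (T_o, EM_true);
# grid index 10 corresponds to log10 T = 6.3
print(abs(curtain[10] - EM_true) / EM_true < 1e-6)  # True
```

For a genuinely multithermal source the loci no longer share a common point, and the curtain serves only as the upper bound described above.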
A more physical approach, which allows a priori for the non-isothermal nature of the region under observation, is to model ξ(T) as a locally smooth function, using, for example, a discretized cubic spline fitting procedure. Such an approach was originally applied to data from the NRL High Resolution Telescope and Spectrograph (HRTS; Tousey et al. 1977) by Monsignori-Fossi & Landini (1992), from the Solar EUV Rocket Telescope and Spectrograph (SERTS; Neupert et al. 1992) by Brosius et al. (1996), and from the SoHO CDS by Parenti et al. (2000). Kashyap & Drake (1998) have presented a Bayesian approach to derive ξ(T) using a Metropolis Markov chain Monte Carlo (MCMC) method, which, similar to the cubic spline fitting of the above references, applies a local smoothing, but over a scale determined by correlating the line emissivity function K_{i}(T) (G_{uℓ}(T) in the notation of Kashyap & Drake 1998) with a parametric Gaussian-based “Mexican Hat” wavelet function (their Eq. (8)). Even for simulated data generated from rather complicated forms of assumed ξ(T) functions (their Figs. 7 and 8), the recovered ξ(T) profiles have a generally good overall agreement with the initially assumed forms, but unfortunately with rather large uncertainties. Kashyap & Drake (1998) argue that “a careful selection of the spectral lines used to infer the DEM is needed in order to avoid ‘artificially’ generating structure in the DEM.”
A fundamentally different approach to recovering the DEM ξ(T) involves assuming a parametric form and determining the best values of the associated parameters by forward-fitting to the observations. Such an approach, using a parametric form constructed from multiple Gaussian functions of log T (see their Eq. (7)), was employed in the analysis of SDO/AIA data by Aschwanden & Boerner (2011). A similar parametric “basis function” approach was employed by Cheung et al. (2015), using a combination of finite-width Gaussians of log T (see their Eq. (13)) and (isothermal) delta-functions. However, comparison of the ξ(T) profiles derived from simulated data generated even from relatively simple functional forms of ξ(T) showed, on occasion, spurious features at high temperatures (e.g., Fig. 4 in Cheung et al. 2015). This “basis pursuit” technique seeks the minimum number of nonzero basis coefficients needed to fit the data within plausible uncertainties and, to avoid (unphysical) negative solutions, it applies a positivity constraint to each of the nonzero coefficients returned. However, the underlying simplex optimization procedure sometimes fails (cf. remarks in Sect. 3.1.2) to generate a solution that satisfies all the required constraints, leading to reconstructed DEM maps (see Sect. 3.3 below) that contain a number of “empty” pixels where the method has been unable to find a solution.
The SITES (Solar Iterative Temperature Emission Solver) algorithm (Morgan & Pickering 2019) first normalizes the temperature response curves K_{i}(T) for each AIA observing channel to produce a set of “relative response curves” that sum to unity in each of several temperature bins. Count rates obtained by forward processing of an initially assumed ξ(T) profile with the normalized temperature response curves are then compared to the observed counts and the differences used to generate a correction term to ξ(T), and this step incorporates a positivity constraint. This iterative process is then repeated until the ξ(T) profile converges to acceptable limits. Pickering & Morgan (2019) have expanded this concept into a “Gridded SITES” method in which pixels with similar intensities in all six EUV channels of the AIA are grouped together and a ξ(T) for each group of pixels is found; this saves considerable computational time by avoiding the calculation of ξ(T) profiles that are similar to those that have already been calculated for another pixel with similar data properties. Such a grouping method could obviously also be used to expedite other algorithms used to generate ξ(T) profiles from datasets that comprise a set of spectral line intensities.
Goryaev et al. (2010) employed a maximum likelihood (ML; see Sect. 2) approach, specifically termed a Bayesian iterative method (BIM; Richardson 1972). They compared the results using this method with previous analyses of spectral line data from the SoHO SUMER (Wilhelm et al. 1995) instrument in order to validate the conclusions of Landi & Feldman (2008). They also compared their results with the results of Shestov et al. (2010), who had applied the BIM method (with fewer iterative steps) to spectral line data from the SPIRIT/CORONAS-F instrument. Lastly, they compared their results to broadband soft X-ray data from the Hinode XRT to validate the conclusions of Reale et al. (2009), which were in turn based on a parametric basis function technique similar to that of Cheung et al. (2015), but using “top-hat,” rather than Gaussian, basis functions. Figures 2 and 3 of Goryaev et al. (2010) show that the BIM method can recover rather complicated functions (e.g., those with multiple peaks) with a reasonable degree of accuracy (we note that, in their figures, a logarithmic scale is applied to the y-axes). However, we show in Sect. 3.1 that ML approaches without additional regularization constraints in general produce unreliable results when applied to data consisting of only a few spectral lines, as in the case of AIA data. Another ML method proposed by Withbroe (1975) and Sylwester et al. (1980) was utilized in the interpretation of Hinode XRT data by Siarkowski et al. (2008), and of EIS and AIA data by Mackovjak et al. (2014). Certain similarities between the performances of this ML method and those of a genetic algorithm have been noted (Siarkowski et al. 2008), and the use of a genetic algorithm has also proven effective (McIntosh et al. 2000) as a preconditioning step to determine the subset of spectral lines for which the corresponding K matrix has minimum condition number.
After this preconditioning step, the profile of ξ(T) is then determined by applying the Tikhonov (1963) regularization method to this selected subset of data.
Several authors have utilized regularized inversion methods; such methods impose a priori information on the solution in order to suppress the amplification of data uncertainties and thus obtain a relatively stable “smooth” recovery of the ξ(T) profile (see Craig & Brown 1986). Several forms of regularized inversion, such as zeroth-order regularization, second-order regularization, and maximum entropy regularization, have been employed and tested using simulated EUV spectral line emission data (e.g., Judge et al. 1997). For example, the second-order regularized inversion method of Hannah & Kontar (2012) seeks to minimize the quantity

‖K ξ − I‖^{2} + λ ‖L ξ‖^{2}, (7)

where I = (I_{1},…, I_{m}), ξ = (ξ(T_{1}),…, ξ(T_{n})), L is a constraint matrix (for second-order regularization, a discretized second-derivative operator), and λ is the regularization parameter. High values of λ smooth the solution by penalizing large deviations in ξ(T), while lower values of λ place greater emphasis on the accuracy with which the reconstructed ξ(T) profile produces spectral line intensities that replicate those observed. Although these regularization approaches are generally superior to simple methods such as EM Loci, (unphysical) negative solutions can still result from overfitting of (noisy) data; accordingly, Hannah & Kontar (2012) chose the lowest value of λ that is consistent with a nonnegative ξ(T) at all temperatures T. The addition of the regularization term resolves many of the problems with ξ(T) profiles corrupted by unphysical artifacts and generally recovers the underlying ξ(T) (with uncertainties) quite well. However, analysis of simulated SDO/AIA data constructed from an assumed Gaussian ξ(T) profile showed that the method can underestimate the actual ξ(T) at high temperatures and overestimate it at low temperatures (e.g., Figs. 11 and 13 of Hannah & Kontar 2012). Hannah & Kontar (2013) applied this method to a number of “interesting” pixels within the SDO/AIA images for a solar eruptive event on 2010 November 3, and we shall return to these results in Sect. 3.2 below.
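The stabilizing effect of such a penalty can be sketched in miniature; for brevity this sketch uses the simplest (zeroth-order, L = identity) penalty mentioned above, solved via the normal equations, and the 2 × 2 kernel and data are toy assumptions.

```python
def solve2(A, b):
    # direct 2x2 linear solve
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def regularized(K, I, lam):
    # normal equations (K^T K + lam I) xi = K^T I  (zeroth-order penalty)
    KtK = [[sum(K[i][a] * K[i][b] for i in range(2)) + (lam if a == b else 0.0)
            for b in range(2)] for a in range(2)]
    KtI = [sum(K[i][a] * I[i] for i in range(2)) for a in range(2)]
    return solve2(KtK, KtI)

K = [[1.00, 0.99],
     [0.99, 1.00]]            # toy ill-conditioned kernel
I = [1.99 * 0.98, 1.99]       # noisy data; the noise-free truth is xi = [1, 1]

xi_raw = regularized(K, I, 0.0)    # unregularized: noise-amplified, negative
xi_reg = regularized(K, I, 1e-2)   # small penalty: stable and positive

print(min(xi_raw) < 0, min(xi_reg) > 0)  # True True
```

The same trade-off governs the choice of λ described above: too large a penalty biases the solution, too small a penalty readmits the noise amplification.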
Plowman et al. (2013) have presented a “fast iterative regularized” (FIR) method based on the L^{2} norm regularization of Eq. (7), starting with construction of a ξ(T) profile in terms of a number of prescribed basis functions B_{l}(T), the coefficients e_{l} of which are to be determined. Plowman et al. (2013) also impose a positivity constraint on the recovered ξ(T) values, although their six-step iterative procedure is rather involved. Their ξ(T) reconstructions compare favorably with those of Hannah & Kontar (2012, their Fig. 13, noting that this comparison does not (as it possibly could) impose positivity on the reconstructions produced by the Hannah & Kontar 2012 method). However, recovery of ξ(T) profiles from simulated data constructed from simple Gaussian functions of log T shows (e.g., their Figs. 1−4) that the method tends to generate spurious features, especially at high temperatures.
In summary, it is apparent that maximum likelihood and regularized approaches each bring their own set of strengths (and also some weaknesses) to the overarching problem of constructing ξ(T) profiles from a discrete set of optically thin spectral line intensities. Strengths include a high degree of mathematical rigor, the absence of any need to assume an a priori (parametric) mathematical form for ξ(T), the ability to impose global (rather than local) smoothness constraints, the ability to authentically reconstruct model ξ(T) profiles from simulated data, and the ability to determine uncertainties in the solution. Weaknesses and/or concerns include the possible generation of spurious features in the solution (particularly at high temperatures) and the occasional inability to find a solution that satisfies the imposed constraints.
We therefore here present a new “regularized maximum likelihood” (RML) method for the generation of ξ(T) profiles from a discrete set of EUV spectral line intensities. This method combines the favorable elements of the maximum likelihood approaches of Withbroe (1975), Sylwester et al. (1980), and Goryaev et al. (2010), and the regularization approaches of Hannah & Kontar (2012) and Plowman et al. (2013). We shall demonstrate, using simulated data generated from idealized single-Gaussian (Sect. 3.1.1) and double-Gaussian (Sect. 3.1.2) ξ(T) forms, that this mathematically rigorous method, which does not require an a priori choice of a functional form for the solution, is nevertheless robust and always generates a (physically required) nonnegative solution. In Sect. 3.2 we apply the method to the same set of representative pixels in a solar eruptive event previously studied by Hannah & Kontar (2013), and we compare the ξ(T) reconstructions obtained for each of these representative pixels by several methods, noting the similarities and differences between the reconstructions obtained. Overall, we find that the RML method produces results that are generally consistent with those obtained using other methods and, unlike other methods, never generates “outlier” results. The method generally shows an acceptable degree of fidelity to the data without generating unphysical features caused by unnecessary overfitting. In Sect. 4, we summarize the results obtained and offer prospects for the implementation of this powerful DEM reconstruction method in the near-real-time prediction of solar activity.
Fig. 1 Temperature response functions for the SDO/AIA EUV channels (Boerner et al. 2012). 
2 The regularized maximum likelihood (RML) method
Let g_{i}(x, y) denote the count rate per unit time detected in the ith observing channel of the SDO/AIA telescope that originates within a 0.6″ × 0.6″ pixel centered at location (x, y) on the solar disk, K_{i}(T) (cm^{5} pixel^{−1} s^{−1}) the temperature response function of the ith AIA channel (see Fig. 1), and ξ(T; x, y) (cm^{−5} K^{−1}) the line-of-sight DEM at location (x, y). These quantities are related by the integral equation (cf. Eq. (3))

g_{i}(x, y) ≈ ∫_{0}^{∞} K_{i}(T) ξ(T; x, y) dT, i = 1,…, m, (8)

where the approximation is due to the fact that experimental data are inevitably corrupted by statistical (and possibly other sources of) noise. Once discretized, Eq. (8) becomes

g_{i} ≈ Σ_{j=1}^{n} K_{ij} ξ_{j} ΔT_{j}, (9)

where we have denoted g_{i} = g_{i}(x, y), K_{ij} = K_{i}(T_{j}), and ξ_{j} = ξ(T_{j}; x, y), i = 1,…, m; j = 1,…, n, and ΔT_{j} is the width of the jth temperature bin. The essence of solving the DEM inverse problem involves finding a physically acceptable source vector ξ that, given an observed data vector g, satisfies Eq. (9) within the bounds of observational uncertainty.
We assume that the data are affected by statistical noise only, so that

g_{i} = 𝒫((Kξ)_{i}), i = 1,…, m, (10)

where 𝒫(η) denotes a Poisson random variable with mean η > 0. The maximum likelihood problem is to determine ξ* such that

ξ* = arg max_{ξ ≥ 0} P(g; ξ), (11)

where P is the likelihood function associated with a Poisson distribution, defined as

P(g; ξ) = Π_{i=1}^{m} [ e^{−(Kξ)_{i}} (Kξ)_{i}^{g_{i}} / g_{i}! ]. (12)

The condition (11) is equivalent to minimizing the negative logarithm of the likelihood function (cf. the C-statistic; Cash 1979):

ξ* = arg min_{ξ ≥ 0} D_{KL}(g; Kξ), (13)

where

D_{KL}(g; Kξ) = Σ_{i=1}^{m} [ g_{i} ln(g_{i}/(Kξ)_{i}) + (Kξ)_{i} − g_{i} ] (14)

is the Kullback–Leibler divergence (Bertero et al. 2008) and we have ignored terms in Eq. (12) that depend only on the (known) quantities g_{i}.
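The basic properties of this divergence are easy to verify numerically: it is nonnegative and vanishes exactly when the predicted counts match the data. The toy count values below are illustrative assumptions.

```python
import math

def d_kl(g, f):
    # Kullback-Leibler divergence of Eq. (14); f plays the role of K*xi
    return sum(gi * math.log(gi / fi) + fi - gi for gi, fi in zip(g, f))

g = [120.0, 45.0, 300.0]                 # toy observed counts (assumed)

print(d_kl(g, g) == 0.0)                 # True: perfect fit gives zero
print(d_kl(g, [130.0, 45.0, 300.0]) > 0) # True: any mismatch is penalized
```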
The partial derivative of D_{KL} with respect to each of the unknowns ξ_{k} is

∂D_{KL}/∂ξ_{k} = Σ_{i=1}^{m} K_{ik} [1 − g_{i}/(Kξ)_{i}], (15)

and hence the gradient is

∇_{ξ} D_{KL}(g; Kξ) = K^{T} 1 − K^{T} (g/(Kξ)), (16)

where 1 is an m-vector with components all equal to unity and g/Kξ is the vector with ith component equal to the ratio of the respective components g_{i} and (Kξ)_{i}. By using the Karush-Kuhn-Tucker (KKT) conditions (Kuhn & Tucker 2014; Kuhn 2014), it can be shown that the minimization problem (13) is equivalent to finding a solution of

ξ ⊙ [K^{T} 1 − K^{T} (g/(Kξ))] = 0, ξ ≥ 0, K^{T} 1 − K^{T} (g/(Kξ)) ≥ 0, (17)

where the multiplication (⊙) and the inequalities between arrays are to be interpreted component-wise. This equation can be solved by means of the iterative scheme

ξ^{(l+1)} = (ξ^{(l)} / (K^{T} 1)) ⊙ K^{T} (g/(Kξ^{(l)})), (18)
starting with a “gray” trial vector^{1} with all unit values. This iterative algorithm, also known as Expectation Maximization (Dempster et al. 1977) or, in the context of astronomical image deconvolution, Richardson–Lucy (Richardson 1972; Lucy 1974), has been applied to the solution of inverse problems in several different fields, including medical imaging (Shepp & Vardi 1982), microscopy (Sarder & Nehorai 2006), and most recently hard X-ray imaging of solar sources (Benvenuto et al. 2013; Massa et al. 2019; Piana et al. 2022). We refer the interested reader to Bertero et al. (2008, 2018) and Massa & Benvenuto (2021) for a comprehensive overview of the ML method and its applications.
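The multiplicative iteration of Eq. (18) can be sketched in a few lines; the 3 × 4 kernel and ground-truth profile below are toy assumptions chosen only to exercise the scheme.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def rmatvec(A, y):  # A^T y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def ml(K, g, n_iter=1000):
    # Richardson-Lucy / EM iteration of Eq. (18)
    xi = [1.0] * len(K[0])                    # "gray" starting vector
    Kt1 = rmatvec(K, [1.0] * len(K))          # K^T 1
    for _ in range(n_iter):
        ratio = [gi / fi for gi, fi in zip(g, matvec(K, xi))]   # g / (K xi)
        xi = [x * b / d for x, b, d in zip(xi, rmatvec(K, ratio), Kt1)]
    return xi

K = [[1.0, 0.5, 0.1, 0.0],
     [0.2, 1.0, 0.5, 0.1],
     [0.0, 0.2, 1.0, 0.5]]
g = matvec(K, [2.0, 1.0, 0.5, 0.2])           # noise-free toy data

xi = ml(K, g)
resid = max(abs(a - b) for a, b in zip(matvec(K, xi), g))
print(all(x > 0 for x in xi))  # True: the multiplicative updates preserve positivity
```

Because every factor in the update is nonnegative, positivity of the iterates is automatic, which is the key structural property carried over to the RML scheme below.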
The ML method defined by Eq. (18) is essentially the same as the BIM method proposed by Goryaev et al. (2010, cf. their Eq. (22)). Also, the ML strategy is similar to the method proposed by Withbroe (1975) and Sylwester et al. (1980), the main difference being the adoption by the latter method of empirically defined weight functions in the iterative process. Contrary to many of the methods discussed in Sect. 1, the ML method naturally includes a positivity constraint on the solution, which prevents the generation of unphysical negative ξ(T) values. However, as is evident from the simulation results below, the ML strategy is not well suited for addressing the ξ(T) reconstruction problem from EUV AIA data only, since the low number of available data channels does not permit the application of a sufficient number of constraints; this generally leads to the presence of spurious features in the reconstructions. This suggests that it would be highly advantageous to incorporate (Green 1990) a penalty term to allow the inclusion of a priori information on the solution. Specifically, we incorporate a term which penalizes solutions with large ξ(T) values at high temperatures in order to suppress the generation of artifacts at the upper end of the temperature range under consideration. With this term added, the minimization problem (13) becomes

ξ* = arg min_{ξ ≥ 0} [ D_{KL}(g; Kξ) + λ Σ_{j=1}^{n} T_{j} ξ_{j} ΔT_{j} ], (19)

where λ > 0 is a regularization parameter and the penalty term is proportional to the DEM-weighted mean temperature. The rationale for this choice of the penalty term is to select the solution, among all the ones with similar accuracy in fitting the data, that results in a ξ(T) profile that is most concentrated at low temperatures. Indeed, the penalty term can be written as ∫ T ξ(T) dT = ∫ n^{2} T ds ≈ (n̄/3k_{B}) ∫ 3 n k_{B} T ds, where k_{B} is the Boltzmann constant and n̄ is the mean source density. The penalty term is thus proportional to the product of the mean source density and the quantity ∫ 3 n k_{B} T ds, representing the total thermal energy per cm^{2} along the line of sight. The regularization constraint thus effectively seeks, among solutions that adequately fit the data, the one with the lowest thermal energy content.
For solving problem (19), we note that the derivative ∂/∂ξ_{k} of the objective function (Eq. (15)) now includes an additional term λ T_{k} ΔT_{k}, which accordingly modifies the gradient expression (16). Incorporating this additional term, the iteration formula (18) becomes

ξ^{(l+1)} = (ξ^{(l)} / (K^{T} 1 + λ T)) ⊙ K^{T} (g/(Kξ^{(l)})), (20)

where T = (T_{1} ΔT_{1},…, T_{n} ΔT_{n}) is an array formed by multiplying each temperature selected for the discretization of Eq. (3) by the width of the corresponding temperature bin. Since the array λT has only positive entries, at each iteration the estimated ξ^{(l)} profile is multiplied component-wise by a nonnegative array to yield a revised estimate ξ^{(l+1)}. Therefore, since the iterative scheme is initialized with an array ξ^{(0)} with all positive entries, each iterate, and hence the final solution, automatically satisfies the (physically required) nonnegativity requirement. The value of the regularization parameter λ is determined according to the Morozov (1966) discrepancy principle: starting from an intentionally high value of λ (which leads to an over-regularized solution with a correspondingly high value of the reduced χ^{2} measure of fidelity to the data), we then progressively decrease λ by a constant multiplicative factor (typically 2/3), until we reach a value at which the solution has a reduced χ^{2} lower than 1. This criterion proves to be almost always effective and leads to solutions which do not overfit the data. We term the method expressed by Eq. (20) the “regularized maximum likelihood” (RML) method.
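The RML iteration together with the progressive decrease of λ can be sketched as follows; the kernel, T-array, and count values are toy assumptions, not AIA quantities, and a fresh (cold-started) reconstruction is run at each trial value of λ for simplicity.

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def rmatvec(A, y):  # A^T y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def rml(K, g, T, lam, n_iter=500):
    # multiplicative RML iteration of Eq. (20)
    xi = [1.0] * len(K[0])
    denom = [d + lam * t                      # K^T 1 + lam * T
             for d, t in zip(rmatvec(K, [1.0] * len(K)), T)]
    for _ in range(n_iter):
        ratio = [gi / fi for gi, fi in zip(g, matvec(K, xi))]
        xi = [x * b / d for x, b, d in zip(xi, rmatvec(K, ratio), denom)]
    return xi

def red_chi2(K, g, xi):
    # reduced chi^2, with Poisson variance approximated by the counts g
    return sum((gi - fi) ** 2 / gi for gi, fi in zip(g, matvec(K, xi))) / len(g)

K = [[1.0, 0.5, 0.1, 0.0],
     [0.2, 1.0, 0.5, 0.1],
     [0.0, 0.2, 1.0, 0.5]]
T = [1.0, 2.0, 4.0, 8.0]      # T_j * dT_j, toy values
g = [251.0, 170.0, 78.0]      # toy "observed" counts

# Morozov-style selection: decrease lambda by a factor 2/3 until chi^2_red < 1
lam = 10.0
xi = rml(K, g, T, lam)
while red_chi2(K, g, xi) > 1.0 and lam > 1e-12:
    lam *= 2.0 / 3.0
    xi = rml(K, g, T, lam)

print(all(x > 0 for x in xi))  # True: positivity holds for every lambda
```

The guard on very small λ simply bounds the loop; in the text's terms, reaching it would correspond to the rare case in which the discrepancy principle cannot be satisfied.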
We conclude this section with a few remarks.
First, in rare instances, especially in weak pixels with relatively poor statistics, it is possible that the Morozov discrepancy principle cannot be satisfied for any value of the regularization parameter λ; in such cases we simply adopt the solution corresponding to the initially chosen value of λ. Further, the main drawback of this progressive-decrease approach for determining λ is that it involves performing several reconstructions in sequence and hence substantially increases the computational cost. In a future work we will explore techniques for determining a priori the optimum value of λ, possibly exploiting the use of neural networks (see, e.g., Alberti et al. 2021).
Second, we can straightforwardly apply the iterative formula (20) to each of the N pixels in an AIA image in parallel by casting g and ξ as matrices of dimension m × N and n × N, respectively. We start by applying Eq. (20) to every pixel and terminate the iterative process for each pixel once that pixel satisfies the discrepancy principle constraint χ^{2} < 1. Pixels for which acceptable solutions have already been obtained are then excluded when we perform the next iteration with a decreased value of λ. This approach significantly reduces the total amount of computational time required to produce acceptable ξ(T) profiles for every pixel in the image.
Third, similar to the ML method of Goryaev et al. (2010) and the regularized approach of Hannah & Kontar (2012), we can provide a quantitative estimate of the uncertainty of the RML solution by means of the confidence strip approach. Specifically, we apply the procedure 25 times, each using a randomized Poisson noise perturbation of the data, and then assign the standard deviation of the recovered ξ(T) values for each temperature bin as an estimate of the uncertainty of that ξ(T) value. For each AIA pixel, the selection of the regularization parameter value λ is performed just once (according to the Morozov discrepancy principle) during the reconstruction process; this value of λ is then used for each of the 25 reconstructions, thus speeding up the uncertainty estimation process.
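The confidence-strip estimate can be sketched as follows; for brevity the inversion here is the plain ML iteration of Eq. (18) rather than the full RML scheme, and the kernel and count levels are toy assumptions.

```python
import math
import random
import statistics

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def rmatvec(A, y):  # A^T y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def ml(K, g, n_iter=300):
    # plain ML iteration of Eq. (18), standing in for the full RML inversion
    xi = [1.0] * len(K[0])
    Kt1 = rmatvec(K, [1.0] * len(K))
    for _ in range(n_iter):
        ratio = [gi / fi for gi, fi in zip(g, matvec(K, xi))]
        xi = [x * b / d for x, b, d in zip(xi, rmatvec(K, ratio), Kt1)]
    return xi

def poisson(mean, rng):
    # product-of-uniforms sampler, adequate for these modest mean counts
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(0)
K = [[1.0, 0.5, 0.1, 0.0],
     [0.2, 1.0, 0.5, 0.1],
     [0.0, 0.2, 1.0, 0.5]]
g0 = matvec(K, [200.0, 100.0, 50.0, 20.0])   # noise-free toy counts

# 25 Poisson realizations of the data, one inversion per realization
profiles = [ml(K, [float(poisson(gi, rng)) for gi in g0]) for _ in range(25)]

# per-bin standard deviation of the recovered profiles = uncertainty estimate
sigma = [statistics.pstdev([p[j] for p in profiles]) for j in range(len(K[0]))]
print(any(s > 0 for s in sigma))  # True: the realizations produce a finite spread
```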
Finally, to the best of our knowledge, this is the first time that a maximumlikelihoodtype algorithm has been applied to the reconstruction of ξ(T) profiles from AIA data exclusively.
3 Results
Throughout this section, the ξ(T) profiles are generated as a function of the temperature T (K). However, in order to highlight features in different temperature ranges, they are plotted as a function of the logarithm (to base 10) of the temperature.
3.1 Simulated data
As a test of the ability of the various inversion algorithms to accurately recover the source DEM function ξ(T), several “ground truth” analytic forms of ξ(T) were used to generate simulated SDO/AIA data in the five SDO/AIA EUV channels 94 Å, 131 Å, 171 Å, 193 Å, and 211 Å. The 335 Å EUV channel was excluded because of its relatively weak temperature response function; it is the only AIA channel that does not dominate^{2} the instrument response at any temperature (Fig. 1), and we have found that attempting to fit the data in this channel often introduces spurious features in the recovered solutions. These simulated data in the five channels were then inverted, using six of the algorithms described above, namely:
the basis pursuit (BP) technique of Cheung et al. (2015);
the fast iterative regularized (FIR) methodology of Plowman et al. (2013);
the iterative SITES method of Morgan & Pickering (2019);
the regularized (REG) approach of Hannah & Kontar (2012);
the (unregularized) maximum likelihood (ML) method of Eq. (18); and
the regularized maximum likelihood (RML) approach of Eq. (20),
to produce recovered ξ(T) forms, to be compared with the original input form as a measure of the accuracy of each method. The reconstructed ξ(T) profiles for all methods involved discretization into 12 temperature bins; this number was chosen as being more than the number of data channels (five), but not so large as to unnecessarily exacerbate the non-uniqueness aspect of the solution. The chosen temperatures (at the [logarithmic] center of each bin) are given by log_{10} T (K) = 5.85(0.15)7.50, i.e., from 5.85 to 7.50 in steps of 0.15. For each method we calculated ξ(T) profiles using 25 different realizations of the data, established by perturbing the data with Poisson noise; assessing the similarities and/or differences between the ξ(T) profiles reconstructed from each data realization allows a characterization of the robustness of the method.
We find in Sect. 3.2 that many AIA pixels during flares and/or solar eruptive events contain data that are consistent with EUV emission generated either by a single source containing material at a relatively small range of temperatures corresponding to quiet-Sun coronal temperatures (≃1.5 × 10^{6} K), or by a two-temperature source with both quiet-Sun and enhanced (≃10^{7} K) components. We do not dwell extensively on the various physical reasons for this (although we do offer plausible explanations for the appearance of both components), but we lean heavily on this observed two-component structure to inform the types of simulated data to be used as a test of the various DEM reconstruction methods. Specifically, the methods should be able to clearly identify and adequately characterize both low-temperature and (if they exist) high-temperature components in the DEM profile. We therefore test the methods using simulated data generated from two main types of “ground truth” DEM profiles: a single Gaussian function of log_{10} T with a centroid temperature ranging from 10^{6.1} K to 10^{6.4} K (Sect. 3.1.1), and a double Gaussian (Sect. 3.1.2) with both a 10^{6.1} K component and a higher-temperature component with a centroid temperature ranging from 10^{6.6} K to 10^{7.2} K.
3.1.1 Single Gaussian forms
Figure 2 shows the results using simulated “ground truth” ξ(T) that take the form of a Gaussian function of log_{10} T. For each method we show the results for 25 different realizations of the data, established by perturbing the data with Poisson noise; the red line is the mean of these 25 reconstructions, while the gray lines show the individual reconstructions for each realization. From left to right, the various columns represent “ground truth” ξ(T) forms with a total^{3} Emission Measure EM = ∫ ξ(T)dT = 10^{28} cm^{−5} and a standard deviation equal to 0.15 dex; the centroid temperatures are given by log_{10} T_{c} = 6.1, 6.2, 6.3, and 6.4.
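A single-Gaussian “ground truth” profile of this kind can be constructed as follows (Python/NumPy sketch; variable names are ours, and the bin widths follow from dT = T ln 10 Δlog_{10} T):

```python
import numpy as np

# 12 bin centers: log10 T = 5.85, 6.00, ..., 7.50 (step 0.15 dex)
logT = 5.85 + 0.15 * np.arange(12)
T = 10.0 ** logT
dT = T * np.log(10.0) * 0.15            # bin widths in K

def gaussian_dem(logTc, sigma_dex=0.15, EM=1e28):
    """'Ground truth' xi(T): a Gaussian in log10 T, normalized so that
    the total emission measure sum(xi * dT) equals EM (cm^-5)."""
    xi = np.exp(-0.5 * ((logT - logTc) / sigma_dex) ** 2)
    return xi * EM / np.sum(xi * dT)    # units: cm^-5 K^-1

xi_true = gaussian_dem(6.1)
```

A double-Gaussian test profile (Sect. 3.1.2) is simply the sum of two such components with different centroids and total emission measures.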
Table 1 shows the values of two metrics that collectively measure the fidelity, accuracy, and robustness of each DEM reconstruction method. The first is the reduced χ^{2} metric, which measures the fidelity of the method with respect to the data: we computed the sum of the squared differences (normalized by the corresponding squared uncertainty) between the original five-channel SDO/AIA spectral line data and the set of line intensities produced by substituting the recovered ξ(T) profiles into Eq. (3); this quantity is then normalized by the number of degrees of freedom. The second metric is the “normalized root mean square error” (NRMSE), the root-mean-square difference between the “ground truth” and reconstructed profiles, normalized by the mean intensity of the “ground truth” profile:

NRMSE = [ (1/n) Σ_{j=1}^{n} ( ξ(T_{j}) − ξ^{true}(T_{j}) )^{2} ]^{1/2} / [ (1/n) Σ_{j=1}^{n} ξ^{true}(T_{j}) ], (21)

where ξ(T_{j}) denotes the reconstructed DEM and ξ^{true}(T_{j}) the “ground truth” DEM in each of the n temperature bins.
The mean value of this metric over the 25 different noisy data realizations provides a measure of the accuracy of the reconstruction, while its standard deviation over this set of realizations is a measure of the robustness of the method, i.e., the sensitivity of the recovered ξ(T) profile to data noise. Table 1 shows the mean and standard deviation of both the reduced χ^{2} and the NRMSE, for the different “ground truth” input DEM functions used and for each of the six reconstruction methods studied. To emphasize the different types of information that we derive from the mean and the standard deviation of the NRMSE metric, we report in Table 1 the mean and standard deviation in two different columns.
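The two metrics can be written compactly as follows (Python sketch; function and argument names are ours, and `n_free` denotes the number of degrees of freedom):

```python
import numpy as np

def reduced_chi2(g_obs, g_fit, sigma, n_free):
    """Fidelity: sum of squared, uncertainty-normalized residuals between
    the observed and forward-fitted channel intensities, divided by the
    number of degrees of freedom."""
    return np.sum(((g_obs - g_fit) / sigma) ** 2) / n_free

def nrmse(xi_true, xi_rec):
    """Accuracy (Eq. (21)): root-mean-square difference between the
    'ground truth' and reconstructed profiles, normalized by the mean
    of the 'ground truth' profile."""
    return np.sqrt(np.mean((xi_rec - xi_true) ** 2)) / np.mean(xi_true)
```

The mean of `nrmse` over the 25 noisy realizations then measures accuracy, and its standard deviation measures robustness, as described above.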
We now discuss the features of the various reconstructions.
The basis pursuit (BP) approach of Cheung et al. (2015) (first row of Fig. 2) is very robust, as evident from the near-congruence of most of the gray curves for the 25 different data realizations and the very low standard deviation values of the NRMSE metric in Table 1. (The only exception is for the simulation centered on log_{10} T_{c} = 6.3, for which one of the 25 reconstructions is corrupted by spurious features.) Compared to the other inversion methods, BP appears to be the most accurate in reconstructing the “ground truth” ξ(T) profiles with log_{10} T_{c} = 6.1 and 6.2, as is evident from the lowest mean NRMSE values (Table 1). This high degree of accuracy is likely due to the strong similarity between the shapes of the simulated “ground truth” configurations and the (Gaussian) basis functions used by this method. However, the fidelity of the reconstructed ξ(T) forms becomes poorer for higher centroid temperatures (third and [especially] fourth columns of Fig. 2). We note that the mean χ^{2} values of the BP reconstructions are systematically larger than those associated with the FIR, SITES, ML, and RML methods. This is probably because the BP method slightly underestimates the peak DEM value and, in order to limit the number of basis functions that are used, it does not compensate for this by adding spurious features at high temperatures, as do SITES and ML (see below).
The fast iterative regularized (FIR) methodology of Plowman et al. (2013, second row of Fig. 2) has good fidelity, with a reduced χ^{2} of order unity (Table 1). However, it significantly overestimates the peak in ξ(T) for the low centroid temperature cases, while underestimating it (and shifting the peak to lower temperatures) for cases with higher centroid temperatures. This worsens the agreement with the corresponding “ground truth” configurations relative to BP, as evidenced by the much higher values of the mean NRMSE metric (Table 1). The method is very robust: the NRMSE standard deviations have values that are among the lowest of all the methods studied.
The iterative SITES method of Morgan & Pickering (2019) (third row of Fig. 2) produces quite acceptable reduced χ^{2} and NRMSE metrics, especially for the “ground truth” profiles centered at log_{10} T_{c} = 6.3 and 6.4. However, for low centroid temperatures, it recovers a ξ(T) profile that is significantly broader (with a commensurately lower peak value) than the “ground truth” profile. This is to be expected: the essence of the SITES algorithm is to add, at each iterative step, a correction to the ξ(T) profile that is based on the difference between the set of forward-fit spectral line intensities and those observed. Any such correction must necessarily introduce structure in ξ(T) that was not present in the previous iteration, and thus, as the number of iterations is increased to provide a better match to the data, the method produces ξ(T) profiles that are progressively broader, leading to a final result that is substantially broader than the “ground truth.” Indeed, Morgan & Pickering (2019) note that “SITES performs poorly for narrow DEM profiles at all temperatures,” such that “narrow peaks are found by SITES, but are smoother,” and this characteristic is evident in the application to AIA data in Sect. 3.2. The method also introduces a spurious component at very high temperatures (log_{10} T ≳ 7.2) in the first three cases, presumably to compensate for the relative inability of the low-temperature component, with its substantial deviation from the “ground truth” profile, to adequately fit the data by itself.
The regularized approach of Hannah & Kontar (2012, REG; fourth row of Fig. 2) typically underestimates the value of the peak and/or shifts it to lower temperatures. The mean reduced χ^{2} values tend to be quite large since, because of the underestimated value of the peak DEM, the intensities in the 171 Å, 193 Å and 211 Å channels are poorly fitted. However, the reconstructed ξ(T) profiles are generally consistent with the respective ground truth configurations, as evidenced by mean NRMSE values similar to (or even better than) those corresponding to the reconstructions produced by the other methods. The relatively poor data-fitting performance of the method is likely due to the significant emphasis that the penalty term in the minimization Eq. (7) places on the smoothness of the recovered ξ(T) profile, even at the expense of a loss of fidelity in matching the input data.
The (unregularized) maximum likelihood (ML) method based on Eq. (18) (fifth row of Fig. 2) produces acceptable values of both the mean reduced χ^{2} and NRMSE metrics; however, it underestimates the peak ξ(T) for the two lowest centroid temperature cases. This method performs much better for cases with a higher centroid temperature, and in all cases it reproduces the value of that centroid temperature well. However, it systematically creates a spurious artifact at temperatures log_{10} T ≳ 7.1. Because of the presence of artifacts at high temperatures (particularly for the two configurations with the lowest centroid temperatures), the total Emission Measure EM = ∫ ξ(T) dT of the reconstructions provided by ML and SITES is ~3 to 4 times larger than the ground truth value of 1 × 10^{28} cm^{−5}. The method is also less robust than BP and FIR, with significant differences between the reconstructed ξ(T) profiles for the 25 different data realizations and hence higher standard deviation values of the NRMSE metric in Table 1.
The regularized maximum likelihood (RML) method (final row of Fig. 2) produces reduced χ^{2} values that are all of order unity, indicating that it is able to fit the data with high fidelity. It also reproduces the peak intensities and centroid temperatures of the input ξ(T) with log_{10} T_{c} = 6.1 and 6.2 (first two columns of Fig. 2) with greater accuracy than FIR, SITES and ML, as indicated by the lower mean NRMSE values (Table 1). However, the RML method does progressively underestimate the high-temperature wings (at log_{10} T ≃ 6.5) in the third column, and significantly underestimates the log_{10} T = 6.4 centroid temperature in the last column. Indeed, underestimation of ξ(T) at such temperatures is a common element of all of the reconstruction methods tested (except ML and, to a lesser extent, SITES). Reference to Fig. 1 reveals why this may be the case: temperatures log_{10} T ≃ 6.3–6.7 correspond to rather low values of the temperature response curves K_{i}(T) in all channels, so that fitting the data in any observed channel is more straightforwardly accomplished by adding a relatively small amount of emission measure at other temperatures. Finally, RML is a very robust method, as evidenced by the values of the NRMSE standard deviation, which, except for the configuration centered at log_{10} T_{c} = 6.4, are among the lowest produced by any method.
Fig. 2 Results of the single-Gaussian simulation tests. From top to bottom: reconstructions of simulated Gaussian ξ(T) profiles by means of the basis pursuit (BP) technique (Cheung et al. 2015), the FIR inversion method of Plowman et al. (2013), the iterative SITES technique (Morgan & Pickering 2019), the regularization technique of Hannah & Kontar (2012, REG), the ML method defined in Eq. (18), and finally our proposed RML technique. The black solid lines represent the “ground truth” configurations, which are all Gaussian functions of log_{10} T with total Emission Measure EM = ∫ ξ(T)dT = 10^{28} cm^{−5} and standard deviation equal to 0.15 dex. From left to right, the centroid temperatures are given by log_{10} T_{c} = 6.1, 6.2, 6.3, and 6.4. The gray lines represent the reconstructions from 25 different realizations of Poisson-noise-perturbed data; the red lines represent the mean values of these 25 reconstructions. The ξ(T) profiles are plotted as a function of the (base 10) logarithm of the temperature (K), so that the peak value of each Gaussian is inversely proportional to 10^{x}, where x is the abscissa of the peak.
Metrics for the single Gaussian “ground truth” model test.
3.1.2 Double Gaussian forms
Figure 3 shows the results obtained with simulated “ground truth” ξ(T) profiles that take the form of two Gaussian functions of log_{10} T. In the application of the methods to real data (see Sect. 3.2 below), it is apparent that ascertaining the veracity of recovered high-temperature (log_{10} T_{c} ≳ 7.0) components in ξ(T) is important. Hence, in the simulation test shown in Fig. 3, the lower-temperature Gaussian has a total Emission Measure EM = ∫ ξ(T) dT = 10^{28} cm^{−5} and a standard deviation of 0.1 dex, and is centered on log_{10} T_{c} = 6.1, while the higher-temperature Gaussian has a total Emission Measure EM = ∫ ξ(T)dT = 2 × 10^{28} cm^{−5} and a standard deviation of 0.15 dex, and is centered on four different values of log_{10} T_{c} = 6.6, 6.8, 7.0 and 7.2. Again, the gray lines represent the reconstructions from 25 different data realizations, and the red lines represent the mean values of these 25 reconstructions. Table 2 shows the means and standard deviations of the same two validation metrics used in Table 1.
The basis pursuit (BP) approach of Cheung et al. (2015) generally underestimates the intensity of both the low-temperature and high-temperature components, the latter being especially obvious in the first column, corresponding to a high-temperature component with the lowest centroid temperature of the four cases considered. The resulting mean NRMSE values (Table 2) are hence considerably larger than those for the first two cases presented in the single-Gaussian experiment. This underestimation of the peak intensity of the low-temperature component is likely the reason for the rather large mean χ^{2} values (Table 2). With the exception of the test presented in the first column, and, to a lesser extent, the test presented in the fourth column, the BP method does correctly identify the approximate centroid temperatures of both low- and high-temperature components. As measured by the standard deviation of the NRMSE values, the robustness of the method for the double-Gaussian case is comparable to that for the single-Gaussian case. However, as expected from the more complex configuration that has to be reconstructed, the NRMSE standard deviation values corresponding to the first two configurations presented in Table 2 are slightly larger than most of those obtained in the single-Gaussian test.
The FIR approach (Plowman et al. 2013) typically underestimates the high-temperature component, but apparently compensates for this by significantly overestimating the intensity of the low-temperature component. Similar to the results for the single-Gaussian experiment, the method is able to fit the data with high fidelity, as shown by mean reduced χ^{2} values that are generally of order unity. It also reproduces the “ground truth” profile rather accurately, as evident from mean NRMSE values (Table 2) that are among the lowest of all the methods considered. With regard to robustness, there is considerable variation in the reconstructions for different data realizations, particularly with reference to the high-temperature component. This results in a standard deviation of the NRMSE metric that is almost always larger than those for the other methods.
In all cases the iterative SITES method of Morgan & Pickering (2019) significantly broadens the low-temperature Gaussian component and concomitantly underestimates its peak value, similar to the method’s performance in the single-Gaussian tests (Fig. 2). To compensate for this, and still reproduce reasonably low reduced χ^{2} values (see Table 2), the method introduces a spurious feature at log_{10} T ≳ 7.2. This behavior, similar to that of the ML method (see below), results in mean NRMSE values that are generally quite large (near unity; see Table 2). Similar behavior is found in Fig. 5 of Morgan & Pickering (2019), who note that “the hot peak is well-fitted by SITES, but the fit for the cool peak is poor.” For the case with a high-temperature component at log_{10} T_{c} = 6.8 (second column of Fig. 3) SITES does a reasonable job of fitting this high-temperature component, but its performance is not as good for other centroid temperatures of the high-temperature component.
The regularized inversion technique (REG) of Hannah & Kontar (2012) significantly underestimates both low- and high-temperature components, especially for lower values of the centroid temperature of the high-temperature component. This is particularly evident in the case presented in the first column of Fig. 3, where the method consistently produces a rather large χ^{2} value. Interestingly, in all the other double-Gaussian cases, REG systematically fits the data with remarkable fidelity, with mean reduced χ^{2} values very close to unity. The method is also among the most accurate and robust, as evident from the low means and standard deviations of the NRMSE metric. Consistently, the ξ(T) profiles for the 25 different data realizations are very similar, even to the point of consistently reconstructing the spurious high-temperature features at log_{10} T ≳ 7.2. Similar artifacts are also present in the reconstructions shown in the first two columns of Fig. 2, although, in these cases, they are less prominent.
The (unregularized) maximum likelihood (ML) approach significantly underestimates the magnitude of the low-temperature component, and severely overestimates both the magnitude and width of the high-temperature component, in all cases creating features at temperatures well in excess of any that are actually present in the “ground truth” ξ(T) profile. The presence of these artifacts is reflected in the large (near unity) mean values of the NRMSE metric. Similar to the SITES reconstructions, these spurious features are most likely introduced to compensate for the underestimated peak of the low-temperature component, as evidenced by the low mean reduced χ^{2} value (see Table 2). Interestingly, similar to REG, ML deduces these (erroneous) high-temperature ξ(T) features rather robustly, as evidenced by values of the NRMSE standard deviation that are comparable to those provided by the other methods. We finally note that, due to the presence of high-temperature artifacts, the total Emission Measure of the reconstructions provided by ML and SITES ranges from 8 × 10^{28} cm^{−5} to 1.5 × 10^{29} cm^{−5}, and from 1.0 × 10^{29} cm^{−5} to 1.2 × 10^{29} cm^{−5}, respectively; all of these are an order of magnitude larger than the ground truth value of 3 × 10^{28} cm^{−5}.
The regularized maximum likelihood approach (RML) tends to slightly overestimate the centroid temperature of the low-temperature component. However, together with SITES, it is the most accurate in reconstructing both the centroid and peak value of the high-temperature component for the test presented in the second column of Fig. 3. Further, only RML, SITES and ML suggest the presence of the (actual) high-temperature component in the test case considered in the first column, albeit without always accurately reproducing the centroid temperature of this component. A comparison with the results obtained by means of the (unregularized) ML method shows the effectiveness of the adopted regularization, which suppresses the spurious high-temperature features present in the ML reconstructions. Table 2 shows that RML achieves a good compromise between fidelity to the data and accuracy (similarity to the ground truth configuration), with mean reduced χ^{2} values that are systematically close to unity and mean NRMSE values that are systematically among the lowest of the methods considered. The RML method also proves to be very robust, with NRMSE standard deviations that are systematically the lowest (Table 2). Most importantly, comparing the single-Gaussian results of Fig. 2 and the double-Gaussian results of Fig. 3, it is apparent that the RML method recovers a (suitably scaled and positioned) high-temperature component if and only if one is actually present.
Fig. 3 Results of the double-Gaussian simulation tests. From top to bottom: reconstructions of a simulated double-Gaussian ξ(T) profile by means of the basis pursuit (BP) technique (Cheung et al. 2015), the FIR inversion method of Plowman et al. (2013), the iterative SITES technique (Morgan & Pickering 2019), the regularization technique of Hannah & Kontar (2012, REG), the ML method defined in Eq. (18), and finally our proposed RML technique. The black solid lines represent the “ground truth” configurations, which consist of double-Gaussian profiles. The left Gaussian has a total Emission Measure EM = ∫ξ(T) dT = 10^{28} cm^{−5} and a standard deviation of 0.1 dex, and is centered on log_{10} T_{c} = 6.1, while the right Gaussian has a total Emission Measure EM = ∫ξ(T)dT = 2 × 10^{28} cm^{−5} and a standard deviation of 0.15 dex, and is centered on (from left to right) log_{10} T_{c} = 6.6, 6.8, 7.0 and 7.2. The gray lines represent reconstructions from 25 different realizations of Poisson noise affecting the data; the red lines represent the mean values of these 25 reconstructions. The ξ(T) profiles are plotted as a function of the (base 10) logarithm of the temperature (K).
3.2 Application to AIA data
Here we compare^{4} the ξ(T) reconstructions for a selected set of pixels in the 2010 November 3 12:15 UT event previously studied by Hannah & Kontar (2013). Figure 4 shows the general morphology of the region, which produced both a flare and an erupting flux rope, at 12:15:02 UT in several SDO/AIA channels. Eight pixels (see Fig. 5) were selected by Hannah & Kontar (2013) as representative of different features in the field of view. In Fig. 5 we show the ξ(T) reconstructions for each of these pixels by the various reconstruction methods (excluding the problematic unregularized ML method) considered above. Below we discuss and compare the general features of the results obtained.
Pixel 1 – Core of the erupting plasmoid: The various reconstructions are quite similar, and clearly show two components to the emission, a relatively cool one at around 1.5 × 10^{6} K and a hotter one at around 10^{7} K. We agree with the interpretation of Hannah & Kontar (2013) that these components probably represent the background corona along the line of sight, and the plasmoid emission, respectively. The methods provide consistent reconstructions of the low-temperature component, although SITES returns a significantly broader high-temperature wing. For the high-temperature component, the centroid is consistently retrieved by the different techniques, while its reconstructed peak value, width and skewness vary somewhat from method to method.
Pixel 2 – Filament/stem behind the plasmoid: Here the relatively cool background line-of-sight emission is about 3× more intense than for Pixel 1, while the hot component is about 3× less intense. Although most of the methods agree on the temperature of the two components, the FIR method appears to broaden the low-temperature component and shift its centroid temperature downward slightly, while, as expected from the results of Sects. 3.1.1 and 3.1.2, the SITES method produces a much broader profile with a lower value of ξ(T) at the centroid.
Pixel 3 – High corona away from the event: This pixel contains a relatively small amount of emission (peak value of ξ(T) ≃ 2 × 10^{20} cm^{−5} K^{−1}) at a temperature around log_{10} T = 6.2, similar to the temperature of the background corona components in Pixels 1 and 2. The BP, REG and RML methods agree on the centroid temperature of this component, although BP produces an enhanced high-temperature wing and RML returns a higher estimate of the peak value of ξ(T). However, the reconstructions by BP and REG both fall within the ±1σ error bars associated with the RML reconstruction, suggesting that this is simply due to statistical uncertainty in the data. The FIR and SITES methods again (cf. Pixels 1 and 2) produce a broader peak with an enhanced low (high) temperature wing compared to the other methods.
Pixel 4 – Corona away from the event: All methods agree very well as to the presence and intensity of a relatively cool log_{10} T ≃ 6.2 component; however, the SITES method again produces a much broader profile, and considerations similar to those for Pixel 3 apply to the enhanced high-temperature wing produced by BP and the higher estimate of the peak flux returned by RML.
Pixel 5 – Corona near the event: Again, all methods agree very well as to the presence and intensity of a relatively cool log_{10} T ≃ 6.2 component; however, the FIR method substantially overestimates the low-temperature “wing” of this component relative to the other three methods, while the SITES method overestimates the high-temperature wing. Further, similar to Pixels 3 and 4, the BP method reconstructs a ξ(T) profile that is more skewed toward higher temperature values.
Pixel 6 – Low corona flare emission: All methods agree very well with regard to both the relatively cool (log_{10} T ≃ 6.2) and hot (log_{10} T ≃ 7.1) components present. Again, the FIR method creates a low-temperature ξ(T) component that is broader (and skewed toward lower temperatures) than those of the other methods. Further, both the BP and the SITES reconstructions of the high-temperature component show an enhancement in the high-temperature wing, while the other methods agree quite well in terms of the centroid value, peak intensity and width of the reconstructed high-temperature component.
Pixel 7 – Envelope just ahead of the plasmoid: Here the FIR method once again produces a low-temperature component that is broader and slightly shifted toward lower temperatures compared to the other reconstructions, while the SITES method again produces a broader component with an enhanced high-temperature wing. Also, the BP method retrieves an enhanced high-temperature “wing” for this component. There is also evidence of a weak high-temperature (log_{10} T ≃ 7.0) component. Given the location of this material, it is not unreasonable to expect additional heating there, given, for instance, the findings of Mishra et al. (2020) that “the CME is in the heat-releasing state (i.e., entropy loss) throughout its journey from the Sun to Earth”.
Pixel 8 – Further ahead of the plasmoid: Here all methods agree that there is a low-temperature component with an intensity similar to that of Pixel 7. The BP and SITES methods both show an enhancement in the high-temperature wing of the ξ(T) profile. The 131 Å image shows that this pixel is at the leading edge of the erupting material, providing (similar to Pixel 7) a plausible explanation for such enhanced heating of the low-temperature component.
In any ill-posed inversion problem, there is a necessary trade-off between fidelity to the data and accuracy in the recovered solution. The fidelity to the AIA data is determined by the reduced χ^{2} values shown in Table 3, which are based on a comparison of the original data with the line intensities obtained by substituting the recovered ξ(T) profile into Eq. (3). Most of the χ^{2} values for the eight pixels are of order unity, showing that the data are well fitted but not overfitted at the expense of the plausibility of the ξ(T) profile. Notable exceptions are many of the SITES reconstructions, which have reduced χ^{2} values that are generally below unity: the resulting overfitting of noisy data could be among the reasons behind the presence of spurious artifacts in the reconstructions. Other notable exceptions occur in the application of the FIR method to Pixel 5, where the method creates a more intense peak compared to the reconstructions by the other methods, and in the application of the REG method to Pixel 8, where the reconstruction has a significantly lower peak value compared to those of the other methods.
Fig. 4 Images recorded by AIA on November 3, 2010 around 12:15:02 UT in the 94 Å, 131 Å, 171 Å, 193 Å, and 211 Å channels, from left to right, respectively. 
Fig. 5 DEM profiles reconstructed from observed AIA count rates in selected pixels of the 2010 November 3 event. Top row: AIA images recorded around 12:15:02 UT in the 131 Å and 211 Å channels (left and right panel, respectively). The numbered crosses denote the locations of the pixels selected for a comparison of the ξ(T) profiles reconstructed by the different methods. Second to fifth rows: ξ(T) profiles associated with Pixels 1 through 8. In each panel, the profiles reconstructed by the BP, FIR, SITES, REG and RML methods are plotted in magenta, blue, brown, green, and orange, respectively. The ±1σ uncertainties associated with the SITES, REG and RML reconstructions have been added as error bars at each temperature point in the pertinent reconstruction. The intensities of the reconstructions for Pixels 1, 3 and 5 have been multiplied by a factor of 2. The intensity axes are linear, while the temperature axes are in terms of log_{10} T (K).
3.3 Differential emission measure maps
In Fig. 6, we show the DEM maps reconstructed by each of the five reconstruction methods considered (rows), at five different temperatures (columns), logarithmically scaled with a factor of two between successive temperatures. In each map, the pixel value corresponds to the value of the DEM profile ξ(Τ) calculated for that specific pixel at the temperature in question.
As pointed out by Hannah & Kontar (2013), for some temperatures the DEM maps closely resemble the images from one of the AIA channels: for example, the T = 1.4 MK map closely resembles the 171 Å image, and the T = 11 MK map closely resembles the 131 Å image. These close matches reflect the well-defined peaks in the response curves for those channels at those respective temperatures (Fig. 1). However, the DEM maps for other temperatures reveal features that are not as apparent in the individual SDO/AIA images, such as the large region of emission extending out to coordinates (x ≃ −1100, y ≃ [−450 : −350]) above the eastern limb in the T = 2.8 MK maps. This feature is reproduced consistently by all the reconstruction methods (see Fig. 6), confirming its reality and highlighting the generally complicated relationship between temperature and emitted wavelength, particularly for SDO/AIA channels with a relatively broad temperature response. We also note (cf. remarks in Sect. 1) that the DEM maps produced by the BP method (see Fig. 6) contain 253 “null” pixels, showing points where the simplex optimization method adopted by BP has not been able to find a solution that satisfies all the required constraints. Finally, looking at the high-temperature (22 MK) maps, we see that the SITES method produces a relatively high amount of emission at these temperatures (cf. the results of Sects. 3.1.1 and 3.1.2). On the other hand, the RML method, consistent with the way it is designed (with a regularization term that acts to suppress the presence of ξ(T) features at high temperatures; see remarks following Eq. (19)), produces substantially less emission at such high temperatures.
In Fig. 7, we show the RML 2.8 MK DEM map, together with a map of the regularization parameter λ (Eqs. (19) and (20)) used at each pixel within the image and a histogram of the λ values used throughout the image. In general, bright pixels have a higher signal-to-noise ratio and hence require a lower degree of regularization during the inversion process, and Fig. 7 indeed shows that the more intense regions of the flare are associated with smaller values of the regularization parameter λ. The number of pixels that use a given value of λ has an approximately monotonic dependence on λ: most pixels require a high degree of regularization to produce a physically acceptable result, while a few pixels in the most intense regions of the flare produce a satisfactory ξ(T) profile with little to no regularization required. It is important to note, however, that the value of the regularization parameter is not simply determined by the statistical quality of the data: the optimal regularization parameter value selected by the Morozov discrepancy principle also depends on the “shape” of the DEM profile to be reconstructed. Finally, we note that, given the several orders of magnitude spanned by the values of the regularization parameter λ necessary to yield acceptable reduced χ^{2} values in all pixels, approximating the regularization parameter with a constant value would result in a substantial decrease in RML performance, both in terms of fidelity to the data and accuracy of the reconstructed ξ(T) profiles.
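The per-pixel selection logic described above can be sketched as follows: scan a logarithmic grid of λ values and, following the Morozov discrepancy principle, keep the strongest regularization that still reproduces the data to within the noise (reduced χ² ≤ 1). In this sketch a plain Tikhonov ridge solve is a stand-in for the actual RML inversion step, and the Gaussian kernel, ground-truth profile, and noise level are illustrative values, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_temp, n_chan = 24, 6
logT = np.linspace(5.8, 7.4, n_temp)
centers = np.linspace(6.0, 7.2, n_chan)
# Hypothetical Gaussian stand-ins for the AIA temperature response functions
K = np.exp(-(((logT[None, :] - centers[:, None]) / 0.2) ** 2))
xi_true = np.exp(-(((logT - 6.4) / 0.15) ** 2))   # Gaussian "ground truth" DEM
g_clean = K @ xi_true
sigma = 0.05 * g_clean.max()
g = g_clean + rng.normal(0.0, sigma, n_chan)      # noisy channel intensities

def ridge_solve(lam):
    """Tikhonov-regularized least-squares stand-in for the RML inversion.
    (Unlike RML, plain Tikhonov does not enforce positivity of the solution.)"""
    return np.linalg.solve(K.T @ K + lam * np.eye(n_temp), K.T @ g)

def reduced_chi2(xi):
    return np.sum(((g - K @ xi) / sigma) ** 2) / n_chan

# Morozov discrepancy principle: keep the strongest smoothing that is still
# consistent with the data at the reduced chi^2 <= 1 level.
lambdas = np.logspace(-6, 2, 81)
acceptable = [lam for lam in lambdas if reduced_chi2(ridge_solve(lam)) <= 1.0]
lam_star = max(acceptable) if acceptable else lambdas[0]
```

Repeating this per pixel, brighter pixels (higher signal-to-noise, smaller relative σ) pass the χ² test with smaller λ, matching the trend visible in Fig. 7.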
Fig. 6 DEM maps reconstructed from the AIA images shown in Fig. 4. Rows, from top to bottom: the BP method (Cheung et al. 2015), the FIR method (Plowman et al. 2013), the SITES method (Morgan & Pickering 2019), the REG method (Hannah & Kontar 2012), and our proposed RML method. The images in each column correspond to the same temperature, which is reported in the top left corner of the panels in the first row. All images in a given temperature column share the same color map, with different (logarithmic) scalings for the different temperatures.
Fig. 7 Distribution of values of the RML regularization parameter. Left panel: reconstructed DEM map corresponding to T = 2.8 MK, produced by the RML method. Middle panel: value of the regularization parameter λ used at each pixel within the image. Right panel: histogram of the λ values selected by RML for the ~250 000 0.6″ × 0.6″ pixels within the image.
4 Summary
The results of Sects. 3.1 and 3.2 show that, for both simulated and actual AIA data, the ξ(T) reconstructions obtained using the regularized maximum likelihood (RML) method described in Sect. 2 are broadly compatible with those of other methods; in no case does the RML method create a ξ(T) profile that is an “outlier,” and, with the possible exception of the very weak (and hence statistically uncertain) Pixel 3, in no case is its χ^{2} value unacceptably large (corresponding to oversmoothing) or unacceptably small (corresponding to overfitting of the data).
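The two-sided χ² criterion invoked above can be made explicit: a reduced χ² that is too large signals oversmoothing (the solution cannot follow the data), while one that is too small signals overfitting of the noise. A minimal sketch, with illustrative bounds rather than any thresholds taken from the paper:

```python
def fit_quality(chi2_reduced, lo=0.5, hi=2.0):
    """Classify a reduced chi^2 value; the bounds lo and hi are illustrative."""
    if chi2_reduced > hi:
        return "oversmoothed"   # solution too smooth to reproduce the data
    if chi2_reduced < lo:
        return "overfitted"     # solution follows the noise, not the signal
    return "acceptable"
```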
As evidenced by the reconstructed ξ(T) profiles constructed from different (Poisson-noise) realizations of the data (Figs. 2 and 3), and by comparisons of the spectral line data produced by using the “ground truth” and recovered ξ(T) profiles in Eq. (3), the RML method^{5} is characterized by excellent performance in all three areas of concern: fidelity to the data, accuracy in the reconstructed ξ(T) profiles, and robustness in the presence of data noise. Further, it is straightforward to implement and computationally efficient, taking^{6} about 35 s to reconstruct ξ(T) profiles for 500 × 500 pixels (~7000 ξ(T) reconstructions per second) on an Apple MacBook Pro M1 (Chip Apple M1, CPU 8-core) processor, without considering the need to reconstruct solutions for multiple data realizations in order to estimate the uncertainty on the solution. Moreover, it does not require an a priori choice of a parametric functional form and, very importantly for this particular application, always generates a nonnegative solution without the need to impose a posteriori adjustments on the solutions obtained (cf. Plowman et al. 2013). We conclude that it is an appropriate method to use in the construction of ξ(T) profiles from pixel-by-pixel AIA data.
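The positivity guarantee mentioned above comes from the multiplicative form of the iteration: if the kernel, data, and initial estimate are non-negative, every update multiplies ξ by a non-negative factor. A minimal sketch using a plain Richardson-Lucy update (the RML iteration of Eq. (19) additionally includes a regularization term, omitted here), applied to an illustrative random kernel rather than the AIA response matrix:

```python
import numpy as np

def rl_dem(K, g, n_iter=200):
    """Richardson-Lucy-style multiplicative updates for g = K @ xi.
    With non-negative K, g, and starting estimate, xi can never go negative."""
    xi = np.ones(K.shape[1])          # flat initial estimate
    norm = K.sum(axis=0)              # K^T applied to a vector of ones
    for _ in range(n_iter):
        ratio = g / np.clip(K @ xi, 1e-30, None)
        xi *= (K.T @ ratio) / norm
    return xi

# Illustrative check on a random non-negative kernel with noiseless data
rng = np.random.default_rng(1)
K = rng.random((6, 20)) + 0.1
xi_true = rng.random(20)
g = K @ xi_true
xi_rec = rl_dem(K, g)
```

Because the non-negativity is built into the update rule itself, no a posteriori clipping or adjustment of the recovered profile is needed.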
In future work we will develop a more general version of RML that is applicable to other data sets, such as those from EIS or XRT, in order to better constrain (cf. Hannah & Kontar 2013) the overall solution of the DEM reconstruction problem. Given the general features of the RML reconstructions (existence, fidelity, accuracy, robustness, and nonnegativity), this DEM reconstruction method is particularly suitable for the application of machine learning tools to solar data. Furthermore, the computational efficiency of the method will allow us to generate, from AIA data in near real-time, four-dimensional data hypercubes ξ(x, y; T; t) (see Sect. 5 of Massa & Emslie 2022) that represent a time series of DEM maps. Application of machine learning tools to such hypercubes, each labeled with the eventual level of activity (e.g., maximum GOES level, duration of high levels of emission, total energy released) in the event represented by the data hypercube in question, can be used to identify both morphological and thermodynamic precursors of solar activity (see, e.g., Gontikakis et al. 2020), thus providing a promising tool for predicting the timing, location, and characteristics of solar eruptive events.
Acknowledgements
We thank Lars Hebenstiel for several useful discussions on the features, strengths and weaknesses of various DEM reconstruction algorithms, and Dr. Huw Morgan for several constructive suggestions on improving the manuscript. We also thank the authors of the BP (Cheung et al. 2015), FIR (Plowman et al. 2013), and SITES (Morgan & Pickering 2019) methods for making their codes freely available, thus allowing us to use them in the comparison of various methods. PM and AGE were supported by NASA Kentucky under award number 80NSSC21M0362; IGH and EPK were supported by STFC consolidated grant ST/T000422/1.
References
 Alberti, G. S., De Vito, E., Lassas, M., Ratti, L., & Santacesaria, M. 2021, Adv. Neural Inform. Process. Syst., 34, 25205
 Aschwanden, M. J., & Boerner, P. 2011, ApJ, 732, 81
 Benvenuto, F., Schwartz, R., Piana, M., & Massone, A. M. 2013, A&A, 555, A61
 Bertero, M., De Mol, C., & Pike, E. R. 1985, Inverse Problems, 1, 301
 Bertero, M., Lantéri, H., Zanni, L., et al. 2008, in Mathematical Methods in Biomedical Imaging and Intensity-Modulated Radiation Therapy (IMRT), eds. Y. Censor, M. Jiang, & A. K. Louis (Edizioni della Normale), 7, 37
 Bertero, M., Boccacci, P., & Ruggiero, V. 2018, Inverse Imaging with Poisson Data (IOP Publishing), 2053
 Boerner, P., Edwards, C., Lemen, J., et al. 2012, Sol. Phys., 275, 41
 Brosius, J. W., Davila, J. M., Thomas, R. J., & Monsignori-Fossi, B. C. 1996, ApJS, 106, 143
 Brown, J. C., Emslie, A. G., Holman, G. D., et al. 2006, ApJ, 643, 523
 Cash, W. 1979, ApJ, 228, 939
 Cheung, M. C. M., Boerner, P., Schrijver, C. J., et al. 2015, ApJ, 807, 143
 Cirtain, J. W., Del Zanna, G., DeLuca, E. E., et al. 2007, ApJ, 655, 598
 Craig, I. J. D., & Brown, J. C. 1976, A&A, 49, 239
 Craig, I. J. D., & Brown, J. C. 1986, Inverse Problems in Astronomy. A Guide to Inversion Strategies for Remotely Sensed Data (Bristol: Adam Hilger)
 Culhane, J. L., Harra, L. K., James, A. M., et al. 2007, Sol. Phys., 243, 19
 Dempster, A. P., Laird, N. M., & Rubin, D. B. 1977, J. Roy. Stat. Soc. B (Methodological), 39, 1
 Emslie, A. G., & Bradshaw, S. J. 2022, ApJ, 939, 19
 Emslie, A. G., & Noyes, R. W. 1978, Sol. Phys., 57, 373
 Golub, L., DeLuca, E., Austin, G., et al. 2007, Sol. Phys., 243, 63
 Gontikakis, C., Kontogiannis, I., Georgoulis, M. K., et al. 2020, ArXiv e-prints [arXiv:2011.06433]
 Goryaev, F. F., Parenti, S., Urnov, A. M., et al. 2010, A&A, 523, A44
 Green, P. J. 1990, J. Roy. Stat. Soc. B (Methodological), 52, 443
 Hannah, I. G., & Kontar, E. P. 2012, A&A, 539, A146
 Hannah, I. G., & Kontar, E. P. 2013, A&A, 553, A10
 Harrison, R. A., Sawyer, E. C., Carter, M. K., et al. 1995, Sol. Phys., 162, 233
 Jordan, C., Ayres, T. R., Brown, A., Linsky, J. L., & Simon, T. 1987, MNRAS, 225, 903
 Judge, P. G., Hubeny, V., & Brown, J. C. 1997, ApJ, 475, 275
 Kashyap, V., & Drake, J. J. 1998, ApJ, 503, 450
 Kuhn, H. W. 2014, in Traces and Emergence of Nonlinear Programming (Springer), 393
 Kuhn, H. W., & Tucker, A. W. 2014, in Traces and Emergence of Nonlinear Programming (Springer), 247
 Landi, E., & Feldman, U. 2008, ApJ, 672, 674
 Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
 Lucy, L. B. 1974, AJ, 79, 745
 Mackovjak, Š., Dzifčáková, E., & Dudík, J. 2014, A&A, 564, A130
 Massa, P., & Benvenuto, F. 2021, Inverse Problems, 37, 045013
 Massa, P., & Emslie, A. G. 2022, Front. Astron. Space Sci., 9
 Massa, P., Piana, M., Massone, A. M., & Benvenuto, F. 2019, A&A, 624, A130
 McIntosh, S. W., Charbonneau, P., & Brown, J. C. 2000, ApJ, 529, 1115
 Mishra, W., Wang, Y., Teriaca, L., Zhang, J., & Chi, Y. 2020, Front. Astron. Space Sci., 7, 1
 Monsignori-Fossi, B. C., & Landini, M. 1992, Mem. Soc. Astron. Italiana, 63, 767
 Morgan, H., & Pickering, J. 2019, Sol. Phys., 294, 135
 Morozov, V. A. 1966, Dokl. Akad. Nauk, 167, 510
 Neupert, W. M., Epstein, G. L., Thomas, R. J., & Thompson, W. T. 1992, Sol. Phys., 137, 87
 Parenti, S., Bromage, B. J. I., Poletto, G., et al. 2000, A&A, 363, 800
 Phillips, K. J. H., Feldman, U., & Landi, E. 2008, Ultraviolet and X-ray Spectroscopy of the Solar Atmosphere (Cambridge University Press)
 Piana, M., Emslie, A. G., Massone, A. M., & Dennis, B. R. 2022, Hard X-ray Imaging of Solar Flares (Springer-Verlag)
 Pickering, J., & Morgan, H. 2019, Sol. Phys., 294, 136
 Plowman, J., Kankelborg, C., & Martens, P. 2013, ApJ, 771, 2
 Reale, F., Testa, P., Klimchuk, J. A., & Parenti, S. 2009, ApJ, 698, 756
 Richardson, W. H. 1972, JOSA, 62, 55
 Sarder, P., & Nehorai, A. 2006, IEEE Signal Process. Mag., 23, 32
 Schmelz, J. T., Nasraoui, K., Del Zanna, G., et al. 2007, ApJ, 658, L119
 Schmitt, J. H. M. M., Drake, J. J., Stern, R. A., & Haisch, B. M. 1996, ApJ, 457, 882
 Shepp, L. A., & Vardi, Y. 1982, IEEE Trans. Med. Imaging, 1, 113
 Shestov, S. V., Kuzin, S. V., Urnov, A. M., Ul'Yanov, A. S., & Bogachev, S. A. 2010, Astron. Lett., 36, 44
 Siarkowski, M., Falewicz, R., Kepa, A., & Rudawy, P. 2008, Ann. Geophys., 26, 2999
 Sylwester, J., Schrijver, J., & Mewe, R. 1980, Sol. Phys., 67, 285
 Tikhonov, V. I. 1963, Sov. Phys. Uspekhi, 5, 594
 Tousey, R., Bartoe, J. D. F., Brueckner, G. E., & Purcell, J. D. 1977, Appl. Opt., 16, 870
 Tripathi, D., Mason, H. E., Dwivedi, B. N., del Zanna, G., & Young, P. R. 2009, ApJ, 694, 1256
 Wilhelm, K., Curdt, W., Marsch, E., et al. 1995, Sol. Phys., 162, 189
 Withbroe, G. L. 1975, Sol. Phys., 45, 301
 Woods, T. N., Eparvier, F. G., Hock, R., et al. 2012, Sol. Phys., 275, 115
It is not necessary to use a realistic guess (such as the base of the EM Loci plots; see Sect. 1) as the initial estimate ξ^{(0)}.
The 94 Å channel is never strictly dominant, but it does have a response roughly equal to that of the 131 Å and 193 Å channels at log_{10} T ≃ 6.9 (Fig. 1) and so is included in the analysis.
The “ground truth” ξ(T) profiles are Gaussians with different centroid temperatures but the same standard deviation in log_{10} T. Thus the total Emission Measure EM = ∫ξ(T) dT = ∫ξ(T) T d ln T = (ln 10) ∫ξ(T) T d log_{10} T scales as the peak value of the Gaussian times its centroid temperature T_{c}. It follows that, in order for all the “ground truth” profiles to have the same EM, the peak value of each profile must be inversely proportional to its centroid temperature T_{c}. This pattern is evident in Fig. 2.
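The scaling argument in this footnote can be checked numerically. The sketch below evaluates EM = (ln 10) ∫ξ(T) T d log_{10} T for Gaussian profiles in log_{10} T via the trapezoidal rule and confirms that EM is proportional to the peak value times the centroid temperature; the specific peak values, centroids, and widths are illustrative.

```python
import numpy as np

def em(peak, logTc, sigma_dex=0.15):
    """EM = (ln 10) * integral of xi(T) * T over log10(T), trapezoidal rule."""
    logT = np.linspace(logTc - 5 * sigma_dex, logTc + 5 * sigma_dex, 4001)
    xi = peak * np.exp(-0.5 * ((logT - logTc) / sigma_dex) ** 2)
    integrand = xi * 10.0 ** logT
    dx = logT[1] - logT[0]
    return np.log(10.0) * dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# Raising the centroid by a factor 10^0.3 (~2) while lowering the peak by the
# same factor leaves EM unchanged: equal-EM profiles must have peak values
# inversely proportional to their centroid temperature T_c.
em1 = em(peak=1.0, logTc=6.1)
em2 = em(peak=10.0 ** (6.1 - 6.4), logTc=6.4)
```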
Small differences between the results presented in this paper and those in Hannah & Kontar (2013) result from the fact that we do not consider the AIA 335 Å channel in reconstructing the ξ(T) profiles. As discussed in Sect. 3.1 above, the 335 Å channel has a small emissivity function over the temperature range of interest (Fig. 1) and hence can lead to the introduction of spurious features in the recovered ξ(T) profiles; this very consideration means that its influence on the recovered ξ(T) profiles is not entirely negligible.
Both IDL and Python codes that implement the RML method are available at https://github.com/paolomassa/WAFFLE.git
When a more efficient rule for the selection of the regularization parameter (which does not involve performing multiple reconstructions, as is the case when the Morozov discrepancy principle is employed) is implemented, the computational time for reconstructing 500 × 500 ξ(T) profiles (without uncertainty estimation) should decrease to less than 10 s.
All Figures
Fig. 1 Temperature response functions for the SDO/AIA EUV channels (Boerner et al. 2012).
Fig. 2 Results of the single-Gaussian simulation tests. From top to bottom: reconstructions of simulated Gaussian ξ(T) profiles by means of the basis pursuit (BP) technique (Cheung et al. 2015), the FIR inversion method of Plowman et al. (2013), the iterative SITES technique (Morgan & Pickering 2019), the regularization technique of Hannah & Kontar (2012, REG), the ML method defined in Eq. (18), and finally our proposed RML technique. The black solid lines represent the “ground truth” configurations, which are all Gaussian functions of log_{10} T with total Emission Measure EM = ∫ξ(T) dT = 10^{28} cm^{−5} and standard deviation equal to 0.15 dex. From left to right, the centroid temperatures are given by log_{10} T_{c} = 6.1, 6.2, 6.3, and 6.4. The gray lines represent the reconstructions from 25 different realizations of Poisson-noise-perturbed data; the red lines represent the mean values of these 25 reconstructions. The ξ(T) profiles are plotted as a function of the (base 10) logarithm of the temperature (K), so that the peak value of each Gaussian is inversely proportional to 10^{x}, where x is the abscissa.
Fig. 3 Results of the double-Gaussian simulation tests. From top to bottom: reconstructions of a simulated double-Gaussian ξ(T) profile by means of the basis pursuit (BP) technique (Cheung et al. 2015), the FIR inversion method of Plowman et al. (2013), the iterative SITES technique (Morgan & Pickering 2019), the regularization technique of Hannah & Kontar (2012, REG), the ML method defined in Eq. (18), and finally our proposed RML technique. The black solid lines represent the “ground truth” configurations, which consist of double-Gaussian profiles. The left Gaussian has a total Emission Measure EM = ∫ξ(T) dT = 10^{28} cm^{−5} and a standard deviation of 0.1 dex, and is centered on log_{10} T_{c} = 6.1, while the right Gaussian has a total Emission Measure EM = ∫ξ(T) dT = 2 × 10^{28} cm^{−5} and a standard deviation of 0.15 dex, and is centered on (from left to right) log_{10} T_{c} = 6.6, 6.8, 7.0, and 7.2. The gray lines represent reconstructions from 25 different realizations of Poisson noise affecting the data; the red lines represent the mean values of these 25 reconstructions. The ξ(T) profiles are plotted as a function of the (base 10) logarithm of the temperature (K).
Fig. 4 Images recorded by AIA on 2010 November 3 around 12:15:02 UT in the 94 Å, 131 Å, 171 Å, 193 Å, and 211 Å channels, from left to right, respectively.
Fig. 5 DEM profiles reconstructed from observed AIA count rates in selected pixels of the 2010 November 3 event. Top row: AIA images recorded around 12:15:02 UT in the 131 Å and 211 Å channels (left and right panel, respectively). The numbered crosses denote the location of the pixels selected for a comparison of the ξ(T) profiles reconstructed by the different methods. Second to fifth rows: ξ(T) profiles associated with Pixels 1 through 8. In each panel, the profiles reconstructed by the BP, FIR, SITES, REG, and RML methods are plotted in magenta, blue, brown, green, and orange, respectively. The ±1σ uncertainties associated with the SITES, REG, and RML reconstructions have been added as error bars at each temperature point in the pertinent reconstruction. The intensities of the reconstructions of Pixels 1, 3, and 5 have been multiplied by a factor of 2. The intensity axes are linear, while the temperature axes are in terms of log_{10} T (K).