A&A, Volume 655, A88 (November 2021)
Number of pages: 10
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/202141238
Published online: 25 November 2021
Numerical solutions to linear transfer problems of polarized radiation
II. Krylov methods and matrix-free implementation
1 Euler Institute, Università della Svizzera italiana (USI), 6900 Lugano, Switzerland (e-mail: pietro.benedusi@usi.ch)
2 Istituto Ricerche Solari (IRSOL), Università della Svizzera italiana (USI), 6605 Locarno-Monti, Switzerland
3 Leibniz-Institut für Sonnenphysik (KIS), 79104 Freiburg im Breisgau, Germany
Received: 3 May 2021 / Accepted: 27 June 2021
Context. Numerical solutions to transfer problems of polarized radiation in solar and stellar atmospheres commonly rely on stationary iterative methods, which often perform poorly when applied to large problems. In recent times, stationary iterative methods have been replaced by state-of-the-art preconditioned Krylov iterative methods for many applications. However, a general description and a convergence analysis of Krylov methods in the polarized radiative transfer context are still lacking.
Aims. We describe the practical application of preconditioned Krylov methods to linear transfer problems of polarized radiation, possibly in a matrix-free context. The main aim is to clarify the advantages and drawbacks of various Krylov accelerators with respect to stationary iterative methods and direct solution strategies.
Methods. After a brief introduction to the concept of Krylov methods, we report the convergence rate and the run time of various Krylov-accelerated techniques combined with different formal solvers when applied to a 1D benchmark transfer problem of polarized radiation. In particular, we analyze the GMRES, BICGSTAB, and CGS Krylov methods, preconditioned with Jacobi, (S)SOR, or an incomplete LU factorization. Furthermore, specific numerical tests were performed to study the robustness of the various methods as the problem size grew.
Results. Krylov methods accelerate the convergence, reduce the run time, and improve the robustness (with respect to the problem size) of standard stationary iterative methods. Jacobi-preconditioned Krylov methods outperform SOR-preconditioned stationary iterations in all respects. In particular, the Jacobi-GMRES method offers the best overall performance for the problem setting in use.
Conclusions. Krylov methods can be more challenging to implement than stationary iterative methods. However, an algebraic formulation of the radiative transfer problem allows one to apply and study Krylov acceleration strategies with little effort. Furthermore, many available numerical libraries implement matrix-free Krylov routines, enabling an almost effortless transition to Krylov methods.
Key words: radiative transfer / methods: numerical / Sun: atmosphere / stars: atmospheres / polarization
© ESO 2021
1. Introduction
When looking for numerical solutions to transfer problems of polarized radiation, it is common to rely on stationary iterative methods. In particular, fixed point iterations with Jacobi, block-Jacobi, or Gauss-Seidel preconditioning have been successfully employed. An illustrative convergence analysis of the stationary iterative methods usually used in the numerical transfer of polarized radiation is presented in the first paper of this series (Janett et al. 2021). Unfortunately, stationary iterative methods often show unsatisfactory convergence rates when applied to large problems. By contrast, Krylov iterative methods have gained popularity in recent decades, proving to be highly effective solution strategies, especially when dealing with large and sparse linear systems.
Among the various Krylov techniques, the biconjugate gradient stabilized (BICGSTAB) and the generalized minimal residual (GMRES) methods are very common choices, especially when dealing with nonsymmetric systems (Meurant & Duintjer Tebbens 2020). As for stationary iterative methods, suitable preconditioning can significantly improve the convergence of Krylov iterations.
In the radiative transfer context, both the BICGSTAB and GMRES methods have been employed for various applications, such as time-dependent fluid flow problems (Klein et al. 1989; Hubeny & Burrows 2007), problems arising from finite element discretizations (Castro & Trelles 2015; Badri et al. 2019), and nonlinear radiative transfer problems for cool stars (Lambert et al. 2015). In particular, Krylov methods have already been applied to linear transfer problems of unpolarized radiation, showing promising results. We would like to mention the first convergence studies on the BICGSTAB method by Paletou & Anterrieu (2009) and Anusha et al. (2009).
Nagendra et al. (2009) first applied BICGSTAB to the transfer of polarized radiation in 1D atmospheric models. Anusha et al. (2011) and Anusha & Nagendra (2011) employed the same method in 2D and 3D geometries, while Anusha & Nagendra (2012) additionally included angle-dependent partial frequency redistribution (PRD) effects. For the sake of simplicity, these papers are limited to the study of hypothetical lines in isothermal atmospheric models. Finally, Anusha & Nagendra (2013) used the BICGSTAB method to model the Ca II K 3933 Å resonance line using an ad hoc 3D atmospheric model, and Sampoorna et al. (2019) applied the GMRES method to model the D2 lines of Li I and Na I, taking the hyperfine structure of these atoms into account. However, Krylov methods have not been fully exploited in the numerical solution of transfer problems of polarized radiation. Indeed, only a few applications of Krylov methods aim to model the polarization profiles in realistic atmospheric models. Moreover, numerical studies investigating multiple preconditioned Krylov methods, in terms of convergence and run time, and presenting key implementation details, are still lacking in this context.
This article is organized as follows. Section 2 recalls the idea behind Krylov iterative techniques, with a special focus on the GMRES, CGS, and BICGSTAB methods. Section 3 presents a benchmark analytical problem, its discretization, and its algebraic formulation, and shows how to apply Krylov methods in the radiative transfer context. In Sect. 4, we describe the problem settings and the solver options, and present a quantitative comparison between Krylov and stationary iterative methods with respect to convergence, robustness, and run time. Finally, Sect. 5 provides remarks and conclusions, which are also generalized to more complex problems.
2. Krylov methods
This section contains a gentle, but not exhaustive, introduction to Krylov methods. For interested readers, Ipsen & Meyer (1998) explain the idea behind Krylov methods, while Saad (2003) and Meurant & Duintjer Tebbens (2020) provide a comprehensive and rigorous discussion of the topic.
We consider the linear system

Ax = b,    (1)

where the nonsingular matrix A ∈ ℝ^{N × N} and the vector b ∈ ℝ^N are given, and the solution x ∈ ℝ^N is to be found. Given an initial guess x0, a Krylov method approximates the solution x of (1) with the sequence of vectors xn ∈ 𝒦n(A, b), with n = 1, …, N, where

𝒦n(A, b) = span{b, Ab, A^2 b, …, A^{n − 1} b}    (2)

is the nth Krylov subspace generated by b. This is the case for the (very common) initial guess x0 = 0. In general, xn − x0 ∈ 𝒦n(A, r0), where r0 = b − Ax0 is the initial residual.
We first try to clarify why it is convenient to look for an approximate solution xn of the linear system (1) inside the Krylov subspace 𝒦n(A, b). The minimal polynomial of A is the monic polynomial q of minimal degree such that q(A) = 0. If A has d distinct eigenvalues λ1, …, λd, its mth degree minimal polynomial reads

q(λ) = (λ − λ1)^{m1} ⋯ (λ − λd)^{md},    (3)

where mj is the multiplicity¹ of λj (that is, of the jth root of q) and m = m1 + … + md. Polynomial (3) can be rewritten in the form

q(λ) = α0 + α1 λ + … + αm λ^m,

where α1, …, αm are unspecified scalars and α0 = q(0) = (−1)^m λ1^{m1} ⋯ λd^{md}. Evaluating the polynomial in A, one obtains

0 = q(A) = α0 Id + α1 A + … + αm A^m,

which allows us to express the inverse A^{−1} in terms of powers of A, namely,

A^{−1} = −(1/α0)(α1 Id + α2 A + … + αm A^{m − 1}),    (4)

showing that the solution vector x = A^{−1}b belongs to the Krylov subspace 𝒦m(A, b). Hence, a smaller degree of the minimal polynomial of A (e.g., m ≪ N) generally corresponds to a faster convergence of Krylov methods, since x ∈ 𝒦m(A, b) ⊂ ℝ^N. We remark that (4) is well defined if A is nonsingular: if λj ≠ 0 for all j, then α0 ≠ 0.
Secondly, we try to better characterize the smallest Krylov space containing x. The minimal polynomial of b with respect to A is the monic polynomial p of minimal degree such that p(A)b = 0. Denoting by g the degree of p, it is possible to show that 𝒦g(A, b) is invariant under A, that is,

A 𝒦g(A, b) ⊆ 𝒦g(A, b),

meaning that 𝒦n(A, b) = 𝒦g(A, b) for all n ≥ g. Therefore, a Krylov method applied to (1) will typically terminate in g iterations, where g can be much smaller than N. Since g ≤ m ≤ N, a Krylov method terminates after at most N steps². In practice, approximate solutions of system (1) are sufficient, and a suitable termination condition allows a Krylov method to converge in fewer than g iterations.
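As a small worked example (ours, not from the article), consider a matrix with only two distinct eigenvalues:

A = diag(2, 2, 3) ∈ ℝ^{3 × 3},  q(λ) = (λ − 2)(λ − 3) = λ^2 − 5λ + 6,

so that m = 2 < N = 3, with α0 = 6, α1 = −5, and α2 = 1. Relation (4) then gives

A^{−1} = −(1/6)(−5 Id + A) = (1/6)(5 Id − A),

hence x = A^{−1}b = (5b − Ab)/6 ∈ 𝒦2(A, b) for any right-hand side b: a Krylov method terminates in at most two iterations.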
Crucially, because of the structure of 𝒦n(A, b), the computation of xn mainly requires a series of matrix-vector products involving A. This paradigm is particularly beneficial when N is large and A is sparse, in which case matrix-vector products can be computed efficiently. Moreover, Krylov methods are also handy when there is no direct access to the entries of the matrix A and its action is only encoded in a routine that returns Av for an arbitrary input vector v. The various Krylov methods differ from one another in two main aspects. The first is the way they construct the subspace 𝒦n(A, b). In practice, the direct use of definition (2) to build 𝒦n(A, b) is not convenient, because the spanning vectors b, …, A^{n − 1}b come closer and closer to being linearly dependent as n grows, resulting in numerical instabilities. The second is the way they select the “best” iterate xn ∈ 𝒦n(A, b).
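As a concrete illustration of this paradigm (our sketch, in MATLAB, the environment used in Sect. 4), the following routine encodes the action of a 1D finite-difference Laplacian without ever assembling the matrix; it is reused in the solver calls shown in the next subsections:

```matlab
% Matrix-free encoding of a matrix action: a routine returning A*v for a
% 1D finite-difference Laplacian (Dirichlet BCs); A itself is never formed.
N = 200;
afun = @(v) [2*v(1) - v(2); ...
             -v(1:N-2) + 2*v(2:N-1) - v(3:N); ...
             -v(N-1) + 2*v(N)];
b = ones(N, 1);   % right-hand side used in the solver calls below
```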
The following sections introduce three common Krylov methods usually applied to nonsymmetric linear systems: the GMRES, BICGSTAB and conjugate gradient squared (CGS) methods.
2.1. GMRES
Saad & Schultz (1986) introduced one of the best known and most widely applied Krylov solvers: the GMRES method. At iteration n, the GMRES method first constructs an orthonormal basis of 𝒦n(A, b) with the Arnoldi algorithm, a modified version of the Gram-Schmidt procedure adapted to Krylov spaces. Secondly, the GMRES method selects the approximate solution xn ∈ 𝒦n(A, b) that minimizes the Euclidean norm of the residual rn = b − Axn, that is, it solves the least squares problem of size n given by

xn = argmin_{x ∈ 𝒦n(A, b)} ‖b − Ax‖2.
Assuming exact arithmetic, the GMRES method returns the exact solution in at most g iterations. Regarding computational complexity, the most expensive stages of the nth GMRES iteration are a O(N^2) matrix-vector product involving A, which may drop to O(N) for sparse A, and a nested loop requiring O(nN) floating point operations. In terms of memory, n basis vectors have to be stored. Therefore, the GMRES method becomes inefficient for large n. A standard remedy to this problem is to employ restarts, that is, to terminate the GMRES method after k iterations and use xk as the initial guess for a subsequent GMRES run, until the desired termination condition is met. A suitable choice of k can significantly reduce the time to solution, but convergence is not always guaranteed. However, if A is positive definite, restarted GMRES converges for any k ≥ 1.
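In MATLAB, the restart length is the third argument of the built-in gmres routine (our sketch, reusing afun and b from above; the tolerance is illustrative):

```matlab
% Unrestarted GMRES ([] as third argument) vs. restarted GMRES(k):
% restarts bound memory to ~k basis vectors and O(kN) work per cycle.
tol = 1e-9;
x_full = gmres(afun, b, [], tol, N);              % full GMRES
k = 20;
[x_rst, flag, relres, iter] = gmres(afun, b, k, tol, 50);
% iter = [outer, inner] iteration counts; flag = 0 signals convergence.
```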
2.2. CGS and BICGSTAB
If A is symmetric, the Arnoldi iteration can be simplified to avoid nested loops, yielding the so-called Lanczos procedure, which requires only three vectors to be kept in memory at once. In the nonsymmetric case, this procedure can be extended to the Lanczos biorthogonalization algorithm, which keeps six vectors in memory at once, a significant storage reduction with respect to the Arnoldi method. On the other hand, two matrix-vector products (involving A and A^T) are needed at each iteration. Starting from two arbitrary vectors v1 and w1 satisfying (v1, w1) = 1, this algorithm builds two biorthogonal bases, {v1, …, vn} and {w1, …, wn}, for 𝒦n(A, v1) and 𝒦n(A^T, w1), respectively.

In the nth iteration, the biconjugate gradient (BCG) method imposes rn ⊥ 𝒦n(A^T, w1) and xn ∈ 𝒦n(A, v1). The BCG method efficiently solves two linear systems with coefficient matrices A and A^T, whose residual expressions are given by

rn = pn(A) r0  and  r̃n = pn(A^T) r̃0,    (5)

pn being an nth-degree polynomial³ and r̃n the residual of the dual system involving A^T.
Since the solution of the linear system involving A^T is generally not needed, and A^T might not even be explicitly available, transpose-free variants are used. Two well-known transpose-free variants are the CGS (Sonneveld 1989) and the BICGSTAB (Van der Vorst 1992) methods. These variants avoid the use of A^T, replacing the residual expressions (5) by

rn = pn^2(A) r0  and  rn = qn(A) pn(A) r0,

respectively, where qn is an nth-degree polynomial defined recursively. Regarding complexity, both CGS and BICGSTAB require, in each iteration, two matrix-vector products involving A, plus O(N) additional floating point operations. Both CGS and BICGSTAB are less robust than GMRES, since their convergence can stagnate in pathological cases (see, e.g., Gutknecht 1993). Moreover, rounding errors tend to be more pronounced in CGS than in BICGSTAB, possibly resulting in an unstable convergence.
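The two transpose-free variants are invoked analogously to gmres (our sketch; both MATLAB built-ins accept a matrix or a function handle):

```matlab
% Transpose-free Krylov variants: two applications of A per iteration and
% a fixed, small memory footprint (no restart parameter needed).
[x1, flag1] = bicgstab(afun, b, 1e-9, 1000);
[x2, flag2] = cgs(afun, b, 1e-9, 1000);
```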
2.3. Preconditioning
Preconditioning the linear system (1) with a nonsingular matrix P ∈ ℝ^{N × N}, that is, considering the equivalent system

P^{−1}Ax = P^{−1}b,    (6)

can significantly increase the robustness and convergence speed of stationary iterative methods. This is also particularly true for Krylov solvers. Crucially, the action of P^{−1} on an input vector has to be computationally inexpensive, while improving the conditioning of P^{−1}A with respect to that of A. In practice, Krylov methods must perform the product w = P^{−1}Av for an arbitrary vector v. This operation is naively performed in two stages: y = Av followed by w = P^{−1}y. Since P usually depends on A, it is sometimes possible to obtain the action of P^{−1}A in a more efficient way, reducing the number of floating point operations. Since Krylov methods do not require direct access to the entries of the preconditioner P, the action of P^{−1} (or directly of P^{−1}A) can also be provided as a routine.
We would like to present the most common choices for P, which are numerically investigated in Sect. 4. Given the decomposition A = D + U + L, with D, U, and L being respectively the diagonal and the strictly upper and lower triangular parts of A, and 0 < ω < 2, we consider:

- P = D: Jacobi method;
- P = ω^{−1}D + L or P = ω^{−1}D + U: successive over relaxation (SOR) method;
- P = ω(2 − ω)^{−1} (ω^{−1}D + L) D^{−1} (ω^{−1}D + U): symmetric SOR method (SSOR);
- P = L̃Ũ, where L̃ and Ũ are lower and upper triangular matrices approximating the factors of the LU factorization of A: incomplete LU factorization (ILU).
The Jacobi method is numerically appealing since the action of P^{−1} is cheap to compute (and trivially parallel). The SOR method usually provides faster convergence than Jacobi, at the price of P^{−1} being more expensive to apply (and inherently sequential). The SSOR method typically reduces the run time of SOR, especially if there is no preferential direction in the coupling between unknowns. Moreover, a suitable tuning of ω significantly improves convergence, as shown in Janett et al. (2021) for stationary iterative methods. We remark that, for ω = 1, SOR reduces to the Gauss-Seidel (GS) method and SSOR to symmetric Gauss-Seidel. ILU preconditioners are obtained through an inexact Gaussian elimination, where some elements of the LU factorization are dropped. Typically, L̃ and Ũ are constructed such that their sum has the same sparsity pattern as A (a.k.a. ILU with no fill-in, or ILU(0)). As in the (S)SOR case, ILU preconditioners are sequential, but variants capable of extracting some parallelism can be applied.

Finally, the Jacobi and (S)SOR methods can be encoded in matrix-free routines. By contrast, to apply an ILU preconditioner, the matrices A, L̃, and Ũ must be assembled and stored.
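When A is available as a sparse matrix, all four preconditioners listed above can be constructed with standard MATLAB built-ins (our sketch; the diagonally dominant test matrix is a stand-in for an assembled transfer operator):

```matlab
% Build the preconditioners of Sect. 2.3 from a sparse matrix A.
n  = 500;
A  = sprandn(n, n, 0.01) + 10*speye(n);   % stand-in test matrix
D  = spdiags(spdiags(A, 0), 0, n, n);     % diagonal part
Lo = tril(A, -1);                         % strictly lower triangular part
Up = triu(A,  1);                         % strictly upper triangular part
om = 1.5;                                 % damping parameter, 0 < om < 2
P_jacobi = D;                                               % Jacobi
P_sor    = D/om + Up;                                       % SOR
P_ssor   = (om/(2-om)) * ((D/om + Lo) / D) * (D/om + Up);   % SSOR
[Lt, Ut] = ilu(A, struct('type', 'ilutp', 'droptol', 1e-2)); % ILUT
```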
3. Benchmark problem
In this section, we present the continuous formulation of a benchmark linear transfer problem of polarized radiation. We then consider its discretization and algebraic formulation. Finally, we remark on how the matrix-free Krylov methods presented in Sects. 2.1 and 2.2 are applied in this context. The reader is referred to Janett et al. (2021, Sect. 3) for a more detailed presentation of the problem, along with its discretization and algebraic formulation.
The problem is formulated within the framework of the complete frequency redistribution (CRD) approach presented in Landi Degl’Innocenti & Landolfi (2004). We consider a two-level atom with total angular momenta Ju = 1 and Jℓ = 0, with the subscripts ℓ and u indicating the lower and upper level, respectively. Since Jℓ = 0, the lower level is unpolarized by definition. Stimulated emission is not considered here, since it is completely negligible in solar applications. For the sake of simplicity, continuum processes are neglected, as are magnetic and bulk velocity fields. A homogeneous and isothermal, one-dimensional (1D) plane-parallel atmosphere is considered. Under the aforementioned assumptions, the spatial dependency of the physical quantities entering the problem is fully described by the height coordinate z. Moreover, the problem is characterized by cylindrical symmetry around the vertical and, consequently, the angular dependency of the problem variables is fully described by the inclination θ with respect to the vertical or, equivalently, by μ = cos(θ).
3.1. Continuous problem
The physical quantities entering the problem are in general functions of the height z ∈ [zmin, zmax], the frequency ν ∈ [νmin, νmax], and the propagation direction of the radiation ray under consideration, identified by μ ∈ [ − 1, 1].
Due to the cylindrical symmetry of the problem, the only nonzero components of the radiation field tensor are J^0_0 and J^2_0. Consequently, the only nonzero multipolar components of the source function are S^0_0 and S^2_0. Setting the angle γ = 0 in the expression of the polarization tensor 𝒯^K_Q(i, Ω), one finds that the only nonzero source functions are

S_I(z, μ) = S^0_0(z) + (1/(2√2)) (3μ^2 − 1) S^2_0(z),    (7)
S_Q(z, μ) = (3/(2√2)) (μ^2 − 1) S^2_0(z).    (8)

The only nonzero Stokes parameters are therefore I and Q. Their propagation is described by the decoupled differential equations

μ ∂I(z, μ, ν)/∂z = −η(z, ν) [I(z, μ, ν) − S_I(z, μ)],    (9)
μ ∂Q(z, μ, ν)/∂z = −η(z, ν) [Q(z, μ, ν) − S_Q(z, μ)],    (10)
where η is the absorption coefficient. The initial conditions are given by

I(zmin, μ, ν) = W,  Q(zmin, μ, ν) = 0,  for μ > 0,    (11)
I(zmax, μ, ν) = 0,  Q(zmax, μ, ν) = 0,  for μ < 0,    (12)

that is, thermalized unpolarized radiation enters the atmosphere from below and no radiation enters from above. In order to linearize the problem, we assume that the absorption coefficient η is known a priori and fixed or, equivalently, we discretize the atmosphere through a fixed grid in frequency-integrated optical depth (see Janett et al. 2021, Sect. 3.2).
The absorption coefficient η is given by the product of the frequency-integrated line absorption coefficient and the absorption profile φ, taken as a Voigt profile whose damping constant is fixed to a = 10^{−3}. Finally, the explicit expressions of S^0_0 and S^2_0 are (see Janett et al. 2021, Eq. (18))

S^0_0(z) = (1 − ϵ) J̄^0_0(z) + ϵW,    (13)
S^2_0(z) = ξ w^{(2)}_{Ju Jℓ} J̄^2_0(z),    (14)

where we set the thermalization parameter to ϵ = 10^{−4} and ξ = 1 − ϵ, and where J̄^0_0 and J̄^2_0 are the corresponding components of the radiation field tensor, frequency-averaged over the absorption profile φ. In the equations above, we assumed a negligible depolarizing rate of the upper level due to elastic collisions, and set the Planck function (in the Wien limit) to W = 1. We finally recall that for the considered atomic model w^{(2)}_{Ju Jℓ} = 1.
3.2. Discrete problem
We discretize the continuous variables z, ν, and μ with the numerical grids {z_k}_{k = 1, …, Ns}, {ν_p}_{p = 1, …, Nν}, and {μ_m}_{m = 1, …, Nμ}, respectively. The quantities of the problem are now approximated at the nodes only. The discrete version of Eqs. (7) and (8) can be expressed in matrix form as

S = Tσ,    (15)

where S ∈ ℝ^{2NsNμ} and σ ∈ ℝ^{2Ns} collect the discretized source functions S_I(z_k, μ_m) and S_Q(z_k, μ_m) and the multipolar components S^0_0(z_k) and S^2_0(z_k), respectively. The entries of the matrix T depend on the coefficients appearing in Eqs. (7) and (8).
Similarly, the transfer Eqs. (9) and (10) are expressed in matrix form as

I = ΛS + t,    (16)

where I ∈ ℝ^{2NsNμNν} collects the discretized Stokes parameters I(z_k, μ_m, ν_p) and Q(z_k, μ_m, ν_p), while t ∈ ℝ^{2NsNμNν} represents the radiation transmitted from the boundaries. The entries of the matrix Λ depend on the numerical method (a.k.a. formal solver) used to solve the transfer Eqs. (9) and (10), on the spatial grid, and on the possible numerical conversion to the optical depth scale.
Finally, the matrix version of Eqs. (13) and (14) reads

σ = JI + c,    (17)

where J ∈ ℝ^{2Ns × 2NsNμNν} and, accordingly (recalling that W = 1), c = [ϵ, 0, ϵ, 0, …, ϵ, 0]^T. The matrix J depends on the choice of the angular and spectral quadratures used in Eqs. (13) and (14). Janett et al. (2021) provide an explicit description of the matrices T, Λ, and J for a more general radiative transfer setting.
By choosing σ as the unknown vector, Eqs. (15)–(17) can then be combined in the linear system given by

(Id − JΛT) σ = Jt + c,    (18)

with Id − JΛT being a square matrix of size 2Ns. Referring to (1), we set A = Id − JΛT, x = σ, and b = Jt + c. Alternatively, it is also possible to consider I as the unknown of the problem, obtaining the linear system

(Id − ΛTJ) I = ΛTc + t,    (19)

with Id − ΛTJ being a block sparse square matrix of size 2NsNμNν with dense blocks.
The linear problem (18) is considered in the following sections, because of its smaller size in the current setting. However, the formulation (19) is likely to be more favorable to Krylov methods, since the corresponding linear system is both larger and sparse. In both cases the coefficient matrix A is nonsymmetric and positive definite (cf. Fig. 1).
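The consistency of the two formulations can be verified numerically with small random placeholders for T, Λ, and J (our sketch; the names and sizes follow Sect. 3.2, the values are arbitrary):

```matlab
% Check that formulations (18) and (19) are equivalent through Eq. (17).
Ns = 4; Nmu = 3; Nnu = 2; ep = 1e-4;
T   = rand(2*Ns*Nmu, 2*Ns);                  % sigma -> S, Eq. (15)
Lam = rand(2*Ns*Nmu*Nnu, 2*Ns*Nmu);          % S -> I, Eq. (16)
J   = rand(2*Ns, 2*Ns*Nmu*Nnu);              % I -> sigma, Eq. (17)
t   = rand(2*Ns*Nmu*Nnu, 1);
c   = repmat([ep; 0], Ns, 1);
sig = (eye(2*Ns) - J*Lam*T) \ (J*t + c);              % Eq. (18)
I   = (eye(2*Ns*Nmu*Nnu) - Lam*T*J) \ (Lam*T*c + t);  % Eq. (19)
disp(norm(sig - (J*I + c)) / norm(sig))   % agrees up to round-off
```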
Fig. 1. Spectrum of P^{−1}A on the complex plane for different preconditioners for the DELO-linear formal solver, with Ns = 80 and Nμ = Nν = 20. From here on, the abbreviation “no prec.” indicates that no preconditioner is used, that is, P = Id.
3.3. Matrix-free Krylov approach
The explicit assembly of A = Id − JΛT is convenient if the linear system (18) must be solved multiple times, for example, in the case of extensive numerical tests with different right-hand sides. However, assembling A is in general expensive, especially for large problems. Fortunately, Krylov methods can also be applied in a matrix-free context, where the action of A is encoded in a routine and no direct access to its entries is required. For this reason, we provide in Algorithm 1 a matrix-free routine that computes σout = Aσin for an arbitrary input vector σin. Algorithm 1 is modular, allowing preexisting radiative transfer routines to be reused.
We remark that it is possible to assemble A column by column: its jth column is obtained by applying Algorithm 1 to a point-like source function with entries σ_i = δ_ij, for i, j = 1, …, 2Ns.
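This column-by-column construction reads as follows (our sketch; apply_A is a hypothetical routine standing in for Algorithm 1, which is not reproduced in this extract):

```matlab
% Column-by-column assembly of A from its matrix-free action.
% apply_A is a hypothetical routine implementing Algorithm 1,
% i.e., apply_A(sigma_in) returns A*sigma_in.
Ns = 80;
n  = 2*Ns;
A  = zeros(n);
for j = 1:n
    e = zeros(n, 1);
    e(j) = 1;               % point-like source function, sigma_i = delta_ij
    A(:, j) = apply_A(e);   % jth column of A
end
```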
4. Numerical experiments
In Sect. 2, we discussed the convergence properties, complexity, and memory requirements of various Krylov methods. In practice, the specific choice among the different methods is usually made according to run time, which is empirically tested.
In this section, different Krylov and stationary iterative methods are applied to the benchmark linear problem (18). In particular, we compare the GMRES, CGS, and BICGSTAB Krylov methods, possibly preconditioned, and the Richardson, Jacobi, SOR, and SSOR stationary iterative methods. The stationary iterative methods are described as preconditioned Richardson methods of the form

x_{n + 1} = x_n + P^{−1}(b − Ax_n).    (20)
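Iteration (20), with the residual-based stopping rule of Sect. 4.2, can be sketched in a few lines (ours; the matrix A, right-hand side b, preconditioner P, tolerance tol, and iteration cap maxit are assumed to be given):

```matlab
% Preconditioned Richardson iteration (20) with the unpreconditioned
% relative residual as stopping criterion (A, b, P, tol, maxit given).
x = zeros(size(b));
nb = norm(b);
for n = 1:maxit
    r = b - A*x;
    if norm(r) / nb < tol, break; end
    x = x + P \ r;   % x_{n+1} = x_n + P^{-1} (b - A*x_n)
end
```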
4.1. Discretization, quadrature, and formal solver
We replaced the spatial variable z by the frequency-integrated optical depth scale along the vertical, τ. Moreover, we fixed a logarithmically spaced grid in τ, thus linearizing the problem. We used Nμ Gauss-Legendre nodes and weights to discretize μ ∈ [−1, 1] and compute the corresponding angular quadrature. The spectral line was sampled with Nν frequency nodes equally spaced in the reduced frequency interval [−5, 5], and the trapezoidal rule was used as spectral quadrature.
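MATLAB has no built-in Gauss-Legendre rule; the nodes and weights can be generated with the Golub-Welsch algorithm, and the trapezoidal weights follow directly from the uniform frequency grid (our sketch, not the article's code):

```matlab
% Gauss-Legendre nodes and weights on [-1, 1] via the Golub-Welsch
% algorithm: eigen-decomposition of the Legendre Jacobi matrix.
Nmu = 20;
k   = 1:Nmu-1;
bet = k ./ sqrt(4*k.^2 - 1);          % three-term recurrence coefficients
Jm  = diag(bet, 1) + diag(bet, -1);   % symmetric tridiagonal Jacobi matrix
[V, Dm] = eig(Jm);
[mu, idx] = sort(diag(Dm));           % angular nodes mu_m
wmu = 2 * V(1, idx)'.^2;              % angular weights, sum(wmu) = 2
% Equally spaced reduced-frequency grid with trapezoidal weights:
Nnu = 20;
nu  = linspace(-5, 5, Nnu)';
wnu = (nu(2) - nu(1)) * [0.5; ones(Nnu-2, 1); 0.5];
```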
A limited number of formal solvers was considered for the numerical solution of transfer Eqs. (9) and (10): the implicit Euler, DELO-linear, DELOPAR, and DELO-parabolic methods. Further details on these formal solvers are given by Janett et al. (2017a,b, 2018).
4.2. Iterative solvers settings and implementation
As the initial guess for all the solvers we use S^0_0(z_k) = 1 and S^2_0(z_k) = 0 for all k, corresponding to σin = [1, 0, 1, 0, …, 1, 0]^T. Since multiple preconditioners are compared, we use a termination condition based on the unpreconditioned relative residual, namely

‖b − Ax_n‖2 / ‖b‖2 < tol,

so that the stopping criterion is the same for all methods.
Between the two possible formulations of SOR presented in Sect. 2.3, the structure of A (cf. Fig. 2) suggests the use of P = ω^{−1}D + U, which in fact provides better convergence. Moreover, the near-optimal damping parameter ω = 1.5 (see Janett et al. 2021, Sect. 2.3) is used for both the SOR and SSOR preconditioners when applied to the Richardson method. By contrast, no damping (that is, ω = 1) is used when the SOR and SSOR preconditioners are applied to Krylov methods, since no benefit is observed for other choices of ω. The GMRES method is employed without restarts, while the ILU factorization is applied with a drop threshold of 10^{−2} (a.k.a. ILUT⁴). Figure 2 shows an example of the sparsity patterns of L̃ and Ũ, highlighting the most significant entries of A.
Fig. 2. Sparsity patterns of the ILUT factors L̃ and Ũ of A.
Krylov methods are applied to problem (18) using MATLAB and its built-in functions gmres, bicgstab, cgs, and ilu. In practice, we explicitly assemble the matrix A and store P in sparse format, using mldivide to apply P^{−1} in (20). The SSOR and ILU preconditioners can be expressed as the product of lower and upper triangular matrices, that is, in the form P = L̃Ũ. Since P^{−1} = Ũ^{−1}L̃^{−1}, the action of P^{−1} can be conveniently computed by two triangular solves, using sparse storage for L̃ and Ũ. We would like to mention that Eisenstat's trick (not used here) can be conveniently applied with SSOR preconditioning, reducing the run time by up to a factor of two.

Matrix-free approaches are also adopted, using Algorithm 1 and avoiding the explicit assembly of the matrix A. In this case, for performance reasons, we precompute the action of P^{−1}. Notice that the Krylov routines available in the most common numerical packages (e.g., MATLAB or PETSc) support a matrix-free format for A and P^{−1}.
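Putting the pieces together, a matrix-based ILUT-GMRES solve of problem (18) might look as follows (our sketch; assemble_A is a hypothetical routine, J, t, and c stand for the quantities of Sect. 3.2, and the tolerance is illustrative):

```matlab
% Solve (Id - J*Lam*T) sigma = J*t + c with ILUT-preconditioned GMRES.
% assemble_A is a hypothetical routine building A column by column via
% Algorithm 1 (see Sect. 3.3); J, t, c as in Sect. 3.2.
A = sparse(assemble_A());
b = J*t + c;
[Lt, Ut] = ilu(A, struct('type', 'ilutp', 'droptol', 1e-2));
% Passing the factors separately, gmres applies P^{-1} = Ut \ (Lt \ y)
% through two sparse triangular solves.
[sigma, flag, relres, iter] = gmres(A, b, [], 1e-9, size(A, 1), Lt, Ut);
```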
4.3. Convergence
Figure 1 displays the spectrum of P^{−1}A on the complex plane for different preconditioners. A stronger clustering of the eigenvalues of P^{−1}A corresponds to better conditioning and, consequently, to faster convergence of iterative methods when applied to (6). Clustered eigenvalues also yield a reduction of the dimension of the Krylov space 𝒦n(P^{−1}A, P^{−1}b). Table 1 presents the number of iterations required by different methods to converge as a function of the number of nodes in the angular and spectral grids, Nμ and Nν, respectively. Analogously to Janett et al. (2021), we observe that, once a minimal resolution is guaranteed, varying Nμ and Nν has a negligible effect on the convergence behavior of the different Krylov solvers. Therefore, the values Nμ = 20 and Nν = 20 remain fixed in the following numerical experiments on convergence.
Iterations to convergence for the GMRES, CGS, and BICGSTAB Krylov methods, with no preconditioning, for the DELO-linear formal solver, with Ns = 40, and varying Nμ = Nν.
Figure 3 presents the convergence history of the Richardson and Krylov methods combined with different preconditioners and the formal solvers under investigation. Krylov methods generally outperform stationary methods in terms of the number of iterations needed to reach convergence. Among the Krylov methods, the preconditioned BICGSTAB and CGS methods show the fastest convergence. As predicted by Fig. 1, the SSOR and ILU preconditioners are the most effective. On the other hand, the Jacobi preconditioner is also attractive, because of its overall fast convergence and its cheap and easily parallelizable application.
Fig. 3. Relative residual, down to machine precision, vs. iterations for various preconditioners and formal solvers, with Ns = 80 and Nν = Nμ = 20. Each row corresponds to a different formal solver, from top: implicit Euler, DELO-linear, DELOPAR, DELO-parabolic. We note that a different scale is used on the horizontal axes of the first column, where no preconditioner is employed.
Tables 2–6 present the number of iterations required by the different methods to converge as a function of Ns for different preconditioners, using the DELO-linear formal solver. The same numerical experiments using the implicit Euler, DELOPAR, and DELO-parabolic formal solvers show similar trends and are thus not reported. With reference to Table 2, we observe that the unpreconditioned Richardson method never converges in fewer than 10^4 iterations. Figure 4 graphically represents the content of Tables 2–6, indicating a superior scaling of Krylov methods with the problem size in terms of convergence. Since realistic radiative transfer problems are very large, these results are particularly relevant for practical purposes.
Iterations to convergence for the Richardson, GMRES, CGS, and BICGSTAB methods with no preconditioner for the DELO-linear formal solver, with Nμ = Nν = 20, and varying Ns.
4.4. Run times
Since the use of different formal solvers would not change the essence of the results, this section only considers the use of the DELO-linear formal solver.
We first consider the case of A being assembled explicitly. Table 7 reports the time spent assembling A using Algorithm 1 repeatedly, varying the discretization parameters. We remark that Algorithm 1 can be optimized for this context, where a point-like input is used. In fact, the computational complexity of the formal solution of (9) and (10) with a zero initial condition and a point-like source term can be largely reduced. For example, given j ∈ {1, …, Ns} and a point-like σ at z_j, that is, σ_i = δ_ij, the solution of (9) and (10) for μ < 0 (resp. μ > 0) is identically zero for all z < z_j (resp. z > z_j) and is given by an exponential decay for z > z_j (resp. z < z_j). Once A is explicitly assembled, problem (18) can be solved in negligible time using, for example, a direct LU solver (32 ms for Ns = 500, cf. Table 7). We remark that the solve time is almost negligible with respect to the assembly time for all the iterative methods under consideration (besides unpreconditioned Richardson). Therefore, Tables 8, 9, and 10 report run times corresponding to a matrix-free approach, for which no time is spent assembling A. With reference to Table 8, the time spent computing P is approximately 0.05, 4, and 9 s for the Jacobi, SOR, and SSOR preconditioners, respectively. The run time of the Richardson method with no preconditioning is not reported, because it exceeds one hour. Similarly, concerning Table 9, the time spent computing P is approximately 0.2, 55, and 111 s for the Jacobi, SOR, and SSOR preconditioners, respectively. In the matrix-free context, GMRES turns out to be the fastest Krylov method, since it requires only one application of A per iteration (as discussed in Sect. 2), and the application of A through Algorithm 1 is computationally expensive.
Figure 5 displays the total run times of the most relevant approaches, highlighting the time spent assembling A, precomputing the action of P^{−1}, and solving the linear system. Accordingly, the most convenient methods are either assembling A and then applying a direct LU solver, or employing a matrix-free Jacobi-GMRES solver.
Fig. 5. Total run times (in seconds), subdivided into the various stages, for the solution of (18) with the DELO-linear formal solver, Nμ = Nν = 20, Ns = 140 (left), and Ns = 500 (right), according to Tables 9 and 7. The first three bars correspond to various preconditioned Richardson iterations. The following three bars represent run times for the matrix-free GMRES method, again with various preconditioners. Finally, the last bar represents a direct approach, that is, an LU solver; only in this case is A explicitly assembled using Algorithm 1, and the “Solve” time, being negligible, can hardly be seen.
The best choice between the two approaches mostly depends on the problem size, memory limitations, and accuracy requirements. In this regard, Table 10 reports how the run time of Jacobi-GMRES scales with the problem size (in terms of Ns, Nμ, and Nν). The comparison between Tables 7 and 10 suggests that the matrix-free approach scales better as Ns grows. On the other hand, it becomes advantageous to assemble A as Nμ and Nν grow (cf. Fig. 6). To explain this difference, we remark that, in general, the nested loop over Nμ and Nν, containing a formal solver call over Ns, is the most expensive portion of Algorithm 1. On the other hand, when Algorithm 1 is used to assemble A, the computational complexity of the formal solution of (9) and (10) can be largely reduced. In this case, the computational bottleneck of Algorithm 1 is the calculation of J̄^0_0(z_k) and J̄^2_0(z_k), for k = 1, …, Ns.
Fig. 6. Total run times (in seconds) for the solution of (18) with the DELO-linear formal solver, Nμ = Nν = 60, Ns = 140 (left), and Ns = 500 (right). For both plots, the first bar corresponds to the stationary Jacobi method. In the following three bars, the Jacobi method is accelerated with various Krylov methods. Finally, the last bar represents a direct approach, that is, an LU solver; only in this case is A explicitly assembled using Algorithm 1.
We finally remark that both selected methods can be applied in parallel, since each loop in Algorithm 1 can be replaced by a “parallel for” and the application of the Jacobi preconditioner is embarrassingly parallel.
5. Conclusions
We described how to apply state-of-the-art Krylov methods to linear radiative transfer problems of polarized radiation. In this work, we considered the same benchmark problem as in Janett et al. (2021). However, Krylov methods can be applied to a wider range of settings, such as the unpolarized case, two-term atomic models including continuum contributions, 3D atmospheric models with arbitrary magnetic and bulk velocity fields, and theoretical frameworks accounting for PRD effects.
In particular, we presented the convergence behavior and run time measurements of the GMRES, BICGSTAB, and CGS Krylov methods. Krylov methods significantly accelerate standard stationary iterative methods in terms of convergence rate and time to solution, and they scale more favorably with the problem size. Contrary to stationary iterative methods, Krylov-accelerated routines always converge (even with no preconditioning) in the considered numerical experiments.
For run time measurements, we also considered the matrix-free application of the iterative methods under investigation. In this regard, the GMRES method preconditioned with Jacobi proved to be the most advantageous choice in terms of convergence and run time (approximately 10 to 20 times faster than standard Jacobi-Richardson, cf. Figs. 5 and 6). Since GMRES requires only one matrix-vector multiplication per iteration, it becomes more advantageous when A is less sparse or in the matrix-free case, where the evaluation of Ax is computationally expensive. We remark that parallelism can be extensively exploited both in Algorithm 1 and in the application of the Jacobi preconditioner.
We remark that the fully algebraic formulation of the linear radiative transfer problem allows us to explicitly assemble the matrix A and to apply direct methods to solve the corresponding linear system. In this case, the solve time is negligible with respect to the assembly time. This approach can be more advantageous than the matrix-free one if high accuracy is required, if Eq. (18) has to be solved for many right-hand sides, or if the number of discrete frequencies and directions is particularly large. On the other hand, the matrix-free GMRES method preconditioned with Jacobi seems preferable if the number of spatial nodes is particularly large (e.g., for 3D geometries).
We finally recall that many available numerical libraries implement matrix-free Krylov routines, enabling an almost painless transition from stationary iterative methods to Krylov methods. In terms of implementation, Krylov methods can simply wrap already existing radiative transfer routines based on stationary iterative methods. The application of Krylov methods to more complex linear radiative transfer problems will be the subject of study of forthcoming papers.
³ From (2), by definition, we have that xn = p_{n − 1}(A)b. Then

rn = b − Axn = b − A p_{n − 1}(A) b = pn(A) b.
Acknowledgments
Special thanks are extended to E. Alsina Ballester, N. Guerreiro, and T. Simpson for particularly enriching comments on this work. The authors gratefully acknowledge the Swiss National Science Foundation (SNSF) for financial support through grant CRSII5_180238. Rolf Krause acknowledges the funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 955701 (project TIME-X). The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Belgium, France, Germany, Switzerland.
References
- Anusha, L. S., & Nagendra, K. N. 2011, ApJ, 738, 116
- Anusha, L. S., & Nagendra, K. N. 2012, ApJ, 746, 84
- Anusha, L. S., & Nagendra, K. N. 2013, ApJ, 767, 108
- Anusha, L. S., Nagendra, K. N., Paletou, F., & Léger, L. 2009, ApJ, 704, 661
- Anusha, L. S., Nagendra, K. N., & Paletou, F. 2011, ApJ, 726, 96
- Badri, M., Jolivet, P., Rousseau, B., & Favennec, Y. 2019, Comput. Math. Appl., 77, 1453
- Castro, R. O., & Trelles, J. P. 2015, J. Quant. Spectr. Rad. Transf., 157, 81
- Gutknecht, M. H. 1993, SIAM J. Sci. Comput., 14, 1020
- Hubeny, I., & Burrows, A. 2007, ApJ, 659, 1458
- Ipsen, I. C., & Meyer, C. D. 1998, Am. Math. Mon., 105, 889
- Janett, G., Carlin, E. S., Steiner, O., & Belluzzi, L. 2017a, ApJ, 840, 107
- Janett, G., Steiner, O., & Belluzzi, L. 2017b, ApJ, 845, 104
- Janett, G., Steiner, O., & Belluzzi, L. 2018, ApJ, 865, 16
- Janett, G., Benedusi, P., Belluzzi, L., & Krause, R. 2021, A&A, 655, A87
- Klein, R. I., Castor, J. I., Greenbaum, A., Taylor, D., & Dykema, P. G. 1989, J. Quant. Spectr. Rad. Transf., 41, 199
- Lambert, J., Josselin, E., Ryde, N., & Faure, A. 2015, A&A, 580, A50
- Landi Degl’Innocenti, E., & Landolfi, M. 2004, Polarization in Spectral Lines, Astrophys. Space Sci. Lib., 307 (Dordrecht: Kluwer Academic Publishers)
- Meurant, G., & Duintjer Tebbens, J. 2020, Krylov Methods for Nonsymmetric Linear Systems: From Theory to Computations (Springer)
- Nagendra, K. N., Anusha, L. S., & Sampoorna, M. 2009, Mem. Soc. Astron. It., 80, 678
- Paletou, F., & Anterrieu, E. 2009, A&A, 507, 1815
- Saad, Y. 2003, Iterative Methods for Sparse Linear Systems (SIAM)
- Saad, Y., & Schultz, M. H. 1986, SIAM J. Sci. Stat. Comput., 7, 856
- Sampoorna, M., Nagendra, K. N., Sowmya, K., Stenflo, J. O., & Anusha, L. S. 2019, ApJ, 883, 188
- Sonneveld, P. 1989, SIAM J. Sci. Stat. Comput., 10, 36
- Van der Vorst, H. A. 1992, SIAM J. Sci. Stat. Comput., 13, 631