A&A, Volume 620, December 2018
Article Number A59
Number of pages: 14
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201832987
Published online: 29 November 2018

© ESO 2018

1. Introduction

Studies of the cosmic microwave background (CMB) anisotropies have been driving progress in our understanding of the universe for nearly a quarter of a century. The current forefront of CMB research is the characterization of the polarization properties of the CMB anisotropies. The next generation of CMB observatories has been, and continues to be, designed to ensure that the scientific potential of this new probe is fully exploited. This calls for advanced, high-performance data analysis techniques applicable to the enormous data sets that will be collected by these new observatories.

The analysis of data from CMB observations commonly involves solutions of large, structured linear systems of equations. Two typical and important examples of such systems are the map-making and Wiener-filter systems of equations (see, e.g., Janssen & Gulkis 1992; Bunn et al. 1994, respectively, for early pioneering work and Poletti et al. 2017; Seljebotn et al. 2017 for examples of more recent applications). These systems are solved either as a stand-alone task or as part of a more involved process, such as power spectrum estimation, which commonly requires multiple solutions of such systems. In this work we study, from a theoretical and practical perspective, two specific algorithms for solving such systems of equations: a preconditioned conjugate gradient (PCG) approach and a messenger-field (MF) technique. Both approaches have been applied in the context of the applications considered here. Of the two, the PCG approach has been more popular and more broadly used to date. Nevertheless, it has been argued in a number of recent papers (e.g., Elsner & Wandelt 2013; Ramanah et al. 2017; Huffenberger & Næss 2018) that the messenger-field approach can be highly efficient for both these applications and can deliver performance in some cases exceeding that of some specific PCG approaches, while at the same time being more generally feasible and straightforward to implement and apply (Ramanah et al. 2017; Huffenberger & Næss 2018; Huffenberger 2018). We note that it is the combination of all these features that makes the MF approach potentially attractive. Indeed, the performance of the PCG technique hinges on the choice of a preconditioner matrix, M, and while very efficient preconditioners can be constructed, in principle outperforming other methods, their construction typically becomes difficult quickly and potentially prohibitive from a computational point of view.

Specifically, let us consider a linear system of equations,

A x = b, (1)

where the system matrix A is symmetric and positive definite (SPD). Instead of solving this equation directly, in the PCG approach one solves the preconditioned system,

M−1 A x = M−1 b, (2)

applying the conjugate gradient (CG) technique (Golub & Van Loan 1996). If the preconditioner is chosen in a way such that M−1A is better conditioned than the original system matrix A, then the solution can be derived in (often significantly) fewer iterations. Hereafter, we define the condition number κ as

κ(A) ≡ ∥A∥2 ∥A−1∥2, (3)

where ∥A∥2 is the spectral norm1 of the matrix A. For a good preconditioner κ(A)≫κ(M−1A)≥1. A preconditioner is therefore better when its inverse succeeds in capturing more essential features of the inverse system matrix A−1, which by assumption cannot be computed directly. The choice of a preconditioner is a key factor determining the performance of a PCG solver. There exist both generic and case-specific approaches to their construction. Moreover, for many advanced preconditioners, significant savings in terms of the number of iterations to a solution come at the cost of an overhead related to their construction and/or application at every step of the iteration.
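To make this concrete, the following minimal numpy sketch builds a toy SPD matrix with a widely graded diagonal and compares the condition number of Eq. (3) before and after preconditioning with the simple Jacobi choice M = diag(A). The matrix is a random stand-in constructed for illustration only, not a CMB covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
E = rng.standard_normal((n, n))
A = np.diag(np.logspace(0, 6, n)) + 0.1 * (E @ E.T) / n   # SPD, graded diagonal
M = np.diag(np.diag(A))                                   # Jacobi preconditioner

def kappa(B):
    """Condition number of Eq. (3) as the ratio of extreme singular values."""
    s = np.linalg.svd(B, compute_uv=False)
    return s.max() / s.min()

print("kappa(A)      = %.2e" % kappa(A))                     # ~1e6, set by the diagonal
print("kappa(M^-1 A) = %.2e" % kappa(np.linalg.solve(M, A)))  # orders of magnitude smaller
```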

While these observations make the method comparison cumbersome and potentially limited to very specific cases and concrete implementations, the question of which class of methods is more promising in ensuring sufficient performance for forthcoming data sets, for example in the context of the CMB field, is of actual practical importance. This is the question we tackle in this work in the context of the MF and PCG solvers.

Our methodology is as follows. We first show that any MF method applied to a linear system involves preconditioning of the original set of equations with a specific preconditioner. Then we argue on theoretical grounds, and later demonstrate using a number of study cases that the corresponding PCG algorithm is at least as efficient as, and often much better than, the MF technique, while featuring similar computational complexity. Combining the MF method with a so-called cooling technique can further improve its performance at least in some cases. Nonetheless, this does not seem to affect the overall assessment at least for the specific cooling prescriptions studied in this paper and motivated by earlier work. Consequently, while the MF technique may still provide an interesting alternative in some specific applications, in general the PCG approach seems more promising and can be a better first choice. Though we demonstrate our conclusions using specific examples from applications to the Wiener filter and map-making procedures, we expect them to hold more generally.

This paper is organized as follows. In Sect. 2 we provide a general discussion of the messenger-field technique as a more general class of solvers and compare it with the PCG solvers as far as its convergence and computational aspects are concerned. Later, we illustrate the general conclusion of this section with the help of numerical experiments applied to simulated CMB data, involving applications of both these techniques to polarized Wiener filtering (Sect. 3) and map-making (Sect. 4). We conclude in Sect. 5. Some more technical considerations are deferred to the Appendices.

2. Messenger-field iterative solver

In this section we first present a consistent, general algebraic framework, which encompasses, as specific cases, all implementations of the basic messenger field solver proposed to date in the literature. Subsequently, in Sect. 2.2 we describe the cooling technique proposed to improve the performance of the messenger method, and in Sects. 2.3 and 2.4 we discuss the general properties of this broad class of solvers, contrasting them with those of the PCG technique. We develop this general discussion, referring to the specific solvers developed in the context of the applications described in the following sections of this paper.

2.1. Basic approach

Let us consider a system of linear equations as in Eq. (1). In general, the messenger-field approach involves a split of the system matrix A, such as

A = C − D, (4)

where C is invertible by construction and its inverse is easy to compute; hereafter, I denotes the identity matrix. After multiplying Eq. (1) by C−1 from the left (which corresponds to preconditioning the original system), we get the system

(I − C−1 D) x = C−1 b. (5)

The MF method introduces an extra data object t, the messenger field, which can be defined as

t ≡ D x + b, (6)

meaning that Eq. (5) can be represented as

t = D x + b, (7)

C x = t. (8)

This can be used to define an iterative scheme

t(i+1) = D x(i) + b, (9)

x(i+1) = C−1 t(i+1). (10)

We note that the messenger field t introduced in this way is a dummy object. Therefore, barring some implementational advantages, the equations above are equivalent to a reduced system from which the messenger field has been explicitly eliminated and which can be directly derived from Eq. (5). This reads,

x = C−1 D x + C−1 b, (11)

and the corresponding iterative scheme, see also (Elsner & Wandelt 2013; Huffenberger & Næss 2018), is given by

x(i+1) = C−1 D x(i) + C−1 b. (12)

This is a fixed-point iteration scheme (e.g., Saad 2003) and its derivation is analogous to the derivation of the classical iteration methods that also rely on the splitting of the system matrix as in Eq. (4). The Jacobi iterative method takes C as the diagonal of A, while in the Gauss–Seidel method C is equal to the lower triangular part (including the diagonal) of A.

We emphasize that whether we choose to implement the single-equation version as in Eq. (12) or the double-equation one as in Eqs. (9) and (10), the result will be the same to within numerical precision, as in both cases we solve the same linear system, Eq. (5), performing equivalent iterations. Consequently, the messenger-field approach is a fixed-point iteration technique applied to the preconditioned system in Eq. (5). However, this equation can be solved by other means, such as, for instance, a conjugate gradient (CG) approach, which is typically more efficient than fixed-point iterations (see, e.g., Sects. 5.5 and 2.3 of Liesen & Strakoš 2013). Moreover, solving Eq. (5) with the help of the CG technique is equivalent to solving the initial set of equations, Eq. (1), using a PCG technique with the preconditioner set to M ≡ C. In cases when the fixed-point method is expected to converge very efficiently, that is, when A ≃ C, the PCG solver will also perform well since C−1A ≃ I, a hallmark of a good preconditioner. Similarly, the MF solver based on a split involving a good preconditioner will likely be efficient. From a computational point of view, both techniques require multiple applications of the inverse preconditioner M−1 to a vector, thus resulting in similar numerical cost.

The main message of this section is that the messenger-field method involves fixed-point iterations applied to a preconditioned system of linear equations. Its performance is determined by the adopted split of the system matrix, which also defines the preconditioner applied to the initial system. This preconditioner can alternatively be used in a PCG solver employed to solve the initial system directly, which is expected to ensure performance as good as or better than that of the MF technique, as far as both the number of iterations and the time to convergence are concerned.
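The equivalence described in this section can be illustrated with a self-contained toy example. In the sketch below, the split A = C − D uses a diagonal C, a hypothetical choice made only so that C−1 is trivial to apply; the fixed-point update of Eq. (12) is run alongside a textbook PCG with M = C (equivalently, CG on the system of Eq. (5)) on the same problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
G = rng.standard_normal((n, n))
Cdiag = np.linspace(1.0, 10.0, n)             # C: diagonal, trivially invertible
D = 0.3 * (G + G.T) / (2.0 * np.sqrt(n))      # symmetric, ||D||_2 ~ 0.4 < min(Cdiag)
A = np.diag(Cdiag) - D                        # A = C - D, SPD by construction
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

# messenger-field / fixed-point iterations, Eq. (12): x <- C^-1 (D x + b)
x = np.zeros(n)
for _ in range(20):
    x = (D @ x + b) / Cdiag
print("MF  error:", np.linalg.norm(x - x_true))

# textbook PCG on A x = b with preconditioner M = C
x = np.zeros(n)
r = b.copy()
z = r / Cdiag                                 # z = C^-1 r
p = z.copy()
rz = r @ z
for _ in range(20):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    z = r / Cdiag
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new
print("PCG error:", np.linalg.norm(x - x_true))
```

For the same number of matrix applications, the PCG error is typically several orders of magnitude below the fixed-point one, in line with the discussion above.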

2.2. Cooling technique

The convergence of the fixed-point method, Eq. (12), depends on the components of the initial error x − x(0) in the invariant subspaces associated with the eigenvalues of C−1D, especially the dominant (largest) ones. The cooling technique proposed in Elsner & Wandelt (2013) aims at providing, iteratively, a good initial guess x(0). In the general setting considered above, the cooling technique replaces the original problem, Eq. (1), represented in the split form of Eq. (4), by

(C(λ) − D(λ)) x(λ) = b, (13)

where the cooling parameter λ is defined so that: (a) for λ = 1, the above problem is equivalent to the original problem in Eqs. (1) and (4); and (b) for λ → ∞, D(λ)→0 and (C(λ))−1D(λ)→0. In the cooling method, λ is progressively adapted in the course of the iterations, with its value gradually decreasing from an initial, typically rather large, value down to 1. While no general prescription is given in the literature, it has been claimed (Elsner & Wandelt 2013; Huffenberger & Næss 2018) that, at least in some applications, significant gains can be obtained as compared to the fixed-point iterations if the rate of change of λ is appropriately tuned. In general, an MF method combined with cooling is no longer a fixed-point method. However, if, as is often the case, λ is not changed at each iteration but rather kept constant for some number of iterations before assuming a new value, the iterations for each fixed value of the parameter are fixed-point iterations (though not of the original system). In such cases, for large values of λ an accurate solution of the modified system, Eq. (13), that is, x(λ), is expected to be recovered within a few (fixed-point) iterations. Naturally, x(λ) can be far from the desired solution of the actual system, x(1); however, it can be a good starting vector for the next round of fixed-point iterations, this time with a smaller value of λ. The relative performance of the cooling method compared to that of the PCG solver of the initial equation, i.e., with λ = 1, is unclear, and the freedom in defining the rate at which λ is changed makes the mathematical analysis of this method difficult; its potential advantages over others are therefore also difficult to anticipate. Consequently, in this work we resort to numerical experiments to investigate the pros and cons of this technique in the specific cases of interest (Sects. 3 and 4).
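The following sketch illustrates the cooling idea in the toy setting of the previous snippet. The parametrization D(λ) = D/λ, with b kept fixed, is a hypothetical choice made for illustration only: it satisfies requirement (a), with λ = 1 recovering the original split, and (b), with D(λ) → 0 as λ → ∞; the CMB applications cited above use problem-specific parametrizations instead.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
G = rng.standard_normal((n, n))
Cdiag = np.linspace(1.0, 10.0, n)
D = 0.3 * (G + G.T) / (2.0 * np.sqrt(n))      # same toy split as before
A = np.diag(Cdiag) - D
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

x = np.zeros(n)                               # x(0) = 0 at the largest lambda
for lam in np.logspace(4, 0, 16):             # "16 x 10"-style grid, lambda: 1e4 -> 1
    for _ in range(10):                       # a few fixed-point steps at fixed lambda
        x = ((D / lam) @ x + b) / Cdiag       # cooled split: D(lam) = D / lam
print("cooled MF error:", np.linalg.norm(x - x_true))
```

Each intermediate solution x(λ) warm-starts the iterations at the next, smaller value of λ, and the final sweep at λ = 1 solves the original system.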

We note however that a PCG solver could be used instead of the fixed-point iterations within the cooling scheme. The fixed-point iterations would still be preferable whenever the value of λ is adjusted after each iteration, or every few iterations, as, for example, in the cooling scheme proposed in Sect. 2.2 of Elsner & Wandelt (2013). However, in cases when the value of λ is kept unchanged over a number of iterations, as in the numerical experiments presented in Huffenberger & Næss (2018) and in Ramanah et al. (2017), replacing the fixed-point iterations by a PCG method is expected to result in an overall performance gain, accumulated from the gains obtained in the solutions for each fixed value of λ.

For clarity, hereafter we use the term “messenger-field method” to denote a method which implements the basic MF algorithm as defined in Sect. 2.1. Whenever cooling is involved, be it combined with the MF method or the PCG one, we point this out explicitly; for example, we refer to the “cooled MF” or the “PCG with cooling”.

2.3. Convergence

The convergence properties of the classic, fixed-point iteration methods have been studied extensively in the literature (see, e.g., Sects. 4.1–4.2 of Saad 2003, or Sect. 10.1 of Golub & Van Loan 1996). Given our discussion in Sect. 2.1, those results can be directly applied to the messenger-field technique. In particular, from Eqs. (11) and (12) (see also Elsner & Wandelt 2013), the error of the ith approximation satisfies the relation

ϵ(i) ≡ x − x(i) = (C−1 D) ϵ(i−1) = (C−1 D)i ϵ(0). (14)

This implies (see, e.g., Golub & Van Loan 1996; Saad 2003) that ∥ϵ(i)∥ converges asymptotically to zero as long as the spectral radius of C−1D is smaller than unity. Here ∥ ⋅ ∥ denotes the Euclidean norm, and the spectral radius, hereafter denoted by ρ(⋅), is defined as the largest (in magnitude) eigenvalue of the matrix. This observation generalizes to other norms given their equivalence on finite dimensional spaces; see Appendix B for more details.

If matrix C−1D is also normal, then from (14) it follows that,

∥ϵ(i)∥ ≤ ρ(C−1 D)i ∥ϵ(0)∥ (15)

in the Euclidean norm. Therefore, in this case the convergence is not only asymptotic but also monotonic. The normality of C−1D is essential here: for non-normal matrices the error norm may grow over initial iterations before it starts decaying (see, e.g., Sect. 4.1.6 of Björck 2015 for relevant examples). Consequently, in general some care may need to be exercised in choosing a specific split of the system matrix, Eq. (4), to ensure that it satisfies both these conditions. This is indeed the case for the Wiener filter application (see Elsner & Wandelt 2013 and Appendix B), as the spectral radius of C−1D is always smaller than 1, assuming that the corresponding system matrix A is non-singular (see below), and C−1D is normal. For the map-making application in the rendition of Huffenberger & Næss (2018), arguments similar to those given in Appendix B can be used to show that also in this case ρ(C−1D) < 1 for non-singular systems; however, the normality of C−1D remains unclear at this stage.

The character of the convergence will in general depend on the choice of the norm. We show here that it remains monotonic in the A-norm (often called energy norm) of the error, if C−1 and D are real and symmetric as is indeed the case in the applications studied here. The A-norm is hereafter defined as,

∥v∥A ≡ (vt A v)1/2 = ∥A1/2 v∥. (16)

This is one of the norms we use in the follow-up numerical examples.

Using Eq. (14) and ∥v∥A = ∥A1/2 v∥, we obtain,

∥ϵ(i)∥A = ∥A1/2 (C−1D)i ϵ(0)∥ = ∥Bi A1/2 ϵ(0)∥ ≤ ∥B∥2i ∥ϵ(0)∥A, (17)

where,

B ≡ A1/2 (C−1 D) A−1/2, (18)

and ∥ ⋅ ∥2 denotes the spectral norm, defined as in Eq. (3).

To ensure monotonic convergence of the iterative scheme, Eq. (12), in the energy norm it is therefore enough to require that

∥B∥2 < 1. (19)

If ρ(C−1D) < 1, then Eq. (19) is satisfied whenever matrix B is normal, that is, BtB = BBt, and it then holds that

∥B∥2 = ρ(B) = ρ(C−1 D) < 1. (20)

Here the leftmost equality uses the fact that a normal matrix can be diagonalized using a unitary matrix, and the rightmost follows from the fact that a non-singular similarity transformation, here with A1/2, preserves the eigenvalues.

Assuming that A is real and symmetric and observing from Eq. (4) that A (C−1D)=(DC−1) A, we can write,

B Bt = A1/2 (C−1D) A−1 (C−1D)t A1/2 = A−1/2 (DC−1) (C−1D)t A1/2, (21)

Bt B = A−1/2 (C−1D)t A (C−1D) A−1/2 = A−1/2 (C−1D)t (DC−1) A1/2, (22)

and therefore B is normal if and only if (DC−1) (C−1D)t = (C−1D)t (DC−1). This is satisfied, for example, whenever C and D are real and symmetric, since then (C−1D)t = DC−1; this is indeed the case in the settings of Elsner & Wandelt (2013) and Huffenberger & Næss (2018).

Consequently, in both these applications we expect monotonic convergence of the MF errors in the energy norm. This is analogous to the PCG technique where the error is bound to decrease monotonically in the energy norm. This may not however be the case for other norms or the residuals.

The convergence rate of the MF solver is then determined by the eigenspectrum of C−1D. In particular the eigenmode with the largest eigenvalue, that is, the one closest to 1, will be the slowest to converge.

If the system matrix, A, is singular then

ρ(C−1 D) ≥ 1. (23)

This is because if x denotes a singular eigenvector of A, that is, Ax  =  0, and x  ≠  0 then,

0 = A x = (C − D) x, (24)

and hence

x = C−1 D x, (25)

and x is also an eigenvector of C−1D but with a unit eigenvalue. In such cases, the convergence of the MF solver will typically stall with the norm of the residuals saturating on a level depending on the right-hand side of the system as well as the initial guess. This behavior is analogous to that of other solvers, such as PCG, and it simply reflects the fact that if A is singular, then there is no unique solution to the linear system.

We assume from now on that the problem is non-singular and show that the PCG method is typically superior to, and never worse than, the fixed-point method in terms of minimizing the energy norm of the error. We first recall key properties of the (P)CG approach; see, for example, Saad (2003), Lemma 6.28.

Let x(CG, i) be the ith approximation given by the CG method for solving Ax = b with the initial guess x(0). Then,

x − x(CG, i) = φi(A) (x − x(0)), (26)

where φi is a polynomial of degree at most i with φi(0) = 1, a class we denote succinctly as φi ∈ πi, and,

∥x − x(CG, i)∥A = min_{p ∈ πi} ∥p(A) (x − x(0))∥A. (27)

Similarly, when x(PCG, i) is the ith approximation given by the PCG method for solving the system Ax = b preconditioned by C, using the initial guess x(0), we have

x − x(PCG, i) = ψi(C−1A) (x − x(0)), (28)

with ψi ∈ πi, and

∥x − x(PCG, i)∥A = min_{p ∈ πi} ∥p(C−1A) (x − x(0))∥A. (29)

Let us now consider the fixed-point method as defined in Eq. (12) assuming the same initial guess, x(0). From Eqs. (29) and (14),

∥x − x(PCG, i)∥A ≤ ∥(I − C−1A)i (x − x(0))∥A (30)

= ∥(C−1D)i (x − x(0))∥A = ∥x − x(i)∥A, (31)

as (C−1D)i = (I − C−1A)i and the polynomial q(t) ≡ (1 − t)i belongs to πi. This means that, in terms of the energy norm of the error, the PCG method converges at least as fast as the fixed-point method. In practice, one can expect significantly faster convergence, as suggested by Eq. (29). On the other hand, as emphasized earlier, the performance of the MF solvers can be improved by invoking the cooling technique. The convergence of the cooled MF approach is more difficult to study theoretically. Even in cases when the cooling parameter, λ, is kept constant over some number of iterations, and the method performs fixed-point iterations within each such interval, these are fixed-point iterations of the modified, not the original, system, and the results concerning the error given earlier in this section apply only after replacing x by the modified solution x(λ) (and the energy norm ∥ ⋅ ∥A by ∥ ⋅ ∥A(λ)). The convergence of the iterates to the true solution x then has to be properly discussed and justified for a particular MF application and/or cooling scheme. In the absence of theoretical results concerning this last method, we assess the relative merits of the different solvers via numerical experiments, described in the follow-up sections.

2.4. Computational complexity

In actual applications, the computational and memory cost per iteration is often as important as the algorithmic efficiency. From this perspective, the fixed-point scheme, Eq. (12), is the cheapest method, as it requires an evaluation of C−1Dx(i) only once per iteration and the storage of only two vectors, x(i) and C−1b. The PCG method requires more memory, storing up to four or five vectors depending on the implementation, and each iteration requires two additional inner products plus some scalar multiplications and vector updates. However, typically, and in particular in the applications considered in this paper, the most time-consuming operations are the multiplications by the matrix A and by C−1, rendering these additional costs mostly irrelevant. As an example, in Appendix A we describe an implementation of the PCG algorithm in the context of the Wiener filter that allows a single PCG iteration to be performed at a computational cost comparable to that of one fixed-point iteration, Eq. (12).

We can further capitalize on using the PCG method whenever the relative residual or an error measure corresponding to the A-norm of the error need to be frequently evaluated; in the extreme case, at each iteration. The residual r(i) is updated on each PCG iteration and it is therefore at our disposal; this is not the case for the fixed-point iterations, Eq. (12). Similarly, there is a numerically stable way to evaluate the problem-related error measure corresponding to the A-norm of the error; see also Appendix A. This evaluation involves only scalar quantities that are already at our disposal during the PCG iterations.

We conclude that in terms of time per iteration, both approaches, the MF and the corresponding PCG, are comparable, and therefore the number of iterations to convergence is a sufficient comparison metric.

3. Application to Wiener filtering

3.1. The problem

Let us consider a sky map m composed of a sky signal s and some noise n due to our instrument, thus

m = s + n. (32)

We assume that the sky signal is Gaussian over an ensemble of sky realizations with zero mean and known covariance given by S. The noise is also Gaussian with zero mean and the covariance given by N over the ensemble of noise realizations. We further assume that the noise is uncorrelated and therefore its covariance N is block-diagonal. The minimum variance estimate of the sky signal, that is, its Wiener filter, is then given by (e.g., Bunn et al. 1994),

sWF ≡ (S−1 + N−1)−1 N−1 m. (33)

Computing the Wiener filter of the measured map, m, requires an inversion of the system matrix, S−1 + N−1. As modern CMB maps may contain up to many millions of pixels, this task can indeed be daunting. This is because in general there is no obvious domain in which both the signal and noise covariances are simultaneously diagonal. Indeed, the signal covariance S is diagonal in the harmonic domain, where the pixel-domain map m is described by a vector of harmonic coefficients mℓm obtained as a result of a spherical harmonic transform applied to the map, while the noise covariance is diagonal in the pixel domain, and is diagonal in the harmonic one only if the noise is homogeneous, which is unlikely in practice. Consequently, a standard way to tackle this problem is to rewrite Eq. (33) as a linear set of equations,

(S−1 + N−1) sWF = N−1 m, (34)

and solve these using some iterative method (e.g., Smith et al. 2007). Both CG and PCG techniques have been applied in this context, and while the former was found to show a rather unsatisfactory convergence rate, it was demonstrated that this could be improved significantly, albeit with the help of a rather advanced and involved (from the implementation point of view) preconditioner borrowed from multi-grid techniques (Smith et al. 2007).

The MF method originally proposed in this context by Elsner & Wandelt (2013) involves splitting the noise covariance into homogeneous and inhomogeneous parts by representing N = N̄ + T, where T = τI is the homogeneous part, τ = min(diag(N)), and N̄ ≡ N − T. This leads to a split of the system matrix S−1 + N−1 owing to the fact that,

N−1 = T−1 − T−1 (N̄−1 + T−1)−1 T−1 (35)

(see, e.g., Higham 2002 p.258). Subsequently, taking

C ≡ S−1 + T−1, (36)

D ≡ T−1 (N̄−1 + T−1)−1 T−1, (37)

and introducing the messenger field t, Eq. (6), we can rewrite Eq. (33) in its messenger-field representation, that is,

(T−1 + N̄−1) t = T−1 s + N̄−1 m,
(S−1 + T−1) s = T−1 t, (38)

with the former equation solved in the pixel domain and the latter in the harmonic domain, and with spherical harmonic transforms used to switch between the two domains. These equations are equivalent to Eqs. (3) and (4) of Elsner & Wandelt (2013). Their numerical experiments showed that the solver tended to converge quickly to the solution at the desired precision, and the method was therefore proposed as an efficient way to resolve the slow convergence of the CG method without the need for the potentially complex preconditioners required by an efficient PCG solver, where both of these methods would be applied directly to the initial problem, Eq. (34).

As argued earlier, Eqs. (38) are equivalent to a fixed-point iteration solver applied to the preconditioned system (see Eq. (5)),

[I − (S−1 + T−1)−1 T−1 (N̄−1 + T−1)−1 T−1] s = (S−1 + T−1)−1 N−1 m, (39)

which can be rewritten in an explicitly iterative form as

s(i+1) = (S−1 + T−1)−1 [T−1 (N̄−1 + T−1)−1 T−1 s(i) + N−1 m]. (40)

In the following section, we compare the performance of different solvers applied to Eqs. (34), (39), and (40). From the general consideration of the previous section our expectation is that the CG solver applied to Eq. (39), and equivalent to the PCG solution of Eq. (34) with M ≡ C = S−1  +  T−1, should perform better than the messenger-field solver, Eq. (40).
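Before turning to the full-sky experiments, the structure of the two-domain iterations of Eqs. (38)-(40) can be illustrated in one dimension, with the FFT standing in for the spherical harmonic transform: S is then diagonal in Fourier space and N in pixel space. The power spectrum and the noise pattern below are arbitrary illustrative choices, not the CMB inputs used in our simulations.

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 256
k = np.fft.fftfreq(npix) * npix
Pk = 1.0 / (1.0 + (np.abs(k) / 8.0) ** 2)            # toy signal power spectrum
U = np.fft.fft(np.eye(npix)) / np.sqrt(npix)         # unitary DFT matrix
S = ((U.conj().T * Pk) @ U).real                     # pixel-domain signal covariance

Nd = 0.1 * (1.0 + 9.0 * rng.uniform(size=npix))      # inhomogeneous noise variances
m = rng.multivariate_normal(np.zeros(npix), S + np.diag(Nd))   # data m = s + n

tau = Nd.min()                                       # T = tau I, Nbar = N - T
Nbar = Nd - tau

# MF iterations, Eq. (40). Elementwise, (T^-1 + Nbar^-1)^-1 Nbar^-1 = tau / N
# and (T^-1 + Nbar^-1)^-1 T^-1 = Nbar / N, which avoids dividing by the zeros
# of Nbar; (S^-1 + T^-1)^-1 T^-1 is the Fourier-space filter Pk / (Pk + tau).
s = np.zeros(npix)
for _ in range(200):
    t = (tau * m + Nbar * s) / Nd                           # pixel-domain step of Eq. (38)
    s = np.fft.ifft(Pk / (Pk + tau) * np.fft.fft(t)).real   # harmonic-domain step

s_wf = np.linalg.solve(np.linalg.inv(S) + np.diag(1.0 / Nd), m / Nd)  # exact WF, Eq. (33)
print("relative error:", np.linalg.norm(s - s_wf) / np.linalg.norm(s_wf))
```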

3.2. Simulated cases

To demonstrate and validate our analytical expectations we apply both solvers to simulated data sets. These are obtained as follows. We first generate maps of three Stokes parameters, I, Q, and U, in the Healpix pixelization (Górski et al. 2005) with the Healpix resolution parameter nside set to 512. These maps are computed using a HEALpy routine, synfast, taking as input CMB power spectra computed for the standard cosmological model with the current best values of its parameters (Planck Collaboration XIII 2016). In the following calculations we set the band-limit of the sky signal, ℓmax, to 2nside. This is low enough to ensure the orthogonality of the relevant spherical harmonics over the grid of Healpix pixels. However, it leads to a rank-deficient signal covariance matrix; consequently, hereafter its inverse, S−1, is to be understood as a pseudo-inverse. We verified in selected cases that setting ℓmax to 3nside did not impact our conclusions. We add to these sky maps inhomogeneous, albeit uncorrelated, noise with a root mean square (rms) varying over the sky as in the case of the WMAP observations2.

We consider two cases, with either full or partial sky observations. In the latter case, only 20% of the sky is observed, corresponding to the polar cap regions as defined by the Planck HFI mask3.

3.3. Numerical results

We consider the following solvers.

  • CG applied to the redefined system, Eq. (39), which is equivalent to PCG applied to the original system Eq. (34) with a preconditioner given by M = S−1  +  T−1 (in figures labeled “PCG”);

  • MF solver, Eq. (40) (in figures labeled “MF”);

  • MF method within three different cooling schemes, as proposed in Elsner & Wandelt (2013), Huffenberger & Næss (2018), and Ramanah et al. (2017). In the first scheme, the value of the cooling parameter λ is adjusted adaptively after each iteration. This scheme requires a priori knowledge (an estimate) of the error measure (see Eq. (41) below) of the solution; for the purpose of these experiments, this is tightly approximated using the solution of the PCG solver. The scheme of Huffenberger & Næss (2018) defines a discrete grid of logarithmically spaced values of λ, spanning the range from 1 up to some suitable maximal value, which in our runs we set to λmax = 104. For each value of λ, a fixed number of iterations is performed. Though this scheme was suggested specifically for the map-making problem, in order to avoid multiple time-consuming reads of the time-ordered data, for the sake of comparison we use it also for the WF experiments. Hereafter, we perform 10 iterations for each of 16 values of λ and refer to this scheme as “16 × 10”. For the case with partial sky observations, we continue with the fixed-point iterations, Eq. (40), for λ = 1. The cooling scheme of Ramanah et al. (2017, Algorithm 1) iterates at a fixed λ until two consecutive approximations satisfy ∥s(i) − s(i − 1)∥/∥s(i)∥ < ϵ, and then reduces λ by a constant factor, η, so λ → λ × η. In our experiments we start with λmax = 104, and we set η = 3/4 and ϵ = 10−4.

We start the iterations with a vector of zeros as an initial guess.

The signal covariance, S, is computed assuming the CMB power spectra as used for the simulations. The noise covariance is block-diagonal in the pixel domain but not proportional to I as the noise is assumed to be inhomogeneous. It is taken to be exactly the same as the noise covariance used for the simulations.

For all solvers, the inverse signal covariance is applied to a map-length pixel-domain vector in the harmonic domain: the vector is first represented by a vector of its harmonic coefficients, computed with the help of a HEALpy routine, map2alm; these are subsequently weighted by the inverse of the power spectra and transformed back to the pixel domain using HEALpy’s alm2map routine. The inverse noise covariance is applied to any pixel-domain vector directly, with the elements corresponding to unobserved pixels set to zero.
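A temperature-only sketch of this harmonic-domain application of S−1 could look as follows; the actual runs use the full IQU (spin-2) generalization, and the power spectrum array cl is assumed strictly positive up to ℓmax (cf. the pseudo-inverse caveat of Sect. 3.2).

```python
import healpy as hp

def apply_Sinv(map_pix, cl, nside, lmax):
    """Apply S^-1: direct SHT, weighting by 1/C_l, inverse SHT (temperature only)."""
    alm = hp.map2alm(map_pix, lmax=lmax, iter=0)   # map2alm: direct transform
    alm = hp.almxfl(alm, 1.0 / cl[:lmax + 1])      # weight a_lm by the inverse spectrum
    return hp.alm2map(alm, nside, lmax=lmax)       # alm2map: inverse transform
```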

We note that we always estimate the Wiener-filtered sky signal over the full sky. In all cases shown below we apply all the solvers to exactly the same input data sets.

We have validated our implementations by considering a simplified data set with white noise. In this case, the noise covariance N is proportional to the unit matrix, and the PCG solver with preconditioner M = S−1  +  T−1, and the MF solver converged to within the numerical precision in a single step as expected, as in this case T = N. Moreover, in the cases of the actual simulated data sets used in our test, the results obtained with the different solvers are consistent.

3.3.1. Convergence metric

The Wiener-filter problem, Eq. (34), can be recast as a minimization of the functional

χ2(s) ≡ st S−1 s + (m − s)t N−1 (m − s). (41)

Indeed, we have

∇χ2(s) = 2 (S−1 + N−1) s − 2 N−1 m,

so the minimizer of χ2 solves Eq. (34). Algebraic manipulations show that χ2 is directly related to the energy norm, as we have

χ2(s) = ∥s − sWF∥2A + χ2(sWF), with A ≡ S−1 + N−1.

We can therefore use the physically motivated χ2 as a convergence measure instead of the energy norm. We note that we expect that

χ2(s(i)) → χ2(sWF), as i → ∞. (42)

As with the energy norm, Eq. (16), this asymptotic value of χ2 cannot be straightforwardly computed without knowing the Wiener filter estimate precisely. However, we expect that it should be close to ⟨χ2(sWF)⟩ = nStokes npix ≡ nDOF, within a small scatter of order (2 nDOF)1/2, if our assumptions about the sky signal and the noise are correct. Here the angle brackets denote the average over an ensemble of sky and noise realizations, and nStokes stands for the number of considered Stokes parameters, equal to 3 for most of our tests. We can therefore define convergence in this case by requiring that the incremental change of χ2 between consecutive iterations is not larger than some small fraction of ⟨χ2(sWF)⟩ (Elsner & Wandelt 2013). If the absolute value of the final χ2 is statistically inconsistent with the expected one, this could indicate prematurely stalled convergence or a problem with the model assumed for the measured data, m.
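A minimal helper implementing this stopping rule might read as follows; the threshold value eps is an illustrative assumption, not a prescription from the text.

```python
def chi2_converged(chi2_prev, chi2_curr, n_dof, eps=1e-4):
    """True when the change in chi^2 between consecutive iterations
    falls below the fraction eps of the expected value n_dof."""
    return abs(chi2_curr - chi2_prev) < eps * n_dof
```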

Given the discussion of Sect. 2.3, we expect that in terms of minimizing the χ2-measure, the PCG method with preconditioner M = S−1 + T−1 should be superior to the fixed-point iterations, Eq. (40).

In addition to the χ2-measure, we also plot the norm of the residual corresponding to the (preconditioned) problem, as suggested in Ramanah et al. (2017). This is given by,

(I + S1/2 N−1 S1/2) x = S1/2 N−1 m, x ≡ S−1/2 s (43)

(Ramanah et al. 2017, Appendix C). This system is significantly better conditioned than the original one, Eq. (34). The corresponding relative norm of the residual then reads

∥S1/2 N−1 m − (I + S1/2 N−1 S1/2) x(i)∥ / ∥S1/2 N−1 m∥, with x(i) ≡ S−1/2 s(i). (44)

3.3.2. Performance

Figure 1 shows a comparison between the PCG (Eq. (39)) and the MF (Eq. (40)) solvers as applied to the Wiener-filter problem. As expected, PCG indeed reduces the error significantly faster.

Fig. 1.

Convergence of PCG and MF methods using two different convergence measures: χ2 (Eq. (41), left panels), and the S-weighted relative residual (Eq. (44), right panels). The top and bottom rows show the case with the full and partial sky coverage, respectively.

In Figs. 2 and 3, we compare PCG (with the MF preconditioner S−1 + T−1 and λ = 1) with the MF method using the adaptive cooling schemes described above. We can see that PCG yields robust performance in all the test cases. In the case with full sky observations, the MF solvers (with or without cooling) reach their asymptotic convergence rate and exhibit a convergence plateau at the level of 10−7 in the relative S-norm of the residual. This is not the case for the PCG solver, which converges to the machine precision level. In the experiment with partial sky coverage, we observe a decrease of the convergence rate for the PCG as well as the MF solvers, due to the significantly worse conditioning of the problem. However, even in this case the PCG method is superior to the MF solvers. We expect that more advanced preconditioners, which can alleviate the effect of very small eigenvalues, could bring further significant improvement.

Fig. 2.

Comparison of the convergence of the PCG solver and the MF technique with an adaptive cooling for different cooling prescriptions and using different convergence criteria: the χ2, left, the S-weighted relative residual, middle. Right panel: values of λ as a function of the iteration as adapted by the different cooling schemes. These results are for the data sets with the full sky coverage.

We note, however, that PCG appears to be outperformed by the MF method with the Elsner & Wandelt (2013) cooling proposal within the first ten or so iterations. As both methods solve different linear systems in this latter case, due to different values of λ, this does not contradict our conclusions in Sect. 2.3. This also does not change the overall assessment of the relative merits of both these techniques as no convergence is then ever reached in terms of any of the considered metrics. We discuss a possible origin of this behavior in Sect. 4.3, in the map-making context, where we suggest a simple antidote that could potentially further improve the performance of the PCG approach.

4. Application to map making

4.1. The problem

Data collected by modern, single-dish CMB experiments are modeled as

d = P m + n, (45)

where d stands for a vector of all measurements, m is a pixelized map of the sky signal, and n is the instrumental noise. We assume for simplicity that the experimental beams are axially symmetric and that the sky signal m is already convolved with the beam. In this case, the pointing matrix P simply defines which pixel of the map, m, is observed at each measurement and with what weight it contributes to the measurement. The pointing matrix is then very sparse, containing only one non-zero element per row for total intensity measurements, or three for polarization-sensitive ones. Moreover, PtP is either diagonal or block-diagonal with 3 × 3 blocks. If we assume that the instrumental noise is Gaussian with covariance N, a maximum-likelihood estimate of the sky signal can be written as

mML = (Pt N−1 P)−1 Pt N−1 d, (46)

and therefore requires a solution of a large linear system. The sizes of the objects involved vary significantly depending on the experiment, but the number of pixels in the map m can easily reach 𝒪(106), while the number of measurements can be of order 1012 − 1015. Consequently, the system can only be solved iteratively, explicitly capitalizing on the structure and sparsity of the data objects involved.

Traditionally (e.g., de Gasperis et al. 2005; Cantalupo et al. 2010) the iterative method of choice was a PCG technique with a simple preconditioner given by

MBD ≡ Pt diag(N)−1 P. (47)

Hereafter we refer to this standard preconditioner as block-diagonal or Jacobi. However, more involved preconditioners have also been considered and found to be successful (e.g., Grigori et al. 2012; Szydlarski et al. 2014; Næss & Louis 2014; Puglisi et al. 2018).
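The following scipy.sparse sketch illustrates these objects for a polarization-sensitive detector: the pointing matrix P of Eq. (45), with one (I, Q, U) triple per sample, the block-diagonal preconditioner of Eq. (47), and a noise-weighted binned map (a variant of the simple binned map of Eq. (61) below). The scan, polarizer angles, and noise weights are random stand-ins for illustration.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(3)
npix, nt = 128, 20000
pix = rng.integers(0, npix, size=nt)         # p(t): pixel hit at sample t
phi = rng.uniform(0.0, np.pi, size=nt)       # polarizer angle phi(t)
w = 1.0 / rng.uniform(0.5, 2.0, size=nt)     # diag(N)^-1 weights

# pointing matrix P, Eq. (45): row t has entries (1, cos 2phi, sin 2phi)
# in the columns of pixel p(t)'s (I, Q, U) triple
rows = np.repeat(np.arange(nt), 3)
cols = np.stack([3 * pix, 3 * pix + 1, 3 * pix + 2], axis=1).ravel()
vals = np.stack([np.ones(nt), np.cos(2 * phi), np.sin(2 * phi)], axis=1).ravel()
P = sp.csr_matrix((vals, (rows, cols)), shape=(nt, 3 * npix))

# M = P^t diag(N)^-1 P, Eq. (47): block-diagonal with one 3x3 block per pixel
M = (P.T @ sp.diags(w) @ P).toarray()
blocks = np.array([M[3 * p:3 * p + 3, 3 * p:3 * p + 3] for p in range(npix)])
block_inv = np.linalg.inv(blocks)            # assumes every pixel is well sampled

def apply_Minv(x):
    """Apply M^-1 pixel by pixel (one 3x3 solve per pixel)."""
    return np.einsum("pij,pj->pi", block_inv, x.reshape(npix, 3)).ravel()

d = rng.standard_normal(nt)                  # stand-in time-ordered data
m_binned = apply_Minv(P.T @ (w * d))         # noise-weighted binned map (cf. Eq. (61))
```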

More recently, Huffenberger & Næss (2018; see also Huffenberger 2018) proposed the application of the messenger-field technique to the map-making problem. Below, we discuss the approach of the former work. The proposal is again to split the noise covariance into two parts, N = N̄ + T, where T = τI and τ = min(diag(N)). Subsequently, using Eq. (35), we can rewrite the system matrix of the map-making equation, Eq. (46), as

Pt N−1 P = Pt T−1 P − Pt T−1 (N̄−1 + T−1)−1 T−1 P, (48)

where the first term on the right-hand side corresponds to matrix C and the second one to matrix D as defined in Eq. (4). Following the formalism from Sect. 2.1 we can now write the messenger-field equations for this system, which read

(T−1 + N̄−1) t = T−1 P x + N̄−1 d, (49)

(Pt T−1 P) x = Pt T−1 t, (50)

with the messenger field t appearing explicitly, or

(Pt T−1 P) x = Pt T−1 (T−1 + N̄−1)−1 (T−1 P x + N̄−1 d), (51)

without it. We note that unlike in Eq. (6) the matrix Pt is taken out of the definition of the messenger field. Solving any of these two sets of equations using fixed-point iterations is equivalent to the messenger-field solver. For comparison we also solve the last equation using the CG technique. The latter is equivalent to solving the map-making equation, Eq. (46), using a PCG method with the preconditioner taken to be M  =  PtT−1P, which in the case under consideration is equivalent to the standard preconditioner.

We note, following Huffenberger & Næss (2018), that if the ith approximation issued by the fixed-point method is unbiased, that is, ⟨x(i) − m⟩ = 0, where ⟨…⟩ denotes an average over noise realizations, then all the subsequent approximations will also be unbiased. In particular, if the initial guess is chosen to be an unbiased (e.g., simple binned, Eq. (61)) estimate of the sky signal, then all follow-up estimates will be unbiased, and the entire point of the iterations is to converge on an estimate with minimal statistical uncertainty. This is unlike the case of the PCG, where both statistical and systematic uncertainties are improved on simultaneously during the iterations.

4.2. Simulated data

We simulate mock time-ordered data d as a sum of two terms, one corresponding to the sky signal and the other to instrumental noise. These are computed as

d(t) = Ip(t) + Qp(t) cos 2φ(t) + Up(t) sin 2φ(t) + n(t), (52)

where p(t) denotes the sky pixel observed at time t and φ(t) is the corresponding orientation of the polarizer. The signal terms are read off from the signal-only maps of the Stokes parameters I, Q, and U, following the assumed scanning strategy defined by p(t) and φ(t). These maps are produced in the Healpix pixelization with the resolution parameter nside set to 1024. The signals are random realizations of the CMB anisotropies corresponding to the currently preferred cosmological model (Planck Collaboration XIII 2016).

We produce three data sets with different statistical properties. Each data set comprises the data of a single detector that are however scaled to represent the performance of an entire detector array and thus are more representative of the current data. In all cases, we assume a raster scan pattern in the sky coordinates, when a rectangular sky patch made of 256 × 256 Healpix pixels is scanned either horizontally, that is, in right ascension, or vertically, in declination. The patch is centered at the equator. The length of the simulated data vector is the same and roughly equal to 108.

In the first data set, the sky patch is first scanned horizontally and later vertically. The horizontal scanning assumes 256 complete sweeps (i.e., left-to-right followed by right-to-left), each pixel being sampled on average four times on each sweep. Once this is done, the declination is changed and the new horizontal scan commences. This is repeated 256 times with each horizontal scan corresponding to a different row of the Healpix pixels. The vertical scan is implemented in a similar way.

In this case we assume that the polarizer direction is quickly modulated so the full 2π angle is sampled within each single crossing of each sky pixel. This is ensured by setting the polarizer angle in the sky coordinates to follow a repeating sequence of 0, π/4, π/2, 3π/4. In practice, this could mimic the case of an experiment using a smoothly rotating half wave plate.

For the second data set, we divide it into four equal consecutive subsets, each of which implements the same raster scan made of horizontal scanning within the first half of the subset followed by the vertical scan in the second half. However, the scanning is assumed to be faster and there is only one sample taken per pixel for each pixel crossing. This ensures the same data length. For each subset, the angle of the polarizer in the sky coordinates is fixed and equal to 0, π/4, π/2, 3π/4. This scanning strategy mimics an experiment where the polarizer is stepped discretely only after each of the four subscans.

In the case of the third data set we progressively change the throw of the scan chop decreasing it gradually to half of the full scan width. We do so for both horizontal and vertical subscans. The scan speed is assumed fixed and tuned in a way that we obtain four observations of each pixel on a single pixel crossing. This produces a deeply observed core region where the number of observations per pixels can be as much as three orders of magnitude higher than the number of observations of the outer pixels. We also assume a smooth polarization angle rotation with the rotation speed fixed in such a way that the polarizer angle changes by 22.5° on a single pixel crossing.

This scan strategy is the most realistic of the three considered here, reflecting the inhomogeneity of the sky sampling and allowing for imperfect sampling of the polarization angles per pixel. Consequently, for the scan parameters adopted in our simulations, the 3 × 3 blocks of the block-diagonal preconditioner display a range of condition numbers from 2 (perfect angle sampling) to over 20. The overall condition number of the block-diagonal preconditioner, which accounts for both sky- and angle-sampling inhomogeneities, is equal to ∼1.5 × 104.

We simulate the instrumental noise as a correlated noise with a power spectrum given by

P(f) = σ2 (1 + fknee/f), (53)

where 1/fknee is taken to be approximately 500 times longer than the sampling rate, corresponding to the length of a single full sweep of the sky. We further apodize the low frequency noise effectively flattening the noise spectrum for frequencies lower than a tenth of the knee frequency.

We consider two different noise levels: one ensuring a relatively high signal-to-noise ratio (S/N) of the recovered Q and U maps, and the other a significantly lower S/N. We refer to these cases as the low- and high-noise data, respectively.

We note that if the instrumental noise were white, the first two scanning strategies would be equivalent and the standard, block-diagonal preconditioners in both cases would be identical. This is however not the case in the presence of correlated noise. In fact, we expect that the off-diagonal noise correlations of the recovered Q and U maps should be small for the first data set, with its quickly rotating HWP, while they should be non-negligible in the case of the second data set, with the stepped polarizer, leading to different convergence patterns of the studied solvers.

4.3. Numerical results

4.3.1. Convergence metric

For measuring the error of an approximation x we consider, following Huffenberger & Næss (2018), a χ-measure

χ2(x) ≡ (d − P x)t N−1 (d − P x). (54)

Analogously to the Wiener-filtering application, this measure is minimized by the maximum-likelihood estimate (46) and is equivalent to the energy norm of the error, Eq. (16), with respect to the system matrix, A  ≡  PtN−1P. Indeed,

χ2(x) = ∥x − mML∥2A + χ2(mML), (55)

and thus,

χ2(mML) = (N−1/2 d)t [I − N−1/2 P (Pt N−1 P)−1 Pt N−1/2] (N−1/2 d). (56)

As before, this value is not directly available. However, we can compute the average of χ2(mML) over the statistical ensemble of input data realizations and use it as a benchmark for convergence in terms of the χ2-measure. This can be done analytically, observing that the matrix on the right-hand side of Eq. (56) is a projection operator that projects out all sky-stationary time-domain modes, that is, objects of the form Py for some arbitrary pixel-domain object y. If so,

χ2(mML) = (N−1/2 n)t [I − N−1/2 P (Pt N−1 P)−1 Pt N−1/2] (N−1/2 n), (57)

and

⟨χ2(mML)⟩ = tr [I − N−1/2 P (Pt N−1 P)−1 Pt N−1/2] = nt − npix ≡ nDOF, (58)

where nt and npix denote the sizes of the data set in the time and pixel domains, respectively; here we assume that the system matrix, PtN−1P, is non-singular, and use the fact that N−1/2 ⟨n nt⟩ N−1/2 = I. With this value, we then define the convergence criterion in terms of the χ2-measure by requiring that the incremental change of χ2 between two consecutive iterations be sufficiently small compared to nDOF.

We note that in the figures, in order to make the behavior of the χ-measure more conspicuous, instead of the χ2 itself, we plot its relative difference with respect to the minimal value of χ2 derived within the PCG iterations, which we denote as χ2(mmin). The plotted quantity is then given by,

[χ2(m(i)) − χ2(mmin)] / χ2(mmin). (59)

For completeness we also plot the standard relative residual defined as

∥Pt N−1 d − (Pt N−1 P) m(i)∥ / ∥Pt N−1 d∥. (60)

4.3.2. Performance

We reconstruct the sky signal from the data using different solvers as discussed here and compare their relative performances. In all cases, we use the same pixelization for the recovered maps as we used for the simulations, that is, Healpix with nside = 1024. We validated our implementation by running cases with noise-free data and recovering the input maps within the numerical precision. We also found that the results produced by different solvers for each of the data sets agree.

We first compare the convergence of the PCG solver, of the MF iterations without cooling, and of the PCG and MF with the 8 × 5 cooling scheme. The results for the low-noise and high-noise cases are given separately in Figs. 4 and 5.

Fig. 3.

As in Fig. 2 but for the data set with the partial sky coverage, fsky = 0.2.

Fig. 4.

Comparison of the convergence for the PCG, the messenger-field methods stand-alone and incorporated within a cooling scheme, for the first, second, and third simulated data sets (from left to right panels, respectively), assuming a low noise level. The cooling scheme is 8 × 5 for the first and second data sets and 16 × 10 for the third.

Fig. 5.

As in Fig. 4 but for the high noise level.

We can see that, as claimed in Huffenberger & Næss (2018), the MF technique with cooling indeed reaches a higher accuracy than the MF without it. However, the standard PCG is still superior in these experiments. As in the Wiener-filter application (Sect. 3.3), we observe that in the low-noise cases the cooling technique, used either with PCG or with MF, improves on the solution within the first iterations more rapidly than the PCG method with no cooling. We attribute this to the fact that during these initial iterations the cooling method solves a modified system of equations with an assumed large value of λ. The approximate solution derived in these first iterations is then equivalent to a simple binned map, which, in the low-noise cases, provides a good rendition of the sky signal, thus leading to an abrupt decrease of the residuals. In the absence of cooling, the PCG technique initialized with a vector of zeros needs to perform at least a few iterations to reach a comparably good solution. We can, however, improve on the performance of the stand-alone PCG by using a simple binned map, given by

m(0) ≡ (Pt P)−1 Pt d, (61)

as the starting vector for the PCG solver. Such a map is quickly computable and thus can be readily available at the onset of the solution.

We illustrate these considerations in Fig. 6, where we compare the convergence of the solvers (stand-alone PCG, MF with cooling, and PCG with cooling) assuming m(0) (Eq. (61)) as the initial vector, together with that of the PCG run with an initial vector of zeros. The results shown are for the low-noise case. As expected, there is a significant improvement in the overall performance of the PCG method relative to the other solvers, but also as compared to the case of the vanishing initial guess. This showcases the importance of an appropriate choice of the initial guess for the PCG approaches in cases with high-S/N solutions. As the new CMB data sets target predominantly the CMB B-mode polarization, the maps of the Stokes parameters will increasingly have very high S/N, and this observation may therefore be of importance for their data analysis.

Fig. 6.

Comparison of the convergence rates of different iterative solvers for a non-zero starting vector, m(0), as given in Eq. (61). For comparison the blue curve shows the case of the PCG with a starting vector of zeros. The results are for the second scanning strategy and the low-noise case.

We note that all the numerical experiments considered here involve non-singular linear systems of equations. If singularities are present, then both the PCG and MF solvers will typically saturate before reaching convergence. For the cooled MF this may however not be the case. In particular, if the modified linear systems with λ ≳ 1 are singularity-free, then λ effectively acts as a regularization parameter. In such cases the cooled MF may reach a lower residual level than the other methods, thanks to its ability to adapt the amplitudes of the singular modes present in the solution. This does not change the fact that if these modes are truly singular, their true amplitudes can never be recovered. If regularization is the appropriate approach to adopt in a given application, it could equally well be applied with the other solvers.

5. Conclusions

We have shown that the messenger-field solvers of sets of linear equations perform fixed-point iterations of an appropriately preconditioned system of equations. Consequently, in general they are expected to display inferior performance to that of a conjugate gradient solver applied to the same preconditioned systems or, equivalently, to that of a PCG solver with the same preconditioner as implicitly used in the messenger-field method in order to precondition the initial problem. We have backed up this contention with analytic arguments and demonstrated it using numerical experiments involving two applications drawn from modern CMB data analysis practice: Wiener filters and the CMB map-making problem. In addition to the basic implementations of the MF method (Sect. 2.1), we have considered MF solvers combined with the cooling technique (Sect. 2.2, Elsner & Wandelt 2013; Huffenberger & Næss 2018), and have shown via numerical results that the cooled MF methods with the cooling schemes as proposed in the literature outperform the standard MF approach. However, the PCG solvers with the preconditioner motivated by the MF methods tend to reach convergence the quickest.

We have compared the performance of the studied methods from the perspective of the number of iterations needed to reach convergence. However, our conclusions are expected to also be directly applicable to considerations involving time-to-solution, as the computational cost per iteration incurred in the different methods is found to be roughly comparable.

We therefore conclude that, looking towards the future, advanced preconditioning coupled with the conjugate gradient technique offers the most promise as an expeditious solver, ahead of the messenger-field approach. At this time, the PCG solvers with the standard block-diagonal preconditioner (Eq. (47)) in the map-making case, and with the preconditioner given by S−1 + T−1 (Eq. (36)) in the Wiener-filter case, potentially combined with an appropriately adapted initial guess, should outperform the currently proposed messenger-field approaches. We also note that better preconditioners have already been proposed, in particular in the map-making context (e.g., Grigori et al. 2012; Szydlarski et al. 2014). This notwithstanding, the messenger-field approach may still be found of interest in some specific applications.

In the context of the PCG methods, we have found that the convergence may be sped up by an appropriate choice of initial vector. While the gain is largely negligible for the cases with a low-S/N solution, it can become significant if the solution is expected to have high-S/N content. We have found this effect particularly relevant for the map-making procedure, where we have shown that the choice of the simple binned map as the initial vector can result in a significant improvement of the map-making solver convergence.


1

The spectral norm of A is equal to the largest singular value of A, defined as the square root of the largest eigenvalue of A†A, where † denotes the Hermitian conjugate. If the matrix A is normal, i.e., A†A = AA†, then A can be diagonalized with the help of a similarity operation employing a unitary matrix, the spectral norm of A is equal to its largest (in magnitude) eigenvalue, and the condition number is the ratio of the largest and the smallest (in magnitude) eigenvalues.

2

We use specifically the noise pattern for the 9-year observation of the V-band, available from https://lambda.gsfc.nasa.gov/product/map/dr5/maps_band_r9_iqu_9yr_get.cfm.

4

With the typical computational cost significantly smaller than twice the cost of one single inverse spherical harmonic transform.

Acknowledgments

We thank Dominic Beck and Josquin Errard for their help with simulations and insightful discussions and Kevin Huffenberger and Sigurd Næss for their comments on the manuscript. We acknowledge use of HEALpy. The first two authors’ work was supported by the NLAFET project as part of European Union’s Horizon 2020 research and innovation program under grant 671633. RS acknowledges support of the French National Research Agency (ANR) contract ANR-17-C23-0002-01 (project B3DCMB). This research used resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the US Department of Energy under Contract No. DE-AC02-05CH11231.

References

  1. Björck, A. K. 2015, Numerical Methods in Matrix Computations (Cham: Springer), XVI, 800 [Google Scholar]
  2. Bunn, E. F., Fisher, K. B., Hoffman, Y., et al. 1994, ApJ, 432, L75 [NASA ADS] [CrossRef] [Google Scholar]
  3. Cantalupo, C. M., Borrill, J. D., Jaffe, A. H., Kisner, T. S., & Stompor, R. 2010, ApJS, 187, 212 [NASA ADS] [CrossRef] [Google Scholar]
  4. de Gasperis, G., Balbi, A., Cabella, P., Natoli, P., & Vittorio, N. 2005, A&A, 436, 1159 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  5. Elsner, F., & Wandelt, B. D. 2013, A&A, 549, A111 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  6. Golub, G. H., & Van Loan, C. F. 1996, Matrix Computations, 3rd edn. (Baltimore, MD: Johns Hopkins University Press), XXX, 698 [Google Scholar]
  7. Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759 [NASA ADS] [CrossRef] [Google Scholar]
  8. Grigori, L., Stompor, R., & Szydlarski, M. 2012, Proc. Int. Conf. on High Performance Computing, Networking, Storage and Analysis, SC ’12 (Los Alamitos, CA, USA: IEEE Computer Society Press), 91:1 [Google Scholar]
  9. Higham, N. J. 2002, Accuracy and Stability of Numerical Algorithms, 2nd edn. (Philadelphia, PA: SIAM), XXX, 680 [Google Scholar]
  10. Horn, R. A. 2013, Matrix Analysis, 2nd edn. (Cambridge: Cambridge University Press), XVIII, 643 [Google Scholar]
  11. Huffenberger, K. M. 2018, MNRAS, 476, 3425 [NASA ADS] [CrossRef] [Google Scholar]
  12. Huffenberger, K. M., & Næss, S. K. 2018, ApJ, 852, 92 [NASA ADS] [CrossRef] [Google Scholar]
  13. Janssen, M. A., & Gulkis, S. 1992, NATO Advanced Science Institutes (ASI) Ser. C, 359, 391 [Google Scholar]
  14. Liesen, J., & Strakoš, Z. 2013, Krylov Subspace Methods: Principles and Analysis, Numerical Mathematics and Scientific Computation (Oxford: Oxford University Press), XVI, 391 [Google Scholar]
  15. Næss, S. K., & Louis, T. 2014, JCAP, 8, 045 [NASA ADS] [CrossRef] [Google Scholar]
  16. Planck Collaboration XIII. 2016, A&A, 594, A13 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  17. Poletti, D., Fabbian, G., Le Jeune, M., et al. 2017, A&A, 600, A60 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  18. Puglisi, G., Poletti, D., Fabbian, G., et al. 2018, A&A, 618, A62 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  19. Ramanah, D., Lavaux, G., & Wandelt, B. 2017, MNRAS, 468, 1782 [NASA ADS] [CrossRef] [Google Scholar]
  20. Saad, Y. 2003, Iterative Methods for Sparse Linear Systems, 2nd edn. (Philadelphia, PA: SIAM), XVIII, 528 [Google Scholar]
  21. Seljebotn, D. S., Bærland, T., Eriksen, H. K., Mardal, K. A., & Wehus, I. K. 2017, A&A, submitted, [arXiv:1710.0062] [Google Scholar]
  22. Smith, K. M., Zahn, O., & Doré, O. 2007, Phys. Rev. D, 76, 043510 [NASA ADS] [CrossRef] [Google Scholar]
  23. Strakoš, Z., & Tichý, P. 2005, BIT, 45, 789 [CrossRef] [Google Scholar]
  24. Szydlarski, M., Grigori, L., & Stompor, R. 2014, A&A, 572, A39 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]

Appendix A: Implementation of PCG for Wiener filter

In the context of solving the Wiener-filter problem (Eq. (33)), each step of the fixed-point method (Eq. (12), resp. Eq. (40)) requires one direct and one inverse spherical harmonic transform, which are assumed to be the most time-consuming elements of the solution process. To keep the same number of transforms in each PCG iteration, we cannot first apply the matrix A = S−1 + N−1 and then precondition the residual by C−1 ≡ (S−1 + T−1)−1, as is done in the standard PCG implementation listed below as Algorithm A.1. That implementation involves two direct and two inverse transforms per iteration: in the evaluation of Ap(i − 1) and of C−1r(i).

Algorithm A.1: PCG for A s = b with the preconditioner C

Given s(0), set r(0) = b − A s(0), z(0) = C−1 r(0), p(0) = z(0).

For i = 1, 2, …
  α(i−1) = (z(i−1), r(i−1)) / (p(i−1), A p(i−1)),
  s(i) = s(i−1) + α(i−1) p(i−1),
  r(i) = r(i−1) − α(i−1) A p(i−1),
  z(i) = C−1 r(i),
  δ(i) = (z(i), r(i)) / (z(i−1), r(i−1)),
  p(i) = z(i) + δ(i) p(i−1).

Using the formula for p(i) and the form of the matrix A = C − D, we can write

A p(i) = A z(i) + δ(i) A p(i−1) = (C − D) C−1 r(i) + δ(i) A p(i−1) = r(i) − D z(i) + δ(i) A p(i−1).

Therefore, the vector Ap(i) can be computed recursively, without additional spherical harmonic transforms, and the cost of one PCG iteration is the same (in terms of spherical harmonic transforms) as the cost of one iteration of the fixed-point method, Eq. (12).

Another formula that proved in our numerical experiments to be more stable (yet slightly more costly) is to simultaneously evaluate the vectors C−1r(i) and S−1C−1r(i) (recall that AC−1r(i) = (S−1 + N−1)C−1r(i)). This can be done using direct spherical harmonic transform of one vector, r(i), and inverse spherical harmonic transform of two vectors4. We then simply update Ap(i) = AC−1r(i) + δ(i)Ap(i − 1).

Moreover, the properties of (P)CG also allow one to evaluate the decrease of the χ-measure without computing it explicitly using Eq. (41) (such a computation would involve S−1 and therefore also direct and inverse spherical harmonic transforms). The evaluation proposed below is numerically stable; see the thorough analysis in Strakoš & Tichý (2005). There holds

∥x − s(i)∥2A = ∥x − s(i−1)∥2A − α(i−1) (z(i−1), r(i−1)).

Using the above discussion on the relationship between the energy norm and the χ-measure, we have

χ2(s(i)) = χ2(s(i−1)) − α(i−1) (z(i−1), r(i−1)).

After computing χ2(s(0)) (for the zero initial approximation, s(0) = 0, equal to mt N−1 m), we can therefore evaluate the χ-measure in every PCG iteration using already computed scalar quantities, without any additional spherical harmonic transforms.
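Putting the two ingredients together, a numpy sketch of Algorithm A.1 with the recursive update of Ap(i) and the scalar χ2 tracking might read as follows. Here apply_Cinv and apply_D are assumed callables applying C−1 and D (the former hiding the pair of spherical harmonic transforms), and chi2_0 = mtN−1m is the initial value for s(0) = 0.

```python
import numpy as np

def pcg_wf(apply_Cinv, apply_D, b, chi2_0, iters):
    """PCG for (C - D) s = b with recursive A p and scalar chi^2 tracking."""
    s = np.zeros_like(b)
    r = b.copy()
    z = apply_Cinv(r)                    # the only preconditioner solve per iteration
    p = z.copy()
    Ap = r - apply_D(z)                  # A p^(0) = A z^(0) = (C - D) C^-1 r = r - D z
    rz = r @ z
    chi2 = chi2_0                        # chi^2(s^(0)), e.g., m^t N^-1 m for s^(0) = 0
    for _ in range(iters):
        alpha = rz / (p @ Ap)
        chi2 -= alpha * rz               # chi^2(s^(i)) = chi^2(s^(i-1)) - alpha (z, r)
        s += alpha * p
        r -= alpha * Ap
        z = apply_Cinv(r)
        rz_new = r @ z
        delta = rz_new / rz
        p = z + delta * p
        Ap = (r - apply_D(z)) + delta * Ap   # A p^(i) = A z^(i) + delta^(i) A p^(i-1)
        rz = rz_new
    return s, chi2
```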

Appendix B: Proof of convergence of the messenger-field method

In this appendix we prove that the messenger-field method for the Wiener filter converges (asymptotically). Following the discussion in Sect. 2.3, we prove the convergence by showing that the eigenvalues of C−1D are, in absolute value, smaller than unity. First, we note that, since T = τI,

C−1 D = [(S−1 + T−1)−1] [T−1 (N̄−1 + T−1)−1 T−1]

is given by a multiplication of two symmetric matrices (in brackets).

Algebraic manipulations then yield

C−1 D = [S (S + τI)−1] [N̄ (N̄ + τI)−1];

see also Elsner & Wandelt (2013, Eq. (5)). Since S and N̄ are symmetric positive semidefinite and τ > 0, the eigenvalues of each of the matrices in brackets lie in the interval [0, 1). For a symmetric matrix B there holds ∥B∥2 = ρ(B).

Finally,

ρ(C−1 D) ≤ ∥S (S + τI)−1∥2 ∥N̄ (N̄ + τI)−1∥2 < 1.

In order to exhibit the close relationship between the derivation of the messenger field and Schur complement methods, we present an alternative proof below. An analogous derivation can also be used to prove the convergence of the MF in the map-making application.

We start by rewriting Eq. (38) as the system

𝔸 (t, s)t = (N̄−1 m, 0)t, 𝔸 ≡ [ T−1 + N̄−1, −T−1 ; −T−1, S−1 + T−1 ].

The reduced system (after the elimination of the messenger field t) then corresponds to forming the Schur complement 𝕊 of 𝔸,

𝕊 ≡ (S−1 + T−1) − T−1 (T−1 + N̄−1)−1 T−1,

and solving

𝕊 s = T−1 (T−1 + N̄−1)−1 N̄−1 m.

The MF iterations are obtained by multiplying (preconditioning) the above system by (S−1 + T−1)−1 from the left.

Now we show the bounds on the eigenvalues of

(S−1 + T−1)−1 𝕊 = I − (S−1 + T−1)−1 T−1 (T−1 + N̄−1)−1 T−1 = I − C−1 D.

Since 𝔸 is an SPD matrix, its Schur complement 𝕊 is also SPD. Moreover, the spectrum of (S−1 + T−1)−1𝕊 satisfies

λ((S−1 + T−1)−1 𝕊) = λ((S−1 + T−1)−1/2 𝕊 (S−1 + T−1)−1/2),

and therefore the eigenvalues of (S−1 + T−1)−1𝕊 are positive. Plugging the formula for the Schur complement 𝕊 into the above equation, we have

λ(C−1 D) = λ(I − (S−1 + T−1)−1 𝕊) = 1 − λ((S−1 + T−1)−1 𝕊) < 1.

The matrix D = T−1 (T−1 + N̄−1)−1 T−1 is symmetric positive semidefinite. Altogether,

0 ≤ λ(C−1 D) < 1.

Consequently,

ρ(C−1 D) < 1.

Finally, we show that the asymptotic convergence of the error in the Euclidean norm ∥ϵ(i)∥, which is assured by the fact that ρ(C−1D) < 1 (see, e.g., Saad 2003, Sect. 4.2), proves the asymptotic convergence of the error ϵ(i) in any norm ∥ ⋅ ∥*. Here we use the equivalence of norms on finite-dimensional spaces; see, e.g., Horn (2013, Corollary 5.4.6 and Definition 5.4.7). In particular, given any norm ∥ ⋅ ∥*, there exist positive constants c*, C*, such that

c* ∥v∥ ≤ ∥v∥* ≤ C* ∥v∥, for all vectors v.

Consequently,

∥ϵ(i)∥* ≤ C* ∥ϵ(i)∥ → 0, as i → ∞.

