A&A 434, 795-800 (2005)
DOI: 10.1051/0004-6361:20035754

Multiple-image deblurring with spatially-variant point spread functions[*]

R. Vio1,2 - J. Nagy3 - W. Wamsteker4


1 - Chip Computers Consulting s.r.l., Viale Don L. Sturzo 82, S.Liberale di Marcon, 30020 Venice, Italy
2 - ESA-VILSPA, Apartado 50727, 28080 Madrid, Spain
3 - Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322, USA
4 - ESA-VILSPA, Apartado 50727, 28080 Madrid, Spain

Received 27 November 2003 / Accepted 3 January 2005

Abstract
We generalize a reliable and efficient algorithm to deal with the case of spatially-variant PSFs. The algorithm was developed in the context of a least-squares (LS) approach to estimate the image corresponding to a given object when a set of observed images is available, each with a different and spatially-invariant PSF. Noise is assumed to be additive and Gaussian. The proposed algorithm allows the use of the classical LS single-image deblurring techniques for the simultaneous deblurring of the observed images, with obvious advantages both for computational cost and for ease of implementation. Its performance and limitations, also in the case of Poissonian noise, are analyzed through numerical simulations. In an appendix we also present a novel, computationally efficient deblurring algorithm that is based on a Singular Value Decomposition (SVD) approximation of the variant PSF and that can be used with any standard spatially-invariant direct deblurring algorithm. The corresponding Matlab code is made available.

Key words: methods: data analysis - methods: statistical - techniques: image processing

1 Introduction

Vio et al. (2004) presented a computationally efficient solution for the problem of simultaneous deblurring of a set of observed images (of a fixed object), where each observed image is degraded by a different spatially-invariant point spread function (PSF) and the noise is additive and Gaussian. The most serious limitation of the other techniques available in the literature is the use of algorithms that work cyclically on one image at a time (see Bertero & Boccacci 2000). For this reason they are computationally expensive and demanding of computer memory. The main idea of Vio et al. (2004) is to combine the observed images into a single image containing all the relevant information. In this way, apart from the obvious benefit concerning the memory requirements, it is possible to improve the convergence rate of Least-Squares (LS) iterative algorithms. Moreover, there is the additional benefit that standard algorithms (and software) for image restoration can be used.

There are important fields where potentially this methodology can be exploited. An example is the multi-frequency observations of the Cosmic Microwave Background (CMB) obtained from satellites such as PLANCK. Although some other physical components are present at the microwave frequencies, the images taken at high galactic latitudes are expected to be almost entirely dominated by the CMB contribution over a large range of observing frequencies (e.g., see Stolyarov et al. 2002). Since the PSFs corresponding to the images obtained at the various frequencies can be quite dissimilar, simultaneous deblurring can be useful to improve the extraction of CMB information from the data. Unfortunately, since the PSF of PLANCK's instruments is expected to be non-circular and, because of the observational strategy, to rotate during the observations (Burigana & Saez 2003; Arnau et al. 2002), the computational advantages provided by the methodology of Vio et al. (2004) cannot be exploited.

A similar situation can also hold for the restoration of images that will be obtained by the Large Binocular Telescope (LBT). This instrument consists of two $8.4~{\rm m}$ mirrors on a common mount with a spacing of $14.4~{\rm m}$ between the centers (Angel et al. 1998). For a given orientation of the telescope, the diffraction-limited resolution along the center-to-center baseline is equivalent to that of a $22.8~{\rm m}$ mirror, while the resolution along the perpendicular direction is that of a single $8~{\rm m}$ mirror. A possible way to obtain an image with an improved and uniform spatial resolution is to simultaneously deconvolve the images taken with different orientations of the telescope. However, because of the different corrections of the adaptive optics across the image domain, the PSF for LBT can be space-variant (Carbillet et al. 2002; Bertero & Boccacci 2000).

To our knowledge, multiple-image deblurring in the case of a spatially-variant PSF has not been considered in the current literature. Given that the single-image technique undoubtedly offers some important benefits, in the present paper we extend it to deal with the case of PSFs that change across the images. In particular, in Sect. 2 we formalize the problem and propose an efficient solution in Sect. 3. In Sect. 4 its performance is studied through numerical experiments. In Sect. 5 we close with final comments and conclusions.

   
2 Formalization of the problem

Suppose one has p observed images, gj(x,y), $j=1,2, \ldots,p$, $x, y \in \mathbb{R}$, of a fixed two-dimensional object f0(x,y), each of which is degraded by a different blurring operator. In the case of a generic blurring operator, gj(x,y) can be modeled as

 \begin{displaymath}
g_j(x,y) = \iint K_j (x,y,x',y') f_0(x',y') {\rm d}x' {\rm d}y',
\end{displaymath} (1)

where, for each j, Kj(x,y,x',y') is a (possibly) spatially-variant PSF.

In practice, the experimental images are discrete $N \times N$ arrays of pixels (for simplicity, we assume square images) contaminated by noise. Therefore model (1) has to be recast in the form

 \begin{displaymath}
g_j(m,n) = \sum_{m',n'=0}^{N-1} K_j(m,n,m',n') f_0(m',n') + w_j(m,n),
\end{displaymath} (2)

where $0 \leq m, n < N$, and wj(m,n) is additive white noise. For the moment, we assume constant standard deviations across the images: $\sigma_{w_j}=\sigma_w$. The image restoration problem of interest is to find an estimate f(m,n) of f0(m,n) given the observed images gj(m,n) and the known PSFs  Kj(m,n,m',n').

In the case of a spatially-invariant PSF, with a support much smaller than the image, model (2) can be well approximated by a cyclic convolution

 \begin{displaymath}
g_j(m,n) = \sum_{m',n'=0}^{N-1} K_j(m - m',n - n') f_0(m',n') + w_j(m,n),
\end{displaymath} (3)

or equivalently

 \begin{displaymath}
\widehat g_j(m,n) = \widehat K_j(m,n) \widehat f_0(m,n) + \widehat w_j(m,n),
\end{displaymath} (4)

where the symbol " $~\widehat{}~$'' denotes the two-dimensional Discrete Fourier Transform (DFT). One approach to compute an estimate of f0(m,n) is to pose the image restoration problem as a least-squares (LS) problem, which is equivalent to a maximum likelihood approach in the case of white Gaussian noise. In order to outline the LS approach, it is helpful to use matrix-vector notation: Eq. (3) can be rewritten as

 \begin{displaymath}
\vec{g}_j = \vec{A}_j \vec{f}_0 + \vec{w}_j,
\end{displaymath} (5)

where the $N^2 \times 1$ arrays $\vec{g}_j$, $\vec{f}_0$ and $\vec{w}_j$ are obtained by column stacking gj(m,n), f0(m,n), and wj(m,n), respectively. $\vec{A}_j$ is the $N^2 \times N^2$ block-circulant matrix defined by cyclic convolution with the jth PSF. The LS estimate $\vec{f}$ can then be expressed in the form:

 \begin{displaymath}
\vec{f}_{{\rm LS}} = \arg\min_{\hskip-5mm\vec{f}} ~\sum_{j=1}^{p} \Vert \vec{A}_j \vec{f}- \vec{g}_j \Vert^2 .
\end{displaymath} (6)

It is not difficult to see that $\vec{f}_{{\rm LS}}$ is the solution of the normal equations

 \begin{displaymath}
\sum_{j=1}^p \vec{A}_j^T \vec{A}_j \vec{f}= \sum_{j=1}^p \vec{A}_j^T \vec{g}_j,
\end{displaymath} (7)

where $\vec{A}_j^T$ denotes the transpose of the block-circulant matrix $\vec{A}_j$. The DFT of both sides of Eq. (7) provides

 \begin{displaymath}
\widehat f(m,n) \sum_{j=1}^p \vert \widehat K_j(m,n) \vert^2 = \sum_{j=1}^p \widehat K_j^*(m,n) \widehat g_j(m,n),
\end{displaymath} (8)

with the symbol " * '' denoting complex conjugation.

The classical unbiased LS estimate given by the inverse DFT of

 \begin{displaymath}
\widehat f(m,n) = \frac{\sum_j \widehat K_j^*(m,n) \widehat g_j(m,n)}{\sum_j \vert \widehat K_j(m,n) \vert^2}
\end{displaymath} (9)

can be very unstable. In fact, for white noise, the mean square error (MSE) of Eq. (9) is

 \begin{displaymath}
{\rm E} [ \Vert \vec{f}- \vec{f}_0 \Vert^2 ] = \sigma^2_w \sum_{m,n} \frac{1}{\sum_j \vert \widehat K_j(m,n) \vert^2}\cdot
\end{displaymath} (10)

The quantities $ \widehat K_j(m,n) $ can be very close to zero at high frequencies because the PSFs are almost band-limited in practice. Consequently, the MSE can be very large, which indicates that the LS estimate is unstable. This implies the necessity of stabilizing the solution by limiting the values of such terms. A classic approach is to exploit the semiconvergence property of the iterative inversion algorithms. The use of such algorithms is indicated whenever it is necessary to incorporate constraints on the solution (e.g., the non-negativity). The main limitation of using the normal equations with the iterative algorithms is that the resulting deblurring problem is much more ill-conditioned than implied by each of the models (3), because the coefficients $\widehat K_j(m,n)$ in Eq. (8) are squared. The most important consequence is that the convergence of the iterative algorithms can be remarkably slowed, with obvious consequences for the computational cost.
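
For illustration, the estimate of Eq. (9) is straightforward to compute with FFTs. The following Python/NumPy sketch (ours, not part of the original paper, whose code is in Matlab) does exactly that for a list of observed images and PSFs; the parameter eps is a hypothetical safeguard, not in the text, and setting it to zero gives the plain, unstable estimate discussed above.

import numpy as np

def naive_multi_image_ls(images, psfs, eps=0.0):
    # Unregularized Fourier-domain LS estimate of Eq. (9).
    # 'eps' is a hypothetical small constant (not in the text) added to the
    # denominator; eps = 0 reproduces the plain, unstable estimate.
    num = np.zeros(images[0].shape, dtype=complex)
    den = np.zeros(images[0].shape)
    for g, k in zip(images, psfs):           # PSFs assumed FFT-wrapped (np.fft.ifftshift)
        K = np.fft.fft2(k)
        num += np.conj(K) * np.fft.fft2(g)   # sum_j K_j^* g_j
        den += np.abs(K) ** 2                # sum_j |K_j|^2
    return np.real(np.fft.ifft2(num / (den + eps)))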

In order to avoid this problem, Vio et al. (2004) proposed to use the normal equations (8) in the form

 
\begin{displaymath}
\widehat K_{j_0}(m,n) \widehat f(m,n) =
\frac{\widehat g_{j_0}(m,n) + \sum_{j \neq j_0} [\widehat K_j^*(m,n) \widehat g_j(m,n) / \widehat K_{j_0}^*(m,n)]}
{1 + \sum_{j \neq j_0} [\vert \widehat K_j(m,n) \vert^2 / \vert \widehat K_{j_0}(m,n) \vert^2]}
= \widehat{\zeta}(m,n),
\end{displaymath} (11)

where $\widehat K_{j_0}(m,n)$ is given by

\begin{displaymath}\widehat K_{j_0}(m,n) = \max [ \widehat K_1(m,n), \widehat K_2(m,n), \ldots, \widehat K_p(m,n) ].
\end{displaymath} (12)

Here, for each coordinate (m,n), the operator $\max[~]$ extracts the element in the array with the largest modulus, and j0 is the value of the corresponding index j. This equation shows that the multi-image LS deblurring problem (6) is equivalent to a classical one-image LS deblurring problem where the "observed'' image is $\widehat{\zeta}(m,n)$ (called the mean image) and the PSF is $\widehat K_{j_0}(m,n)$ (called the mean PSF). Model (11) permits one to improve the performance of the iterative algorithms. In fact, $\widehat{\zeta}(m,n)$ represents the DFT of the "observed'' image with the PSF whose energy distribution is maximally concentrated in the spatial domain. In other words, the conditioning of system (11) is much better than that of system (8) (this is because $\widehat K_{j_0}(m,n)$ is not obtained by squaring the coefficients $ \widehat K_j(m,n) $). It is well known that this means a faster convergence rate for the iterative algorithms (Vio et al. 2004). Moreover, the composition of the images gj(m,n) into the single image $\zeta (m,n)$ permits the use of standard software for the deblurring operation.
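
As an illustration of Eqs. (11) and (12) in the spatially-invariant case, the following NumPy sketch (ours, not the Matlab code accompanying the paper) builds the mean image and the mean PSF. It assumes equal noise levels and periodic boundary conditions, with the PSFs stored in FFT-wrapped form; in practice, frequencies where $\vert\widehat K_{j_0}(m,n)\vert$ is extremely small may need safeguarding.

import numpy as np

def mean_image_and_psf(images, psfs):
    # Mean image zeta(m,n) and mean PSF of Eqs. (11)-(12) for p observed
    # images with spatially-invariant PSFs (equal noise levels assumed).
    G = np.array([np.fft.fft2(g) for g in images])    # \widehat g_j
    K = np.array([np.fft.fft2(k) for k in psfs])      # \widehat K_j
    j0 = np.argmax(np.abs(K), axis=0)                 # Eq. (12): largest modulus per (m,n)
    Kj0 = np.take_along_axis(K, j0[None], axis=0)[0]  # \widehat K_{j_0}(m,n)
    # Eq. (11); including the j = j0 term in both sums leaves the result unchanged
    num = (np.conj(K) * G).sum(axis=0) / np.conj(Kj0)
    den = (np.abs(K) ** 2).sum(axis=0) / np.abs(Kj0) ** 2
    zeta = np.real(np.fft.ifft2(num / den))           # mean image
    psf_mean = np.real(np.fft.ifft2(Kj0))             # mean PSF
    return zeta, psf_mean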

From Eq. (11) it is evident that the single-image $\zeta (m,n)$ is obtained in the Fourier domain. From the computational point of view, this is quite advantageous since it permits us to exploit fast implementations of the DFT. However, in the case of spatially-variant PSFs some problems arise because model (2) has no representation in the Fourier domain. Consequently, model (11) has to be modified in such a way to take into account the changes of the PSF across the images. In the next section, a possible solution is provided.

   
3 The construction of the single-image in the case of a spatially-variant PSF

If $\vec{s}$ is an image,

\begin{displaymath}\vec{s}= \sum_{i=1}^{N_D}\vec{D}_i \cdot \vec{s},
\end{displaymath} (13)

where "$\cdot$'' denotes element-wise multiplication, and $\vec{D}_i$ are $N \times N$ matrices with entries in the range [0, 1] such that $\sum_i D_i(m,n) = 1$ for each value of (m,n). Now, since the DFT is a linear operator, if

\begin{displaymath}\vec{\widehat{s}}= \textrm{DFT}\left[\vec{s}\right],
\end{displaymath} (14)

we have that

\begin{displaymath}\vec{\widehat{s}}= \textrm{DFT}\left[\sum_{i=1}^{N_D} \vec{D}_i \cdot \vec{s}\right] = \sum_{i=1}^{N_D} \vec{\widehat{s}}_i,
\end{displaymath} (15)

where $\vec{\widehat{s}}_i = \textrm{DFT} [ \vec{D}_i \cdot \vec{s}]$. By using this equation with $\vec{s}= \vec{g}_j$, Eq. (11) can be written as
 
\begin{displaymath}
\widehat{\zeta}(m,n) = \sum_{i=1}^{N_D} \frac{\widehat g_{i j_0}(m,n) + \sum_{j \neq j_0} [\widehat K_j^*(m,n) \widehat g_{ij}(m,n) / \widehat K_{j_0}^*(m,n)]}
{1 + \sum_{j \neq j_0} [\vert \widehat K_j(m,n) \vert^2 / \vert \widehat K_{j_0}(m,n) \vert^2]},
\end{displaymath} (16)

with $\vec{\widehat{g}}_{ij} = \textrm{DFT}\left[ \vec{D}_i \cdot \vec{g}_j \right]$. Of course, this version of Eq. (11) is of no practical interest. However, if the matrices Di(m,n) have non-zero entries only in square or rectangular regions of the image domain, and if in the ith region the jth image gj(m,n) has its own spatially-invariant PSF Kij(m,n) (i.e., the blur is spatially-invariant in the subregions corresponding to the images $\vec{g}_{ij}$), then Eq. (16) can be written in the form
 
\begin{displaymath}
\widehat{\zeta}(m,n) = \sum_{i=1}^{N_D} \frac{\widehat g_{i j_0}(m,n) + \sum_{j \neq j_0} [\widehat K_{i j}^*(m,n) \widehat g_{ij}(m,n) / \widehat K_{i j_0}^*(m,n)]}
{1 + \sum_{j \neq j_0} [\vert \widehat K_{i j}(m,n) \vert^2 / \vert \widehat K_{i j_0}(m,n) \vert^2]}
= \sum_{i=1}^{N_D} \widehat{\zeta}_i(m,n).
\end{displaymath} (17)

In practical applications the PSFs change smoothly across the image. In this case, the image given by Eq. (17) provides an approximation whose accuracy is regulated by ND. At this point, the problem becomes the definition of matrices Di(m,n) that permit satisfactory results to be obtained.
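
A direct transcription of Eq. (17) is sketched below, again in NumPy and under our own simplifying assumptions (periodic boundary conditions within each masked piece, regional PSFs supplied explicitly): the spatially-invariant combination of Eq. (11) is applied to each set of pieces $\vec{D}_i \cdot \vec{g}_j$ and the results are summed.

import numpy as np

def variant_mean_image(images, regional_psfs, masks):
    # Single image zeta(m,n) of Eq. (17).
    #   images        : list of p observed N x N images g_j
    #   regional_psfs : regional_psfs[i][j] = spatially-invariant PSF K_ij
    #                   valid in the ith subregion of the jth image
    #   masks         : list of N_D masks D_i with sum_i D_i == 1
    zeta = np.zeros_like(images[0], dtype=float)
    for i, D in enumerate(masks):
        G = np.array([np.fft.fft2(D * g) for g in images])        # \widehat g_{ij}
        K = np.array([np.fft.fft2(k) for k in regional_psfs[i]])  # \widehat K_{ij}
        j0 = np.argmax(np.abs(K), axis=0)
        Kj0 = np.take_along_axis(K, j0[None], axis=0)[0]
        num = (np.conj(K) * G).sum(axis=0) / np.conj(Kj0)
        den = (np.abs(K) ** 2).sum(axis=0) / np.abs(Kj0) ** 2
        zeta += np.real(np.fft.ifft2(num / den))                  # zeta_i(m,n)
    return zeta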

The simplest form for Di(m,n) is obtained through piecewise constant interpolation, when the permitted values are either 0 or 1. In this case, the non-zero entries constitute a partitioning of the image domain. Although in many situations such a choice provides satisfactory results, sometimes the resulting  $\zeta (m,n)$ suffers from blocking artifacts that are problematic for the deblurring operation. A more sophisticated alternative, able to avoid this kind of problem, is based on a linear interpolation approach (see Nagy & O'Leary 1998). The basic idea is simpler to understand in the one-dimensional case.

If a one-dimensional domain is partitioned into ND subregions of dimension Ns, a useful set of overlapping masks, with dimension  $2 \times N_s$, is shown in Fig. 1. In fact, since they go smoothly to zero, the boundary artifacts that affect each of the $\widehat s_i(m,n)$ are strongly weakened. Moreover, the overlap works in such a way as to further reduce these effects. The only exception is represented by the regions on the border of the domain.

In the two-dimensional case the argument is similar although made slightly more difficult by the larger number of masks that overlap at a given point of the image domain, and by the fact that there are two different situations for the subregions on the border of the frame. However, it is not difficult to see that, independent of the partitioning of the image domain, the resulting masks must have fixed values at the points marked in Fig. 1. Computationally, they can be obtained by interpolating these points through a bilinear interpolation (Press et al. 1992).
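
One possible construction of such masks is sketched below: one-dimensional "tent'' weights equal to 1 at a subregion centre, falling linearly to 0 at the neighbouring centres and held at 1 beyond the first and last centres, are combined by outer products into two-dimensional masks that sum to unity. The centre placement and border handling are our own assumptions and may differ in detail from Fig. 1 and from Nagy & O'Leary (1998).

import numpy as np

def bilinear_masks(N, nd):
    # 2-D interpolation masks D_k for an nd x nd partition (nd >= 2) of an
    # N x N image, built from 1-D tent weights so that sum_k D_k == 1.
    centers = (np.arange(nd) + 0.5) * N / nd       # assumed subregion centres
    x = np.arange(N, dtype=float)

    def tent(i):
        # 1 at centre i, 0 at the neighbouring centres, constant 1 outside
        # the first/last centre (border subregions keep full weight there)
        if i == 0:
            return np.interp(x, [centers[0], centers[1]], [1.0, 0.0], left=1.0, right=0.0)
        if i == nd - 1:
            return np.interp(x, [centers[-2], centers[-1]], [0.0, 1.0], left=0.0, right=1.0)
        return np.interp(x, centers[i - 1:i + 2], [0.0, 1.0, 0.0], left=0.0, right=0.0)

    w = [tent(i) for i in range(nd)]
    return [np.outer(w[i], w[j]) for i in range(nd) for j in range(nd)]

# sanity check: the masks form a partition of unity
assert np.allclose(sum(bilinear_masks(400, 5)), 1.0)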

   
3.1 Two additional problems

So far we have assumed that the noise level is the same in all of the images ( $\sigma_{w_j}=\sigma_w$), the condition under which the combination of different images provides the most interesting results. Moreover, this requirement is often satisfied in practice. The extension of the method to images with different noise levels is straightforward (Vio et al. 2004). If the different $\sigma_{w_j}$ are known, one may account for the different noise levels by replacing Eq. (17) with

 
\begin{displaymath}
\widehat{\zeta}(m,n) = \sum_{i=1}^{N_D} \frac{\widehat{\varrho}_{i j_0}(m,n) + \sum_{j \neq j_0} [\widehat {\cal K}_{i j}^*(m,n) \widehat{\varrho}_{ij}(m,n) / \widehat {\cal K}_{i j_0}^*(m,n)]}
{1 + \sum_{j \neq j_0} [\vert \widehat {\cal K}_{i j}(m,n) \vert^2 / \vert \widehat {\cal K}_{i j_0}(m,n) \vert^2]}
= \sum_{i=1}^{N_D} \widehat{\zeta}_i(m,n).
\end{displaymath} (18)

Here, $\widehat {\cal K}_{ij}(m,n) = \widehat K_{ij}(m,n)/\sigma_{w_j}$ and $\widehat{\varrho}_{ij}(m,n) = \widehat g_{ij}(m,n) / \sigma_{w_j}$. The rest of the procedure remains unaltered.

A final issue concerns images that present significant features close to their boundaries. This causes the appearance of edge effects. In the present context, the situation is made more troublesome by the fact that the masks Di(m,n) can overlap, thus causing a propagation of artifacts to the central regions of  $\widehat{\zeta}_i(m,n)$. For this reason, as explained in Vio et al. (2004) (see also Von der Lühe 1993), before the calculation of $\widehat{\zeta}_i(m,n)$, all the images gj(m,n) should be windowed through the matrix

 \begin{displaymath}
h(m,n) = \left\{
\begin{array}{ll}
0.25 \times \alpha \times \beta & 0 \le m,n < N_w \;\;{\rm or}\;\; N-N_w \le m,n < N; \\
1 & {\rm otherwise}.
\end{array}\right.
\end{displaymath} (19)

The parameters $\alpha = [1-\cos(m \pi/N_w)]$ and $\beta = [1-\cos(n \pi/N_w)]$ are chosen so that the pixels in the central subimage are not modified and the image has continuous first derivatives at the edges. This window approaches the classical two-dimensional Hanning window as $N_w \rightarrow N/2$ and tends to the rectangular window as $N_w \rightarrow 0$. The parameter Nw thus determines the filtering characteristics, in particular the frequency pass-band, of the window. Its "optimal'' value depends on many factors such as the noise level, the form of the PSF, and the specific realization of the process. For a Gaussian PSF, simulation results (Vio et al. 2003b) show that values of three or four times the dispersion of the PSF provide reasonably good results. The only unavoidable consequence of the procedure is that a border of thickness Nw has to be removed from the final deblurred image.
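
A window with the properties just described (a separable cosine taper over Nw pixels at each edge, equal to 1 in the central region, reducing to the Hanning and rectangular windows in the two limits) can be implemented as in the sketch below; this is our own transcription, consistent with the stated behaviour but not guaranteed to reproduce Eq. (19) in every detail.

import numpy as np

def cosine_taper_window(N, Nw):
    # Separable cosine-taper window h(m,n) = w(m) w(n): 1 in the centre,
    # rolling smoothly to 0 over Nw pixels at each edge.
    k = np.arange(N, dtype=float)
    w = np.ones(N)
    ramp = 0.5 * (1.0 - np.cos(k[:Nw] * np.pi / Nw))   # 0.5 [1 - cos(k pi / Nw)]
    w[:Nw] = ramp                                      # lower edge
    w[N - Nw:] = ramp[::-1]                            # upper edge (mirror image)
    return np.outer(w, w)

# usage: taper each observed image g_j before building zeta, e.g.
# g_windowed = cosine_taper_window(400, 40) * g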

   
4 Some numerical experiments

We present results of some numerical experiments to show the reliability and the good performance of the single-image approach for multiple-image deblurring in the case of spatially-variant PSFs. As claimed in the previous sections, in principle the deblurring of the image $\widehat{\zeta}(m,n)$ can be carried out by using any available algorithm able to deal with PSFs that are not constant across the images. Here we use two particular techniques. The first is based on an iterative method, a modified residual norm steepest descent (MRNSD) method that minimizes a least-squares fit to the data, using an exponential function reparameterization to enforce a non-negativity constraint; see Nagy & Strakos (2000) for details. The MRNSD method is equivalent to an approach studied by Kaufman (1993), but the numerical results reported here are obtained using the implementation described by Nagy & Strakos (2000), which views the approach as a preconditioned steepest descent method. In many respects (such as computational cost per iteration) MRNSD is similar to the Richardson-Lucy algorithm (Vogel 2002). The difference between the two methods is that MRNSD is based on a least-squares criterion, whereas Richardson-Lucy is based on a maximum-likelihood criterion. Furthermore, MRNSD typically converges much more quickly than the Richardson-Lucy method. In either case, the main cost of the algorithm is the matrix-vector multiplications at each iteration. Efficient matrix-vector multiplication with the space-variant PSF, using the sectioning ideas given above, is described in Nagy & O'Leary (1998). The second approach uses standard Tikhonov regularization (Vogel 2002), efficiently implemented as a filtering-based method (Vio et al. 2003a), using a novel Singular Value Decomposition (SVD) approximation of the variant PSF (see Appendix A). In this last case, the use of the single image $\zeta (m,n)$ has no effect on the efficiency of the algorithm, but it makes coding the scheme much easier. As is well known, direct methods are often much more efficient than iterative methods. However, given the difficulty of incorporating constraints on the solution (e.g., non-negativity), they can be expected to provide results of inferior quality.
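
For reference, the sketch below gives our own simplified reading of an MRNSD-type iteration (following Nagy & Strakos 2000); it is not the RestoreTools implementation used for the experiments. The callables A and At, applying the blurring operator and its transpose (for instance through the space-variant multiplication of Nagy & O'Leary 1998), are assumptions of this sketch.

import numpy as np

def mrnsd(A, At, b, x0, n_iter=100):
    # Steepest descent for min ||A x - b||^2 with non-negativity enforced
    # through the scaling d = -x * grad (equivalent to an exponential
    # reparameterization of the unknowns).
    x = np.maximum(x0, 1e-12)                  # start strictly positive
    r = b - A(x)
    for _ in range(n_iter):
        g = -At(r)                             # gradient of 0.5 ||A x - b||^2
        d = -x * g                             # scaled, non-negativity preserving direction
        Ad = A(d)
        denom = np.dot(Ad.ravel(), Ad.ravel())
        if denom == 0.0:
            break
        alpha_uc = -np.dot(g.ravel(), d.ravel()) / denom   # unconstrained step length
        neg = d < 0
        alpha_bd = np.min(-x[neg] / d[neg]) if np.any(neg) else np.inf
        alpha = min(alpha_uc, alpha_bd)        # do not step across the boundary x >= 0
        x = x + alpha * d
        r = r - alpha * Ad
    return x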

For each experiment we use a set of eight images of $400 \times 400$ pixels. Figure 2 shows the PSF corresponding to a set of pixels uniformly distributed across the first image. All the displayed PSFs are two-dimensional Gaussians with dispersion set to twelve pixels along the major axis and to four pixels along the minor axis. For the remaining pixels, the PSF is given by a bilinear interpolation of the displayed pattern. The PSFs of the other images are obtained by rotating this PSF by $22.5^{\circ}, 45^{\circ}, 67.5^{\circ}, 90^{\circ}, 112.5^{\circ}, 135^{\circ}, 157.5^{\circ}$, respectively. The non-stationary pattern of the PSF has been deliberately chosen to be quite peculiar in order to test the capabilities of the algorithm in extreme situations. The images $\vec{g}_j$ have been obtained by the direct computation of the discrete model (2).

Figures 3 and 4 show the deblurring results for a set of 169 identical point-like objects uniformly distributed across the image. Noise is additive and Gaussian with a standard deviation of $2\%$ of the maximum value of the blurred images. We have chosen these particular examples since, the sources being non-smooth functions, deblurring is a difficult problem. The image $\widehat{\zeta}(m,n)$ has been calculated by partitioning the observed images into 25 square subregions (i.e., ND=25), and by using piecewise constant interpolation for Di(m,n). For comparison, the deblurring of the first image of the set (that with the PSF shown in Fig. 2) is also presented; an MRNSD algorithm for spatially-invariant PSFs has been used. In each case the improvement due to the use of the eight images is evident. The superiority of the results provided by the MRNSD approach is due to the fact that the Tikhonov method does not enforce a non-negativity constraint on the solution.

Although the use of piecewise constant interpolation for Di(m,n) can provide reasonable results, very often the linearly interpolated version is able to remarkably improve them. Figures 5-7 show the results of an experiment, carried out under the same conditions as the previous one, that confirms the superiority of the interpolated method (here Ns=80).

In various situations the choice of the linear interpolation mask is a necessity. When the constant interpolation mask is applied to the object shown in Fig. 8 (NB: here the noise is additive and Gaussian with a standard deviation equal to $2\%$ of the dispersion of the blurred images), the results are not so good, since $\widehat{\zeta}_i(m,n)$ contains blocking artifacts that are later magnified by the deblurring operation. Consequently, the use of the linearly interpolated mask becomes necessary. Although this approach produces better results, some problems still remain in the form of fringes that contaminate $\zeta_i(m,n)$. This is linked to edge effects, because the original image contains significant features close to its boundaries. For this reason, before the calculation of $\widehat{\zeta}_i(m,n)$, all the images gj(m,n) should be windowed as explained at the end of Sect. 3.1. Figure 9 shows the results obtained after this operation with Ns=80 and Nw=40. A border of 40 pixels was removed because of the windowing operation (see Vio et al. 2004). Again, the improvement due to the use of the eight images is evident. In this specific case, since the non-negativity of the solution is not as critical as in the experiments presented above, Tikhonov regularization is able to provide results as good as those provided by MRNSD, but at a much lower computational cost (and thus much more quickly).

Figures 10 and 11 show the results that can be obtained in the presence of Poissonian noise. Here the simulated images are the same as those shown in Figs. 6 and 9. The only difference is that Gaussian noise is not added to the blurred images; instead, the images are realizations of non-stationary Poissonian processes whose local mean is given by the values of the pixels in the blurred images themselves (peak signal-to-noise ratio = 40$~{\rm dB}$). In this way it is possible to get an idea of the degradation in the quality of the deblurring operation due to the presence of non-Gaussian noise. The visual inspection of these figures, as well as the rms of the true residuals (i.e., $\Vert \vec{f}-\vec{f}_0 \Vert / \sqrt{N}$), clearly indicates that, if present, such degradation is very small. This conclusion is not unexpected; various works have already shown that LS methods are able to provide results very similar (and often almost identical) to those obtainable with maximum-likelihood techniques that take into account the Poissonian nature of the noise (e.g., see Bertero & Boccacci 2000). However, the possible worsening of the quality of the results is not due to the use of the mean image $\zeta (m,n)$, but is intrinsic to the adoption of an LS model with non-Gaussian noise.
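
For completeness, one common way to generate Poissonian realizations at a prescribed peak signal-to-noise ratio is sketched below; the exact recipe used for Figs. 10 and 11 is not spelled out in the text, so this is only our assumption.

import numpy as np

def poisson_realization(blurred, peak_snr_db=40.0, rng=None):
    # Scale the blurred image so that the S/N at its maximum, sqrt(peak counts),
    # matches the requested value (40 dB -> 1e4 counts), draw Poisson counts,
    # then scale back to the original units.
    rng = np.random.default_rng() if rng is None else rng
    peak_counts = (10.0 ** (peak_snr_db / 20.0)) ** 2
    scale = peak_counts / blurred.max()
    return rng.poisson(np.maximum(blurred, 0.0) * scale) / scale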

Figure 12 shows the results obtained when the proposed deblurring method is applied to simulated observations of the PLANCK Low Frequency Instrument (LFI). The images have been computed as explained in Vio et al. (2003a). The region we consider is a square patch of $370 \times 370$ pixels with a side of about  $20^{\circ}$, centered at $l = 90^{\circ}$, $b = 45^{\circ}$ (Galactic coordinates). The latitude is high enough that the Cosmic Microwave Background (CMB) dominates over the foreground. The PLANCK-LFI instrument works at frequencies of 30, 44, 70, and  $100~{\rm GHz}$, but we consider only the last three since at these frequencies the contamination by the foreground is negligible. We assume nominal noise and angular resolution[*]. The maps are blurred through Gaussian PSFs with elliptical symmetry (axial ratio 1:1.3) and appropriate FWHMs (i.e., for the major axis, $\approx$ $23^{\prime}$ at  $44~{\rm GHz}$, $\approx$ $14^{\prime}$ at  $70~{\rm GHz}$, $\approx$ $10^{\prime}$ at $100~{\rm GHz}$). Although the PSFs are not expected to change their shape during the observations, it is not improbable that they may suffer a rotation. Since the importance of such a phenomenon is not yet well determined, for all the frequencies we assume a spatially-variant PSF similar to that shown in Fig. 2, which certainly represents a worst-case scenario. Finally, each map is summed with simulated white noise with the rms level expected for the considered channels. Since we choose to work with a pixel size of about 3.5 arcmin, the noise rms values are 0.049, 0.042, and 0.043 mK in antenna temperature at 44, 70, and $100~{\rm GHz}$, respectively. The assumption of white noise is common in this kind of experiment (e.g., see Vio et al. 2003a; Hobson et al. 1998; Stolyarov et al. 2002). The CMB maps can have negative values, and this prevents the use of methods exploiting the non-negativity constraint on the solution. For this reason, only the Tikhonov deblurring method has been used. Because of the edge effects, the image $\zeta(m,n)$ has been obtained through the windowing procedure indicated above with Ns=60 and Nw=30. Using the three maps instead of only the one with the highest resolution provided final results whose rms of the true residuals is improved by about $1\%$.

   
5 Conclusions

We have considered the problem of simultaneously deblurring a set of images of a fixed object when the corresponding PSFs are space-variant and different from each other. We have developed a method, based on an LS approach, to efficiently transform a multi-image deblurring problem into a single-image one. This approach provides substantial savings in computational requirements, and can be implemented using numerical LS algorithms for single-image problems. These conclusions are confirmed by our numerical experiments. We have also presented a novel, computationally efficient, direct deblurring algorithm, which is based on a Singular Value Decomposition (SVD) approximation of the space-variant PSF coupled with Tikhonov regularization.

Specific implementations used for the experiments are available in the Matlab package RestoreTools (Nagy et al. 2004b)[*].

References

 

  Online Material

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg1.eps}
\end{figure} Figure 1: One- and two-dimensional bases for the computation of the masks using the linear interpolation approach. In the one-dimensional case the dashed line indicates the limits of each subregion, whereas in the two-dimensional case they indicate the limits of the masks. Symbols $\times $ denote the center of a subregion, whereas the values of the masks at the marked points are given by $\bullet = 1$, $\blacklozenge =0.5$, $\blacktriangle =0.25$, $\blacksquare =0$.
  \begin{figure}
\par\includegraphics[width=15cm,clip]{0754Fg2.eps}
\end{figure} Figure 2: Spatial variation of the PSF across the first image of the set used in the numerical experiments (see text). The pair of numbers in the header of each panel indicates the coordinates of the pixels to which the displayed PSF corresponds. In the intermediate pixels the PSFs are obtained through a linear interpolation of the displayed ones. Each PSF is given by a two-dimensional Gaussian function with a dispersion of 12 pixels along the major axis and 4 pixels along the minor axis.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg3.eps}
\end{figure} Figure 3: Original image, blurred version and blurred + noise version of the first image of the set (see text), and corresponding mean PSF. The images are $400 \times 400$ pixels.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg4.eps}
\end{figure} Figure 4: Mean image $\zeta (m,n)$ (with Di(m,n) obtained using piecewise constant interpolation), MRNSD and Tikhonov deblurred versions of  $\zeta (m,n)$, and MRNSD deblurred version of the first image of the set (see text) for the image shown in Fig. 3. For the MRNSD method the images shown are those obtained after 300 iterations of the algorithm, whereas for Tikhonov it is the one with the smallest rms of the true residuals; for the mean image and first image of the set the rms of the true residuals are, respectively, 0.028 (MRNSD) - 0.032 (Tikhonov), and 0.032.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg5.eps}
\end{figure} Figure 5: Original image, blurred version and blurred + noise version of the first image of the set (see text), and corresponding mean PSF. The images are $400 \times 400$ pixels.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg6.eps}
\end{figure} Figure 6: Mean image $\zeta (m,n)$ (with Di(m,n) obtained using piecewise constant interpolation), MRNSD and Tikhonov deblurred versions of  $\zeta (m,n)$, and MRNSD deblurred version of the first image of the set (see text) for the image shown in Fig. 5. The images shown are the ones with the smallest rms of the true residuals; for the mean image and first image of the set these are, respectively, 0.082 (MRNSD) - 0.088 (Tikhonov), and 0.109.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg7.eps}
\end{figure} Figure 7: Mean image $\zeta (m,n)$ (with Di(m,n) obtained using linear interpolation), MRNSD and Tikhonov deblurred versions of $\zeta (m,n)$, and MRNSD deblurred version of the first image of the set (see text) for the image shown in Fig. 5. The images shown are the ones with the smallest standard deviation of the true residuals; for the mean image and first image of the set these are, respectively, 0.076 (MRNSD) - 0.082 (Tikhonov) and 0.109.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg8.eps}
\end{figure} Figure 8: Original image, blurred version and blurred + noise version of the first image of the set (see text), and corresponding mean PSF. The images are $400 \times 400$ pixels, but only the central $320 \times 320$ pixels are shown (see text).

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg9.eps}
\end{figure} Figure 9: Mean image $\zeta (m,n)$ (with Di(m,n) obtained using linear interpolation), MRNSD and Tikhonov deblurred versions of $\zeta (m,n)$, and MRNSD deblurred version of the first image of the set (see text) for the image shown in Fig. 8. The images shown are the ones with the smallest rms of the true residuals; for the mean image and first image of the set these are, respectively, 0.036 (MRNSD) - 0.036 (Tikhonov), and 0.043.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg10.eps}
\end{figure} Figure 10: Mean image $\zeta (m,n)$ (with Di(m,n) obtained using linear interpolation), MRNSD and Tikhonov deblurred versions of $\zeta (m,n)$, and MRNSD deblurred version of the first image of the set (see text) for the image shown in Fig. 5 and Poissonian noise (peak signal to noise ratio = $40~{\rm dB}$). The images shown are the ones with the smallest rms of the true residuals; for the mean image and first image of the set these are, respectively, 0.076 (MRNSD) - 0.082 (Tikhonov) and 0.109.

  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg11.eps}
\end{figure} Figure 11: Mean image $\zeta (m,n)$ (with Di(m,n) obtained using linear interpolation), MRNSD and Tikhonov deblurred versions of $\zeta (m,n)$, and MRNSD deblurred version of the first image of the set (see text) for the image shown in Fig. 8 and Poissonian noise (peak signal to noise ratio = $40~{\rm dB}$). The images shown are the ones with the smallest rms of the true residuals; for the mean image and first image of the set these are, respectively, 0.037 (MRNSD) - 0.037 (Tikhonov), and 0.044 (NB. here the values of the pixels have been converted into photon counts).
  \begin{figure}
\par\includegraphics[width=14cm,clip]{0754Fg12.eps}
\end{figure} Figure 12: Original image, observed image at $100~{\rm GHz}$, mean image $\zeta (m,n)$ (with Di(m,n) obtained using linear interpolation), and Tikhonov deblurred versions of $\zeta (m,n)$.

    Appendix A: SVD approximation for spatially-variant blurs

In matrix notation, model (2) can be written in the form

\begin{displaymath}{\vec g}= \vec{\cal K}{\vec f}+ {\vec w}~,
\end{displaymath} (A.1)

where $\vec{\cal K}$ is a large $N^2 \times N^2$ ill-conditioned matrix which models the blurring phenomena, $\vec{f}$ is a vector of length N2 representing the true image, and $\vec{g}$ is a vector representing the observed image. Because the problem is ill-posed, regularization methods are needed to compute an approximation of $\vec{f}$. A useful tool in constructing and analyzing regularization methods is the singular value decomposition (SVD). The problem is that $\vec{\cal K}$ is a very large matrix, and it may not be computationally feasible to explicitly compute its SVD.

In Vio et al. (2003a,b) it has been shown that for spatially-invariant blurs, spectral decompositions using fast transforms (such as DFT and the Discrete Cosine Transform) can sometimes be used in place of the SVD, but this depends on the PSF and the boundary condition. If it is not appropriate to use fast transforms, then another approach we considered was to compute an approximation of the SVD from a Kronecker product factorization of  $\vec{\cal K}$.

For spatially-variant blurs, the situation is a bit more difficult since the fast transform-based decomposition methods are no longer a viable approach. However, here we show that the SVD approximation based on a Kronecker product factorization of $\vec{\cal K}$ can be extended to the case of space variant blurs. This work builds on ideas initially proposed by Kamm & Nagy (1998).

    A.1 Space Invariant Blurs

Suppose the blur is spatially-invariant, and let $\vec{P}$ be an image representing the PSF. Then it can be shown that if $\vec{P}$ is decomposed as:

\begin{displaymath}\vec{P}= \sum_{k=1}^r {\vec a}_k {\vec b}_k^T ,
\end{displaymath} (A.2)

then the blurring matrix, $\vec{\cal K}$, can be decomposed as

 \begin{displaymath}
\vec{\cal K}= \sum_{k=1}^r \vec{A}_k \otimes \vec{B}_k .
\end{displaymath} (A.3)

Here the notation $\otimes$ denotes Kronecker product, which is defined as

\begin{displaymath}\vec{A}\otimes \vec{B}
= \left[ \begin{array}{cccc}
a_{11}\vec{B} & a_{12}\vec{B} & \cdots & a_{1n}\vec{B} \\
a_{21}\vec{B} & a_{22}\vec{B} & \cdots & a_{2n}\vec{B} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1}\vec{B} & a_{n2}\vec{B} & \cdots & a_{nn}\vec{B}
\end{array} \right] .
\end{displaymath} (A.4)

The matrices $\vec{A}_k$ and $\vec{B}_k$ are defined by the vectors ${\vec a}_k$ and ${\vec b}_k$, respectively. In particular, if the center of the PSF is located at pixel (i,j), then $\vec{A}_k$ and $\vec{B}_k$ are banded Toeplitz (or Toeplitz-plus-Hankel, depending on the boundary conditions) matrices built from ${\vec a}_k$ and ${\vec b}_k$, with the location of the bands determined by i and j. The factorization of $\vec{P}$ can be computed using the SVD. In particular, it is possible to compute

\begin{displaymath}\vec{P}= \sum_{k=1}^r \sigma_k {\vec u}_k {\vec v}_k^T~,
\end{displaymath} (A.8)

where $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$, and thus ${\vec a}_k = \sqrt{\sigma_k}{\vec u}_k$ and ${\vec b}_k = \sqrt{\sigma_k}{\vec v}_k$. Using the SVD in this way, we obtain the Kronecker product factorization of $\vec{\cal K}$ given in Eq. (A.3) with the property that $\vec{A}_1$ contains more information than $\vec{A}_2$, etc., and similarly for the $\vec{B}_k$. In particular, in some measure, $\vec{A}_1 \otimes \vec{B}_1$ is the best Kronecker product approximation of $\vec{\cal K}$. This observation is important because we can exploit the Kronecker product structure of matrices in our computations. In particular, the SVD of a Kronecker product can be computed as:
                        $\displaystyle \vec{A}\otimes \vec{B}$ = $\displaystyle (\vec{U}_a \vec{\Sigma}_a \vec{V}_a^T) \otimes (\vec{U}_b \vec{\Sigma}_b \vec{V}_b^T)$  
  = $\displaystyle (\vec{U}_a \otimes \vec{U}_b)(\vec{\Sigma}_a \otimes \vec{\Sigma}_b)(\vec{V}_a \otimes \vec{V}_b)^T .$ (A.9)

We do not need to explicitly form the big matrices $(\vec{U}_a \otimes \vec{U}_b)$, etc., because the properties of Kronecker products allow us to organize the computations so that we need only work with the smaller matrices $\vec{U}_a, \vec{U}_b$, etc.
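
The following sketch shows how this can be exploited for the Tikhonov filtering used in Sect. 4, keeping only the leading term $\vec{A}_1 \otimes \vec{B}_1$ and using the identity $(\vec{A} \otimes \vec{B})\,{\rm vec}(\vec{X}) = {\rm vec}(\vec{B} \vec{X} \vec{A}^T)$ for column stacking; the actual RestoreTools implementation incorporates further terms and refinements, so this is only an illustration of the idea.

import numpy as np

def kron_svd_tikhonov(A1, B1, g, lam):
    # Tikhonov-regularized solve with the rank-one Kronecker approximation
    # K ~ A1 (x) B1; the N^2 x N^2 factors are never formed explicitly.
    #   g   : N x N observed image (column-stacking convention of the text)
    #   lam : regularization parameter
    Ua, sa, Vat = np.linalg.svd(A1)
    Ub, sb, Vbt = np.linalg.svd(B1)
    Gt = Ub.T @ g @ Ua                      # (Ua (x) Ub)^T vec(G) = vec(Ub^T G Ua)
    S = np.outer(sb, sa)                    # singular values sigma = sb_k * sa_i
    Ft = (S / (S ** 2 + lam ** 2)) * Gt     # spectral Tikhonov filter
    return Vbt.T @ Ft @ Vat                 # (Va (x) Vb) vec(Ft) = vec(Vb Ft Va^T)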

The above remarks suggest that an SVD approximation can be obtained by computing the SVD of $\vec{A}_1 \otimes \vec{B}_1$. It is possible, though, to incorporate some information from $\vec{A}_2, \vec{A}_3, \ldots$ and  $\vec{B}_2, \vec{B}_3, \ldots$, following an approach outlined in Nagy et al. (2004a) and Kamm & Nagy (2000).

A.2 Space variant blurs

In the case of space-variant blurs, a single PSF does not provide enough information to obtain a complete representation of the matrix $\vec{\cal K}$. One PSF provides information about only one column (or row) of $\vec{\cal K}$. In order to get all $N^2$ columns, we would need $N^2$ PSFs, each one centered at a different pixel location. It would be extremely expensive to find all of these PSFs. So instead, we consider a simplifying assumption that the blur is locally spatially-invariant (this approach has been used in Faisal et al. 1995; Boden et al. 1996; Nagy & O'Leary 1998). Specifically, suppose that the image domain is partitioned into $p \times q$ rectangular regions, each with a PSF,

\begin{displaymath}\begin{array}{\vert c\vert c\vert c\vert} \hline
\vec{P}_{11} & \cdots & \vec{P}_{1q} \\ \hline
\vdots & \ddots & \vdots \\ \hline
\vec{P}_{p1} & \cdots & \vec{P}_{pq} \\ \hline
\end{array}\end{displaymath}

Suppose the image has $N \times N$ pixels, and define the dimensions of the ijth region as $l_i \times m_j$, where $l_1 + \cdots + l_p = m_1 + \cdots + m_q = N$. Further, let $\vec{D}_{ij}$ be a $\{0,1\}$ diagonal matrix representing the pixels in the ijth region. That is, $\vec{D}_{ij} = \mbox{diag}(d_1, \ldots, d_{N^2})$, where $d_k = d_{(s-1)N+t} = 1$ if the pixel (s,t) is in the ijth region; otherwise, dk = 0. With this notation, we represent the matrix  $\vec{\cal K}$ as

\begin{displaymath}\vec{\cal K}= \sum_{i=1}^p \sum_{j=1}^q \vec{D}_{ij} \vec{\cal K}_{ij},
\end{displaymath} (A.11)

where $\vec{\cal K}_{ij}$ is a matrix defined by the spatially-invariant PSF  $\vec{P}_{ij}$.
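
For illustration, Eq. (A.11) can be applied to an image without ever forming $\vec{\cal K}$: blur the whole image with each regional PSF and keep only the pixels of that region. The sketch below assumes periodic boundary conditions (cyclic convolution via the FFT), which is our own simplification; other boundary conditions lead to the Toeplitz or Toeplitz-plus-Hankel structures mentioned below.

import numpy as np

def variant_blur(f, regional_psfs, masks):
    # g = sum_ij D_ij (K_ij f): blur with each regional PSF, then select
    # (or weight) the pixels of the corresponding region with its mask.
    F = np.fft.fft2(f)
    g = np.zeros_like(f, dtype=float)
    for P, D in zip(regional_psfs, masks):   # PSFs assumed FFT-wrapped
        g += D * np.real(np.fft.ifft2(F * np.fft.fft2(P)))
    return g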

Now using the Kronecker product factorization for spatially-invariant blurs described in the previous section, we obtain (without loss of generality, we assume each of the PSFs has rank equal to r):

\begin{displaymath}\vec{\cal K}= \sum_{i=1}^p\sum_{j=1}^q \vec{D}_{ij} \left( \sum_{k=1}^r \vec{A}_k^{(i)} \otimes \vec{B}_k^{(j)} \right),
\end{displaymath} (A.12)

where the matrices $\vec{A}_k^{(i)}$ and $\vec{B}_k^{(j)}$ are Toeplitz or Toeplitz-plus-Hankel matrices (depending on the boundary conditions), as described in the previous section. We can simplify this expression by observing that each of the diagonal matrices, $\vec{D}_{ij}$ is separable, and can be written as:

\begin{displaymath}\vec{D}_{ij} = \widehat{\vec{D}}_i \otimes \tilde{\vec{D}}_j,
\end{displaymath} (A.13)

where

\begin{displaymath}\begin{array}{rcl}
\widehat{\vec{D}}_{1} & = & \mbox{diag}(\overbrace{1,\ldots,1}^{l_{1}},0,\ldots,0), \quad \ldots, \quad
\widehat{\vec{D}}_{p} \; = \; \mbox{diag}(0,\ldots,0,\overbrace{1,\ldots,1}^{l_{p}}), \\
\tilde{\vec{D}}_{1} & = & \mbox{diag}(\overbrace{1,\ldots,1}^{m_{1}},0,\ldots,0), \quad \ldots, \quad
\tilde{\vec{D}}_{q} \; = \; \mbox{diag}(0,\ldots,0,\overbrace{1,\ldots,1}^{m_{q}})~.
\end{array}\end{displaymath} (A.14)

Thus,
$\displaystyle \vec{\cal K}$ = $\displaystyle \sum_{i=1}^p\sum_{j=1}^q \vec{D}_{ij} \left( \sum_{k=1}^r \vec{A}_k^{(i)} \otimes \vec{B}_k^{(j)} \right)$  
  = $\displaystyle \sum_{i=1}^p\sum_{j=1}^q \left(\widehat{\vec{D}}_i \otimes \tilde{\vec{D}}_j\right) \left( \sum_{k=1}^r \vec{A}_k^{(i)} \otimes \vec{B}_k^{(j)} \right)$  
  = $\displaystyle \sum_{k=1}^r \sum_{i=1}^p\sum_{j=1}^q \left( \widehat{\vec{D}}_i \vec{A}_k^{(i)} \otimes \tilde{\vec{D}}_j \vec{B}_k^{(j)} \right)$  
  = $\displaystyle \sum_{k=1}^r \left[ \left(\sum_{i=1}^p \widehat{\vec{D}}_i \vec{A}_k^{(i)}\right) \otimes \left(\sum_{j=1}^q \tilde{\vec{D}}_j \vec{B}_k^{(j)} \right) \right]$  
  = $\displaystyle \sum_{k=1}^r \left( \vec{A}_k \otimes \vec{B}_k \right),$ (A.15)

where

\begin{displaymath}\vec{A}_k = \sum_{i=1}^p \widehat{\vec{D}}_i \vec{A}_k^{(i)} \quad {\rm and} \quad
\vec{B}_k = \sum_{j=1}^q \tilde{\vec{D}}_j \vec{B}_k^{(j)}.
\end{displaymath} (A.16)

Once $\vec{A}_k$ and $\vec{B}_k$ are constructed, the approximate SVD of $\vec{\cal K}$ is computed just as it was in the spatially-invariant case described in Sect. A.1.
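
A minimal sketch of Eq. (A.16) is given below; it assumes the regional factors $\vec{A}_k^{(i)}$ and $\vec{B}_k^{(j)}$ (each $N \times N$) have already been computed from the regional PSFs, and simply assembles $\vec{A}_k$ and $\vec{B}_k$ by retaining, for each term, the rows selected by the diagonal matrices $\widehat{\vec{D}}_i$ and $\tilde{\vec{D}}_j$. The argument names are ours.

import numpy as np

def combine_regional_factors(A_list, B_list, l_sizes, m_sizes):
    # Eq. (A.16) for one value of k:
    #   A_k = sum_i Dhat_i A_k^(i),   B_k = sum_j Dtilde_j B_k^(j).
    # A_list[i], B_list[j] are the N x N regional factors; l_sizes, m_sizes
    # are the band heights l_i and m_j of the partition (summing to N).
    N = A_list[0].shape[0]
    A = np.zeros((N, N))
    B = np.zeros((N, N))
    row = 0
    for Ai, li in zip(A_list, l_sizes):
        A[row:row + li, :] = Ai[row:row + li, :]   # rows kept by Dhat_i
        row += li
    row = 0
    for Bj, mj in zip(B_list, m_sizes):
        B[row:row + mj, :] = Bj[row:row + mj, :]   # rows kept by Dtilde_j
        row += mj
    return A, B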



Copyright ESO 2005