A&A 423, 1179-1186 (2004)
DOI: 10.1051/0004-6361:20047113

Estimation of regularization parameters in multiple-image deblurring

R. Vio1 - P. Ma2 - W. Zhong3 - J. Nagy4 - L. Tenorio5 - W. Wamsteker6

1 - Chip Computers Consulting s.r.l., Viale Don L. Sturzo 82, S. Liberale di Marcon, 30020 Venice, Italy
ESA-VILSPA, Apartado 50727, 28080 Madrid, Spain
2 - Department of Statistics, Harvard University, Cambridge, MA 02138, USA
3 - Department of Statistics, Purdue University, West Lafayette, IN 47907, USA
4 - Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322, USA
5 - Department of Mathematical and Computer Sciences, Colorado School of Mines, Golden CO 80401, USA
6 - ESA-VILSPA, Apartado 50727, 28080 Madrid, Spain

Received 21 January 2004 / Accepted 25 May 2004

We consider the estimation of the regularization parameter for the simultaneous deblurring of multiple noisy images via Tikhonov regularization. We approach the problem in three ways. We first reduce the problem to a single-image deblurring for which the regularization parameter can be estimated through a classic generalized cross-validation (${\rm GCV}$) method. A modification of this function is used to correct the undersmoothing typical of the original technique. With a second method, we minimize an average least-squares fit to the images and define a new ${\rm GCV}$ function. In the last approach, we use the classical ${\rm GCV}$ on a single higher-dimensional image obtained by concatenating all the images into a single vector. With a reliable estimator of the regularization parameter, one can fully exploit the excellent computational characteristics typical of direct deblurring methods, which, especially for large images, makes them competitive with the more flexible but much slower iterative algorithms. The performance of the techniques is analyzed through numerical experiments. We find that, under the assumption of independent homoscedastic Gaussian noise, the three approaches provide almost identical results, with the first, single-image approach offering the practical advantage that no new software is required and the same image can be used with other deblurring algorithms.

Key words: methods: data analysis - methods: statistical - techniques: image processing

1 Introduction

An important problem in image processing is the simultaneous deblurring of a set of observed images (of a fixed object), each of which is degraded by a different point spread function (PSF). Consider, for example, images obtained by the Large Binocular Telescope (LBT). This instrument consists of two $8.4~{\rm m}$ mirrors on a common mount with a spacing of $14.4~{\rm m}$ between the centers (Angel et al. 1998). For a given orientation of the telescope, the diffraction-limited resolution along the center-to-center baseline is equivalent to that of a $22.8~{\rm m}$ mirror, while the resolution along the perpendicular direction is that of a single $8.4~{\rm m}$ mirror. One way to obtain an image with improved and uniform spatial resolution is to simultaneously deconvolve the images taken with different orientations of the telescope.

In a recent paper, Vio et al. (2004) presented an efficient solution based on a single image that improves the performance of iterative algorithms developed in the context of a least-squares approach (Bertero & Boccacci 2000). An important advantage of iterative algorithms is the ease with which constraints such as nonnegativity of the solution can be implemented. However, this comes at a high computational cost (especially in case of slow convergence), and reliable stopping criteria are lacking. Although these issues may not be critical for images of moderate size, they are critical for large images ($\approx$$10^7{-}10^8$ pixels) or studies that require the deblurring of a large number of images. In this respect, direct methods are potentially more useful. For example, with a fixed regularization parameter, Tikhonov deblurring requires only two two-dimensional discrete Fourier transforms.

A limitation of any regularization method, and of Tikhonov's approach in particular, is the need for a computationally efficient and reliable estimate of the regularization parameter. We present three different methods for this task in the case of multiple spatially invariant PSFs and additive Gaussian noise. We formalize the problem in Sect. 2 and propose three solutions in Sect. 3. Their performance is studied through numerical experiments in Sect. 4. Section 5 closes with final comments.

2 Formalization of the problem

The image restoration problem is to find an estimate f(x,y) of a fixed two-dimensional object f0(x,y) given p observed images gj(x,y) ( $j=1,2, \ldots, p$) each of which is degraded by a different spatially invariant blurring operator; that is,

\begin{displaymath}
g_j(x,y) = \iint K_j (x-x', y-y')\, f_0(x',y')\, {\rm d}x'\, {\rm d}y',
\end{displaymath} (1)

where each Kj(x,y) is a known space invariant PSF.

In practice, the observed images are discrete N $\times$ N arrays of pixels (images are assumed square for simplicity) contaminated by noise. Therefore, model (1) has to be recast in the form

\begin{displaymath}
g_j(m,n) = \sum_{m',n'=0}^{N-1} K_j(m-m', n-n')\, f_0(m',n') + w_j(m,n),
\end{displaymath} (2)

where wj(m,n) will be assumed to be Gaussian white noise with standard deviation  $\sigma_{w_j} = \sigma_w$. Note however that the methods in Sect. 3 may be used when the noise is not white but has a correlation structure that is known up to a constant factor.

If the central peak (i.e., support) of each PSF is much smaller than the image and the object does not have structures near the image boundaries, then the convolution product in Eq. (2) can be well approximated by cyclic convolution, which leads to

\begin{displaymath}
\widehat g_j(m,n) = \widehat K_j(m,n)\, \widehat f_0(m,n) + \widehat w_j(m,n),
\end{displaymath} (3)

where the symbol ``$~\widehat{}~$'' denotes the discrete Fourier transform (DFT).
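The cyclic model (3) is straightforward to simulate with DFT products. A minimal sketch in Python/NumPy (the toy object `f0`, the `gaussian_psf` helper, and all sizes, dispersions, and noise levels are illustrative choices of ours, not the exact settings of Sect. 4):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 64, 4                      # image size and number of PSFs (toy values)

# A toy object: a few bright pixels on an empty background.
f0 = np.zeros((N, N))
f0[20, 20] = f0[40, 35] = 100.0

def gaussian_psf(N, sx, sy, theta):
    """Normalized anisotropic Gaussian PSF (illustrative helper)."""
    y, x = np.mgrid[:N, :N] - N // 2
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))
    return np.fft.ifftshift(k / k.sum())  # peak at (0,0) for cyclic convolution

psfs = [gaussian_psf(N, 6.0, 2.0, j * np.pi / p) for j in range(p)]

# Eq. (2)/(3): g_j = K_j * f0 + w_j, with the convolution done as a DFT product.
sigma_w = 0.01
g = [np.real(np.fft.ifft2(np.fft.fft2(K) * np.fft.fft2(f0)))
     + sigma_w * rng.standard_normal((N, N)) for K in psfs]
```

Since each PSF is normalized, the noiseless blurred images conserve the total flux of the object.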

In order to estimate f0 using Tikhonov regularization, it is helpful to use matrix-vector notation. Rewrite Eq. (2) as

\begin{displaymath}
\vec{g}_j = \vec{A}_j \vec{f}_0 + \vec{w}_j,
\end{displaymath} (4)

where the arrays $\vec{g}_j$, $\vec{f}_0$ and $\vec{w}_j$ are obtained by column stacking gj(m,n), f0(m,n), and wj(m,n), respectively. The matrix $\vec{A}_j$ is block-circulant and defined by cyclic convolution with the jth PSF.

Bertero & Boccacci (2000) estimate $\vec{f}_0$ by Tikhonov regularization where the estimate $\vec{f}_\lambda$ minimizes an average fit to the images subject to a smoothness constraint:

\begin{eqnarray}
\vec{f}_{\lambda} & = & \arg\min~\left(~\sum_{j=1}^{p} \Vert \vec{A}_j \vec{f}- \vec{g}_j \Vert^2 + \lambda^2 \Vert~\vec{L}\vec{f}~\Vert^2~\right) \quad (5) \\
& = & \arg\min~\left(~\Vert \vec{\cal A}\vec{f}- \vec{\varrho}\Vert^2 + \lambda^2 \Vert~\vec{L}\vec{f}~\Vert^2~\right), \quad (6)
\end{eqnarray}

where the global imaging matrix and global image are defined, respectively, as

\begin{displaymath}
\vec{\cal A}=
\left( \begin{array}{c} \vec{A}_1 \\ \vdots \\ \vec{A}_p \end{array} \right), \qquad
\vec{\varrho}=
\left( \begin{array}{c} \vec{g}_1 \\ \vdots \\ \vec{g}_p \end{array} \right),
\end{displaymath} (7)

$\lambda$ is the regularization parameter, and $\vec{L}$ is usually either the identity matrix (for an energy bound) or a discrete Laplacian operator (for a smoothness constraint) (Wahba 1990). The solution $\vec{f}_\lambda$ satisfies the normal equations

\begin{displaymath}
\left(~\vec{A}+ \lambda^2 \vec{L}^T\vec{L}~\right)\vec{f}_{\lambda} = \vec{\vartheta},
\end{displaymath} (8)


where

\begin{displaymath}
\vec{A}= \sum_{j=1}^{p} \vec{A}_j^T\vec{A}_j,\qquad \vec{\vartheta}= \sum_{j=1}^{p} \vec{A}^T_j\vec{g}_j.
\end{displaymath} (9)

For a fixed value of $\lambda$ (and with $\vec{L}= \vec{I}$), the explicit solution is given by:

\begin{displaymath}
\widehat f_{\lambda}(m,n) = \sum_{j=1}^p \frac{\widehat K_j^*(m,n)\, \widehat g_j(m,n)}{\widehat {\cal{K}}(m,n) + \lambda^2},
\end{displaymath} (10)

where $\widehat {\cal{K}}(m,n) = \sum_{j=1}^p \vert\widehat K_j(m,n)\vert^2$.
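Because Eq. (10) is diagonal in the Fourier domain, the deblurred image for a fixed $\lambda$ costs only a handful of two-dimensional DFTs. A sketch of the direct solver (the function name and interface are ours; $\vec{L}=\vec{I}$ is assumed, as in Eq. (10)):

```python
import numpy as np

def tikhonov_multi(g_list, psf_list, lam):
    """Direct multiple-image Tikhonov deblurring, Eq. (10), with L = I.

    g_list, psf_list: p observed images and cyclically aligned PSFs;
    lam: the regularization parameter lambda.
    """
    G = [np.fft.fft2(gj) for gj in g_list]
    K = [np.fft.fft2(kj) for kj in psf_list]
    calK = sum(np.abs(Kj) ** 2 for Kj in K)             # \widehat{\cal K}(m,n)
    num = sum(np.conj(Kj) * Gj for Kj, Gj in zip(K, G)) # numerator of Eq. (10)
    return np.real(np.fft.ifft2(num / (calK + lam ** 2)))
```

As a sanity check, with delta-function PSFs and `lam = 0` the routine returns the input image unchanged.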

In the rest of the paper we consider the selection of the regularization parameter $\lambda $ for three different methods whose normal equations reduce in each case to Eq. (8).

3 Estimating the regularization parameter

3.1 A first single-image approach

The Tikhonov regularization (5) is transformed into a single-image problem (6) through the use of the higher dimensional global image  $\vec{\varrho}$. We now define an alternative single-image approach whose dimensionality is the same as that of the original images.

The solution of the normal Eq. (8) depends on the original data $\vec{\varrho}$ only through $\vec{\vartheta}$. In fact, in the case of homoscedastic uncorrelated Gaussian noise, $\vec{\vartheta}$ and $\vec{\varrho}$ contain the same information about $\vec{f}_0$; this can be justified with a sufficiency argument on $\vec{\vartheta}$ (Lehmann & Casella 1998). It thus seems natural to pose the problem directly through $\vec{\vartheta}$: since the expected value of $\vec{\vartheta}$ is $\vec{A}\vec{f}_0$, we propose to estimate $\vec{f}_0$ by regularizing the misfit $\Vert~\vec{A}\vec{f}- \vec{\vartheta}~\Vert^2$. One problem with $\vec{\vartheta}$, however, is that its covariance matrix, ${\rm Cov}(\vec{\vartheta}) = \sigma_w^2~\vec{A}$, is no longer proportional to the identity, so the classical generalized cross-validation (${\rm GCV}$) is not expected to work well. In fact, our simulations show that ${\rm GCV}$ consistently underestimates $\lambda$ when used with $\vec{\vartheta}$. However, since $\vec{A}$ is known, one can easily correct for this correlation using a square root of the symmetric matrix $\vec{A}$: the image

\begin{displaymath}
\vec{\varsigma}= \vec{A}^{-1/2}\vec{\vartheta}
\end{displaymath} (11)

has mean $\vec{A}^{1/2}\vec{f}_0$ and covariance matrix  $\sigma_w^2~\vec{A}^{-1/2}\vec{A}\vec{A}^{-1/2} = \sigma_w^2\vec{I}$. Hence, we define the Tikhonov functional

\begin{displaymath}
\vec{f}_{\lambda} = {\rm argmin}\left(~\Vert~\vec{A}^{1/2}\vec{f}- \vec{\varsigma}~\Vert^2 + \lambda^2 \Vert~\vec{L}\vec{f}~\Vert^2~\right).
\end{displaymath} (12)

It can be easily shown that the normal equations for Eq. (12) are given by Eq. (8). This result indicates that the functional (5) can be expressed in terms of a single image $\vec{\varsigma}$ with PSF $\vec{A}^{1/2}$, whose Fourier-domain representations are

\begin{displaymath}
\widehat{\varsigma}(m,n) = \sum_{j=1}^p \frac{\widehat K_j^*(m,n)\, \widehat g_j(m,n)}{\widehat {\cal{K}}^{1/2}(m,n)},
\end{displaymath} (13)

and $\widehat {\cal{K}}^{1/2}(m,n)$, respectively. Through $\widehat{\varsigma}(m,n)$, the solution (10) for $\widehat f_{\lambda}(m,n)$ can be expressed in the form

\begin{displaymath}
\widehat f_{\lambda}(m,n) = \frac{\widehat {\cal{K}}^{1/2}(m,n)\, \widehat{\varsigma}(m,n)}{\widehat {\cal{K}}(m,n) + \lambda^2}\cdot
\end{displaymath} (14)

Since the covariance matrix of $\vec{\varsigma}$ is proportional to the identity, we can make use of the classical generalized cross-validation (${\rm GCV}$) function to estimate $\lambda $ (e.g., Wahba 1990; Vio et al. 2003):

\begin{displaymath}
{\rm GCV}^{(\vec{\varsigma})}(\lambda) = \frac{1}{N^2}\,\frac{\left\Vert\left(\vec{I}-\vec{H}^{(\vec{\varsigma})}\right)\vec{\varsigma}\right\Vert^2}{\left(~1-{\rm Tr}\left[\vec{H}^{(\vec{\varsigma})}\right]/N^2~\right)^2},
\end{displaymath} (15)

where $\vec{H}^{(\vec{\varsigma})}= \vec{A}^{1/2}\left(~\vec{A}+ \lambda^2 \vec{L}^T\vec{L}~\right)^{-1}\vec{A}^{1/2}$, with frequency domain representation
\begin{displaymath}
{\rm GCV}^{(\vec{\varsigma})}(\lambda) =
N^2\,\frac{\displaystyle\sum_{m,n} \left\vert \frac{\lambda^2\,\vert\widehat{\delta}(m,n)\vert^2\, \widehat{\varsigma}(m,n)}{\widehat {\cal{K}}(m,n)+\lambda^2 \vert\widehat{\delta}(m,n)\vert^2} \right\vert^2}{\left(\displaystyle\sum_{m,n} \frac{\lambda^2\,\vert\widehat{\delta}(m,n)\vert^2}{\widehat {\cal{K}}(m,n)+\lambda^2 \vert\widehat{\delta}(m,n)\vert^2} \right)^2}\cdot
\end{displaymath} (16)

Here, $\widehat{\delta}(m,n)$ denotes the two-dimensional DFT of the operator $\vec{L}$. However, despite its theoretical justification and adequate practical performance, ${\rm GCV}$ tends to underestimate the regularization parameter (too small a $\lambda$) in about $10\%$ of the cases. A corrected ${\rm GCV}$ can be used to control this without sacrificing its generally good performance. The corrected (weighted) ${\rm GCV}$ is:
\begin{displaymath}
{\rm GCV_{\alpha}}^{(\vec{\varsigma})}(\lambda) =
N^2\,\frac{\displaystyle\sum_{m,n} \left\vert \frac{\lambda^2\,\vert\widehat{\delta}(m,n)\vert^2\, \widehat{\varsigma}(m,n)}{\widehat {\cal{K}}(m,n)+\lambda^2 \vert\widehat{\delta}(m,n)\vert^2} \right\vert^2}{\left[\displaystyle\sum_{m,n} \left(1-\frac{\alpha\,\widehat {\cal{K}}(m,n)}{\widehat {\cal{K}}(m,n)+\lambda^2\vert\widehat{\delta}(m,n)\vert^2}\right)\right]^2}
\end{displaymath} (17)

where $\alpha > 1$. It reduces to Eq. (16) when $\alpha = 1$ and yields smoother estimates as $\alpha$ increases.

Empirical studies suggest that good values of $\alpha$ for practical use lie in the range 1.2-1.4. Modifications of this sort have been suggested by various researchers (e.g., Nychka et al. 1998). However, only recently (Cummins et al. 2001; Kim & Gu 2004) have systematic simulation studies been conducted to evaluate the performance of the corrected ${\rm GCV}$ in the context of statistical nonparametric modeling. The optimality of the corrected ${\rm GCV}$ (17) can be interpreted in a way similar to that of ${\rm GCV}$ (16): instead of estimating a relative loss function, Eq. (17) estimates a weighted combination of the average squared bias and the average variance (Cummins et al. 2001). It should also be noted that ${\rm GCV}_{\alpha}$ is a function of both $\lambda$ and $\alpha$, but it is difficult to decide whether it should also be minimized over $\alpha$; current research efforts are still trying to resolve this issue.

It is important to emphasize that the problem of underestimation is not unique to ${\rm GCV}$; it is common to virtually all regularization-parameter selection techniques. For example, the generalized maximum likelihood (GML) estimate of $\lambda$ (Wahba 1985) asymptotically tends to deliver slightly smaller estimates than ${\rm GCV}$. An attractive feature of our simple correction is that it is not restricted to ${\rm GCV}$, and it sheds fresh light on underestimation corrections for GML and other approaches.

The minimization of the ${\rm GCV}$ function cannot be done analytically; a numerical iteration method has to be used. In our simulations we use the Newton-Raphson algorithm.
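Since Eq. (17) is cheap to evaluate in the frequency domain, the one-dimensional minimization can also be done robustly on a logarithmic grid. A sketch (with $\vec{L}=\vec{I}$, so $\vert\widehat{\delta}\vert^2=1$; the function names, grid limits, and default $\alpha$ are our illustrative choices, not the paper's implementation):

```python
import numpy as np

def gcv_alpha(lam, varsigma_hat, calK, alpha=1.4):
    """Corrected GCV of Eq. (17) in the frequency domain, with L = I.
    Constant overall factors are dropped; they do not move the minimizer."""
    denom = calK + lam ** 2
    resid = lam ** 2 * varsigma_hat / denom        # DFT of the GCV residual
    trace_term = np.sum(1.0 - alpha * calK / denom)
    return np.sum(np.abs(resid) ** 2) / trace_term ** 2

def estimate_lambda(varsigma_hat, calK, alpha=1.4):
    """Minimize GCV_alpha over a logarithmic grid of lambda values
    (a simple, robust substitute for the Newton-Raphson iteration)."""
    lams = np.logspace(-6, 2, 400)
    vals = [gcv_alpha(l, varsigma_hat, calK, alpha) for l in lams]
    return lams[int(np.argmin(vals))]
```

A grid search is slower than Newton-Raphson but avoids the need for derivatives and for a good starting guess.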

3.2 Multiple-image approach

Although the single image $\vec{\varsigma}$ is adequate for applications where the most important information is preserved in that image, here we define ${\rm GCV}$ functions for the functionals (5) and (6) and compare their performance with that obtained with the single image $\vec{\varsigma}$.

We start by defining a multiple-image ordinary cross validation (${\rm CV}$) function. One such function can be obtained by estimating the prediction error of each image using a leave-one-out estimate

\begin{displaymath}
{\rm CV}(\lambda)= \frac{1}{p}\sum_{j=1}^p \frac{1}{N^2}\sum_{i=1}^{N^2} \left(g_{ji}-\tilde{g}_{j,-i}\right)^2.
\end{displaymath} (18)

Here, $\tilde{g}_{j,-i}$ is the prediction based on the estimate of $\vec{f}_0$ that uses all the data except the ith entry in $\vec{g}_j$. Simple algebra leads to an equation for  $\tilde{g}_{j,-i}$ in terms of  $\vec{\tilde g}_j$ (see Appendix A)

\begin{displaymath}
\tilde{g}_{j,-i} = g_{ji} + \frac{r_{j,i}}{1-\left(\vec{H}_j\right)_{ii}},\quad r_{j,i} = \tilde{g}_{ji} - {g}_{ji},
\end{displaymath} (19)


where

\begin{displaymath}
\vec{\tilde g}_{j} = \vec{A}_j \vec{f}_\lambda,\qquad \vec{H}_j = \vec{A}_j\left(~\vec{A}+ \lambda^2 \vec{L}^T\vec{L}~\right)^{-1}\vec{A}_j^T.
\end{displaymath} (20)

The multiple-image ${\rm CV}$ function can then be written as

\begin{displaymath}
{\rm CV}(\lambda)= \frac{1}{N^2~p}\sum_{j=1}^p\sum_{i=1}^{N^2} \frac{r_{j,i}^2}{\left[1-\left(\vec{H}_j\right)_{ii}\right]^2}\cdot
\end{displaymath} (21)

The multiple-image ${\rm GCV}$ is obtained by modifying Eq. (18) to be rotationally invariant; that is, the value of the ${\rm GCV}$ function should remain unchanged if each $\vec{g}_j$ is rotated. There is no unique way to achieve such rotational invariance, but one method is to substitute the average diagonal value ${\rm Tr}~[\vec{H}_j]/N^2$ for $(\vec{H}_j)_{ii}$:

\begin{displaymath}
{\rm GCV}^{(\vec{g})}(\lambda)=\frac{1}{N^2~p}\sum_{j=1}^p \frac{\Vert~\vec{r}_j~\Vert^2}{\left(1-{\rm Tr}~\vec{H}_j/N^2\right)^2},
\end{displaymath} (22)

where $\vec{r}_{j} = \vec{\tilde g}_{j} - {\vec{g}}_{j}$. Note that Eq. (22) reduces to the classical ${\rm GCV}$ when p=1.

The implementation of Eq. (22) is done in the frequency domain. It can be rewritten as

\begin{displaymath}
{\rm GCV}^{(\vec{g})}(\lambda) = \frac{N^2}{p} \sum_{j=1}^p \frac{{\cal N}_j}{{\cal D}_j},
\end{displaymath} (23)


where

\begin{displaymath}
{\cal N}_j=\sum_{m,n} \Bigg\vert \frac{\widehat K_j(m,n)\, \widehat g(m,n)}{\widehat {\cal{K}}(m,n)+\lambda^2 \vert\widehat{\delta}(m,n)\vert^2} - \widehat g_j(m,n) \Bigg\vert^2,
\end{displaymath} (24)

with $\widehat g(m,n) = \sum_{l=1}^p \widehat K^*_l(m,n) \widehat g_l(m,n)$, and

\begin{displaymath}
{\cal D}_j= \left[ \sum_{m,n} \left(1- \frac{\vert\widehat K_j(m,n)\vert^2}{\widehat {\cal{K}}(m,n)+\lambda^2 \vert\widehat{\delta}(m,n)\vert^2}\right)\right]^2\cdot
\end{displaymath} (25)

To avoid underestimating the regularization parameter, the corrected ${\rm GCV}$ for Eq. (22) can be derived as

\begin{displaymath}
{\rm GCV}_{\alpha}^{(\vec{g})}(\lambda)=\frac{1}{N^2~p}\sum_{j=1}^p \frac{\Vert~\vec{r}_j~\Vert^2}{\left(1-\alpha~{\rm Tr}\left[\vec{H}_j\right]/N^2\right)^2}\cdot
\end{displaymath} (26)

Its form in the frequency domain is given by Eqs. (23)-(24), and

\begin{displaymath}
{\cal D}_j= \left[ \sum_{m,n}\left( 1- \frac{\alpha \vert\widehat K_j(m,n)\vert^2}{\widehat {\cal{K}}(m,n)+\lambda^2 \vert\widehat{\delta}(m,n)\vert^2}\right)\right]^2\cdot
\end{displaymath} (27)
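The frequency-domain formulas (23), (24) and (27) translate directly into code. A sketch (the function name and interface are ours; $\vec{L}=\vec{I}$ is assumed, so $\vert\widehat{\delta}\vert^2=1$):

```python
import numpy as np

def gcv_alpha_multi(lam, g_list, psf_list, alpha=1.4):
    """Corrected multiple-image GCV: Eq. (23) with the numerators of
    Eq. (24) and the alpha-corrected denominators of Eq. (27); L = I."""
    G = [np.fft.fft2(gj) for gj in g_list]
    K = [np.fft.fft2(kj) for kj in psf_list]
    calK = sum(np.abs(Kj) ** 2 for Kj in K)
    denom = calK + lam ** 2
    g_hat = sum(np.conj(Kj) * Gj for Kj, Gj in zip(K, G))  # \widehat g(m,n)
    total = 0.0
    for Kj, Gj in zip(K, G):
        Nj = np.sum(np.abs(Kj * g_hat / denom - Gj) ** 2)        # Eq. (24)
        Dj = np.sum(1.0 - alpha * np.abs(Kj) ** 2 / denom) ** 2  # Eq. (27)
        total += Nj / Dj
    return g_list[0].size * total / len(g_list)                  # Eq. (23)
```

As with the single-image function, the value can be minimized over $\lambda$ with any one-dimensional method.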

3.3 A second single-image approach

An alternative form of the ${\rm GCV}$ functional (22) can be obtained by applying the standard ${\rm GCV}$ directly to Eq. (6), based on the single global image $\vec{\varrho}$. In this case the rotation invariance is enforced in a higher-dimensional space of concatenated images, which leads to

\begin{eqnarray}
{\rm GCV}_{\alpha}^{(\vec{\varrho})}(\lambda) & = & \frac{1}{p N^2}\, \frac{\sum_j \Vert ~\vec{r}_j~\Vert^2}{\left(~1-\alpha ~{\rm Tr}\left[\vec{H}^{(\vec{\varrho})}\right]/\left(p N^2\right)~\right)^2} \nonumber \\
& = & \frac{1}{p N^2}\, \frac{\sum_j \Vert ~\vec{r}_j~\Vert^2}{\left(~1-\alpha ~\sum_j {\rm Tr}\left[\vec{H}_j\right]/\left(p N^2\right)~\right)^2}, \quad (28)
\end{eqnarray}

where $\vec{H}^{(\vec{\varrho})} = \vec{\cal A}(\vec{A}+ \lambda^2 \vec{L}^T\vec{L})^{-1}\vec{\cal A}^T$.

Following the usual interpretation of the denominator in the ${\rm GCV}$ function (Wahba 1990), we see that  ${\rm GCV}^{(\vec{g})}(\lambda)$ and  ${\rm GCV}^{(\vec{\varrho})}(\lambda)$ differ in that the former weights each residual error by the noise degrees of freedom of each image; images with a larger trace of $\vec{H}_j$ receive a higher weight. In our case the two ${\rm GCV}$ functions provide identical results (see below).

It can also be shown that even though $\vec{\varsigma}$ and  $\vec{\varrho}$ lead to the same normal equations, their solutions may differ because their ${\rm GCV}$ may give different answers. In fact, one can check that

\begin{displaymath}
\left(1-\frac{{\rm Tr}\left[\vec{H}^{(\vec{\varsigma})}\right]}{N^2}\right)^{2} {\rm GCV}^{(\vec{\varsigma})}(\lambda) - p~\left(1-\frac{{\rm Tr}\left[\vec{H}^{(\vec{\varrho})}\right]}{p N^2}\right)^{2} {\rm GCV}^{(\vec{\varrho})}(\lambda)
= \frac{1}{N^2}\left\Vert~\vec{\varsigma}~\right\Vert^2 - \frac{1}{N^2}\sum_j \left\Vert~\vec{g}_j~\right\Vert^2.
\end{displaymath} (29)

4 Some numerical experiments

We present the results of numerical simulations to test the reliability of the regularization parameter estimated through the ${\rm GCV}$ functions defined in Sect. 3.

Figures 1-6 compare the deblurring of an extended object with a sharp outline and of a set of point-like objects. We have chosen these particular examples because their restoration is a difficult problem. Eight images are available for each object. In each case the PSF is a two-dimensional Gaussian with dispersion set to twelve pixels along the major axis and four pixels along the minor axis. Their inclinations take equispaced values in the range $0^{\circ}{-}160^{\circ}$. Gaussian white noise is added at two different levels, with standard deviations set to $1\%$ and $10\%$, respectively, of the maximum value of the blurred images. Three deblurring methods have been used: (I) single-image Tikhonov based on the functional (12) and the ${\rm GCV_{\alpha}^{(\vec{\varsigma})}}$ function (17) ($\alpha = 1.4$); (II) Tikhonov based on the functional (5) and the ${\rm GCV_{\alpha}^{(\vec{g})}}$ function (26) ($\alpha = 1.4$); (III) a classic iterative Projected Landweber method, applied to the single image for iterative methods obtained as explained in Vio et al. (2004), which can be used to enforce nonnegativity of the solution (a maximum of 2000 iterations has been adopted, sufficient to provide results that show no further appreciable visual changes).

There is no need to test the ${\rm GCV}$ functional (28) because in our case it is identical to (26). The reason is that all the selected PSFs have the same shape and differ only in orientation; since orientation information is lost through the trace, the denominator ${\cal D}_j$ does not depend on the image index j. That is, all the images share the same effective degrees of freedom.

Since the Tikhonov technique does not enforce the nonnegativity constraint, the deblurred images obtained with the single-image approach are also shown with their negative values set to zero. Finally, for each of the experiments, the true "optimal'' value of the regularization parameter (in the sense of minimizing the rms of the true residuals) provided by the ${\rm GCV}$ function (17) has been determined, with an accuracy of two significant digits, by a simple grid search. In all cases, the "true'' and the estimated regularization parameters have been almost identical. In particular, the relative difference between the rms of the true residuals of the corresponding solutions was always less than $0.3\%$.

As shown by the figures, the multiple and single-image approaches lead to very similar, if not identical, residual rms values.

As expected, Projected Landweber provides the best results in terms of visual appearance and residual rms. Indeed, as is well known, the nonnegativity constraint limits the ringing effect around sharp structures in the image. However, in these experiments the difference seems important only for point-like objects. In any case, these results come at a very high computational cost (each iteration requires the computation of two two-dimensional DFTs).

\begin{figure}
\par\includegraphics[width=8.8cm,clip]{0113fig1.eps}
\end{figure}
Figure 1: Original image, blurred version and blurred + noise version of the first image in the set (see text) and corresponding PSF. The images are $256 \times 256$ pixels, and the standard deviation of the noise is $10\%$ of the maximum value of the blurred image.

\begin{figure}
\par\includegraphics[width=8.8cm,clip]{0113fig2.eps}
\end{figure}
Figure 2: Deblurring of the image in Fig. 1 with the methods described in the text: (I) single-image Tikhonov (12) and ${\rm GCV^{(\vec{\varsigma})}_{\alpha}}$ (17); its version with the negative elements zeroed; (II) Tikhonov (5) and ${\rm GCV^{(\vec{g})}_{\alpha}}$ (26); (III) a classic iterative Projected Landweber method (convergence after 61 iterations). Tikhonov has been used with $\vec{L}= \vec{I}$ and $\alpha = 1.4$.

\begin{figure}
\par\includegraphics[width=8.8cm,clip]{0113fig3.eps}
\end{figure}
Figure 3: The same as Fig. 2 but obtained from an image contaminated with noise whose standard deviation is $1\%$ of the maximum value of the blurred image. For the Projected Landweber algorithm 2000 iterations have been carried out.

Apart from the nonnegativity constraint, the worse performance of the Tikhonov method is due to the operator $\vec{L}$, which tends to smooth out structures with sharp features. This characteristic makes Tikhonov better suited for the restoration of images of diffuse objects. In fact, Figs. 7-9 show that for diffuse objects, single-image Tikhonov provides competitive restoration quality (see also Bertero & Boccacci 1998). Because of the presence of features close to the borders of the images, calculations for the image $\varsigma (m,n)$ and the deblurring operation have been implemented with the windowing procedure explained in Vio et al. (2004). This has required the removal of a border 50 pixels thick from the image, and it has only been done with the single-image Tikhonov approach because, given the necessity of windowing each single image, it would be cumbersome to implement the other version of the algorithm.

In order to test the stability of the $\lambda$ estimates, for each experiment 100 realizations of the corresponding noise process were generated and added to the image. In general, the estimates of this parameter appear quite stable: as measured by the rms of the estimated values divided by their median, the variations fall in the range $1\%{-}2\%$. Figure 10 shows the histogram of the values of $\lambda$ obtained with the single-image Tikhonov approach in the experiment with the worst relative variation.

5 Concluding remarks

We have considered the estimation of the regularization parameter for the simultaneous Tikhonov deblurring of multiple noisy images using three types of ${\rm GCV}$ functions based on different data transformations. The first approach reduces the multiple-image problem to single-image deblurring that allows the use of the classical ${\rm GCV}$, but we use a corrected ${\rm GCV}$ that reduces its tendency to undersmooth the final solution. A second approach is perhaps the most natural as it is a straightforward extension of a classical single-image approach. It aims to achieve a balance between the smoothness of the unknown image and its average goodness of fit to the observed images. The regularization parameter is estimated through a multiple-image extension of the traditional ${\rm GCV}$ function. The third approach is based on a higher dimensional single image defined by concatenating all the images into a single vector and then using the classical ${\rm GCV}$.

\begin{figure}
\par\includegraphics[width=8.8cm,clip]{0113fig4.eps}
\end{figure}
Figure 4: Original image, blurred version and blurred + noise version of the first image in the set (see text) and corresponding PSF. The images are $256 \times 256$ pixels, and the standard deviation of the noise is $10\%$ of the maximum value of the blurred image.

\begin{figure}
\par\includegraphics[width=8.8cm,clip]{0113fig5.eps}
\end{figure}
Figure 5: Deblurring of the image in Fig. 4 with the methods described in the text: (I) single-image Tikhonov (12) and ${\rm GCV^{(\vec{\varsigma})}_{\alpha}}$ (17); its version with the negative elements zeroed; (II) Tikhonov (5) and ${\rm GCV^{(\vec{g})}_{\alpha}}$ (26); (III) a classic iterative Projected Landweber method (2000 iterations). Tikhonov has been used with $\vec{L}= \vec{I}$ and $\alpha = 1.4$.

\begin{figure}
\par\includegraphics[width=8.8cm,clip]{0113fig6.eps}
\end{figure}
Figure 6: The same as Fig. 5 but obtained from an image contaminated with noise whose standard deviation is $1\%$ of the maximum value of the blurred image.

Our numerical experiments indicate that the three approaches provide very similar results, with the lower-dimensional single image providing the practical advantage that no new software is required and the same image can be used with other deblurring algorithms.

We have assumed that the noise in all the images is independent Gaussian with constant variance. Combinations of the methods we propose can be used when some of these assumptions are not met. For example, if there are k groups of images with different variances, a weighted Tikhonov regularization method can be applied to a multiple-image deblurring of k single images. In addition, the multiple-image approach can be used if the noise in the original image is correlated but the covariance matrix is known up to a constant (e.g., Gu 2002).

The extreme computational efficiency, coupled with the stability and reliability of the estimated regularization parameters, makes Tikhonov's method competitive with the more flexible, but much slower, iterative algorithms. In fact, and especially for very large images, even in situations where these algorithms are expected to provide better results (e.g., when the nonnegativity constraint can be exploited), the Tikhonov approach is still a valuable resource (see also Bertero & Boccacci 2000). For example, it can be used to obtain initial solutions for iterative algorithms, which may speed up convergence. Figure 11 shows the residual rms as a function of iteration number for the Projected Landweber algorithm started at a Tikhonov solution and at the uniform image traditionally used. The figure shows that the saving in computation time can be considerable (here more than a factor of 10).
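The warm-start idea can be sketched as follows: a projected Landweber iteration implemented with FFTs, started from any nonnegative initial image such as a clipped Tikhonov solution (the function and its step-size choice are illustrative, not the exact implementation used in the experiments):

```python
import numpy as np

def projected_landweber(g_list, psf_list, f_init, n_iter=200):
    """Projected Landweber iteration with a nonnegativity constraint.
    The step size tau must be below 2 / lambda_max(A), where
    A = sum_j A_j^T A_j is diagonalized by the DFT."""
    K = [np.fft.fft2(kj) for kj in psf_list]
    G = [np.fft.fft2(gj) for gj in g_list]
    calK = sum(np.abs(Kj) ** 2 for Kj in K)
    tau = 1.0 / calK.max()                                     # safe step size
    theta_hat = sum(np.conj(Kj) * Gj for Kj, Gj in zip(K, G))  # DFT of A^T g
    f = np.maximum(f_init, 0.0)
    for _ in range(n_iter):
        grad_hat = theta_hat - calK * np.fft.fft2(f)           # DFT of A^T(g - A f)
        f = np.maximum(f + tau * np.real(np.fft.ifft2(grad_hat)), 0.0)
    return f
```

Each iteration indeed costs two two-dimensional DFTs, which is why a good starting image directly translates into saved computation time.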

We thank Prof. M. Bertero for a careful reading of the manuscript and his useful suggestions.

Appendix A: Derivation of Eq. (19)

Let $\vec{e}_i$ be the vector with 1 in the ith entry and zero everywhere else, and  $\vec{\tilde f}_{k,-i}$ the estimate of $\vec{f}$ that does not use the ith entry of $\vec{g}_k$. We can write

\begin{eqnarray*}
g_{k,i} = \vec{e}_i^T\vec{g}_k,\qquad \tilde g_{k,-i} = \vec{e}_i^T\vec{A}_k\vec{\tilde f}_{k,-i}.
\end{eqnarray*}

Let $\vec{P}_i$ be the projection that deletes the ith entry:

\begin{eqnarray*}
\vec{g}_{k,-i} = \vec{P}_i\vec{g}_k,
\end{eqnarray*}

so that $\vec{P}_i^T\vec{P}_i = \vec{I}- \vec{e}_i\vec{e}_i^T$. Let
\begin{figure}
\par\includegraphics[width=8.3cm,clip]{0113fig7.eps}
\end{figure}
Figure 7: Original image, blurred version and blurred + noise version of the first image in the set (see text) and corresponding PSF. The original image was $600 \times 600$ pixels, but only the central $500 \times 500$ pixels are displayed (see text). The standard deviation of the noise is $10\%$ of the standard deviation of the blurred image.

\begin{figure}
\par\includegraphics[width=8.3cm,clip]{0113fig8.eps}
\end{figure}
Figure 8: Image $\varsigma (m,n)$ for the image shown in Fig. 4, single-image Tikhonov deblurring ($\vec{L}= $ discrete Laplacian), its version with the negative elements zeroed, and the iterative Projected Landweber deblurring (convergence after 24 iterations).

\begin{figure}
\par\includegraphics[width=8.3cm,clip]{0113fig9.eps}
\end{figure}
Figure 9: The same as Fig. 8 but obtained from an image contaminated with noise whose standard deviation is $1\%$ of the standard deviation of the blurred image. Tikhonov has been used with $\vec{L}= \vec{I}$. For the Projected Landweber algorithm 2000 iterations have been carried out.

\begin{figure}
\par\includegraphics[width=8.3cm,clip]{0113fig10.eps}
\end{figure}
Figure 10: Histogram of the values of the estimated parameter $\lambda$ for the 100 experiments based on Fig. 6.

\begin{figure}
\par\includegraphics[width=8.3cm,clip]{0113fig11.eps}
\end{figure}
Figure 11: Convergence rates for the Projected Landweber algorithm in the numerical experiment corresponding to Fig. 3. Two starting guesses are used: the uniform image typical of the classic implementation of the algorithm, and the nonnegative single-image Tikhonov solution shown in Fig. 3.

\begin{eqnarray*}
\vec{a}_{ki} = \vec{A}_k^T\vec{e}_i,\qquad \vec{H}_k = \vec{A}_k\left(~\vec{A}+ \lambda^2\vec{L}^T\vec{L}~\right)^{-1}\vec{A}_k^T;
\end{eqnarray*}

then, by definition,

\begin{eqnarray*}
\vec{\tilde f}_{k,-i} = \left(~\vec{A}+ \lambda^2\vec{L}^T\vec{L}- \vec{a}_{ki}\vec{a}_{ki}^T~\right)^{-1}\left(~\vec{\vartheta}- \vec{A}_k^T\vec{e}_i\vec{e}_i^T\vec{g}_k~\right).
\end{eqnarray*}

It follows, by the Sherman-Morrison formula, that

\begin{eqnarray*}
\vec{A}_k\vec{\tilde f}_{k,-i} = \vec{A}_k\vec{f}_{\lambda} + \frac{{\tilde g}_{ki}-g_{ki}}{1-(\vec{H}_k)_{ii}}~\vec{A}_k\left(~\vec{A}+ \lambda^2\vec{L}^T\vec{L}~\right)^{-1}\vec{a}_{ki},
\end{eqnarray*}

so that

\begin{eqnarray*}
\vec{e}_i^T\vec{A}_k\vec{\tilde f}_{k,-i} - {\tilde g}_{ki} = \frac{(\vec{H}_k)_{ii}}{1-(\vec{H}_k)_{ii}}~\left(~{\tilde g}_{ki}-g_{ki}~\right)
\end{eqnarray*}

and

\begin{eqnarray*}
{\tilde g}_{k,-i} - g_{ki} = {\tilde g}_{ki} - g_{ki} + \frac{(\vec{H}_k)_{ii}}{1-(\vec{H}_k)_{ii}}~\left(~{\tilde g}_{ki}-g_{ki}~\right)
= \frac{1}{1-(\vec{H}_k)_{ii}}~\left(~{\tilde g}_{ki}-g_{ki}~\right).
\end{eqnarray*}


Copyright ESO 2004