A&A 442, 397-403 (2005)
DOI: 10.1051/0004-6361:20053414

Dealing with edge effects in least-squares image deconvolution problems

R. Vio1 - J. Bardsley2 - M. Donatelli3 - W. Wamsteker4


1 - Chip Computers Consulting s.r.l., Viale Don L. Sturzo 82, S.Liberale di Marcon, 30020 Venice, Italy
2 - Department of Mathematical Sciences, The University of Montana, Missoula, MT 59812-0864, USA
3 - Dipartimento di Fisica e Matematica, Università dell'Insubria, via Valleggio 11, 22100 Como, Italy
4 - INTA/LAEFF, Apartado 50727, 28080 Madrid, Spain

Received 12 May 2005 / Accepted 11 July 2005

Abstract
It is well-known that when an astronomical image is collected by a telescope, the measured intensities at pixels near the edge of the image may depend on the intensity of the object outside of the field of view (FOV) of the instrument. This makes the deconvolution of astronomical images troublesome. Standard approaches for solving this problem artificially extend the object outside the FOV via the imposition of boundary conditions (BCs). Unfortunately, in many instances the artificially extended object has little in common with the object itself outside of the FOV. This is most pronounced in the case of objects close to the boundary of the image and/or extended objects not completely contained in the image domain. In such cases, an inaccurate extension of the image outside of the FOV can introduce non-physical edge effects near the boundary that can degrade the quality of the reconstruction. For example, if the BCs used result in an extended image that is discontinuous at the boundary, these edge effects take the form of ripples (Gibbs oscillations). In this paper, we extend to least-squares (LS) algorithms a method recently proposed by Bertero & Boccacci (2005, A&A, 437, 369) in the context of the Richardson-Lucy (RL) algorithm for the restoration of images contaminated by Poissonian noise. The most important characteristic of the proposed approach is that it does not impose any kind of artificial BCs and is therefore free from edge effects. For this reason it can be considered the natural method for the restoration of images via LS algorithms.

Key words: methods: data analysis - methods: statistical - techniques: image processing

1 Introduction

When a two-dimensional object f(x,y) in outer space is viewed through a telescope, the observed image g(x,y) is related to f via the integral equation

 
\begin{displaymath}
g(x,y) = \int\limits_{-\infty}^{+\infty} \int\limits_{-\infty}^{+\infty} h(x,y,x',y') f(x',y')~{\rm d}x'~{\rm d}y',
\end{displaymath} (1)

where h(x,y,x',y') is called the point-spread function (PSF) and characterizes the blurring effects of the imaging system of interest. For example, in the case of a ground-based telescope, h characterizes both the diffractive blurring effects of the finite aperture of the telescope and the refractive blurring effects of turbulence in the earth's atmosphere. Model (1) allows the PSF to vary with position in both the (x,y) and (x',y') variables. In this case, we say that h is a spatially-varying PSF. Often, it is possible to simplify this model by assuming that the PSF is independent of position. In this case, we say that h is a spatially-invariant PSF, and (1) takes the convolution form

\begin{displaymath}
g(x,y) = \int\limits_{-\infty}^{+\infty} \int\limits_{-\infty}^{+\infty} h(x-x',y-y') f(x',y')~{\rm d}x'~{\rm d}y'.
\end{displaymath} (2)

Models (1) and (2) are only theoretical. In practical applications, we only have discrete noisy observations of the image g(x,y), which we model as a discrete linear system

 \begin{displaymath}
\vec{g}= \vec{A}\vec{f}+ \vec{z}.
\end{displaymath} (3)

Here $\vec{g}= {\rm vec} (\vec{G})$ and $\vec{f}= {\rm vec} (\vec{F})$ are one-dimensional column arrays containing, respectively, the observed image $\vec{G}$ and the object $\vec{F}$ in stacked order, $\vec{z}$ is an array containing the noise contribution (assumed to be of additive Gaussian type), and $\vec{A}$ is a matrix that represents the discretized blurring operator corresponding to the PSF $\vec {H}$.

The image deblurring, or deconvolution, problem is to obtain an approximate solution of the linear system (3). A very common approach is the least-squares (LS) method, where the restored image is obtained by approximately solving

\begin{displaymath}
\vec{f}_{{\rm LS}} = {\rm arg}\min_{\vec{f}}
\left(~ \Vert~ \vec{A}\vec{f}- \vec{g}~\Vert_2^2 \right).
\end{displaymath} (4)

For astronomical imaging problems, solving (4) is nontrivial because the matrix $\vec{A}$ is typically highly ill-conditioned and the number of unknowns is usually extremely large. A vast literature is devoted to the solution of large-scale least-squares problems, and a plethora of algorithms have been developed with different restoration capabilities (e.g., see Björck 1996; Starck et al. 2002). Nonetheless, for image deblurring problems there is an issue that has not been satisfactorily addressed: collected images are available only over a finite region, so the measured intensities near the boundary are affected by data outside the field of view (FOV). Formally this means that, in the case of an $N_{\rm p} \times N_{\rm p}$ PSF $\vec{H}$ and of an $N \times N$ observed image $\vec{G}$ (for simplicity, we assume square images), $\vec{f}$ is an $M^2 \times 1$ array with $M=N+N_{\rm p}$. In other words, problem (4) is under-determined. As a consequence, $\vec{A}$ is a rectangular matrix (of size $N^2 \times M^2$), which prevents the use of algorithms based on the discrete Fourier transform (DFT). Hence, the huge storage and computational advantages gained by using fast implementations of the DFT are lost.

The standard approach used to avoid this problem is to impose a priori boundary conditions (BCs) on $\vec{F}$. The BCs that are most often used force a functional dependency between the elements of $\vec{f}$ external to the FOV and those internal to this area. This has the effect of extending $\vec{F}$ outside of the FOV without adding any unknowns to the associated image deblurring problem. Because of this, problem (4) can be reformulated in terms of an $N^2 \times 1$ unknown array $\vec{f}$, and $\vec{A}$ can be written as an $N^2 \times N^2$ square matrix whose structure can be exploited by fast algorithms.

   
2 The most popular boundary conditions

The following BCs and associated extensions of $\vec{F}$ are by far the most common choices in image processing:

$\bullet$
Zero boundary conditions (Z-BCs) assume that the object is zero outside of the FOV. That is, one assumes that $\vec{F}$ has been extracted from a larger array of the form

\begin{displaymath}\begin{array}{ccc}
\vec{0}& \vec{0}& \vec{0}\\
\vec{0}& \vec{F}& \vec{0}\\
\vec{0}& \vec{0}& \vec{0}
\end{array}\end{displaymath}

$\bullet$
Periodic boundary conditions (P-BCs) assume that the object repeats in all directions. That is, one assumes that $\vec{F}$ has been extracted from a larger array of the form

\begin{displaymath}\begin{array}{ccc}
\vec{F}& \vec{F}& \vec{F}\\
\vec{F}& \vec{F}& \vec{F}\\
\vec{F}& \vec{F}& \vec{F}
\end{array}\end{displaymath}

$\bullet$
Reflective boundary conditions (R-BCs) assume that outside the FOV the object is a mirror image of $\vec{F}$. That is, one assumes that $\vec{F}$ has been extracted from a larger array of the form

\begin{displaymath}\begin{array}{ccc}
\vec{F}_{\rm rc} & \vec{F}_{\rm r} & \vec{F}_{\rm rc}\\
\vec{F}_{\rm c} & \vec{F}& \vec{F}_{\rm c}\\
\vec{F}_{\rm rc} & \vec{F}_{\rm r} & \vec{F}_{\rm rc}
\end{array}\end{displaymath}

where $\vec{F}_{\rm c}$ is obtained by "flipping'' the columns of $\vec{F}$, $\vec{F}_{\rm r}$ is obtained by "flipping'' the rows of $\vec{F}$, and $\vec{F}_{\rm rc}$ is obtained by "flipping'' the rows and columns of $\vec{F}$.

$\bullet$
AntiReflective boundary conditions (AR-BCs) have a more elaborate definition, but have a simple motivation: where R-BCs result in an extension that is $C^0$ continuous at the boundary of the FOV, AR-BCs yield an extension that is $C^1$ continuous (see Serra-Capizzano 2003). They are given by
\begin{displaymath}\begin{array}{ll}
F(1-i,j) = 2 F(1,j) - F(i+1,j), & \quad 1 \leq i \leq N_{\rm p},\ 1 \leq j \leq N;\\
F(i,1-j) = 2 F(i,1) - F(i,j+1), & \quad 1 \leq i \leq N,\ 1 \leq j \leq N_{\rm p};\\
F(N+i,j) = 2 F(N,j) - F(N-i,j), & \quad 1 \leq i \leq N_{\rm p},\ 1 \leq j \leq N;\\
F(i,N+j) = 2 F(i,N) - F(i,N-j), & \quad 1 \leq i \leq N,\ 1 \leq j \leq N_{\rm p};
\end{array}\end{displaymath}

for the edges, while for the corners the more computationally attractive choice is to antireflect first in one direction and then the other. This yields
\begin{displaymath}\begin{array}{l}
F(1-i,1-j) = 4 F(1,1) - 2F(1,j+1) - 2F(i+1,1) + F(i+1,j+1);\\
F(1-i,N+j) = 4 F(1,N) - 2F(1,N-j) - 2F(i+1,N) + F(i+1,N-j);\\
F(N+i,1-j) = 4 F(N,1) - 2F(N,j+1) - 2F(N-i,1) + F(N-i,j+1);\\
F(N+i,N+j) = 4 F(N,N) - 2F(N,N-j) - 2F(N-i,N) + F(N-i,N-j);
\end{array}\end{displaymath}

for $1 \leq i, j \leq N_{\rm p}$. Another natural choice for the corners is to antireflect with respect to the vertices; for instance, around the corner (1,1) one obtains F(1-i,1-j) = 2 F(1,1)-F(i+1,j+1) (for more details, see Donatelli et al. 2003). Although the two strategies provide restored images of comparable quality, the latter spoils the structure of the coefficient matrix more, introducing a correction of rank at most $O(N_{\rm p}^2)$; therefore, in the numerical experiments we adopt the first strategy. A minimal padding sketch illustrating these extensions is given below.
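
To make the above definitions concrete, the following minimal sketch (Python/NumPy; the function name antireflective_pad, the toy $5 \times 5$ array, and the extension width are our own illustrative choices, not part of the original treatment) builds the four extensions of an image:

\begin{verbatim}
import numpy as np

def antireflective_pad(F, p):
    # Antireflective extension of a 2-D array F by p pixels on each side.
    # Edges follow F(1-i,j) = 2 F(1,j) - F(i+1,j); corners are obtained by
    # antireflecting first along one axis and then along the other.
    def pad_axis(A, axis):
        A = np.moveaxis(A, axis, 0)
        top = 2.0 * A[0] - A[1:p + 1][::-1]          # pixels 1-p, ..., 0
        bottom = 2.0 * A[-1] - A[-p - 1:-1][::-1]    # pixels N+1, ..., N+p
        out = np.concatenate([top, A, bottom], axis=0)
        return np.moveaxis(out, 0, axis)
    return pad_axis(pad_axis(F, 0), 1)

F = np.arange(25.0).reshape(5, 5)             # toy 5 x 5 "object"
p = 2                                         # extension width
F_zero      = np.pad(F, p, mode="constant")   # Z-BCs
F_periodic  = np.pad(F, p, mode="wrap")       # P-BCs
F_reflect   = np.pad(F, p, mode="symmetric")  # R-BCs (boundary pixel repeated)
F_antirefl  = antireflective_pad(F, p)        # AR-BCs
\end{verbatim}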
For a spatially invariant PSF, each of these choices imposes a particular kind of structure on the matrix $\vec{A}$ that can be exploited to develop computationally efficient algorithms (e.g., see Ng et al. 1999; Serra-Capizzano 2003; Jain 1989, and the references therein). It is important to emphasize, though, that the extensions of $\vec{F}$ obtained using the above BCs are artificial. Consequently, the reconstructed images obtained in the corresponding deblurring problems may contain artifacts due to the inaccuracy of such extensions. In practical applications it has been found that, in general, Z-BCs and P-BCs provide the worst results. This is due to the fact that Z-BC and P-BC extensions can be discontinuous at the boundary of the FOV, which is incompatible with model (3). When discontinuities are present in the extension, ripples, or Gibbs oscillations, form in the restored images. This phenomenon is much less prevalent when R-BCs are used, since the resulting extension of $\vec{F}$ preserves the continuity of the image, and it is even less prevalent for AR-BCs, which in addition preserve the continuity of the gradient of the image. Not surprisingly, of the above choices, AR-BCs yield the most accurate image reconstructions. Nonetheless, even with AR-BCs, there is no guarantee of the quality of the final result.

   
3 A natural solution

Formally, the minimum norm solution of problem (4) is given by

 \begin{displaymath}
\vec{f}_{{\rm LS}} = \vec{A}^\dagger\vec{g},
\end{displaymath} (5)

where $\vec{A}^\dagger$ denotes the pseudo-inverse of $\vec{A}$; for the under-determined system (3), $\vec{f}_{{\rm LS}}$ is the minimum-norm least-squares solution and, when $\vec{A}$ has full row rank, $\vec{A}^\dagger = \vec{A}^T(\vec{A}\vec{A}^T)^{-1}$. Here, as was noted above, the fact that this matrix is not square prevents the use of the fast DFT. To combat this difficulty, one can zero-pad and shift $\vec{H}$ so that it becomes an $M \times M$ array $\vec{H}_{\rm p}$. A periodic extension of $\vec{H}_{\rm p}$ of period M yields an $M^2 \times M^2$ blurring matrix $\vec{\cal A}$ that is block circulant with circulant blocks, and hence, the fast DFT can be used. $\vec{G}$ is also zero extended to become an $M \times M$ array that we will denote $\vec{G}_{\rm p}$. We note the very useful fact that zero padding $\vec{G}$ in this way negates the nonphysical effects that can arise in the associated deblurred images due to the assumption that $\vec{H}_{\rm p}$ is periodic.
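
As a concrete illustration of this construction, the sketch below (Python/NumPy; the names and the assumption that the PSF peak is the central pixel of $\vec{H}$ are ours) builds $\vec{H}_{\rm p}$ and applies the block-circulant operator $\vec{\cal A}$ to an $M \times M$ array:

\begin{verbatim}
import numpy as np

def pad_and_shift_psf(H, M):
    # Zero-pad the Np x Np PSF to M x M and circularly shift it so that its
    # (assumed central) peak sits at pixel (0,0); the 2-D DFT of the result
    # contains the eigenvalues of the block-circulant matrix  cal A.
    Np_ = H.shape[0]
    Hp = np.zeros((M, M))
    Hp[:Np_, :Np_] = H
    c = Np_ // 2                              # assumed centre of the PSF
    return np.roll(Hp, (-c, -c), axis=(0, 1))

def apply_calA(Hp_hat, F):
    # Periodic (circular) convolution of the M x M array F with the padded
    # PSF, i.e. the action of  cal A  on vec(F); Hp_hat = fft2 of H_p.
    return np.real(np.fft.ifft2(Hp_hat * np.fft.fft2(F)))
\end{verbatim}

With ${\rm DFT2}[\vec{H}_{\rm p}]$ precomputed, each application of $\vec{\cal A}$ (or of $\vec{\cal A}^T$, obtained by conjugating the transformed PSF) costs two $M \times M$ FFTs.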

The benefit of the approach set forth above is that it allows for the use of the fast DFT. Unfortunately it does not, by itself, work well. The reason is that the matrix $\vec{A}$ in (3) maps $M \times M$ arrays into $N \times N$ arrays, whereas $\vec{A}^T$ works in the opposite direction. However, because of the zero-padding operation, the matrices $\vec {\cal A}$ and $\vec {\cal A}^T$ are artificially forced to map $M \times M$ arrays into $M \times M$ arrays. Thus, one might expect that if the zero-extended model is modified in such a way that the domains in which $\vec{A}$ and $\vec{A}^T$ operate are preserved, things should work. This simple observation was recently made by Bertero & Boccacci (2005) in the context of the restoration of images contaminated by Poissonian noise, and led to the development of a modified version of the Richardson-Lucy (RL) algorithm that has proved to provide superb results[*].

The extension of the arguments of Bertero & Boccacci (2005) to the LS case leads us to rewrite problem (4) in the equivalent form

\begin{displaymath}
\vec{f}_{{\rm LS}} = {\rm arg}\min_{\vec{f}} \left(~ \Vert~ \vec{W}\vec{\cal A}\vec{f}- \vec{g_{\rm p}}~\Vert_2^2 \right).
\end{displaymath} (6)

Here, $\vec{W}= {\rm diag}[\vec{\overline W}]$, where $\vec{\overline W}$ is the $M^2 \times 1$ vector obtained by column-stacking the $M \times M$ mask matrix whose entries are set to 1 at the pixels containing the original image $\vec{G}$ and to 0 otherwise.

Problem (6) can be rewritten in the revealing form

\begin{displaymath}
\vec{f}_{{\rm LS}} = {\rm arg}\min_{\vec{f}} \left(~ \Vert~ \vec{\overline W}\odot \vec{\cal A}\vec{f}- \vec{g_{\rm p}}~\Vert_2^2 \right),
\end{displaymath} (7)

where the symbol "$\odot$'' denotes element-wise multiplication. It is evident that the role of the mask $\vec{\overline W}$ (and hence of the matrix $\vec{W}$) is to force the comparison between $\vec{g_{\rm p}}$ and $\vec{\cal A}\vec{f}$ only at the pixels of the original image $\vec{G}$, without the imposition of any a priori BCs. The solutions of (4) and (6) coincide (from an algebraic point of view the two functionals in (4) and (6) are exactly the same), and yet this simple modification permits us to develop algorithms that exploit the fast DFT while preserving all of the information present in the original problem (4).
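
In code, the mask is simply an $M \times M$ array of zeros and ones. The following sketch (Python/NumPy, reusing the PSF padding above; the assumption that the FOV occupies the central $N \times N$ window of the $M \times M$ reconstruction domain is ours) evaluates the masked residual appearing inside the norm in (7):

\begin{verbatim}
import numpy as np

def masked_residual(f, G, Hp_hat):
    # Evaluate  W_bar (.) (cal A f) - g_p  for an M x M iterate f and an
    # N x N observed image G, with the FOV assumed to lie at the centre
    # of the M x M frame.
    M, N = f.shape[0], G.shape[0]
    lo = (M - N) // 2                                     # assumed FOV offset
    W  = np.zeros((M, M)); W[lo:lo + N, lo:lo + N] = 1.0  # mask  W_bar
    Gp = np.zeros((M, M)); Gp[lo:lo + N, lo:lo + N] = G   # zero-extended image
    Af = np.real(np.fft.ifft2(Hp_hat * np.fft.fft2(f)))   # cal A f
    return W * Af - Gp

# The value of the functional in (7) is np.sum(masked_residual(f, G, Hp_hat)**2).
\end{verbatim}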

4 A modified Landweber algorithm

The prototypical iterative regularization algorithm for least-squares problems is the Landweber method (LW). This is a gradient descent algorithm for solving problem (4). In particular, when started from $\vec{f}_0 = \vec{0}$, it creates a sequence of iterates that converges to the minimum-norm solution in Eq. (5) (see Engl et al. 1996). Because of its slow convergence, it is not frequently used in practical applications. However, because of its simplicity, it is often utilized in the theoretical analysis of the potential performance of the LS approach.

If $\vec{f}_{0}$ is the starting image, then the iterations take the form

 \begin{displaymath}
\vec{f}_k = \vec{f}_{k-1} + \omega \vec{A}^T [\vec{g}- \vec{A}\vec{f}_{k-1}],
\end{displaymath} (8)

where $k=1,2, \ldots$ and $\omega$ is a real positive parameter satisfying $0 < \omega < 2 / \Vert \vec{A}\Vert^2_2$. The value of $\omega$ determines, in part, the speed of convergence of the iteration. If $\vec{f}_0 = \vec{0}$, this equation can be rewritten in the form

 \begin{displaymath}
\vec{f}_k = \omega \sum_{l=0}^{k-1}(\vec {I}- \omega \vec{A}^T \vec{A})^l \vec{A}^T \vec{g}.
\end{displaymath} (9)
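
As a quick sanity check of the equivalence between (8) and (9), a small dense example can be run directly (a toy random system of our own choosing, not related to any imaging data):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))           # toy rectangular system
g = rng.standard_normal(20)
omega = 1.0 / np.linalg.norm(A, 2) ** 2     # 0 < omega < 2 / ||A||_2^2
k = 50

# Iterating Eq. (8) from f_0 = 0 ...
f = np.zeros(30)
for _ in range(k):
    f = f + omega * A.T @ (g - A @ f)

# ... gives the same result as the explicit sum of Eq. (9).
B = np.eye(30) - omega * A.T @ A
f_sum = omega * sum(np.linalg.matrix_power(B, l) @ (A.T @ g) for l in range(k))
print(np.allclose(f, f_sum))                # True
\end{verbatim}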

Applying LW to the problem of solving (6) yields the modified Landweber iteration

 \begin{displaymath}
\vec{f}_k = \vec{f}_{k-1} + \omega (\vec{W}\vec {\cal A})^T [\vec{g_{\rm p}}- (\vec{W}\vec {\cal A}) \vec{f}_{k-1}].
\end{displaymath} (10)

We note that due to the equivalence of (4) and (6), the sequences generated by (8) and (10) will be the same. Moreover, as was already observed, this sequence converges to the solution in Eq. (5). Thus, since the sequence generated by (10) converges to $(\vec{W}\vec {\cal A})^\dagger\vec{g_{\rm p}}$, we have $(\vec{W}\vec {\cal A})^\dagger\vec{g_{\rm p}}=\vec{A}^\dagger\vec{g}$.

For the sake of clarity and completeness, we now provide a detailed analysis of the convergence of (10). The analogue of Eq. (9) in this case is

\begin{displaymath}
\vec{f}_k = \omega \sum_{l=0}^{k-1}\left(\vec{I}- \omega(\vec{W}\vec{\cal A})^T(\vec{W}\vec{\cal A})\right)^l (\vec{W}\vec{\cal A})^T \vec{g_{\rm p}}.
\end{displaymath} (11)

Now, if the SVD of $\vec{W}\vec{\cal A}$ is given by $\vec{W}\vec{\cal A}=\vec{U}\vec{\Lambda}\vec{V}^T$, where $\vec{\Lambda}={\rm diag}(s_1,\ldots,s_{M^2})$, replacing $\vec{W}\vec{\cal A}$ in Eq. (11) by its SVD yields

 \begin{displaymath}
\vec{f}_k = \vec{V}\vec{\Lambda}_k \vec{U}^T\vec{g_{\rm p}},
\end{displaymath} (12)

where the matrix $\vec{\Lambda}_k$ is diagonal with diagonal elements
 
\begin{displaymath}
[\vec{\Lambda}_k]_{ii} = \omega s_i\sum_{l=0}^{k-1}(1-\omega s_i^2)^l
= \left\{\begin{array}{cl}
\left(1-(1-\omega s_i^2)^k \right) s_i^{-1} & \quad s_i>0,\\
0 & \quad s_i=0.\end{array}\right.
\end{displaymath} (13)

A geometric series identity was used to obtain the second equality for $s_i>0$. It is evident from Eq. (13) that $\vec{\Lambda}_k=\vec{\widetilde\Lambda}_k\vec{\Lambda}^\dagger$, where

\begin{displaymath}[\vec{\widetilde\Lambda}_k]_{ii}= 1-(1-\omega s_i^2)^k,
\end{displaymath} (14)

and

\begin{displaymath}[\vec{\Lambda}^\dagger]_{ii}= \left\{\begin{array}{cr} s_i^{-1} & \qquad s_i>0, \\
0 & \qquad s_i = 0.\end{array}\right.
\end{displaymath} (15)

Thus Eq. (12) can be rewritten as
\begin{displaymath}
\vec{f}_k = (\vec{V}\vec{\widetilde\Lambda}_k \vec{V}^T)(\vec{V}\vec{\Lambda}^\dagger \vec{U}^T)\vec{g_{\rm p}}
\end{displaymath} (16)
\begin{displaymath}
\phantom{\vec{f}_k} = \left[\vec{I}- (\vec{I}- \omega \vec{\cal A}^T \vec{W}\vec{\cal A})^k \right](\vec{W}\vec{\cal A})^\dagger \vec{g_{\rm p}}.
\end{displaymath} (17)

Now, provided $0 < \omega < 2 / \Vert \vec{W}\vec{\cal A}\Vert^2_2$, we have $(\vec{I}- \omega \vec{\cal A}^T \vec{W}\vec{\cal A})^k (\vec{W}\vec{\cal A})^\dagger \vec{g_{\rm p}} \longrightarrow \vec{0}$ as $k\rightarrow\infty$, because $(\vec{W}\vec{\cal A})^\dagger \vec{g_{\rm p}}$ is orthogonal to the null space of $\vec{W}\vec{\cal A}$, and hence,

 \begin{displaymath}
\displaystyle\lim_{k\rightarrow\infty}\vec{f}_k = (\vec{W}\vec {\cal A})^\dagger
\vec{g_{\rm p}}.
\end{displaymath} (18)

Finally, since $(\vec{W}\vec{\cal A})^\dagger\vec{g_{\rm p}}=\vec{A}^\dagger\vec{g}$ and $\Vert \vec{W}\vec{\cal A}\Vert_2=\Vert\vec{A}\Vert_2$, we conclude from Eq. (18) that $\lim_{k\rightarrow\infty}\vec{f}_k = \vec{A}^\dagger\vec{g}$ when $0 < \omega < 2 / \Vert \vec{A}\Vert^2_2$.
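
In practice, a valid $\omega$ can be read directly off the DFT of the padded PSF: for the block-circulant operator one has $\Vert\vec{\cal A}\Vert_2 = \max\vert{\rm DFT2}[\vec{H}_{\rm p}]\vert$, and since $\Vert\vec{W}\vec{\cal A}\Vert_2 \leq \Vert\vec{\cal A}\Vert_2$, any $0 < \omega < 2/\max\vert{\rm DFT2}[\vec{H}_{\rm p}]\vert^2$ satisfies the above condition. A one-line sketch of this convention (our own; the placeholder PSF below is purely illustrative, and a nonnegative PSF of unit sum simply gives $\omega = 1$):

\begin{verbatim}
import numpy as np

# Hp: the padded, shifted PSF built as in the sketch of Sect. 3.
Hp = np.zeros((64, 64)); Hp[0, 0] = 1.0       # placeholder: a delta PSF
Hp_hat = np.fft.fft2(Hp)
omega = 1.0 / np.max(np.abs(Hp_hat)) ** 2     # safely below 2 / ||W calA||_2^2
\end{verbatim}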

A DFT-based implementation of the modified LW is straightforward. In fact, since $\vec{W}\vec{g_{\rm p}}= \vec{g_{\rm p}}$, $\vec{W}^2 = \vec{W}$, and $\vec{W}^T = \vec{W}$, it is not difficult to see that iteration (10) can be rewritten in the form

 \begin{displaymath}
\vec{f}_k = \vec{f}_{k-1} + \omega \vec {\cal A}^T [\vec{g_{\rm p}}- \vec{\overline W}\odot \vec {\cal A}\vec{f}_{k-1}].
\end{displaymath} (19)

Iteration (19) can be efficiently computed via a two-step procedure. In particular,
1.
the term in the square brackets on the rhs is evaluated first via the equation

\begin{displaymath}r(i,j) = g_{\rm p}(i,j) - {\overline W}(i,j) (H_{\rm p} \otimes f_{k-1})(i,j); \nonumber
\end{displaymath}  

2.
then the updated iterate is computed using

\begin{displaymath}f_k(i,j) = f_{k-1}(i,j) + \omega (H_{\rm p}^T \otimes r)(i,j). \nonumber
\end{displaymath}  

Here the symbol "$\otimes$'' denotes the convolution operator. As is well known, in the case of two-dimensional arrays $(x \otimes y)(i,j) = {\rm IDFT2}[{\widehat x}(i,j)\, {\widehat y}(i,j)]$ and $(x^T \otimes y)(i,j) = {\rm IDFT2}[{\widehat x}^*(i,j)\, {\widehat y}(i,j)]$, where ${\widehat x}(i,j) = {\rm DFT2}[x(i,j)]$ and ${\widehat y}(i,j) = {\rm DFT2}[y(i,j)]$, with ${\rm DFT2}[.]$, ${\rm IDFT2}[.]$ and "$^*$'' indicating the two-dimensional discrete Fourier transform, the two-dimensional inverse discrete Fourier transform, and the complex conjugate, respectively.

For each iteration this procedure requires the computation of four $M \times M$ DFTs, versus the two $N \times N$ DFTs required by the classic algorithm based on P-BCs.
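
Putting the pieces together, a minimal and deliberately unoptimized implementation of iteration (19) might look as follows (Python/NumPy; it repeats the PSF-padding helper from Sect. 3, assumes the FOV sits at the centre of the $M \times M$ frame, and includes as an option the nonnegativity projection used in the experiments of the next section; it is a sketch of the method, not the authors' code):

\begin{verbatim}
import numpy as np

def pad_and_shift_psf(H, M):
    # Zero-pad the Np x Np PSF to M x M, moving its (assumed central)
    # peak to pixel (0,0).
    Np_ = H.shape[0]
    Hp = np.zeros((M, M)); Hp[:Np_, :Np_] = H
    c = Np_ // 2
    return np.roll(Hp, (-c, -c), axis=(0, 1))

def modified_landweber(G, H, n_iter=200, omega=None, nonneg=False):
    # Modified Landweber iteration (19): four M x M DFTs per iteration.
    N, Np_ = G.shape[0], H.shape[0]
    M = N + Np_
    Hp_hat = np.fft.fft2(pad_and_shift_psf(H, M))
    if omega is None:
        omega = 1.0 / np.max(np.abs(Hp_hat)) ** 2             # cf. Sect. 4
    lo = (M - N) // 2                                         # assumed FOV offset
    W  = np.zeros((M, M)); W[lo:lo + N, lo:lo + N] = 1.0      # mask  W_bar
    Gp = np.zeros((M, M)); Gp[lo:lo + N, lo:lo + N] = G       # zero-extended image
    f = np.zeros((M, M))
    for _ in range(n_iter):
        Af = np.real(np.fft.ifft2(Hp_hat * np.fft.fft2(f)))             # H_p (x) f
        r = Gp - W * Af                                                  # step 1
        ATr = np.real(np.fft.ifft2(np.conj(Hp_hat) * np.fft.fft2(r)))   # H_p^T (x) r
        f = f + omega * ATr                                             # step 2
        if nonneg:
            f = np.maximum(f, 0.0)        # projected variant (Sect. 5)
    return f
\end{verbatim}

The restored object is the returned $M \times M$ array; under the centring assumption above, the portion inside the FOV is its central $N \times N$ window.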

5 Some numerical experiments

We now present some numerical experiments in which the performance of the modified LW algorithm is compared with that of classical LW when R-BCs and AR-BCs are imposed. Actually, we have used a projected version of LW that constrains the solution to be nonnegative. We do this because, as a benchmark, the experiments use the modified RL algorithm proposed by Bertero & Boccacci (2005), which also imposes a nonnegativity constraint on the solution.

In all of the experiments, the adopted PSF is a Gaussian with circular symmetry and a dispersion of 6 pixels. We have deliberately chosen non-astronomical objects: because of the larger contribution of the high-frequency components, objects with sharp outlines are much more difficult to restore than those typical of astronomical observations. Hence, we have considered a rectangular function and a very sharply structured object. For each object we have considered three different noise situations: no noise, Gaussian noise, and Poissonian noise. In the last two cases an ${\it SNR} = 20~{\rm dB}$ and a peak ${\it SNR} = 30~{\rm dB}$[*], respectively, have been adopted. This is to ensure that the performance of the modified LW does not depend on the particular kind of noise that contaminates the images. In order to avoid pixels with zero values, a sky background has been added whose intensity, in the blurred image, is set to $1\%$ of the maximum value of the image. Figures 1-10 clearly indicate the superiority of the modified LW over classical LW with both R-BCs and AR-BCs.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg1.eps}\end{figure} Figure 1: Rectangular function used in the first experiment. No noise has been added. The delimited area corresponds to the image used in the deblurring operation.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg2.eps} \end{figure} Figure 2: Relative restoration error (RRE) $\Vert \vec{f}_k - \vec{f}\Vert / \Vert \vec{f}\Vert$ vs. the number of iterations for the experiment with the object in Fig. 1.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg3.eps} \end{figure} Figure 3: Results obtained from deblurring for the experiment with the object in Fig. 1. The displayed RREs correspond to the best value obtained during the iterations.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg4.eps} \end{figure} Figure 4: As in Fig. 2 but in the case of Gaussian noise with ${\it SNR} = 20~{\rm dB}$.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg5.eps} \end{figure} Figure 5: As in Fig. 2 but in the case of Poissonian noise with peak ${\it SNR} = 30 ~{\rm dB}$.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg6.eps} \end{figure} Figure 6: Sharply structured object used in the second experiment. No noise has been added. The delimited area corresponds to the image used in the deblurring operation.


  \begin{figure}
\par\includegraphics[width=8.4cm,clip]{3414fg7.eps} \end{figure} Figure 7: Relative restoration error (RRE) $\Vert \vec{f}_k - \vec{f}\Vert / \Vert \vec{f}\Vert$ vs. the number of iterations for the experiment with the object in Fig. 6.


  \begin{figure}
\par\includegraphics[width=8.48cm,clip]{3414fg8.eps} \end{figure} Figure 8: Results obtained from deblurring for the experiment with the object in Fig. 6. The displayed RREs correspond to the best value obtained during the iterations.


  \begin{figure}
\par\includegraphics[width=8.48cm,clip]{3414fg9.eps} \end{figure} Figure 9: As in Fig. 7 but in the case of Gaussian noise with ${\it SNR} = 20~{\rm dB}$.


  \begin{figure}
\par\includegraphics[width=8.48cm,clip]{3414fg10.eps} \end{figure} Figure 10: As in Fig. 7 but in the case of Poissonian noise with peak ${\it SNR} = 30 ~{\rm dB}$.


  \begin{figure}
\par\includegraphics[width=8.48cm,clip]{3414fg11.eps} \end{figure} Figure 11: Astronomical object used in the third experiment. Noise is Poissonian with peak ${\it SNR} = 60~{\rm dB}$. The delimited area corresponds to the image used in the deblurring operation.


  \begin{figure}
\par\includegraphics[width=8.48cm,clip]{3414fg12.eps} \end{figure} Figure 12: Relative restoration error (RRE) $\Vert \vec{f}_k - \vec{f}\Vert / \Vert \vec{f}\Vert$ vs. the number of iterations for the experiment with the object in Fig. 11.


  \begin{figure}
\par\includegraphics[width=8.6cm,clip]{3414fg13.eps} \end{figure} Figure 13: Results obtained from deblurring for the experiment with the object in Fig. 11. The displayed RREs correspond to the best value obtained during the iterations.

As shown by Figs. 11-13, this result does not change when an experiment based on a typical astronomical object is considered. Apart from the fact that the noise is Poissonian with a peak ${\it SNR} = 60~{\rm dB}$, the conditions of the experiment are identical to those in the previous cases. The only point worth noting is that AR-BCs are able to provide a reconstruction whose quality is similar to that of the modified LW and RL algorithms. This is a consequence of the fact that in this object the contribution of the high-frequency components (i.e., those that determine the edge effects) is negligible.

6 Conclusions

The theoretical arguments and numerical experiments presented above lead one to conclude that in the case of objects close to the boundaries of an image and/or extended objects not completely contained in the image domain, the imposition of artificial BCs is not, in general, the optimal approach. This is due to the fact that imposing BCs can alter the original problem in a way that is not compatible with physical reality. Much better results are obtained by solving the under-determined LS problem (6).

Three more points have to be stressed. The first one is that there are indications that the solution of problem (6) permits one to obtain remarkably good restorations even in the case of Poissonian noise, where model (3) no longer holds. Given the limited number of experiments carried out in this paper, we cannot justifiably claim that this will hold in general. However, in a recent work, Vio et al. (2005) have shown that LS deblurring algorithms are remarkably insensitive to the nature of the noise. Hence, it can safely be expected that the modified LS algorithms will also work well in the case of non-Gaussian noise.

A second point to note is that, in spite of its inferior reconstruction quality compared with the approach that we advocate in this paper, the AR-BC approach works surprisingly well. This could be useful in the case of very large images. In fact, when the PSF is doubly symmetric, the deblurring algorithms can be implemented using the discrete sine transform (DST-I). In this case, LW requires two $(N-2)~\times~(N-2)$ DST-Is plus a few lower-order computations for the edges (Donatelli & Serra-Capizzano 2005), whereas with an asymmetric PSF it requires two DFTs of size $M \times M$ (extending the image with an antireflective padding). Therefore, in the case of a generic PSF, one LW iteration with AR-BCs has a computational cost of about one half that of the modified LW.

As a final comment, we note that although we have considered only the Landweber algorithm, all of the LS algorithms that make use of the matrices $\vec{A}$ and $\vec{A}^T$ can be similarly modified.

References

 

Copyright ESO 2005