1 Introduction

The wavelet transform has been extensively used in astronomical data analysis during the last ten years. A quick search with ADS shows that around 600 papers contain the keyword "wavelet" in their abstract, and all astrophysical domains are concerned, from the study of the Sun to the analysis of the CMB.

This success of the wavelet transform (WT) is due to the fact that astronomical data generally present complex hierarchical structures, often described as fractals. Multiscale approaches such as the WT decompose an image into components at different scales, which makes the WT well adapted to the study of astronomical data.

A series of recent papers (Candès & Donoho 1999d; Candès & Donoho 1999c), however, argued that wavelets and related classical multiresolution ideas play with a limited dictionary made up of roughly isotropic elements occurring at all scales and locations. We view it as a limitation that these dictionaries contain no highly anisotropic elements and offer only a fixed number of directional elements, independent of scale. Despite the success of the classical wavelet viewpoint, there are objects, e.g. images, that do not exhibit isotropic scaling and thus call for other kinds of multiscale representation. In short, the theme of this line of research is to show that classical multiresolution ideas address only a portion of the whole range of interesting multiscale phenomena, and that there is an opportunity to develop a whole new range of multiscale transforms.

Following on this theme, Candès & Donoho introduced new multiscale systems such as curvelets (Candès & Donoho 1999c) and ridgelets (Candès 1999), which are very different from wavelet-like systems. Curvelets and ridgelets take the form of basis elements which exhibit very high directional sensitivity and are highly anisotropic. In two dimensions, for instance, curvelets are localized along curves, in three dimensions along sheets, etc. Continuing at this informal level of discussion, we rely on an example to illustrate the fundamental difference between the wavelet and ridgelet approaches, postponing the mathematical description of these new systems.


Figure 1: Top left: original image containing a vertical band embedded in white noise of relatively large amplitude. Top right: signal obtained by integrating the image intensity over columns. Bottom left: image reconstructed from the undecimated wavelet coefficients. Bottom right: image reconstructed from the ridgelet coefficients.

Consider an image which contains a vertical band embedded in white noise with relatively large amplitude. Figure 1 (top left) represents such an image. The parameters are as follows: the width of the band is 20 pixels and the SNR is set to 0.1. Note that it is not possible to distinguish the band by eye. The undecimated wavelet transform is also incapable of detecting the presence of this object; roughly speaking, wavelet coefficients correspond to averages over approximately isotropic neighborhoods (at different scales), and those wavelets clearly do not correlate well with the very elongated structure of the object to be detected.
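To make the setup concrete, here is a minimal sketch of this experiment (not the authors' code): a 20 pixel wide vertical band buried in white noise at SNR = 0.1, followed by naive hard thresholding of an undecimated (stationary) wavelet transform using PyWavelets. The image size, the db4 wavelet, the number of scales and the 3-sigma threshold are illustrative assumptions.

import numpy as np
import pywt

rng = np.random.default_rng(0)
n, width = 256, 20                 # image size (an assumption) and band width in pixels
amplitude, sigma = 1.0, 10.0       # band intensity and noise std, i.e. SNR = 0.1

image = sigma * rng.standard_normal((n, n))
image[:, (n - width) // 2 : (n + width) // 2] += amplitude   # vertical band

# Undecimated (stationary) wavelet transform, hard thresholding of the details, inverse.
coeffs = pywt.swt2(image, "db4", level=4)
thresholded = [
    (approx, tuple(pywt.threshold(d, 3 * sigma, mode="hard") for d in details))
    for approx, details in coeffs
]
recovered = pywt.iswt2(thresholded, "db4")
# The band amplitude is far below the noise, so the wavelet coefficients it
# produces are small and the thresholding removes them: the band stays invisible.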

We now turn our attention towards procedures of a very different nature which are based on line measurements. To be more specific, consider an ideal procedure which consists in integrating the image intensity over columns, that is, along the orientation of our object. We use the adjective "ideal" to emphasize the important fact that this method of integration requires a priori knowledge about the structure of our object. This method of analysis of course gives an improved signal-to-noise ratio, because the linear functional correlates much better with the object in question; see the top right panel of Fig. 1.
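The gain brought by this ideal integration is easy to quantify; the following sketch repeats the assumptions of the previous one (256 x 256 image, per-pixel SNR of 0.1).

import numpy as np

rng = np.random.default_rng(0)
n, width, amplitude, sigma = 256, 20, 1.0, 10.0       # per-pixel SNR = 0.1
image = sigma * rng.standard_normal((n, n))
image[:, (n - width) // 2 : (n + width) // 2] += amplitude

profile = image.sum(axis=0)        # one line integral per column (Fig. 1, top right)

# Summing n pixels multiplies the band signal by n but the noise std only by
# sqrt(n), so the SNR of the column profile is sqrt(n) times the per-pixel SNR.
profile_snr = (n * amplitude) / (sigma * np.sqrt(n))  # 0.1 * sqrt(256) = 1.6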

This example will make our point. Unlike wavelet transforms, the ridgelet transform processes data by first computing integrals over lines with all kinds of orientations and locations. We will explain in the next section how the ridgelet transform further processes those line integrals. For now, we apply naive thresholding of the ridgelet coefficients and "invert" the ridgelet transform; the bottom right panel of Fig. 1 shows the reconstructed image. The qualitative difference with the wavelet approach is striking. We observe that this method allows the detection of our object even in situations where the noise level (standard deviation of the white noise) is five times larger than the object intensity.
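As an illustration only, and not the paper's implementation, the recipe just described can be mimicked with off-the-shelf tools: line integrals over many orientations via a Radon transform (scikit-image), a 1-D wavelet transform of each projection (PyWavelets), naive hard thresholding, and a return to the image domain; the inversion here is scikit-image's filtered back-projection rather than the exact reconstruction used for ridgelets. The angle grid, the db4 wavelet and the threshold are assumptions.

import numpy as np
import pywt
from skimage.transform import radon, iradon

def ridgelet_like_denoise(image, threshold, wavelet="db4"):
    theta = np.arange(180.0)                            # projection angles in degrees
    sinogram = radon(image, theta=theta, circle=False)  # line integrals for every (angle, offset)
    cleaned = np.empty_like(sinogram)
    for j in range(len(theta)):                         # 1-D wavelet transform of each projection
        coeffs = pywt.wavedec(sinogram[:, j], wavelet)
        coeffs = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]
        cleaned[:, j] = pywt.waverec(coeffs, wavelet)[: sinogram.shape[0]]
    return iradon(cleaned, theta=theta, circle=False, output_size=image.shape[0])

# Each line integral sums roughly n pixels, so its noise std is about sigma * sqrt(n);
# a 3-sigma-style threshold at that level is a plausible (assumed) choice, e.g.
# recovered = ridgelet_like_denoise(image, threshold=3 * sigma * np.sqrt(n))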

The contrasting behavior of the ridgelet and the wavelet transforms will be one of the main themes of this paper, which is organized as follows. We first briefly review some basic ideas about ridgelet and curvelet representations in the continuum. In parallel with a previous article (Starck et al. 2002), Sect. 2 rapidly outlines a possible implementation strategy. Sections 3 and 4 present how to use the curvelet transform for image denoising and image enhancement, respectively.

We finally develop an approach which combines the wavelet and curvelet transforms, and search for a decomposition which is the solution of an optimization problem in this joint representation.


Figure 2: A few ridgelets.

