A&A 398, 1185-1193 (2003)
DOI: 10.1051/0004-6361:20021687

A wavelet packets equalization technique to reveal the multiple spatial-scale nature of coronal structures

G. Stenborg 1 - P. J. Cobelli 2

1 - Max-Planck-Institute für Aeronomie (MPAe), Katlenburg-Lindau, Germany
2 - Universidad de Buenos Aires, Argentina

Received 28 August 2002 / Accepted 12 November 2002

Precise determination of the onset times of dynamical events observed by coronagraphs requires high-accuracy tracking of the structures involved. In particular, the early steps of the evolution turn out to be of utmost importance. However, the corresponding features often lack the sharpness needed to obtain unambiguous results. To overcome this constraint, we developed a multiresolution image processing technique, applicable to any coronagraph data set, that enhances both the boundaries and the internal details of originally faint and diffuse structures. The method employs a multi-level decomposition scheme (the splitting algorithm of a wavelet packet on non-orthogonal wavelets) via the à trous wavelet transform, local noise reduction, and interactive weighted recomposition. This approach represents a major advance towards unambiguous image interpretation and provides a means for the quantification of the stationary and dynamic coronal structures required for conducting morphological studies. A range of example applications based on LASCO-C1, -C2, and -C3 data sets is shown, and different reconstruction strategies are discussed.

Key words: methods: data analysis - methods: numerical - techniques: image processing - Sun: corona - Sun: coronal mass ejections (CMEs)

1 Introduction

During the last few years much new knowledge has been gained regarding the empirical and physical relationships among coronal mass ejections (CMEs), eruptive prominences, and solar flares. However, in all cases, detailed physics of the i) magnetic field topology of the involved structures, ii) eruption of the initial quiescent structure (if any), iii) triggering causes of the flare (if any), and iv) subsequent evolution of the mass bulk and associated magnetic cloud (if any) represents a major gap in the understanding of these phenomena (Chen 1997). Despite these deficiencies, our current physical picture enables us to comprehend qualitatively the different stages in the evolution of a coronal structure, from its first emergence in the solar atmosphere, in terms of a complementary interplay among resistive MHD turbulence, ideal ordered MHD flows, and magnetic helicity (Low 1996).

In fact, flares, eruptive prominences, and CMEs, as well as the heating and acceleration of the structures involved, do not take place arbitrarily, but are related by the basic processes mentioned above to specific stages of their evolution. In this sense, onset-time studies based on observations of transients' evolution can provide the key to establishing the causality between two or more events, and guide theoretical efforts. As stated by Gosling (1993): "cause and effect lie at the heart of the science of solar-terrestrial physics''.

Onset time analysis was first used by Harrison (1986, 1991) to study the flare-CME causality relationship. In his work, CMEs and their associated flares, observed by the Solar Maximum Mission (SMM) instruments, were examined. It was found that in all studied cases of flare-associated CMEs, the flare began about 20 min after the CME had erupted. These results were confirmed by Hundhausen (1988), who combined SMM observations with those of the Mauna Loa coronagraph to bring the field of view down to 1.2 $R_{\odot}$, reducing the uncertainty in the observed onset times of CMEs. In all such cases, the CME is observed to precede the associated flare by minutes to an hour. From these measurements it would be reasonable to conclude that the liberated flare energy did not play a role in the acceleration of the CME. Further comparison of simultaneous observations in soft X-rays and in white light from ground-based coronagraphs (Hiei et al. 1994) not only confirmed these results but also suggested that the CME could be largely an ideal, non-resistive MHD process not involving sudden intensive dissipative heating of the coronal plasma, which only takes place in the post-CME flare. However, recent Solar and Heliospheric Observatory (SOHO) observations showed that the CME and the flare are associated but can occur at any time within several tens of minutes of one another (Zhang et al. 2001). This experimental evidence strongly supports the view that the flare does not drive the CME onset, nor vice versa.

It is worth noting that this controversy arises from the nature of the techniques applied to estimate the onset times. The CME onset time, computed by extrapolation using the observations above the occulting disk, is randomly located within windows tens of minutes wide around the flare onset time (Harrison 1995).

A reliable technique for the precise determination of onset times requires high-accuracy tracking of close-to-limb coronal structures. Such tracking would be straightforward by means of cross-correlation of a particular sub-region in consecutive images, provided sharp and fixed features are present in the sub-regions considered. However, these features often lack the sharpness necessary to obtain positive, narrowly distributed results.

As an additional drawback, the relatively long exposure times necessary for the observation of coronal lines accentuate these effects through the smearing of moving structures during the evolution of very fast dynamical events. Let v be the plane-of-sky speed of a moving coronal feature and $\tau_{\rm e}$ a time lapse of the order of magnitude of the typical exposure time. Given the spatial resolution of the instrument, $r_{\rm s}$, the movement extends over

$\displaystyle d = \frac{v \: \tau_{\rm e}}{r_{\rm s} \cdot 700~{\rm km}}~{\rm pixels},$

where $r_{\rm s}$ is expressed in arcsec/pixel and 700 km is the approximate projected size of one arcsecond at the Sun.

For example, for a radially outward moving feature with 800 km s-1 plane-of-sky velocity at equatorial latitudes recorded in a 20 s exposure by a coronagraph with an angular extent of 5.6 arcsec/pixel, the feature will spread along approximately 4 pixels. Hence its leading edge will lack the necessary contrast for reliable tracking. Moreover, thin faint features may not be discernible.
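The worked example above can be verified with a short computation (a sketch; the function name and the 700 km per arcsec plate-scale constant are taken from the formula above):

```python
# Smearing of a moving feature during one exposure:
# d = v * tau_e / (r_s * 700 km), with r_s in arcsec/pixel.

KM_PER_ARCSEC = 700.0  # approximate projected size of 1 arcsec at the Sun

def smear_pixels(v_km_s, tau_e_s, r_s_arcsec_per_pix):
    """Plane-of-sky displacement, in pixels, during one exposure."""
    return v_km_s * tau_e_s / (r_s_arcsec_per_pix * KM_PER_ARCSEC)

# Worked example from the text: 800 km/s, 20 s exposure, 5.6 arcsec/pixel
print(round(smear_pixels(800.0, 20.0, 5.6), 2))  # prints 4.08, i.e. ~4 pixels
```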

Consequently, precise identification of diffuse coronal structures turns out to be a fundamental step in the onset time analysis scheme. This task consists in generating enhanced images in which both the boundaries and the internal structural details of originally faint features are resolved.

In order to allow a more accurate determination of the onset time, we present a general digital image processing technique, applicable to any coronagraph data set, to enhance low signal-to-noise ratio (S/N) images, aimed at the detection and reconstruction of diffuse close-to-limb magnetic field structures (such as loops, arcades, internal structures of CMEs, and the like) immersed in a noisy background. The method is based on the splitting algorithm of wavelet-packet analysis (Wickerhauser 1991; Chui 1992) and employs a 2D version of the à trous algorithm (Holschneider & Tchamitchian 1990; Shensa 1992). This technique constitutes a major advance towards unambiguous image interpretation and provides a means for the quantification of the stationary and dynamic coronal structures required for conducting morphological studies.

The organization of the paper is as follows. The next section examines those properties of the wavelet transform relevant to our scheme and presents the 2D à trous algorithm employed. The multiple-level decomposition technique is then described in detail, followed by the reconstruction of structures. Section 3 shows application examples on coronal structures observed by the Large Angle and Spectroscopic Coronagraph (LASCO) C1, C2, and C3 (Brueckner et al. 1995) onboard SOHO. A summary and conclusions are given in Sect. 4.

2 Description of the method

In this section we discuss in detail the digital processing technique developed to reveal the multiple spatial-scale nature of solar coronal structures in coronagraph images. In the first three subsections we address the key issues that constitute the building blocks of our technique. Then, their joint operation is described.

2.1 Wavelet transforms and multiresolution

The standard Fourier transform decomposes a signal into non-local sine and cosine components of different frequencies. The windowed Fourier transform represents an advance towards the aim of spatial localization. In this scheme, a signal is chopped into sections for separate analysis, and this so-called "windowing'' is accomplished by translating a compactly supported weight function. This type of transformation gives information in both the time and frequency domains. However, as the weight function is translated, its window size remains constant, i.e., time-widths are not adapted to frequency.

The wavelet transform acts similarly, but instead of non-local functions it uses spatially localized functions $\psi_{a,b}(x)$ called wavelets. These wavelets are constructed by translation and dilation of a mother wavelet $\psi(x)$ according to

$\displaystyle \psi_{a,b}(x) = \frac{1}{\sqrt{a}} \: \psi\left( \frac{x-b}{a} \right) \quad (a \neq 0)$     (1)

where a > 0 is the scale parameter (playing the role of a frequency), and b is the position parameter. This set of parameters (a,b) describes a point in the scale-space plane. Wavelets transform sampled data to the time-scale domain, and signals are analysed by dilations and compressions of this mother wavelet (analysing wavelets are therefore adapted to frequency), which is translated over the entire signal's domain.

The definition of the continuous wavelet transform $W_{\rm c}f(a,b)$ for a 1D signal $f(x) \in L^2(\Re)$ is (Grossmann & Morlet 1984)

$\displaystyle W_{\rm c}f(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(x) \: \psi^*\left( \frac{x-b}{a} \right) {\rm d}x.$     (2)

The transform is characterized by the following three properties:

it is a linear transformation,
$\displaystyle W_{\rm c}\left[f+g \right](a,b) = W_{\rm c}f(a,b) + W_{\rm c}g(a,b);$     (3)

it is covariant under translations,
$\displaystyle f(x) \longrightarrow f(x-u): \quad W_{\rm c}f(a,b) \longrightarrow W_{\rm c}f(a,b-u);$     (4)

and it is covariant under dilations,
$\displaystyle f(x) \longrightarrow f(sx): \quad W_{\rm c}f(a,b) \longrightarrow s^{-1/2} \: W_{\rm c}f(sa,sb).$     (5)

This last property allows for the analysis of hierarchical structures.

The transform has been shown to be invertible, so that f(x) can be recovered by evaluating the double integral:

$\displaystyle f(x) = \frac{1}{\Delta} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{\rm c}f(a,b) \: \psi_{a,b}(x) \: \frac{{\rm d}a \: {\rm d}b}{a^2}\cdot$     (6)

For simplicity, the continuous transform was shown, although for practical applications the continuous set of parameters (a,b) must be discretized.

From Eqs. (2) and (6), it is seen that, unlike the Fourier transform, the wavelet transform is not its own inverse. This implies that we can transform an image using wavelets and consider each wavelet plane as another image that can be further decomposed at each transformation. We will come back to this point in Sect. 2.3, where the multiple-level decomposition scheme is described.

2.2 The 2D-à trous wavelet transform

Our discrete approach to the wavelet transform was implemented using a 2D version of the so-called à trous algorithm (Holschneider & Tchamitchian 1990; Shensa 1992). We consider the sampled data, $\{c_0(k,l)\}$, assumed to be the scalar products at pixels (k,l) of the function f(x,y) with a scaling function $\phi(x,y)$ corresponding to a low pass filter, such that

$\displaystyle c_0(k,l) = \langle f(x,y), \phi(x-k,y-l) \rangle.$     (7)

The scaling function is chosen to satisfy
$\displaystyle \frac{1}{4} \: \phi\left(\frac{x}{2},\frac{y}{2}\right) = \sum_{m,n} \: h(m,n) \: \phi(x-m,y-n)$     (8)

where h is a discrete low pass filter associated with the scaling function $\phi(x,y)$.

The smoothed data $c_i(k,l)$ at a given resolution i and at a position (k,l) is the scalar product

$\displaystyle c_i(k,l) = 4^{-i} \: \left\langle f(x,y), \phi\left(\frac{x-k}{2^i},\frac{y-l}{2^i}\right) \right\rangle$     (9)

which is obtained computing the convolution
$\displaystyle c_i(k,l) = \sum_{m,n} \: h(m,n) \: c_{i-1}(k+2^{i-1}m, l+2^{i-1}n).$     (10)

The discrete wavelet transform at resolution level i is given by the difference between two consecutive resolutions,

$\displaystyle w_i(k,l) = c_{i-1}(k,l) - c_i(k,l).$     (11)

Thus, the à trous wavelet transform of an image produces a set $\{w_i\}$ of resolution-related views of the image, called wavelet scales. At each scale, the wavelet plane $\{w_i(k,l)\}$ has the same number of pixels as the image.

A series expansion of the original image, $c_0$, provides an expression from which it can be recomposed trivially as the sum of all the wavelet planes and the smoothed array $c_p$:

$\displaystyle c_0(k,l) = c_p(k,l) + \sum_{i=1}^{p} w_i(k,l).$     (12)

We used a B3-spline for the scaling function, which leads to a convolution with the $5 \times 5$ bi-dimensional mask

$\displaystyle \frac{1}{256} \left( \begin{array}{ccccc} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{array} \right).$

This choice is based on several reasons. In wavelet theory, splines lead to the only wavelets admitting a closed-form (piecewise polynomial) formula; all other wavelet bases are defined indirectly by an infinite recursion (or by an infinite product in the Fourier domain). Because of this piecewise polynomial character, they are smooth and well-behaved functions: splines of degree n are (n-1) times continuously differentiable. As a result, splines have excellent approximation properties. The primary reason for working with basic splines (commonly termed "B-splines'') is that they are compactly supported; in particular, B-splines are the shortest and smoothest possible polynomial splines with an approximation order of (n+1). This short support is a key consideration for computational efficiency. Among the family of polynomial splines, we chose cubic splines (B3-splines) because they exhibit a minimum curvature property, which translates into a lesser tendency to oscillate.
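As a side note, the $5 \times 5$ mask is separable: it is the outer product of the 1D B3-spline filter (1, 4, 6, 4, 1)/16 with itself, so it can be applied as two 1D convolutions. A minimal numpy check (a sketch; variable names are ours):

```python
import numpy as np

# 1D B3-spline low-pass filter associated with the scaling function
h1d = np.array([1, 4, 6, 4, 1]) / 16.0

# The 2D mask is the outer product of the 1D filter with itself
h2d = np.outer(h1d, h1d)

print((h2d * 256).astype(int))  # the integer matrix of the text, over 256
print(h2d.sum())                # the filter is normalized: sums to 1.0
```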

Extensive literature exists on splines and their applications. The reader is referred to Unser (1999) and references therein, for a general introductory treatise on splines and their applications to image processing.

For completeness, we reproduce the algorithm derived by Murtagh et al. (1995) to compute the associated wavelet transform:

1. Initialize i to 0, starting with the data $c_0(k,l)$.
2. Increment i by 1, and perform a discrete convolution of the data $c_{i-1}(k,l)$ using the filter h based on B3-spline interpolation.
3. Obtain the discrete wavelet transform from the difference $w_i(k,l) = c_{i-1}(k,l) - c_i(k,l)$.
4. If i is less than the number p of scales to be computed, go back to step 2.
5. The set $\mathcal{W} = \{w_1,\ldots,w_p,c_p \}$ is the wavelet transform of the data, composed of p wavelet scales and a smoothed array.
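The algorithm above can be sketched in a few lines of numpy/scipy. This is an illustrative implementation under our own assumptions (separable B3-spline filter dilated with holes at each scale, and mirror boundary handling, which the paper does not specify):

```python
import numpy as np
from scipy.ndimage import convolve

def a_trous_transform(image, n_scales):
    """2D a trous wavelet transform: returns [w_1, ..., w_p, c_p]."""
    h1d = np.array([1, 4, 6, 4, 1]) / 16.0
    c = image.astype(float)
    planes = []
    for i in range(1, n_scales + 1):
        # step 2: convolve with the B3-spline filter dilated by 2^(i-1),
        # i.e. with 2^(i-1) - 1 zeros ("holes") between the filter taps
        step = 2 ** (i - 1)
        k1d = np.zeros(4 * step + 1)
        k1d[::step] = h1d
        c_next = convolve(c, np.outer(k1d, k1d), mode='mirror')
        # step 3: the wavelet scale is the difference of consecutive resolutions
        planes.append(c - c_next)
        c = c_next
    planes.append(c)  # smoothed array c_p
    return planes

# Recomposition (Eq. 12) is trivial: all planes sum back to the original image
img = np.random.default_rng(0).random((64, 64))
W = a_trous_transform(img, 4)
print(np.allclose(sum(W), img))  # True
```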
The à trous wavelet transform is non-orthogonal and furnishes a highly redundant data set. As opposed to the standard (decimated) wavelet transform, this transform is isotropic and its results are significantly better at high noise levels. This is the reason for its success with astronomical images, where objects are diffuse and more or less isotropic.

2.3 Multiple-level wavelet decomposition

When a signal is decomposed into wavelet scales, its contents are partitioned into consecutive frequency bands, or octaves. The first wavelet planes contain the higher frequency components, while lower frequency signatures appear towards the last scales and in the smoothed array, i.e., the continuum.

Fluctuations due to time-varying background and those introduced by noise are not of interest for data restoration as they do not contain relevant information. Moreover, their contribution must be reduced to a minimum in order to attain precise detection of structures. In the case of non-orthogonal wavelets, the signal-to-noise ratio increases towards the coarser scales, noise being predominantly concentrated at the finer scales. Thus, for a sufficiently high noise level, the signal contained in high frequency scales is completely drowned by noise.

Straightforward filtering of the wavelet coefficients at this early stage in the decomposition leads either to the rejection of the signal along with noise or, in the opposite case, to such a high noise level that the signal is still unrecognizable.

However, as we have seen in Sect. 2.1, each scale may be considered to be another base-level image whose quality is to be enhanced. The splitting algorithm of a wavelet-packet analysis (Wickerhauser 1991) is based on this fact and provides a finer analysis of the frequency content of transients at a given scale. Wavelet packets have the capability of partitioning the higher-frequency octaves to yield better frequency localization. This is done by recursively decomposing (transforming) wavelet scales to give rise to second, third, and any desired level of multiple decomposition. So, instead of truncating the coefficients of the finest scales, these are transformed once again and the truncation is performed on the coefficients of a higher-level decomposition. Generally, this strategy allows for the restoration of signal's features at scales of lower levels, originally dominated by noise. This is particularly well suited for higher-frequency scales whereas coarser ones, characterized by larger SNR, can be filtered directly at lower levels of decomposition without affecting the information content significantly. Moreover, due to the redundancy of the à trous wavelet transform, the main features of the original signal can be recognized even after several decompositions of the noisiest scales have been made.

The 1D variant of this multiple-level decomposition technique was implemented by Fligge & Solanki (1997) for noise reduction in astronomical spectra.

Figure 1: Multiple-level decomposition labelling scheme: 3-level decomposition tree. For clarity, only one branch is shown in each decomposition level, but it is assumed that when computing a new level all coefficients of the previous one are decomposed.

A few words are in order regarding notation. The first-level decomposition of an image $c_0$ in $p_0$ scales gives rise to the set of wavelet scales $\{ w_i^{(0)} \}$, where $i=1,2,\ldots,p_0$ labels the wavelet planes. For notation purposes, the i=0 subscript in every $w_0^{(\ldots)}$ is used to identify the continuum component at each decomposition (previously termed $c_p$); it is not a wavelet scale and must not be confused with any of them.

We can further decompose the first coefficient, $w_1^{(0)}$, in $p_{0,1}$ wavelet scales. This leads to the tree structure shown in Fig. 1. A new level decomposition of $w_n^{(0,1)}$ in $p_{0,1,n}$ scales produces one of the possible third-level decomposition sets: $\{ w_0^{(0,1,n)}, w_1^{(0,1,n)},\ldots,w_{p_{0,1,n}}^{(0,1,n)} \}$.

Note that, using this notation, and given a wavelet coefficient (or continuum), its corresponding decomposition level is equal to its number of superscripts. Furthermore, for a given level of decomposition, the expansion (recomposition) formula is straightforward; e.g., for 2-level decomposition:

$\displaystyle c_0 = \sum_{i=0}^{p_0} \sum_{j=0}^{p_{0,i}} \: w_j^{(0,i)}$     (13)

provided summation limits are specified.
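The bookkeeping of this two-level scheme, and the fact that the double sum recovers $c_0$, can be sketched as follows. For brevity, a Gaussian smoothing stands in for the à trous filter (our assumption; the telescoping-sum property holds for any such linear split):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(data, n_scales):
    """Toy multiresolution split: successive smoothings, differences, residual.
    (Stands in for the a trous transform; any linear telescoping split works.)"""
    c, planes = data.astype(float), []
    for i in range(1, n_scales + 1):
        c_next = gaussian_filter(c, sigma=2 ** i)
        planes.append(c - c_next)
        c = c_next
    planes.append(c)  # continuum, w_0 in the paper's notation
    return planes

img = np.random.default_rng(0).random((32, 32))
level1 = decompose(img, 3)                  # w_1..w_3 plus continuum
level2 = [decompose(w, 2) for w in level1]  # split every coefficient again

# The double sum over all second-level pieces recovers c_0
total = sum(piece for branch in level2 for piece in branch)
print(np.allclose(total, img))  # True
```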

2.4 Reconstruction of structures

The proposed technique for the reconstruction of structures in coronagraph images consists of a 2-level decomposition via the à trous wavelet transform, followed by local noise reduction and weighted recomposition. In what follows, the articulation of all these elements is explained in detail.

Noise is inherent to any physical measurement process. For CCD detectors, Gaussian and Poisson noise distributions are the most important. In particular, we consider the case of Gaussian distributed noise. However, if the noise in the original data I(x,y) follows a Poisson distribution, the transform

$\displaystyle t\left[ I(x,y) \right] = 2 \: \sqrt{I(x,y) + \frac{3}{8}}$     (14)

acts as if the data arose from a Gaussian white noise model with $\sigma =1$, under the assumption that the mean value of I is large (Anscombe 1948). In addition, the Anscombe transform has been extended to also account for additive Gaussian readout noise (Murtagh et al. 1995).
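The variance-stabilizing behaviour of Eq. (14) is easy to check numerically (a sketch; the sample sizes and means below are arbitrary choices of ours):

```python
import numpy as np

def anscombe(I):
    """Variance-stabilizing transform: Poisson counts -> sigma approx. 1."""
    return 2.0 * np.sqrt(I + 3.0 / 8.0)

rng = np.random.default_rng(0)
for mean in (20, 100, 1000):  # "large mean" assumption of the text
    counts = rng.poisson(mean, size=200_000)
    print(mean, round(float(anscombe(counts).std()), 3))  # all close to 1.0
```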

Noise reduction can be accomplished by filtering in the wavelet domain: those wavelet coefficients which are significantly non-zero against the noisy background are identified and retained, while all others are rejected (Donoho & Johnstone 1994). In our scheme, this is implemented by local hard-thresholding, according to:

$\displaystyle W_m^{(\ldots)}(k,l) = \left\{ \begin{array}{ll} 0 & {\rm if~} \vert w_m^{(\ldots)}(k,l) \vert < k \: \sigma_m^{(\ldots)}(k,l) \\ w_m^{(\ldots)}(k,l) & {\rm if~} \vert w_m^{(\ldots)}(k,l) \vert \geq k \: \sigma_m^{(\ldots)}(k,l) \end{array} \right.$     (15)

where $w_m^{(\ldots)}(k,l)$ and $W_m^{(\ldots)}(k,l)$ are the wavelet and filtered wavelet coefficients of the noisy image, respectively, both computed at pixel position (k,l) of scale m. The factor k is an adjustable parameter defining the confidence level of the retained coefficients. In general we set k=3, which results in a confidence level of 99.7%.
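The thresholding rule of Eq. (15) amounts to a per-pixel mask. A sketch, where `sigma_local` stands for the local noise map $\sigma_m^{(\ldots)}(k,l)$ and `k_sigma` for the factor k (both names are ours):

```python
import numpy as np

def hard_threshold(w, sigma_local, k_sigma=3.0):
    """Keep coefficients significant against local noise, zero the rest."""
    mask = np.abs(w) >= k_sigma * sigma_local
    return np.where(mask, w, 0.0)

# toy check: a strong isolated feature survives, weak fluctuations do not
w = np.array([[0.5, -0.2, 10.0],
              [0.1,  0.3, -0.4]])
sigma = np.full_like(w, 1.0)
print(hard_threshold(w, sigma))  # only the 10.0 coefficient remains
```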

We define $\sigma_m^{(\ldots)}(k,l)$ as the local standard deviation around pixel (k,l) of wavelet scale m at a given decomposition level. This value is assessed from the standard deviation of the noise $\sigma_{\rm I}(k,l)$ in the original image and from the examination of the noise progression in wavelet space.

Since we are dealing with non-orthogonal wavelets, we need to know the standard deviation of noise at each scale and level of decomposition. Due to the linearity of the wavelet transform (Eq. (3)), this knowledge is gained by simulating an image containing noise with a unitary standard deviation and taking the wavelet transform of this image. Then the standard deviation of noise at scale j of a given decomposition level, $\hat{\sigma}^{(\ldots)}_j$, of this simulated image is computed. The values thus obtained describe the behaviour of the noise in the multiple-level wavelet space.

In contrast, the standard deviation of the noise in the original image is derived differently. Given a coronagraph image, we take its first-level à trous wavelet transform. From the first wavelet scale (highest frequency in this decomposition level), the local noise variance is determined in the following way. For a fixed pixel position, say (k,l), it is calculated by taking its $N\times N$ neighbouring pixels given by the cartesian product $[k,k+N]\times[l,l+N]$ and computing their standard deviation. This value is stored in an array at its corresponding position, i.e., (k,l). When this operation is extended to cover all pixels, the resulting data array (of the same size as the array from which it is calculated) constitutes the local standard deviation $\sigma_{\rm I}(k,l)$ of the noise in the original image. N is a freely adjustable parameter, which determines the degree of localness of the estimation relative to the full image size. N=5 was used for processing the example images shown in Sect. 3.
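A sliding-window standard deviation of this kind can be sketched with `scipy.ndimage.uniform_filter`, using the identity Var = E[x²] − E[x]² per window (our implementation choice; boundary handling and window centering are assumptions not fixed by the text):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(plane, N=5):
    """Per-pixel standard deviation over an N x N neighbourhood."""
    mean = uniform_filter(plane, size=N)
    mean_sq = uniform_filter(plane * plane, size=N)
    var = np.maximum(mean_sq - mean * mean, 0.0)  # guard tiny negatives
    return np.sqrt(var)

# sanity check on pure Gaussian noise with sigma = 2: the local estimate
# should scatter around 2 (slightly biased low for small windows)
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 2.0, size=(256, 256))
print(round(float(np.median(local_std(noise, N=5))), 2))
```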

Finally, properties of the wavelet transform enable us to express the local standard deviation of noise at scale m of a given decomposition level as:

$\displaystyle \sigma_m^{(\ldots)}(k,l) = \hat{\sigma}^{(\ldots)}_m \cdot \sigma_{\rm I}(k,l)$     (16)

i.e., the standard deviation of the noise of scale m at that same decomposition level multiplied by the standard deviation of the noise of the image.

The noise variance estimation technique we employed was first developed by Murtagh et al. (1995). The approach of local noise variance estimation is analogous to that employed by Starck et al. (1997) on the analysis of spectral lines.

For the reconstruction of structures, we employ an interactively weighted recomposition method. According to this scheme, the reconstructed image $\mathcal{I}$ results from the following recomposition formula

$\displaystyle \mathcal{I} = \sum_{i=0}^{p_0} \sum_{j=0}^{p_{0,i}} \: \alpha_{i,j} \: W_j^{(0,i)}$     (17)

$\alpha_{i,j}$ being the weight coefficient for the j second-level scale (or continuum if j=0) of the i wavelet scale (continuum if i=0) of the first decomposition. The set $\{ \alpha_{i,j} \}$ is specified interactively by the user. Each complete set of weight coefficients $\{ \alpha_{i,j} \}$ defines what we call a reconstruction strategy. This weighted reconstruction scheme closely resembles the process of frequency equalization of a signal.
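In code, the recomposition of Eq. (17) is a weighted double sum, and a reconstruction strategy is simply the matrix of weights. A sketch, where `W[i][j]` mirrors the paper's $W_j^{(0,i)}$ notation and the toy planes and weights are made-up values:

```python
import numpy as np

def recompose(W, alpha):
    """Weighted recomposition: I = sum_ij alpha[i][j] * W[i][j]."""
    out = np.zeros_like(W[0][0])
    for i, branch in enumerate(W):
        for j, plane in enumerate(branch):
            out += alpha[i][j] * plane
    return out

# toy example: 2 first-level branches, each split into 2 pieces
W = [[np.full((4, 4), 1.0), np.full((4, 4), 2.0)],
     [np.full((4, 4), 3.0), np.full((4, 4), 4.0)]]
alpha = [[1.0, 0.0],   # keep the first piece of branch 0, drop the rest
         [0.0, 4.0]]   # boost the last piece, as in an "equalization"
print(recompose(W, alpha)[0, 0])  # 1*1 + 4*4 = 17.0
```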

3 Applications

In this section four selected examples are shown. They illustrate the application of our technique to selected data sets from the Large Angle and Spectroscopic Coronagraph (LASCO) C1, C2, and C3 (Brueckner et al. 1995) to contrast-enhance structures such as coronal loops and features of dynamical events. LASCO-C1 images the inner emission corona from 1.1 to 3.0 $R_{\odot}$ with the help of a Fabry-Perot interferometer and a set of blocking interference filters. It is a mirror version of the classical Lyot coronagraph without external occulter, thus preserving full resolution across the whole field of view (1 pixel subtends 5.6 arcsec on the solar disk). On the other hand, the externally occulted coronagraphs LASCO-C2 and -C3 observe the extended white-light corona from 1.5 to 6.0 $R_{\odot}$ and from 3.7 to 30 $R_{\odot}$, respectively. The images used were the so-called level 0.5 images, i.e., raw images that have been rotated to put solar north at the top, without corrections for instrument response, stray light, etc. They are in units of DN (digital numbers, or counts). The background in LASCO-C2 and -C3 images accounts for the contributions of the static K-corona, the F-corona, and stray light. In some of the application cases the background was removed. The corresponding background model was created by taking the minimum, over a 4-week period close to the time of the observation to be reduced, of daily median images.

In each case, different reconstruction strategies are applied in order to illustrate the technique's adaptability to the specific types and characteristics of the structures under study.

3.1 Fe XIV green line loops in the inner corona

Figure 2: Fe XIV green line loops in the inner corona as seen by LASCO-C1 (level 0.5 with continuum removed) on May 21, 1998 at 14:33 UT (upper left). The other three frames show three different reconstruction schemes based on 8 first-level scales plus continuum, each scale further subdivided into 4 scales plus continuum.

In this case we seek a reconstruction strategy that gains resolution of diffuse loops in the inner corona. To that purpose we show the results of applying our technique to a LASCO-C1 Fe XIV green coronal line image at 530.3 nm (continuum subtracted) recorded on May 21, 1998 at 14:33 UT. A two-level decomposition consisting of 8 scales with 4 sub-scales each was used in the processing of this case study.

In Fig. 2 the original image (upper left corner) is exhibited along with three example reconstructions. The detailed reconstruction strategies employed in these cases are the following:

Upper right image:
The first-level continuum is included with unitary weight ($\alpha_{0,j} = 1$ for all $j=0,1,\ldots,4$) and the first-level scales 1 to 4 are assigned four times this weight ($\alpha_{i,j} = 4$ for i=1,2,3,4 and j=1,2,3,4). Other scales are rejected for this reconstruction.
Bottom left image:
All subscales are included with a weight of 4 and the first-level continuum is set to $\alpha_{0,j} = 1$ for all $j=0,1,\ldots,4$ (this reconstruction strategy resembles plain unsharp masking).
Bottom right image:
This reconstruction uses only first-level scales 2 to 4. Components outside this set are disregarded. Weight coefficients are given by $\alpha_{2,1}=10$, $\alpha_{2,2} = 7$, and $\alpha_{3,2} = 7$; all others are set to 4.
Note how the low-frequency component present in both the original image and the one in the opposite corner is suppressed in the other two images, thus giving greater contrast enhancement for close-to-limb structures. On the other hand, by highlighting the high-frequency components, the structures appear sharper (bottom right).

3.2 CME development in LASCO-C2 field-of-view

Figure 3: Time evolution of a CME on August 13, 2002 as observed by LASCO-C2 at 08:06 UT, 10:54 UT, 12:30 UT, and 13:54 UT (from top to bottom), the column on the left corresponding to the LASCO-C2 (level 0.5 with background removed) images. The other 3 columns correspond to different restoration processes based on 8 first-level scales plus continuum, each scale further subdivided into 3 scales plus continuum.

This application aims to exemplify how the proper choice of reconstruction strategy can reveal details hidden or not resolved in the original images. The selected event, as seen by LASCO-C2 on August 13, 2002 between 08:06 UT and 13:54 UT, relates to a typical 3-part CME (Gosling 1997, and references therein) in the NE quadrant. A detailed analysis of this event will be presented in a forthcoming paper (Stenborg et al. 2002). The level 0.5 images used for this example have the background removed. No cosmic-ray removal routine was applied, in order to highlight the enhancement of the star field as a side effect.

Figure 3 shows four stages of its evolution (from top to bottom). Each of the four rows represents a definite instant in time, while the columns (from left to right) show the original image and three selected reconstructions based on different strategies. In this way, the temporal evolution can be followed easily within each of the four columns. The original images were first decomposed into 8 scales (plus continuum), and a second-level decomposition into 3 sub-scales (plus continuum) followed.

The first reconstruction strategy (second column) is merely aimed at the sharpening of structures. It is equivalent to an unsharp masking obtained by giving every wavelet scale the same weight. The main difference from plain unsharp masking is that the continuum component corresponding to each second-decomposition-level scale was completely removed, i.e., $\alpha_{i,0} = 0$ for $i=1,2,\ldots,8$. It can be seen that boundaries are better discerned all across the field of view of the instrument.

Figure 4: Prominence on June 2, 1998 at 13:31 UT. Left panel: LASCO-C2 level 0.5 without background removal. Right panel: corresponding image constructed upon decomposition into 50 first-level scales plus continuum, all of them further decomposed into 3 sub-scales plus continuum, and reconstructed considering only the first 25 first-level scales. See text for details.

The second strategy (third column) aims at the enhancement of internal details, as can be noticed especially in the second and third rows. In this case, the continuum at the first decomposition level was removed ($\alpha_{0,j}=0$ for j=0,1,2,3), giving more weight to the high-frequency components. The twisted structure of a filament inside the CME, not discernible in the original images, is revealed with this reconstruction scheme, allowing it to be traced to the very edge of the field of view (last row). On the other hand, the streamer northwards of the CME, which is seen as being pushed away as the CME evolves (likely by an associated bow shock), is now clearly defined. Moreover, visual inspection of a movie made using this reconstruction revealed the high correlation between the streamer and the leading edge, confirmed by the null relative speed between them (Stenborg et al. 2002).

Finally, the last reconstruction strategy (fourth column) highlights minor details and is shown only to illustrate how the choice of scales affects the final result. Only the first-level continuum and the second-level highest frequencies were used, explicitly excluding their continuum components.

3.3 Prominence in LASCO-C2

In this example we test the technique's performance on level 0.5 images with no background removal, to assess its power when the background level closely matches that of the signal. The selected image corresponds to the well-known event that occurred on June 2, 1998 (see e.g. Plunkett et al. 2000) and was recorded at 13:31 UT. In Fig. 4 both the original level 0.5 image (left) and the wavelet-processed one (right) are shown.

The original image was transformed into 50 first-level wavelet scales plus continuum, each of which was further decomposed into 3 sub-scales. For the reconstruction, only the sub-scales belonging to the first 25 first-level scales were considered, excluding the continuum components.
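This selection can be written as a binary weight matrix: unity on sub-scales j = 1..3 of first-level components i = 1..25, zero elsewhere. The row/column layout below is an illustrative convention (row 0 for the first-level continuum, column 0 for the sub-continua), not notation from the paper:

```python
import numpy as np

# 50 first-level scales (+ continuum at row 0), 3 sub-scales
# (+ sub-continuum at column 0); keep only the sub-scales of the
# first 25 first-level scales, excluding all continuum components
n1, n2 = 50, 3
alpha = np.zeros((n1 + 1, n2 + 1))
alpha[1:26, 1:] = 1.0
```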

As can be seen in the figure, this reconstruction strategy proves very effective in defining the internal structure of the prominence, especially for the features close to the occulter. Although the structures are hardly recognizable in the original image, the processed image reveals their filamentary nature.

The reader can also observe that our technique revealed sharp edges inside the occulter area. These were deliberately left in place to make visible the detection of originally hidden features. Diffraction rings close to the occulter have been enhanced as well.

3.4 CME internal details enhancement on LASCO-C3

Figure 5: CME on July 4, 2002 at 00:42 UT. Left panel: LASCO-C3 level 0.5 image with background, cosmic rays, and star field removed. Right panel: processed image based on a decomposition into 8 first-level scales plus continuum, each scale further subdivided into 3 scales plus continuum. For details on the reconstruction strategy see text.

This last example illustrates the application of our algorithm to LASCO-C3 images. Despite the lower spatial resolution of the instrument compared to LASCO-C1 and -C2, internal details can be revealed provided the right reconstruction scheme is chosen. The image selected was taken on July 4, 2002 at 00:42 UT. Figure 5 shows both the original level 0.5 image (background, cosmic rays, and star field removed) and the reconstructed one. The original image was decomposed into 8 first-level wavelet planes plus continuum, each of them further transformed into 3 sub-scales plus continuum.

The reconstruction strategy was based upon stressing all wavelet scales with respect to the continuum (a kind of unsharp masking). As a compromise between reducing noise and increasing sharpness, we set $\alpha_{0,j} = 1$ for j=0,1,2,3; $\alpha_{1,1}$, $\alpha_{1,2}$, $\alpha_{2,1}$, $\alpha_{7,0}$, $\alpha_{8,0}$ and $\alpha_{8,3}$ were set to 3; the remaining weight coefficients were given twice this value.
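The quoted weight assignment can be collected into a single matrix. The layout (row i = 0 for the first-level continuum, column j = 0 for the sub-continua) is an illustrative convention, not notation from the paper:

```python
import numpy as np

# Rows i = 0..8 (first-level continuum + 8 scales), columns j = 0..3
alpha = np.full((9, 4), 6.0)        # default: twice the stressed value of 3
alpha[0, :] = 1.0                   # alpha_{0,j} = 1 for j = 0..3
for i, j in [(1, 1), (1, 2), (2, 1), (7, 0), (8, 0), (8, 3)]:
    alpha[i, j] = 3.0               # the six explicitly listed coefficients
```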

Even crude visual inspection shows that local gradients inside the CME structure are enhanced.

4 Summary and conclusions

Our multiresolution technique for the selective enhancement of specific spatial scales composing a given coronagraph image is based on the joint application of multi-level decomposition via the à trous wavelet transform, local noise reduction, and interactive weighted recomposition.

The key steps involved in its implementation can be summarized as follows:

1. Multi-level wavelet decomposition via the à trous algorithm.
2. Local noise variance estimation for each scale in the decomposition scheme.
3. Threshold determination based on statistical criteria (confidence levels).
4. Noise reduction by local hard-thresholding of the wavelet coefficients of each sub-scale.
5. Interactive weighted recomposition to reconstruct the structures.
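The noise-reduction steps (variance estimation, threshold determination, hard-thresholding) can be sketched per sub-scale as follows. The MAD-based global sigma estimate and the factor k = 3 are illustrative stand-ins for the paper's local, confidence-level-based criteria:

```python
import numpy as np

def hard_threshold(plane, k=3.0):
    """Zero the wavelet coefficients of one sub-scale whose absolute value
    falls below k * sigma.  Sigma is estimated robustly from the median
    absolute deviation (illustrative choice; the paper uses local noise
    variance estimates and statistical confidence levels instead)."""
    sigma = np.median(np.abs(plane)) / 0.6745   # MAD -> Gaussian sigma
    out = plane.copy()
    out[np.abs(out) < k * sigma] = 0.0          # hard thresholding
    return out
```

Hard thresholding keeps surviving coefficients unchanged, so the significant structure passes to the weighted recomposition without attenuation.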
Typical coronagraph images show coexisting structures exhibiting high and low intensities (a wide dynamic range). Our method's property of being highly localized (depending upon the value of N relative to the image size) allows them to be treated on an equal footing, without affecting each other. As a result, portions of the image with high intensity compared to the area of interest can be taken into account without compromising the quality of the reconstruction.

The examples described in Sect. 3 show the wide range of applicability and the potential of the technique for revealing structures that were unresolved or even hidden in the original images. Boundaries and internal features of structures can also be sharply defined, provided the right reconstruction strategy is chosen. The technique's adaptability to different instrument characteristics is also evidenced by the examples.

These complementary aspects of the technique constitute a major advance towards unambiguous image interpretation, fulfilling the requirements for conducting reliable morphological studies. The application of this technique jointly with a specially developed tool will be presented in Stenborg et al. (2002). Detection through selective enhancement and weighted reconstruction allows for subsequent tracking across the field of view and quantification of kinematical quantities of both pseudo-stationary and dynamic coronal structures.

The authors would like to point out that this approach confers detailed control over the restoration process, allowing the user to separately weight the contribution of each filtered wavelet component of the multiple-level decomposition scheme to the reconstruction. In particular, as the application examples have shown, different reconstruction strategies can be designed to treat each kind of coronal structure or dynamical event, in order to obtain a final image better suited to specific purposes.

Images are courtesy of the SOHO/LASCO consortium. SOHO is a project of international cooperation between ESA and NASA. The authors would like to express their gratitude to Marilena Mierla and Maria Hebe Cremades, both PhD students at the Max-Planck-Institut für Aeronomie (MPAe), for their help with the images.



Copyright ESO 2003