A&A 391, 339-351 (2002)
DOI: 10.1051/0004-6361:20020643

Application of wavelet analysis to transversal coronal loop oscillations

J. Ireland 1 - I. De Moortel 2


1 - L3 Communications Analytics Corporation, NASA Goddard Space Flight Center, Code 682.3, Building 26, Room G-1, Greenbelt, Maryland 20771, USA
2 - School of Mathematical and Computational Sciences, University of St. Andrews, St. Andrews, KY16 9SS, Scotland, UK

Received 24 January 2002 / Accepted 29 April 2002

Abstract
As yet there remain few examples of well observed transversal oscillations in coronal loops. Such oscillations have the potential to yield much information on the nature of the solar corona, as demonstrated by the analysis of Nakariakov et al. (1999) of a transversely oscillating loop observed in the TRACE 171 Å passband on 14 July 1998. Their analysis extracts a decaying loop oscillation signal from the data which is then considered in the light of the substantial body of theoretically and computationally derived knowledge of the dynamics of coronal loops. The analysis presented in this paper approaches the reduction of the same dataset using wavelet techniques described by De Moortel & Hood (2000) and De Moortel et al. (2002). These authors show that the value of the decay exponent N in a decaying oscillating time series of the form $\exp(-kt^{N})$ is measurable from a wavelet transform of the time series (for some decay constant k and time t). The application of these techniques shows that the value of the decay exponent in the 14 July 1998 event is not well determined by the data, i.e., the associated error is very large. Since the value of the decay exponent implies the presence of particular decay mechanisms and not others, the large error associated with the exponent value implies that a wide range of mechanisms should be considered when discussing the physics behind this event. Comments are also made on the time dependence of the oscillation wavelet scale. Two additional examples of transversal coronal loop oscillations are also analysed.

Key words: Sun: activity - Sun: corona - Methods: data analysis


  
1 Introduction

It is an observational fact that the Sun's outlying corona is very much hotter than the underlying solar surface. This raises the question, what heats the solar corona? At present, the exact nature of the coronal heating mechanism remains a mystery.

A fundamental problem any mechanism must overcome to heat the corona is the theoretically expected high value of the Lundquist number. This implies that for energy to be efficiently dissipated in timescales short enough to produce the observed heating of the corona, it must be dissipated at very small length scales. Many mechanisms have been suggested (Mandrini et al. 2000; Aschwanden 2001; Narain & Ulmschneider 1996; Narain & Ulmschneider 1990) that, in one way or another, transfer energy from longer lengthscales at which significant energy appears to be injected, to shorter lengthscales where energy can be dissipated.

This has been the overwhelmingly popular approach to the coronal heating problem in the recent past. Much effort has gone into sidestepping the high Lundquist number problem in order to get energy down to the lengthscales at which it can be efficiently dissipated. However, a recent analysis by Nakariakov et al. (1999) has shown that the Lundquist number may be substantially smaller - by 7 or 8 orders of magnitude - than previously thought. These authors showed that a flare seen in the TRACE 171 Å passband (corresponding to around 1 MK) on 14 July 1998 in active region AR8270 triggered an oscillatory event in a loop anchored in a nearby active region that may be magnetically connected to the flaring region.

Nakariakov et al. (1999) analyse the oscillation of this loop. It is assumed that the loop oscillation follows the form

 \begin{displaymath}\exp(-kt)\sin(\omega t + \phi)
\end{displaymath} (1)

for fitted values of decay constant k, angular frequency $\omega$ and phase $\phi$. The assumed decay mechanism behind the observed dynamics is the decay of the fundamental global kink oscillation mode of the loop. Application of the relevant theoretical and computational studies results in a Lundquist number for the corona that is some 7 to 8 orders of magnitude smaller than that expected from theory. This in turn implies that the coronal dissipation coefficient is some 7 to 8 orders of magnitude larger than expected. If true, the vastly higher coronal dissipation coefficient would solve a substantial number of existing difficulties in many coronal heating theories (although exactly how such a large value can exist would no doubt shift the emphasis of coronal physics considerably).

In reply to this assertion, a recent paper by Schrijver & Brown (2000) suggests that the behaviour of the loop oscillation may be due to the particular dynamics of the underlying photospheric magnetic sources and not to a very low magnetic Reynolds number. The observed decaying oscillation is merely the coronal response to the motions of the underlying magnetic field. This model addresses a number of observational questions not addressed directly by the Nakariakov et al. (1999) approach, principal among which is the fact that very few loops in the active region are observed to oscillate (we study the same loop as Nakariakov et al. 1999 for the purposes of comparison). If the active region loops are truly reacting to a flare trigger, one would naively expect many more loops to oscillate. This suggests that the loops that do oscillate differ from their neighbours in some way; Schrijver & Brown (2000) suggest that the photospheric source field may be the cause of this difference.

Ofman (2002) shows that the decay mechanism described by De Pontieu et al. (2001), that is, enhanced chromospheric damping at the footpoints of the coronal loop due to ion-neutral damping of Alfvén waves, cannot account for the observed loop oscillation decay time of the 14 July 1998 event observed by Nakariakov et al. (1999). However, Ofman (2002) also points out that the plausibility of this mechanism depends on a number of observed and estimated parameters that will vary from loop to loop, making this decay mechanism more or less significant on a case by case basis.

The precise mechanism by which this loop oscillation decays is clearly of great significance to the understanding of the solar atmosphere and hence understanding these loops' behaviours is crucial. The advantage of the assumption of Eq. (1) in describing the behaviour of decaying oscillations in coronal loops is its simplicity. However, it can be generalised easily by introducing a decay exponent N such that

 \begin{displaymath}\exp(-kt^{N})\sin(\omega t + \phi).
\end{displaymath} (2)

This form of loop oscillation decay represents a spectrum of possible decay mechanisms, of which the Nakariakov et al. (1999) form (N=1, outlined above) is just one. For example, N=3 may be indicative of Alfvén wave phase mixing due to small scale density/magnetic field inhomogeneities (De Moortel et al. 1999; De Moortel et al. 2000; Ireland & Priest 1997). Coupling between different wave modes (Mann et al. 1997) can be represented with an N=2 decay exponent. Introducing a decay exponent different from 1 allows exploration into the possibility that different decay mechanisms may be responsible for the observed oscillation decay.

This paper applies the ideas contained in De Moortel et al. (2002) to probe the dynamics of transversal coronal loop oscillations. The oscillation associated with the flare event of 14 July 1998 is used to describe the application. Two other examples (occurring on 25 October 1999 and 21 March 2001), both in TRACE 171 Å data, are also analysed. De Moortel et al. (2002) outline a wavelet based method of analysing decaying oscillations in order to determine important properties of the dynamics, for instance, the value of the decay coefficient k and the decay exponent N associated with a $\exp(-kt^{N})$ decay. Estimates of the value of N (and its error) for these examples have been given by De Moortel et al. (2002); these estimates, however, do not correctly reflect the measurement errors present in the data and so the accuracy of the decay exponents measured there is overestimated.

This paper derives values of the decay exponent in a more complete way, starting by examining the measurement process itself (see Sect. 2.1) and then applying that to the data (Sect. 2.2), fully taking into account the noise properties of the data. This approach takes a closer look at the errors present in the data, generating a more realistic assessment of the error in determining the decay exponent. This paper concentrates on the value of N (and the error in its measurement) as this not only tells us how fast the decay occurs but also gives a pointer as to which mechanism might be causing the energy of the oscillation to decay.

  
2 Analysis

An examination of the dynamics of this loop begins with determining the motion of the loop and the errors associated with that determination. The resulting time series and its statistics are then utilised to provide statistical information on the wavelet transform, from which a decay exponent N and error $\delta N$ are found. The process followed in deriving the loop motion and its attendant statistics is described in the following sections.

  
2.1 An algorithm for finding the position of the loop

A movie of the TRACE 171 Å data of 14 July, 1998, 12:55 UT - 13:40 UT, clearly shows the presence of a single, easily observable oscillating loop (or bundle of loops). Crucially, as pointed out by Nakariakov et al. (1999), the loop motion - i.e., its average deviation from some average rest position - also decays, indicating that the energy associated with this motion decreases with time. Such a movie also shows that the whole loop appears to oscillate, although the displacement of the loop is at a maximum close to its apparent loop top. A slit, 4 pixels wide in the north-south solar-Y direction and 26 pixels wide in the east-west solar-X direction was placed at the position indicated in Fig. 1.

Figure 1: An example image from the TRACE 171 Å bandpass data taken on 14 July, 1998 beginning at 12:55 UT. The loop under study is outlined in white. The slit shown (black outlined box) delimits the area where the loop oscillation is measured. The slit is 26 pixels long in the solar-X direction (13 arcsecs) and 4 pixels high in the solar-Y direction (2 arcsecs). There are 36 images in the resulting time series.

As was noted in Nakariakov et al. (1999), adjacent slices (in the north-south direction) exhibit good coherence at this position in the data. A time series is formed by summing the data over the solar-Y direction to form a time series in solar-X and time.

Figure 2 shows this x-t' surface; at each time (sample number t') the brightest point has been normalised to 1.

Figure 2: Solar-X - sample number (sample number, denoted in the text by t', is used as a proxy for time) plot of the loop oscillation. This data is formed by summing along the solar-Y direction of the data contained by the slit described in Sect. 2.1 at each observation time. At each time, the brightest point has been normalised to 1. The loop clearly oscillates: the rotation of the Sun is also apparent as a steady drift to larger solar-X values.

This plot shows the effect of solar rotation superimposed on the loop oscillation. Note that there are several bright pixels surrounding the brightest one at most times, i.e., the loop is almost equally bright at a number of pixels.

The question now becomes one of how best to decide where the loop is at each time, and the following approach was chosen: a window of fixed width w (in pixels; in the analyses presented below, w=8) is slid along the x direction. At each position the total number of counts in that window is found. After sliding along all the x-values, the position of the maximum of this windowed sum is found and it is this position that is called the position of the loop.

By using a window to find the loop position, the algorithm integrates over the whole width of the loop rather than just considering a single point on it, as in Nakariakov et al. (1999). Including the notion of a loop width in the definition of the loop position makes the derived time series less sensitive to the precise position of the slit in the original movie, to the noise in the data and to the fact that there may not be one single brightest pixel at each time. Note that setting w=1 recovers the analysis algorithm of Nakariakov et al. (1999).
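The windowed position-finding step can be sketched as follows (a minimal illustration in Python with NumPy; the function name and the choice of returning the window's start index are ours, not part of the original analysis code):

```python
import numpy as np

def loop_position(intensity, w=8):
    """Return the loop position for one time step: the start index of the
    width-w window (in pixels) whose summed counts are maximal.

    `intensity` is a 1-D array of counts along solar-X.  The paper uses
    w=8; with w=1 this reduces to picking the single brightest pixel.
    """
    # Sum of counts in every window of width w, sliding along solar-X.
    window_sums = np.convolve(intensity, np.ones(w), mode="valid")
    return int(np.argmax(window_sums))
```

Applying `loop_position` to each row of the x-t' surface yields the loop position time series; with `w=1` the single-brightest-pixel algorithm of Nakariakov et al. (1999) is recovered.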

The time series derived from the data is shown in Fig. 6. The time series has a cadence of 74 s and has been corrected for the effect of solar rotation. It is evident that the loop does oscillate, over and above the error estimates. The derivation of the error associated with each point in the time series is discussed in more detail below.

  
2.2 Interaction of the loop position finding algorithm with the signal noise


Figure 3: Example loop position distributions arising from the interaction of photon noise with the loop position finding algorithm. The distributions show the frequency with which the loop position finding algorithm deviates away from the locations found from the source data when the algorithm is fed x-t' surfaces statistically similar to the one shown in Fig. 2.

The incident photons TRACE collects are subject to Poisson statistics. The data, however, is noisier than this source alone would imply - for example, the data also suffers from JPEG (Joint Photographic Experts Group) compression noise. A fuller description of the noise aspects of TRACE data can be found in Aschwanden et al. (2000). For the purposes of this paper, it is sufficient to assume that the images used were subject to Poisson noise only, i.e., other sources of noise are small in comparison. Since the data is noisy, one can expect that the position of the loop is also subject to noise via the position finding algorithm detailed above acting on this noisy data.

To measure the effect of the photon noise interacting with the loop position finding algorithm, a statistical approach is used; one thousand statistically similar x-t' time series are analysed using the position finding algorithm described in Sect. 2.1. Each of these x-t' time series is drawn from a Poisson distribution with mean equal to the number of photons found in the source time series. Applying the loop position finding algorithm to each of these x-t' time series (analogues of Fig. 2) generated 1000 loop position time series. Using these loop position time series, distributions of loop position for each time were found.
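This Monte Carlo step might be sketched as follows (Python/NumPy; `find_position` stands for any position-finding routine such as the windowed algorithm of Sect. 2.1, and all names and the fixed random seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def position_distribution(xt_counts, find_position, n_trials=1000):
    """Monte Carlo estimate of the loop-position distribution at each time.

    `xt_counts` is a 2-D array (time, solar-X) of photon counts.  Each
    trial draws a Poisson realisation with these means (a statistically
    similar x-t' surface) and re-runs the position-finding algorithm.
    """
    n_t = xt_counts.shape[0]
    positions = np.empty((n_trials, n_t), dtype=int)
    for trial in range(n_trials):
        noisy = rng.poisson(xt_counts)
        positions[trial] = [find_position(row) for row in noisy]
    return positions  # shape (n_trials, n_times)
```

Histogramming `positions[:, t]` relative to the position found from the original data then gives the distribution $a_{n}$ at each time t.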

Some example distributions illustrating the interaction of the loop position finding algorithm with statistically similar x-t' time series are shown in Fig. 3. These show that the position distributions vary from time to time, depending on the conditions at that time. The distributions tell us that there is a small, but finite probability that the loop position can be 1, 2, 3 etc., pixels different from that measured directly from the original x-t' time series data (Fig. 2). Let the probability of finding a difference of n pixels between the loop position measured from the data and one measured from a statistically similar x-t' time series be $a_{n}$. The probability distribution can be written as

\begin{displaymath}\mbox{Prob}\left\{ \mbox{loop position is $n$ pixels different from the original data} \right\} = a_{n}
\end{displaymath} (3)

where $\Sigma_{n} a_{n} = 1$. Values of $a_{n}$ were calculated in the range $-5\le n \le 5$, where negative values of n mean smaller pixel values, and positive values mean larger pixel values. In practice, pixel differences exceeding $\pm5$ are not found, indicating that the distribution is adequately represented by this range of pixel differences. The square deviation of this distribution (denoted by $\sigma^{2}_{\rm A}$, where the subscript "A'' refers to "algorithm'') is the expected value of $n^{2}$, i.e.,

\begin{displaymath}\sigma^{2}_{\rm A} = \Sigma_{n} n^{2}a_{n}.
\end{displaymath} (4)

Figure 4 (solid line) shows a graph of $\sigma^{2}_{\rm A}$ as a function of sample number t'.
Figure 4: Square deviations arising from the loop position finding algorithm ($\sigma _{\rm A}^{2}$, solid line) and the uniformly random noise ($\sigma _{\rm P}^{2}$, dashed line). The total square deviation at any point in time is the sum of these two quantities. The average value of $\sigma _{\rm A}^{2}$ and $\sigma _{\rm P}^{2}$ over all sample numbers is indicated by the dotted line.

The contribution of this effect varies with time, depending on the intensities observed at each successive time. For example, if the value of any given pixel is only slightly larger than its next nearest neighbour then the Poisson noise can be enough to change the position of the brighter pixel by 1 in one of the statistically similar x-t' time series. When this effect is folded into the operation of the loop position finding algorithm, it is found that the loop position becomes smeared out statistically, carrying the effect of the Poisson noise in the data through to the time series.

  
2.3 Finite pixel size

The above effect can move the position of the maximum of the loop from one pixel to another. However, each pixel is also finitely sized, that is, each pixel resolves a finitely sized piece of the Sun. If the loop position at a particular time is at a particular pixel position then this implies that the true loop position is somewhere inside this pixel.

Since there is no way to tell where the true loop position is inside the pixel (because the instrument is at the limit of its resolution), the true loop position could be anywhere inside this pixel. Hence the probability of finding a difference in loop position of n pixels (as described in the previous section) really refers to a range of positions and not exactly to n.

The error introduced by the finite pixel size cannot be any more than 0.5 pixels - any larger and the loop position would move to a neighbouring pixel. Let x be the true loop position. Using the definitions of n and $a_{n}$ given above, the probability of finding x is given by

 \begin{displaymath}\mbox{Prob}\left\{ x: n-0.5 < x \le n+0.5 \right\} = a_{n}
\end{displaymath} (5)

where x is uniformly distributed in the range. The square deviation in the expected position therefore takes account of both the effect described in Sect. 2.2 and the finite pixel size. Denoting the square deviation as $\sigma^{2}_{\rm A,P}$ one obtains

\begin{displaymath}\sigma^{2}_{\rm A,P} = \Sigma_{n} a_{n}\left[n^{2}+\frac{1}{12}\right]
\end{displaymath} (6)

where the subscript "P'' refers to the effect of finite pixel size and "A,P'' refers to the algorithm derived error of Sect. 2.2 taken along with the effect of finite pixel size. Note also that

\begin{displaymath}\sigma^{2}_{\rm A,P} - \sigma^{2}_{\rm A} = \frac{1}{12}\cdot
\end{displaymath} (7)

Note that the right hand side of this equation is just $\sigma _{\rm P}^{2}$, the expected square deviation of a uniform distribution in the limits [-1/2,1/2]. Examination of Fig. 4 shows that the pixel noise $\sigma _{\rm P}^{2}$ is a significant contribution to the uncertainty for some pixels and not significant for others. The error bars in Fig. 6 are formed by summing the square deviations arising from the position finding algorithm and finite pixel size sources of uncertainty.
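As a quick numerical check of Eqs. (4), (6) and (7), one can evaluate both square deviations for an illustrative (made-up) distribution $a_{n}$:

```python
import numpy as np

# Hypothetical example distribution a_n over pixel offsets n = -5..5
# (values invented for illustration; they sum to 1).
n = np.arange(-5, 6)
a = np.array([0, 0, 0, 0.05, 0.15, 0.6, 0.15, 0.05, 0, 0, 0])

sigma2_A  = np.sum(n**2 * a)               # Eq. (4): algorithm noise alone
sigma2_AP = np.sum(a * (n**2 + 1 / 12))    # Eq. (6): plus finite pixel size

# Eq. (7): the pixel term adds exactly 1/12, the variance of a
# uniform distribution on [-1/2, 1/2].
print(sigma2_AP - sigma2_A)  # 1/12 ≈ 0.0833
```

Whatever the shape of $a_{n}$, the finite-pixel contribution is the constant 1/12, since $\Sigma_{n} a_{n} = 1$.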

  
2.4 Analysis of the wavelet transform: Decay exponent

In Sect. 2.2, 1000 statistically similar loop position time series were created by applying the loop position finding algorithm of Sect. 2.1. These time series were then subjected to the finite pixel size effect of Sect. 2.3, yielding a set of time series that reflect the sources of uncertainty described in Sects. 2.2 and 2.3. Wavelet transforming this set of time series yields statistical information on the transform itself. The transforms give us information on the probability of the existence of significant power (power that is above the confidence level) at different wavelet scales. They also yield an error estimate for the wavelet power at all scales and times.

This is important for the application of the analysis method of De Moortel et al. (2002). In this analysis, the time series is first wavelet transformed. The transform is two dimensional, one dimension in wavelet scale, the other in time. It is shown that if the original time series has a decay of the form

\begin{displaymath}\exp\left(-kt^{N}\right)
\end{displaymath}

where k is the decay constant, t is time and N is the decay exponent then the exponential decay survives the transform into the wavelet power at all wavelet scales.
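The survival of the decay in the wavelet power can be illustrated with a toy single-scale transform (a hand-rolled complex Morlet convolution in Python; this is only a sketch of the idea, with our own function name and parameters, not the transform code used in the paper):

```python
import numpy as np

def morlet_power(x, dt, scale, w0=6.0):
    """|W|^2 of the series x at one wavelet scale, by direct convolution
    with a complex Morlet wavelet (a minimal stand-in for a full CWT)."""
    m = int(8 * scale / dt) | 1                # odd kernel length, ~±4 scales
    tk = (np.arange(m) - m // 2) * dt
    psi = np.pi**-0.25 * np.exp(1j * w0 * tk / scale - 0.5 * (tk / scale)**2)
    # For the Morlet wavelet psi(-t) = conj(psi(t)), so a plain convolution
    # with the (odd-length, centred) kernel gives the CWT at this scale.
    W = np.convolve(x, psi, mode="same") * dt / np.sqrt(scale)
    return np.abs(W)**2
```

For a signal of the form $\exp(-kt)\sin(\omega t)$, the power computed this way falls off with time at every scale tried, which is the property exploited in the fitting procedure that follows.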

Since the decay survives the transform at all scales one can derive values of N from the transform at each wavelet scale. The following procedure is implemented in order to find values of N from the transform. First, a scale is chosen (note that attention is confined to portions of the transform that are above the confidence level, set in this case to 99%): the wavelet power at this scale is

 \begin{displaymath}\vert W\vert _{i}^{2} \approx Q\exp\left(-kt_{i}^{N}\right)
\end{displaymath} (8)

as shown in De Moortel et al. (2002), where Q>0 is a function of wavelet scale, but is constant here since we are fixing the wavelet scale (note that this constraint is relaxed later - see Sect. 3.3.3 for further details). $\vert W\vert _{i}^{2}$ is normalised by dividing by $\max(\vert W\vert _{i}^{2})$ and so $\vert W\vert _{i}^{2} \propto \exp(-kt_{i}^{N})$. The value of the exponent can be found by taking the double logarithm of the transform and fitting a straight line to the data: the gradient of the line is the value of the exponent, that is,

\begin{displaymath}\ln(-\ln\vert W\vert^{2}_{i}) \propto \ln(k) + N\ln(t_{i}).
\end{displaymath} (9)

Hence finding the decay exponent of the loop motion has been reduced to a line-fitting problem. The fit is obtained by minimising the weighted sum

\begin{displaymath}\sum_{i=1}^{n} \frac{[\ln(-\ln(\vert W\vert^{2}_{i})) - Y_{i}]^2}{\sigma_{L}^{2}(i)}
\end{displaymath} (10)

where Yi is a first order polynomial, i.e., a straight line, the gradient of which is the measured decay exponent. The weights $\sigma_{L}^{2}(i)$ are defined as (Barford 1985)

\begin{displaymath}\sigma_{L}^{2}(i) = \left\vert \frac{\sigma_{W}^{2}(i)}{\vert W\vert^{2}_{i}\ln\vert W\vert^{2}_{i}} \right\vert\cdot
\end{displaymath}

The quantity $\sigma_{W}^{2}(i)$ is found by calculating the square deviation of the spread of wavelet power values at the chosen scale and time i for all the wavelet transforms derived from the statistically similar time series.
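The weighted double-log fit of Eqs. (9) and (10) might look like this in Python (a sketch; the normalisation of the ensemble variance and the masking of power values outside (0, 1) are our assumptions):

```python
import numpy as np

def decay_exponent(t, W2, sigma_W2):
    """Estimate N from the wavelet power |W|_i^2 ~ Q exp(-k t_i^N) at a
    fixed scale (Eq. 8), via the double-log line fit of Eqs. (9)-(10).

    `sigma_W2` is the square deviation of the power over the ensemble of
    statistically similar transforms.  Returns (N, k).
    """
    m = W2.max()
    W2, sigma_W2 = W2 / m, sigma_W2 / m**2   # normalise so |W|^2 <= 1
    ok = (W2 > 0) & (W2 < 1)                 # double log needs 0 < |W|^2 < 1
    x = np.log(t[ok])
    y = np.log(-np.log(W2[ok]))              # Eq. (9): ln(-ln|W|^2)
    # Propagated weights (Barford 1985): sigma_L^2 = |sigma_W^2 / (W^2 ln W^2)|
    sigma_L = np.sqrt(np.abs(sigma_W2[ok] / (W2[ok] * np.log(W2[ok]))))
    # Weighted least squares of Eq. (10); polyfit weights are 1/sigma.
    N, ln_k = np.polyfit(x, y, 1, w=1.0 / sigma_L)
    return N, np.exp(ln_k)
```

Note how the weights automatically downweight early times, where $\vert W\vert^{2}$ is close to its maximum and the double logarithm is ill-conditioned.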


Figure 5: An example wavelet transform shown as a contour plot. The black line shows the path of maximum power along the top of the ridge leading down from the local maximum (bottom left of the transform). Power above the confidence level is bright and above the contour marked ``1''.

This procedure for finding the decay exponent N is repeated for all the statistically similar time series, resulting in a spread of values of N at the chosen wavelet scale. The measured value of N is taken to be the average of this spread. The error in N, $\delta N$, is estimated by the standard deviation of this spread of decay exponent values. An error on the value of N is important as it allows us to measure how well the decay exponent is defined by the data (at that scale).

  
2.5 Analysis of the wavelet transform: Time varying wavelet scale

Apart from determining the decay exponent of the oscillation, it is also possible to use a wavelet analysis to look for evidence that the period of the oscillation varies with time. This can be achieved by examining the structure of the wavelet transform. A portion of the wavelet transform of one of the loop position time series is shown in Fig. 5 as a contour plot. There is a peak in the power at approximately sample number 5 and wavelet scale 210 s. Note that this peak continues to larger and smaller sample numbers t' as a gentle ridge in the wavelet transform (see Fig. 5). By following the maximum power along the top of this ridge as a function of time, an estimate of the variation of wavelet scale with time can be obtained.

Following the path along the top of the ridge of wavelet power down from the maximum to previous and future times traces a path in the data such as the one shown in Fig. 5. This process is repeated for all 1000 wavelet transforms, from which an average path (along with its corresponding standard deviation) can be found. This average path represents an estimate of the variation of oscillation wavelet scale with time.
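The ridge-following step can be sketched as follows (Python/NumPy; the function name and the `max_jump` constraint, which keeps the path on the ridge rather than letting it hop between separate features, are our illustrative choices):

```python
import numpy as np

def follow_ridge(power, max_jump=2):
    """Trace the ridge of maximum wavelet power through a transform.

    `power` is a 2-D array (scale, time).  Starting from the global
    maximum, step to earlier and later times, at each step taking the
    strongest scale within `max_jump` scale bins of the current one.
    Returns the scale index of the ridge at each time.
    """
    n_scales, n_times = power.shape
    s0, t0 = np.unravel_index(np.argmax(power), power.shape)
    ridge = np.empty(n_times, dtype=int)
    ridge[t0] = s0
    for t in range(t0 + 1, n_times):          # forward in time
        lo = max(0, ridge[t - 1] - max_jump)
        hi = min(n_scales, ridge[t - 1] + max_jump + 1)
        ridge[t] = lo + int(np.argmax(power[lo:hi, t]))
    for t in range(t0 - 1, -1, -1):           # backward in time
        lo = max(0, ridge[t + 1] - max_jump)
        hi = min(n_scales, ridge[t + 1] + max_jump + 1)
        ridge[t] = lo + int(np.argmax(power[lo:hi, t]))
    return ridge
```

Averaging the ridges of all 1000 transforms, and taking their standard deviation, gives the mean path and its error as described above.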

  
3 Results

Sections 3.1-3.3 deal with the transverse coronal loop oscillation of 14 July 1998. The results are found by placing a slit in the data as shown in Fig. 1 and performing the analysis as described in Sect. 2. The resulting time series is shown in Fig. 6.

Figure 6: The time series analysed for the results described in Sect. 3. The error bars shown are one standard deviation $\sigma _{\rm A,P}$.

Section 3.4 considers two other examples from 25 October 1999 and 21 March 2001.

  
3.1 14 July 1998: Relative noise contributions

Both an oscillation and decay are obvious in the original time series. The differing size of error bar at each point reflects how well the oscillation at that point in time has been determined by the loop position finding algorithm. It is found that at each time the probability distribution an is approximately symmetrical and decays quickly and monotonically to zero away from the central value of a0. Figure 4 shows the square deviations of these distributions as a function of time (solid line) in comparison to the square deviation (dashed line) arising purely from the uniform distribution of the loop position inside a pixel (see Sects. 2.2 and 2.3). As can be seen, for some times the finite pixel size is the primary source of uncertainty. For others, however, the dominant source of uncertainty is the noisiness of the data interacting with the loop position finding algorithm. The average square deviation (dotted line) is greater than that arising from the pixel noise, indicating that, on average, the noise arising from the interaction of the loop position finding algorithm with the data makes a significant contribution to the overall noise.

  
3.2 14 July 1998: Time varying wavelet scale of the oscillation

Figure 7 shows the results of following the peak ridge of wavelet power in the 1000 statistically similar transforms.

Figure 7: The variation of oscillation wavelet scale as a function of sample number (time). The error bars shown are given as one standard deviation. The dot-dashed lines denote the Nakariakov et al. (1999) results, whereas the dashed lines denote the cone of influence of the wavelet transform.

The solid line describes the variation of wavelet scale. Errors are estimated by calculating the standard deviation of all 1000 results at each sample number. The horizontal dot-dash lines delimit the Nakariakov et al. (1999) results whereas the dashed line shows the cone of influence for this transform.

There is a clear linear increase in wavelet scale from sample number 4 to about sample number 15. This increase is over and above the estimated error. From sample number 15 onwards the error bars increase dramatically, so much so that a claim of further time dependence cannot be supported. This is because at these large values of t' the signal has almost decayed to the level of the noise, so the oscillation is poorly defined (a very small signal-to-noise ratio). Test time series having a fixed oscillation period, with all other properties similar to the observed time series, were subjected to the same analysis as the observed time series. These test time series do not show the same time varying wavelet scale. This shows that the time varying wavelet scale does not arise from the properties of the transform itself, e.g., edge effects. It is concluded that the wavelet scale of the time series varies with time. The wavelet scale $S_{\max}$ in seconds is estimated as

\begin{displaymath}S_{\max}(t') = \left\{ \begin{array}{ll}
200 + 6(t'-4), & 4 \le t' \le 14 \\
260, & 14 < t' \le 30
\end{array} \right.
\end{displaymath} (11)

where t' is the sample number.

  
3.3 14 July 1998: Decay exponents

Multiple methods are used to find values for the decay exponent and these are labelled N1 to N4 inclusive. Detailed explanation of how each value arises is given below.

  
3.3.1 First determination

The process of distillation of a decay exponent from this time series has been outlined above and is presented in more detail in De Moortel et al. (2002). The method assumes that the original oscillation has a fixed period for the duration of the time series. However, the results of the previous section show that the time series appears to have a time dependent wavelet scale, and hence period.

By looking only at the section $ 14 < t' \le 30$, we can assume that the wavelet scale is constant in this range and calculate a value of N. An analysis of the time series in this range for the decay exponent yields $N_{1} \le 2.0 \pm 1.2$.

De Moortel et al. (2002) note that the analysis method consistently overestimates the value of N. For time series in which the decay time is only a few times longer than the oscillation period - such as the one analysed in this paper - they note that this overestimation will be more pronounced. Clearly, therefore, the value derived above denotes a range of upper limits to the value of the decay exponent. Note that the value assumed by Nakariakov et al. (1999) falls in the estimated error range. The error associated with the value of N1 is large, indicating that the decay exponent is not well determined by the data. The noise associated with the time series is substantial enough to hide the true value of N inside large error bars.

  
3.3.2 Second determination

The wavelet analysis above has yielded the information that the oscillation appears to have a time varying period. Since this is new information, it is now natural to ask if this piece of information can improve a direct fit of a decaying oscillation to the data. It is also natural to ask if a value to the decay exponent can be found by other means in order to test the validity of the above determination.

A simple parameter space search was conducted to find the value of N that best fits the data in the model

\begin{displaymath}y(t) = A\cos\left(\frac{2\pi t}{P(t)}+\phi\right)\exp\left[-\left(\frac{t}{\tau}\right)^{N}\right]
\end{displaymath} (12)

where $P(t)=1.03S_{\max}(t)$, with $S_{\max}(t)$ of the same form as $S_{\max}(t')$ but suitably rescaled from sample number to time. The factor 1.03 arises from a comparison of wavelet scales to pure Fourier modes (Torrence & Compo 1998). The other parameter values - amplitude A, phase $\phi$ and decay time $\tau$ - are estimated from the values described by Nakariakov et al. (1999). The decay exponent spans the range $0\le N \le 3$ in steps of 0.2 and fits are performed for each of the 1000 statistically similar time series. A plot of decay exponent against the residual is formed and the value of N at which the residual reaches a minimum is taken to be the value of N that best fits the data. The results are plotted in Fig. 8,
  \begin{figure}
\par\includegraphics[width=8.8cm,clip]{MS2305f8.eps}
\end{figure} Figure 8: Variation of residual with decay exponent for time varying period. The solid line indicates the average residual; the dashed lines indicate an error in the residual estimated at one standard deviation of the distribution of results.

the solid line indicating the average residual, the dashed lines indicating one standard deviation (i.e., the estimated error) in the distribution of residuals found at each value of N.
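The grid search over the decay exponent can be sketched as follows. This is a minimal illustration on synthetic data, using a constant period and representative parameter values rather than the wavelet-derived $P(t)$; all names and numbers are illustrative, not the authors' code:

```python
import numpy as np

def residual(y, t, A, phi, tau, N, P=250.0):
    """Sum-of-squares residual between data y and the damped-cosine model
    y(t) = A cos(2*pi*t/P + phi) exp[-(t/tau)**N]  (cf. Eq. (12))."""
    model = A * np.cos(2 * np.pi * t / P + phi) * np.exp(-(t / tau) ** N)
    return np.sum((y - model) ** 2)

def best_exponent(y, t, A, phi, tau, N_grid):
    """Return the N from N_grid that minimises the residual."""
    res = [residual(y, t, A, phi, tau, N) for N in N_grid]
    return N_grid[int(np.argmin(res))]

# Synthetic test series with a known decay exponent N = 1.6
rng = np.random.default_rng(0)
t = np.arange(0.0, 1000.0, 10.0)
true = 2.7 * np.cos(2 * np.pi * t / 250.0 + 0.9) * np.exp(-(t / 517.0) ** 1.6)
y = true + rng.normal(0.0, 0.1, t.size)

N_grid = np.arange(0.0, 3.2, 0.2)
N_best = best_exponent(y, t, 2.7, 0.9, 517.0, N_grid)
```

In the paper's procedure the same minimisation is repeated over the 1000 statistically similar realisations of the time series, and the spread of the resulting minima supplies the error estimate.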

The best value of the decay exponent is found to be N2 = 1.6(1.0,2.8) (where the numbers in the brackets denote the lower and upper values of the range). The asymmetric error bar reflects the asymmetrical nature of the residual curve. The minimum value of the residual is found to be 14.3. The remaining parameters are $A=2.7 \pm 0.3$ px, $\phi=0.9\pm 0.1$ radians and $\tau=517\pm100$ s.

As a check, it is useful to compare the N2 result against the answer that arises from setting P(t) = 250 s. Such a comparison allows one to assess whether a time varying period significantly improves the fit (as well as yielding a value of the decay exponent). The same fitting process yields N3 = 1.0(0.6,2.0), with a residual value of 14.6. The other parameters are $A=2.5 \pm 0.2$ px, $\phi=0.8\pm 0.1$ radians and $\tau=815\pm141$ s. The variation of residual with decay exponent is shown in Fig. 9.

  \begin{figure}
\par\includegraphics[width=8.8cm,clip]{MS2305f9.eps}
\end{figure} Figure 9: Variation of residual with decay exponent for constant period. The solid line indicates the average residual; the dashed lines indicate an error in the residual estimated at one standard deviation of the distribution of results.

Finally, if we were to approximate $P(t)=\mbox{constant}= 240$ s, set N=1 and assume the decay mechanism of Nakariakov et al. (1999) then the two values of $\tau$ derived in this section would lead to a value of the Lundquist number on the order of or smaller than that found in Nakariakov et al. (1999).

  
3.3.3 Third determination

The fact that the residuals for both the time dependent period model and the fixed period model are so similar indicates that the two models are indistinguishable (for this data) using a simple fitting technique. The differences between the two models are washed out in the noise. The fitting process of Sect. 3.3.2 is global for the entire time series, whereas the analysis of Sect. 2.5 is localised due to the localised nature of the wavelet. The residual used to discriminate models in Sect. 3.3.2 is a global quantity, and so depends on the noise in the entire time series. The errors associated with the time varying wavelet scale of Sect. 2.5 and Fig. 7 are small enough to allow one to say that the wavelet scale does vary with time. These errors are necessarily localised, due to the local nature of wavelet analysis, and so reflect local conditions rather than global properties.

The analysis of De Moortel et al. (2002) assumes that the oscillation period is constant for all times. In essence, the method follows cross sections of wavelet power at constant scale, analysing these for the value of the decay exponent. But when the peak wavelet power moves to other scales, using the method on horizontal cross sections of the transform only samples the time varying oscillation scale and does not represent it fully, as it would for a constant oscillation period.

However, the methods of De Moortel et al. (2002) may still be applied by making the following approximation. The wavelet scale $S_{\max}(t')$ changes by approximately 60 s in 750 s. This is relatively slow and so we may assume that at horizontal cross sections the scale of the oscillation does not vary too greatly with time, i.e., the oscillation scale is approximately constant. Assuming this, it is possible to apply the De Moortel et al. (2002) analysis over the entire time series, instead of the small section used in Sect. 3.3.1.

The advantage of considering this approach is that the systematic overestimation inherent in the method (see De Moortel et al. 2002) can be mitigated by considering test time series that approximately model the observed one. This can be done since the time series considered here is longer than that of Sect. 3.3.1 and therefore better satisfies the asymptotic nature of the method. Time series of known decay exponent are analysed by exactly the same method as the real data. The difference between the true exponent and the measured exponent yields an estimate of the consistent overestimation in the method of De Moortel et al. (2002). Experiments with test data suggest that the overestimate in the gradient (of $Y_{i}$ when minimising Eq. (10)) is on the order of 0.5.
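This bias-estimation strategy can be sketched generically: measure a known decay exponent with the same estimator and record the mean offset. The estimator below (a straight-line fit to the double log of the decay envelope) is a simple stand-in for the wavelet-based procedure of Eq. (10); all names and parameter values are illustrative:

```python
import numpy as np

def measure_exponent(t, env, tau):
    """Estimate N from a decay envelope env(t) ~ exp(-(t/tau)**N) by
    fitting a straight line to log(-log(env)) against log(t/tau)."""
    mask = (env > 0) & (env < 1)
    x = np.log(t[mask] / tau)
    z = np.log(-np.log(env[mask]))
    slope, _ = np.polyfit(x, z, 1)
    return slope

def estimate_bias(true_N, tau=500.0, noise=0.02, trials=200, seed=1):
    """Monte Carlo estimate of the estimator's systematic offset:
    the mean of (measured N - true N) over many noisy realisations."""
    rng = np.random.default_rng(seed)
    t = np.arange(50.0, 1000.0, 10.0)
    offsets = []
    for _ in range(trials):
        env = np.exp(-(t / tau) ** true_N) + rng.normal(0.0, noise, t.size)
        offsets.append(measure_exponent(t, env, tau) - true_N)
    return float(np.mean(offsets))

# Estimate the offset for a series whose true exponent is N = 1
offset = estimate_bias(1.0)
```

For noiseless data this estimator recovers N exactly; with noise, the mean offset measured on synthetic series of known N supplies the correction to subtract from the value measured on the real data.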

The corrected decay exponent as a function of wavelet scale is shown in Fig. 10.

  \begin{figure}
\par\includegraphics[width=8.8cm,clip]{MS2305f10.eps}
\end{figure} Figure 10: Variation of decay exponent (N4) with wavelet scale.

In the range of Nakariakov et al. (1999) wavelet scales the measured exponent is relatively flat, indicating that in this likely range of wavelet scales the method does not depend strongly on the wavelet scale (justifying the approximation made above). The corrected value is $N_{4}=1.3\pm1.2$.

  
3.4 Other examples of coronal loop oscillations

As was mentioned above, other examples of transversal loop oscillations do exist. A comprehensive list of the best observed examples is given in Schrijver et al. (2002). Analyses of these oscillations are given in Aschwanden et al. (2002). To further illustrate the wavelet based techniques described above, two examples from this list have also been analysed.

  
3.4.1 25 October 1999

This oscillation appears in a faint loop stretching out from NOAA active region AR 8739. It appears to have been triggered by an M-class flare at the southeastern edge of the field of view. Note that although two loops are distinguishable in the box, the leftmost oscillates the most and so is used to derive the time series of Fig. 11b. Most of the oscillation consists of sub-arcsecond displacements from the mean and all points have a substantial error associated with them; hence, the signal to noise ratio is small in this example. This is reflected in Fig. 11c, where the size of the error bars precludes any definitive statement about the possibility of a time dependent wavelet scale for this oscillation. The measured wavelet scale for the oscillation is $122\pm35$ s, which overlaps with the determination of 143 s given by Aschwanden et al. (2002). The best estimate of the decay time, $552\pm111$ s, is considerably longer than the 200 s quoted in Aschwanden et al. (2002); difficulties in obtaining a time series similar to that of Aschwanden et al. (2002) may account for this. The noise in the data is also reflected in the estimation of the decay exponent, $N = 0.43 \pm 1.17$. This value agrees with the value derived by De Moortel et al. (2002), although the estimated error (one standard deviation of the distribution of results obtained via the statistically similar time series) also accommodates the assumption of N=1 by Aschwanden et al. (2002).

  
3.4.2 21 March 2001

This oscillation is first observed at 02:32:44 UT, about 4 min after an M1.8 flare (Schrijver et al. 2002). A filament also destabilizes. During the oscillation, a spray of darker, cooler material lifts off, but mostly drains back down. The loop appears to be anchored in NOAA AR 9384. The oscillation is substantially larger than the error associated with it at earlier times, although after sample number 14 it appears as if the oscillation is effectively damped. This is reflected in the wavelet scale as a function of sample number. Although there is a slight gradient in the line in Fig. 12c from sample number $6\rightarrow14$, it can be accommodated by the error bars.

  \begin{figure}
\par\includegraphics[width=8.8cm,clip]{MS2305f11r.eps}
\end{figure} Figure 11: Analysis of the loop oscillation seen in TRACE 171Å passband on 25th October 1999. Image a) shows the location of the oscillating loop on the Sun and the box from which the loop oscillation is extracted. A summation is performed parallel to the loop (in this case, the longer dimension), collapsing the data down to one line transverse to the loop at each time, analogous to the process described in Sect. 2.1. The analysis then proceeds as outlined in that and following sections. The resulting time series is shown in b), and the variation of scale as a function of time is shown in c). The cadence of these observations was 27 s.


  \begin{figure}
\par\includegraphics[width=8.8cm,clip]{MS2305f12.eps}
\end{figure} Figure 12: Analysis of the loop oscillation seen in TRACE 171 Å on 21st March, 2001. Panel a) shows the region and the area used to calculate the oscillation. Panel b) shows the derived time series and c) shows the variation of wavelet scale with time. The cadence for these observations was 97 s.

An examination of the movie associated with this oscillation shows that there may be more than one loop in the field of view of the small, outlined box in Fig. 12a, indicating that the derived time series may be a superposition of the oscillations of more than one loop. Compared to the example shown in Fig. 7, however, the evidence for a time varying wavelet scale is not compelling.

The best estimate of the wavelet scale is $495\pm51$ s, close to the value quoted by Aschwanden et al. (2002). The decay exponent is measured at $N = 2.0 \pm 1.3$ and the decay time is $1339\pm198$ s, agreeing well with De Moortel et al. (2002). The high decay exponent and long decay time are reasonable when compared to the time series: the oscillation survives at high amplitudes for almost half the time series and then decays quickly, consistent with a high decay exponent/long decay time oscillation. Note, however, that N=1 is not excluded by this analysis.

  
4 Conclusions

The purpose of this paper is to show that wavelet techniques can be used to probe the dynamics of coronal loop oscillations. These techniques offer new ways of determining the parameters of interest in the corona. The present analysis concentrates on two features of the dynamics - the decay exponent and the oscillatory behaviour of the loop.

For the event of 14 July, 1998, the four values of the decay exponent found, $N_{1\rightarrow4}$, whilst overlapping, suffer from large errors. The value N=1 assumed by Nakariakov et al. (1999) is not excluded by any determination. However, one could also argue that setting N=1 a priori assumes knowledge that is not explicitly supported by the data. Averaging N2, N3 and N4 yields an estimate of the decay exponent across all methods and approaches to the data of $N=1.3\pm0.9$, comfortably overlapping with the N1 range of values.

The other examples - 25 October 1999 (see Sect. 3.4.1) and 21 March 2001 (see Sect. 3.4.2) - also generate estimates of decay exponents that overlap with each other over a wide range of values. Note that the signal to noise ratio in the three time series analysed is of the same order of magnitude, which percolates down to the similarly large errors in the final determinations of the decay exponent for the examples shown. This suggests that, unless the signal to noise ratio is improved, these methods applied to this type of data are unlikely to yield a significantly improved determination of the decay exponent, and hence of the dynamics of these transversally oscillating coronal loops.

De Moortel et al. (2002) estimate values of N for the three examples used here. For the 14 July 1998, 25 October 1999 and 21 March 2001 events they find $N = 1.79 \pm 0.025$, $N = 0.42 \pm 0.0076$ and $N = 2.83 \pm 0.06$ respectively. Note that the errors generated in De Moortel et al. (2002) are very much smaller than those generated by the analysis presented in this paper. This is because the error they quote is derived from a simple least squares linear fit to the double log of the transform using a uniform (equal to 1) error estimate at each point in the time series. The error quoted therefore only reflects the fact that the data fitted (only one realisation out of all the statistically possible time series) does not lie on a straight line: it does not reflect the effect of the inherent noise in the data on the measured value of N. The true values of the errors are important, however, as they will aid in the identification of the correct dissipation mechanism. This points to the fact that, in these types of analyses, careful attention must be paid to the treatment of errors, as they are just as important as the values themselves.

The attention given to the interaction of the loop position finding algorithm with the data acknowledges the degree of noise in the data. Other algorithms, such as fitting the intensity across the loop with a Gaussian model, should be tested to ascertain their susceptibility to noise. Despite the noise, the wavelet scale of the 14 July 1998 oscillation measurably changes with time; this is a good example of the power of a wavelet analysis. The scale variation may show that the derived time series is in fact due to a superposition of two or more loops with different decay times and/or different decay exponents. Alternatively, if there is truly only one loop being considered, it suggests that the decay mechanism of this loop is more complex than that of a body losing energy because it is moving in a dissipative medium.

The time-localised nature of wavelet techniques makes this new type of investigation possible. However, the need for better coronal datasets is also undeniable. To this end, datasets in which the oscillating loop is bright compared to the background would reduce the error in loop position ( $\sigma _{\rm A}^{2}$) due to the noise in the data interacting with the loop position finding algorithm. In addition, future missions having better signal to noise ratios would also reduce the value of $\sigma _{\rm A}^{2}$, improving the data.

  
4.1 Current and future instrumentation

Understanding the dynamics of these transversal loop oscillations holds the promise of a deeper understanding of the physics of the solar atmosphere.

Consider the best case scenario for any future observation of such a transversal oscillation using the loop position finding algorithm above. In such a scenario, the loop position finding algorithm noise is zero, i.e., $\sigma_{\rm A}^{2} = 0$. The oscillation is no longer observable when its amplitude has decayed to less than 0.5 pixels. For an oscillation having a decay envelope of initial amplitude A pixels, decay time $\tau$ and decay exponent N, the maximum time $\theta$ before the oscillation is unobservable is given by

 \begin{displaymath}A\exp\left[-\left(\frac{\theta}{\tau}\right)^{N}\right] = \frac{1}{2}\cdot
\end{displaymath} (13)

This defines an upper limit to the useful length of a time series for a given decay envelope and instrument, since the numerical value of A is entirely due to the abilities of the instrument. It relates the spatial capabilities of an instrument to the maximum useful length of time series possible. Inserting the Nakariakov et al. values of $\tau = 14.7$ min, A = 2.8 pixels and N=1 yields $\theta = 25$ min, which exceeds the length of time series used in their analysis, 20 min. Hence their values are consistent with the properties of the instrument, in this case TRACE.
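Solving Eq. (13) for $\theta$ gives $\theta = \tau\,[\ln(2A)]^{1/N}$, which is easily checked numerically (the function name is illustrative):

```python
import math

def max_observable_time(A, tau, N):
    """Invert Eq. (13), A * exp(-(theta/tau)**N) = 1/2,
    giving theta = tau * (ln(2A))**(1/N)."""
    return tau * math.log(2.0 * A) ** (1.0 / N)

# Nakariakov et al. (1999) values: tau = 14.7 min, A = 2.8 px, N = 1
theta = max_observable_time(2.8, 14.7, 1.0)  # about 25 min
```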

Suppose now that the same decaying oscillation is observed by two instruments. Instrument (1) measures an amplitude of A1 pixels, whereas instrument (2) measures an amplitude of A2 pixels. Therefore

 \begin{displaymath}\frac{\theta_{1}}{\theta_{2}} =
\left[
\frac{\ln(2A_{1})}{\ln(2A_{2})}
\right]^{1/N}\cdot
\end{displaymath} (14)

Assume now that the optics in any instrument do not reduce the spatial resolution of the instrument, so that the spatial resolution is exactly equal to the pixel resolution. This best case scenario is definitely not true for any real instrument, but will allow comparisons to be made (although the effect of the instrumental optics could be simulated by increasing the value of the right hand side of Eq. (13)). Comparisons are made using Eq. (14) and the N3 parameter values determined in Sect. 3.3.2.

The pixel resolutions of EIT, CDS and Yohkoh-SXT (2.0'', 2.0'' and 2.46'' respectively) imply that the oscillation of 14th July 1998 would have been marginally detectable. The length of time series would have been very much shorter than that seen by TRACE. When the influence of the optics is folded in to define a real spatial resolution it is likely that this oscillation would not have been observed at all. For the time being therefore, TRACE is likely to be the only instrument with which we may observe these events.

There are, however, a number of other instruments either planned or being built, and it is instructive to consider what they may bring to the study of transverse oscillations in coronal loops. SOLAR-B, a co-operative mission of Japan, the USA and the United Kingdom, will carry two instruments that are of direct interest to coronal physics, XRT (X-Ray Telescope) and EIS (EUV Imaging Spectrometer). It is scheduled for launch in August 2004. XRT has a 1 arcsec pixel resolution, which means it would have seen 3.2 oscillations (if the oscillation had an X-ray component) on 14th July 1998 before it became unobservable. EIS has a planned pixel resolution of 2 arcsec, and so would probably fare little better than EIT, CDS and Yohkoh-SXT in observing this oscillation. The NASA mission HESSI (High Energy Solar Spectroscopic Imager), launched on 5 February 2002, can resolve elements down to 2 arcsec. STEREO (Solar Terrestrial Relations Observatory), a NASA/Johns Hopkins University mission scheduled for launch in 2003, will carry EUVI, the Extreme Ultraviolet Imager, which has a pixel resolution of 1.6 arcsec, and so would have behaved similarly to SOLAR-B/XRT in terms of its ability to see the oscillation of 14th July 1998. Finally, the science definition team report on the Solar Dynamics Observatory (SDO) calls for an EUV spectroscopic imager with a pixel size of 1 arcsec. Having Doppler velocities co-temporal with imaging information adds an extra space dimension which will be very valuable in diagnosing the oscillation mode present. In addition, if the signal to noise ratio of any of these future instruments is higher than that of TRACE, then the loops will be brighter against the background, reducing the contribution to the noise made by the interaction of the loop position finding algorithm with the data. However, the ultimate limit remains the pixel size.

It seems that the best hope for new types of information on these intriguing phenomena lies in their possible occurrence at different temperatures, improved cadence (coupled with a commensurate, increased sensitivity) or in the capabilities of SDO. TRACE, therefore, remains invaluable to the investigation of these oscillations. One can also ask what pixel size would be required to extend the length of time such oscillations remain observable. Changing the subject of Eq. (14) yields

 \begin{displaymath}\frac{A_{2}}{A_{1}} =
\left[2A_{1}\right]^{\left(\theta_{2}/\theta_{1}\right)^{N}-1}.
\end{displaymath} (15)

Note that Eq. (15) depends strongly on N, i.e., oscillations with large decay exponents will require a very much smaller pixel size. For example, assuming that the oscillation observed on 14th July 1998 truly has N=1 then increasing the length of useful time series by 250 s (approximately 1 period) would require pixels 70% the size of those onboard TRACE. Increasing the length of time series by 500 s would require pixels about half the size of TRACE pixels.
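The pixel-size estimates above follow directly from Eqs. (13) and (15) and can be reproduced numerically. The function names are illustrative; the parameter values (A1 = 2.5 px, tau = 815 s, N = 1) are the N3 values from Sect. 3.3.2:

```python
import math

def amplitude_ratio(A1, theta_ratio, N):
    """Eq. (15): A2/A1 = (2*A1)**((theta2/theta1)**N - 1)."""
    return (2.0 * A1) ** (theta_ratio ** N - 1.0)

def pixel_size_fraction(A1, tau, N, extra_time):
    """Fraction of the original pixel size needed to keep the oscillation
    observable for extra_time seconds beyond theta1 = tau*(ln(2*A1))**(1/N)
    from Eq. (13); smaller pixels mean a larger measured amplitude A2."""
    theta1 = tau * math.log(2.0 * A1) ** (1.0 / N)
    return 1.0 / amplitude_ratio(A1, (theta1 + extra_time) / theta1, N)

# N3 parameter values: A1 = 2.5 px, tau = 815 s, N = 1
f250 = pixel_size_fraction(2.5, 815.0, 1.0, 250.0)  # roughly 0.7
f500 = pixel_size_fraction(2.5, 815.0, 1.0, 500.0)  # roughly 0.5
```

The same functions show the strong N dependence noted above: repeating the calculation with a larger N drives the required pixel fraction down sharply.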

4.2 Models of transversal coronal loop oscillations

Given that the decay exponent could be other than 1, what mechanism could lead to this behaviour? This is the real question of interest, as through the mechanism an understanding of the source physics may be obtained. Also, what type of mechanism could lead to the change in period, or perhaps more accurately, oscillatory timescale, of an oscillating loop?

This observation in particular may open the door to other possible mechanisms, particularly the model of Schrijver & Brown (2000). In this model, the loop oscillation is caused by field lines reconnecting across the separatrix. Dynamical theoretical studies of this phenomenon would be very interesting; it is not immediately obvious what effect the coalescence of two magnetic elements in the photosphere would have on the overlying coronal loops. A time series of the motion of numerically modelled loops would allow a direct comparison between observation and theory. Priest & Schrijver (1999) describe simple potential magnetic field experiments/models that demonstrate the extreme sensitivity of coronal field lines close to the magnetic separatrix to conditions at their footpoints.

Murawski & Roberts (1994) examined the time signature of impulsively generated waves in coronal loops modelled as two dimensional slabs. In this work, they found that the loops react periodically and quasi-periodically to an external impulse. When considering two neighbouring slabs, they found that the velocity evolution of the loops was very much more complicated. The dynamics of this model includes much of what was seen on July 14th, 1998, most notably the initial impulse, the oscillations and of course the decay of those oscillations. A more detailed examination of the dynamics of this model - in particular the decay exponent and the time dependence of the oscillation period - would be very useful in comparing the model with the observations.

Alternatively, it could be that the change in wavelet scale with time is simply revealing that the loop observed is not in fact a single loop but two (or more) loops with differing oscillation periods and decay times. Experiments with test time series having two components, the longer period one having the longer decay time, exhibit behaviour in their wavelet transforms very similar to that observed experimentally. This conclusion is also supported by simply looking at a movie of the loop - at some times it appears that the loop width increases, which may be indicative of more than one loop.

A strong selection effect is evident in this multiple loop model. The oscillation that is seen is one whose amplitude is large enough, decay time long enough and decay exponent small enough for it to be detected. Other oscillations may be present; indeed, an entire spectrum of oscillations may be present. Therefore, when a single oscillation model is fitted to such data, the fitted values are biased towards large amplitudes, long decay times and small decay exponents. Hence the values measured are weighted averages only and do not represent the range that may be present.

The reason why this loop oscillates, and its decay mechanism, are clearly not resolved. Plainly, the choice of mechanism makes assumptions about the underlying physics, and that choice should be as fully supported as possible by the observations. Consequently, as much information as is reasonable (and an assessment of the quality of that information) should be obtained from a careful and thorough analysis of the data. Wavelet techniques such as the ones described here have been developed and applied with this in mind.

Acknowledgements
JI would like to thank the Solar MHD Group of the University of St. Andrews, the European Solar Magnetometry Network and L'Osservatorio di Astronomica di Capodimonte, Napoli, Italy for their support of this work.

Copyright ESO 2002