A&A 471, 687-694 (2007)
DOI: 10.1051/0004-6361:20066491
M. Rieutord1 - T. Roudier2 - S. Roques1 - C. Ducottet3
1 - Laboratoire d'Astrophysique de Toulouse et Tarbes, UMR 5572, CNRS et Université Paul Sabatier, 14 avenue E. Belin, 31400 Toulouse, France
2 - Laboratoire d'Astrophysique de Toulouse et Tarbes, UMR 5572, CNRS et Université Paul Sabatier, 57 avenue d'Azereix, BP 826, 65008 Tarbes Cedex, France
3 - Laboratoire Hubert Curien, UMR 5516, CNRS et Université Jean Monnet, 18 rue du Prof. B. Lauras, 42000 Saint-Étienne, France
Received 2 October 2006 / Accepted 12 April 2007
Abstract
Aims. Determination of horizontal velocity fields at the solar surface is crucial for understanding the dynamics of structures like mesogranulation or supergranulation, or simply the distribution of magnetic fields.
Methods. We pursue here the development of a method called CST, for coherent structure tracking, which determines the horizontal motion of granules in the field of view.
Results. We first devise a generalization of Strous's method for the segmentation of images and show that, when the segmentation follows the shape of granules more closely, granule tracking becomes more sensitive to the fragmentation of large granules and thus less effective for them. We then introduce multi-resolution analysis of the velocity field, based on Daubechies wavelets, which provides a view of this field on different scales. An algorithm for computing the field derivatives, like the horizontal divergence and the vertical vorticity, is also devised. The effects of missing data and of the distortion of images by the terrestrial atmosphere are also briefly discussed.
Key words: convection - turbulence - Sun: granulation - Sun: photosphere
Determining the flows on the surface of the Sun has triggered many efforts over the last decade. Of special concern was the determination of granule motions, which can reveal horizontal flows on scales larger than, typically, twice the granules' horizontal size, i.e. 2000 km. This scale is small enough to provide observers of the Sun's surface with a detailed sampling of the large-scale flows, such as supergranulation, and therefore makes it very interesting to determine granule motions.
However, from the viewpoint of fluid mechanics, granules are not passive
scalars whose motions trace that of the fluid; rather, they are
structures in intensity or, assuming perfect correlation with
temperature, coherent structures of temperature. But the evolution of
the temperature field is the result of radiative, as well as advective,
processes. It is only when the latter dominate
that the motions of granules can be associated with horizontal flows.
Using numerical simulations in a large horizontal box, we have shown
that granule motions are highly correlated with horizontal flows when
the scale is larger than 2500 km (Rieutord et al. 2001); below this scale,
granule motions should be considered as (solar) turbulent noise.
Once the equivalence of plasma motion and granule motion is assumed,
one is left with the problem of measuring the latter motion. This is
not an easy task owing to the small angular size (1.3″) of the structures. Ground-based observations are sensitive to atmospheric turbulence, while space observations are expensive owing to the (relatively) large aperture needed for resolving granules.
Basically, two techniques have been used to measure horizontal velocity fields: either the tracking of individual granules (Strous 1995) or local correlation tracking (November & Simon 1988). The results of these two techniques have been compared (Roudier et al. 1999; Simon et al. 1995) and found to broadly agree; in the test using numerical simulations (Rieutord et al. 2001), they show the same degree of correlation with the actual plasma flows. However, detailed examinations (Roudier et al. 1999; Simon et al. 1995) have demonstrated worrying differences, especially when field derivatives like vertical vorticity and divergence are computed.
In fact from the point of view of signal processing, these two methods differ fundamentally. On the one hand, granule tracking emphasises the importance of the granule, gives no signal in between granules, and yields a velocity field that is sampled randomly following the distribution of granules. On the other hand, local correlation tracking (LCT hereafter) treats granules and intergranules on an equal footing and yields a velocity field on a regular grid. Broadly speaking, the two methods differ in the interpolation process, which unfortunately influences the final result.
In this paper we present and analyse in some detail an algorithm based on granule tracking which is able to give a reconstruction of the velocity field at all scales larger than the sampling scale. This algorithm has already been introduced in Roudier et al. (1999) in a preliminary version. We call it CST for coherent structure tracking to underline its relation with the physics lying behind it. Such an algorithm is close in its principles to particle-imaging velocimetry (PIV), as used in experimental fluid mechanics (e.g. Adrian 2005).
We developed this algorithm for three reasons: the first is obviously because it gives a different view of the data than does the LCT algorithm, since many interpolation problems may influence the final results (see the discussion in Potts et al. 2003). The second one is that it may be used on raw data and gives an estimate of the error introduced by atmospheric distortion (see the companion paper, Tkaczuk et al. 2007, in which this point is developed). Finally, it offers the possibility of selecting specific structures according to their nature, size, lifetime, etc. and of studying their motion.
In the next section we discuss the different steps of the algorithm, especially the segmentation and interpolation processes, and also point out the effects of regions lacking data. Discussion and conclusions follow.
Before describing the different steps in some detail, let us recall the five main steps of this algorithm:
1. segmentation of the images;
2. identification of the granules;
3. construction of the granule trajectories;
4. computation of the velocities on a regular grid;
5. reconstruction and differentiation of the velocity field using multi-resolution analysis.
To identify a granule, one needs a criterion with which to decide whether a given pixel belongs to a given structure or not. This criterion needs to be local in order to avoid threshold effects due to large-scale variations in intensity, which come from terrestrial atmospheric effects, solar acoustic waves, or even magnetic fields.
A classical criterion is based on detecting local maxima of the intensity through the discrete curvature $C = (I_{i+1} - I_i) - (I_i - I_{i-1}) = I_{i+1} - 2I_i + I_{i-1}$ (Roudier et al. 1999; Strous 1995). This criterion has the advantage of being simple, robust, and therefore quite efficient. However, comparing the detected patterns with the original image shows that this criterion underestimates the size of the granules. It is therefore interesting to know whether it can be improved; an objective test of any improvement is whether the measured lifetime of the granules increases.
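As an illustration, here is a minimal NumPy sketch of such a curvature-based selection (our own code, not the published implementation; with the sign convention of the formula above, a local maximum gives C < 0 in every direction):

import numpy as np

def strous_like_mask(img):
    # Keep pixels that are local intensity maxima along the four directions
    # defined by the 8 neighbours, using the second difference
    # C = I[i+1] - 2 I[i] + I[i-1] (negative at a local maximum with this
    # sign convention). Borders wrap around for simplicity.
    mask = np.ones(img.shape, dtype=bool)
    for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
        plus = np.roll(img, (-dy, -dx), axis=(0, 1))   # I[i+1] along (dy, dx)
        minus = np.roll(img, (dy, dx), axis=(0, 1))    # I[i-1] along (dy, dx)
        mask &= (plus - 2.0 * img + minus) < 0.0
    return mask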
Figure 1: Comparison of the result of different segmentations on the same test image. From top to bottom and left to right: the original image, the result of Strous's method, the result of the BW method, and the result of the proposed method. Note that BW leaves some granules undetected.
Another method has been proposed by Bovelet & Wiehr (2001, hereafter referred to as BW) with a multiple-level tracking algorithm. It is based on multiple threshold levels applied to the intensity image: the granules detected at a high level are gradually extended to adjacent pixels whose intensity exceeds the lower level, while keeping a minimum distance with respect to other granules. This approach, which is very similar to a watershed-based segmentation (Soille 1999; Vincent & Soille 1991), yields sizes and shapes of granules that conform more closely to direct inspection of the image. Nevertheless, the number of detected granules is smaller than with the Strous algorithm because the method is based on the intensity.
The Strous curvature criterion is more efficient at separating the granules. It consists in selecting the pixels whose minimal local curvature is not negative, the curvature being calculated in the four directions defined by the 8 neighbours of the pixel under consideration. The underestimation of the granule size stems from the real extension of granules being greater than the region selected by the curvature criterion. We thus propose a new segmentation algorithm (hereafter termed CD) that combines the ideas of BW and of the Strous algorithm: granules are first seeded with the curvature criterion and then extended, as in BW, to adjacent pixels whose intensity exceeds a threshold.
Figure 2: Comparison of the effect of different segmentations on the lifetimes of granules. BW refers to the Bovelet & Wiehr method, while CD refers to our method (see text); the associated number is the extension threshold parameter.
This new approach leads to a segmentation with the same granules as Strous's approach, but with a controllable size: Strous's segmentation is recovered for a particular value of the extension threshold parameter, and decreasing this parameter extends the granules. In Fig. 1 we compare the discussed segmentation methods using Pic-du-Midi data. This figure illustrates the way our segmentation extends that of Strous and closely follows the shape of the granules.
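The extension step can be sketched as follows; this is a hypothetical implementation of the idea described above (Strous seeds grown into bright adjacent pixels), with our own choice of a local-mean threshold, so the actual CD code may differ:

import numpy as np
from scipy import ndimage

def cd_segmentation(img, seeds, tau):
    # Grow the Strous seeds into adjacent pixels whose intensity exceeds a
    # threshold tau relative to the local mean intensity; pixels already
    # assigned to a granule are never overwritten, which keeps neighbouring
    # granules separated.
    labels, _ = ndimage.label(seeds)
    candidates = img > tau * ndimage.uniform_filter(img, size=31)
    for _ in range(10):                        # a few dilation passes suffice
        dilated = ndimage.grey_dilation(labels, size=(3, 3))
        newly = (labels == 0) & candidates & (dilated > 0)
        if not newly.any():
            break
        labels[newly] = dilated[newly]
    return labels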
To study the influence of the segmentation on the lifetime of granules, we plotted the lifetime statistics for the three methods: the Strous method, the BW method, and our proposed method, the latter with three different values of the extension threshold (Fig. 2).
This figure shows that, broadly speaking, the segmentation does not influence the statistics of lifetimes very strongly. However, some differences can be noticed in the details. The BW algorithm, by detecting fewer granules, shows a deficit of both short-lived and long-lived granules; on the other hand, our algorithm eliminates long-lived structures when used with too low a threshold. We understand this behaviour as the result of enhanced splitting of larger structures.
In conclusion, Strous's algorithm seems the most efficient for our purpose; it can be improved in the way described above, but at the price of a much larger computational effort.
Once the image has been segmented, each granule needs to be identified. This operation, although very simple, can be quite time-consuming since all pixels must be tested at least once. The most efficient way we have found is a recursive algorithm that lets granules grow from a single pixel; a pixel belongs to a granule if it shares at least one side with another pixel of that granule.
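For instance, in the following sketch an explicit queue replaces the recursion, which avoids stack-depth limits on large granules:

import numpy as np
from collections import deque

def label_granules(mask):
    # Label connected regions of a segmented (boolean) image. A pixel joins
    # a granule if it shares a side with a pixel of that granule
    # (4-connectivity). Each granule grows from a single seed pixel.
    labels = np.zeros(mask.shape, dtype=int)
    ny, nx = mask.shape
    current = 0
    for y0 in range(ny):
        for x0 in range(nx):
            if mask[y0, x0] and labels[y0, x0] == 0:
                current += 1
                queue = deque([(y0, x0)])
                labels[y0, x0] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < ny and 0 <= u < nx \
                                and mask[v, u] and labels[v, u] == 0:
                            labels[v, u] = current
                            queue.append((v, u))
    return labels, current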
Figure 3: Trajectories of granules in a five-minute time sequence. Top: granule displacements in a small subfield. Bottom: enlarged view of a granule trajectory. Data are from the 1988 Pic-du-Midi series (e.g. Muller et al. 1992).
Once the granules have been identified, the (x,y) coordinates of their barycentres are computed. Hence, each image is converted into a set of points $\{(x_k, y_k)\}$ describing the positions of the granules at time $t_n$. These data may be completed by the set of granule surfaces, shapes, etc.
The set of points is then divided into trajectories. Trajectories are identified by comparing the positions on two consecutive images and associating nearest neighbours, provided their positions do not differ by more than a given threshold, which is determined by an upper bound on the velocity; typically, we reject velocities higher than 5 km s$^{-1}$.
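A minimal sketch of this matching step (hypothetical function and parameter names; a greedy nearest-neighbour association between consecutive images):

import numpy as np

def link_trajectories(frames, dt, pix_km, vmax_kms=5.0):
    # frames: list of (N_i, 2) arrays of granule barycentres (pixels) per
    # image; dt: cadence in s; pix_km: pixel size in km. Greedy
    # nearest-neighbour association between consecutive images, rejecting
    # displacements that imply a speed above vmax_kms.
    rmax = vmax_kms * dt / pix_km
    trajectories = [[tuple(p)] for p in frames[0]]
    active = list(range(len(trajectories)))     # trajectory index of each granule
    for prev, cur in zip(frames[:-1], frames[1:]):
        dist = np.linalg.norm(prev[:, None, :] - cur[None, :, :], axis=2)
        nxt = [-1] * len(cur)
        for i in np.argsort(dist.min(axis=1)):  # best candidates first
            j = int(np.argmin(dist[i]))
            if dist[i, j] <= rmax and nxt[j] == -1:
                nxt[j] = active[i]
                trajectories[active[i]].append(tuple(cur[j]))
        for j in range(len(cur)):               # unmatched granules open new tracks
            if nxt[j] == -1:
                trajectories.append([tuple(cur[j])])
                nxt[j] = len(trajectories) - 1
        active = nxt
    return trajectories

def mean_velocity(traj, dt, pix_km):
    # Mean velocity vector (km/s) of a trajectory spanning >= 2 images:
    # end-to-end displacement divided by the elapsed time.
    r0, r1 = np.array(traj[0]), np.array(traj[-1])
    return (r1 - r0) * pix_km / ((len(traj) - 1) * dt)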
If one has a long time series of images, it may be useful to determine the time evolution of the velocity field. For this purpose, a time window of width $T_{\rm w}$ is used and the trajectories are restricted to it. Hence, for a given time window, one derives a set of N trajectories. From this set we derive a mean velocity associated with each trajectory: the kth trajectory gives the velocity
$$\vec{v}_k = \frac{\vec{r}_k(t_{\rm e}) - \vec{r}_k(t_{\rm b})}{t_{\rm e} - t_{\rm b}},$$
where $t_{\rm b}$ and $t_{\rm e}$ are the times of the first and last positions of the trajectory within the window.
The values of the velocities are of course not uniformly distributed in the field of view, and we need to know how they constrain the velocity field at a given resolution: small-scale components are weakly constrained, while large-scale ones are highly constrained. The maximum resolution for the velocity field is given by the density of trajectories and can be estimated by the mean distance $\langle d\rangle$ between the measured velocity vectors. As the time window $T_{\rm w}$ is increased, the maximum resolution increases, the mean distance decreasing as $\langle d\rangle \propto T_{\rm w}^{-1/2}$, as clearly shown by Fig. 4. This law arises because granules cover the Sun's surface permanently. From it, we can derive the maximum spatial resolution for a given time resolution: indeed, from Fig. 4, it turns out that the mean volume of space-time occupied by one granule is $\langle d\rangle^2\,T_{\rm w} \simeq 1200$ Mm$^2$ s.
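For instance, combining this space-time volume with a chosen time window gives the attainable mesh size:
$$\langle d\rangle^2\,T_{\rm w} \simeq 1200~{\rm Mm^2\,s} \;\Longrightarrow\; \langle d\rangle \simeq \sqrt{\frac{1200~{\rm Mm^2\,s}}{T_{\rm w}}}, \qquad {\rm e.g.}~~T_{\rm w} = 900~{\rm s} \;\Rightarrow\; \langle d\rangle \simeq 1.15~{\rm Mm}.$$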
Figure 4: Mean distance between velocity vectors as a function of the time window used for the measurement. The line shows the law $\langle d\rangle \propto T_{\rm w}^{-1/2}$.
This value is useful for determining the highest time resolution that can be allowed for the large-scale velocity fields. Indeed, as pointed out in Rieutord et al. (2001), granules cannot be used to trace the plasma flow below a scale of 2.5 Mm (except in the case of very rapid flows like the "explosion'' of granules); thus the determination of a large-scale flow needs a mesh size not smaller than 1.25 Mm. This means that each grid point will have a trajectory only if $T_{\rm w} \ga 1200/1.25^2 \simeq 770$ s. Conservatively, we estimate that the time resolution cannot be better than 15 min.
Often, time resolution is not needed and $T_{\rm w}$ is then much larger than 15 min. In such a case, several velocities may be given by the granules appearing at a given place on the grid. We then use the average velocity of these granules as the measure of the local velocity. This introduces another quantity, namely the rms fluctuation around this mean. This rms velocity is a measure of the proper velocity of granules and is therefore a proxy of the local strength of turbulent convection.
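Schematically, the binning can be written as follows (an illustrative sketch with our own names; empty cells are kept as holes, whose effect is discussed below):

import numpy as np

def bin_velocities(points, velocities, mesh_km, size_km):
    # points: (N, 2) granule positions in km; velocities: (N, 2) mean
    # trajectory velocities. Each mesh cell receives the average velocity of
    # the granules it contains; the rms fluctuation around that mean is kept
    # as a proxy of the local strength of turbulent convection. Cells
    # without any trajectory are left as NaN holes.
    n = int(size_km // mesh_km)
    ix = np.clip((points[:, 0] // mesh_km).astype(int), 0, n - 1)
    iy = np.clip((points[:, 1] // mesh_km).astype(int), 0, n - 1)
    vmean = np.full((n, n, 2), np.nan)
    vrms = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(n):
            sel = (ix == i) & (iy == j)
            if sel.any():
                v = velocities[sel]
                vmean[i, j] = v.mean(axis=0)
                vrms[i, j] = np.sqrt(((v - v.mean(axis=0)) ** 2).sum(axis=1).mean())
    return vmean, vrms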
Figure 5: Isoline plot for the z-component of the curl of the velocity field $v_x=-y$, $v_y=x$.
Figure 6: On the left, the velocity field with a small mesh size (713 km) is complete with 83% of data; on the right, using a larger mesh size of 1223 km, the velocity field is complete to 99%. Patterns of velocities are easily identified between the two. We used a time interval of 15 min.
Once the velocities in the field of view are determined, we need to know what kind of flow they represent: vortices, shear layers, diverging sources, etc. For this purpose we need to determine the best continuous, differentiable field that approximates the data. Such a field can be determined in various ways, but we wish to introduce no information, or as little as possible, into this process. We also wish to avoid the propagation of errors and noise across the field of view.
We have found that wavelet multi-resolution analysis (MRA) is an interesting tool for this purpose, because it gives a decomposition of the signal at all the scales allowed by the size of the box and therefore gives a good view of all the components of the signal (Meyer 1987; Mallat 1989).
The basic idea of a "multi-resolution representation'' is to project a signal f on a "wavelet orthonormal basis'' of $L^2(\mathbb{R})$, at which point it is possible to extract the "difference in information'' between two successive approximations of the signal (approximations at the resolutions $2^j$ and $2^{j+1}$). The wavelet orthonormal basis is a family of functions $\psi_{j,n}(x) = 2^{j/2}\,\psi(2^j x - n)$, built by dilations and translations of a unique function $\psi$: the analysing wavelet. The decomposition thus obtained is the MRA. The signal can be reconstructed from this representation without any difficulty. We give a minimal background on this technique in Appendix A.
Figure 7: Divergence of the velocity field shown in Fig. 6. As in Fig. 6, the left panel shows the more resolved, but less complete, sample and the right panel the less resolved, but more complete, sample. In the first row the data have been only slightly filtered (they are projected on the space j=1), and one can hardly identify any common structure in the two plots. However, some common features clearly appear between the two plots when considering the low-frequency j=3 component.
Now we need to specify the choice of the analysing wavelet. As in many problems of image processing, we choose the Daubechies wavelet because of its compact support. This property is important since it minimizes border effects and interactions between patterns of the signal during the filtering process. Moreover, these wavelets preserve the location of the zero-crossings and maxima of the signal during the analysis, a property that makes the analysis robust in noisy environments (Sahiner & Yagle 1993). Thus, the contours of the image can be determined efficiently. This property is important in image processing since the features of the image are preserved after filtering. For the velocity fields we are dealing with, this is also an interesting point, because we wish to identify flow structures like divergences and vortices.
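For illustration, such a scale-by-scale view can be reproduced with a public wavelet library; the sketch below uses PyWavelets and the db8 wavelet (this is not the authors' implementation):

import numpy as np
import pywt

def mra_view(field, levels=3, wavelet="db8"):
    # Decompose one velocity component with compactly supported Daubechies
    # wavelets and reconstruct, scale by scale, the coarse approximation and
    # the detail ("difference in information") carried by each level.
    ny, nx = field.shape
    coeffs = pywt.wavedec2(field, wavelet, level=levels)
    zeroed = lambda d: tuple(np.zeros_like(c) for c in d)
    views = {"approx": pywt.waverec2(
        [coeffs[0]] + [zeroed(d) for d in coeffs[1:]], wavelet)[:ny, :nx]}
    for k in range(1, levels + 1):              # k = 1 is the coarsest detail
        keep = [np.zeros_like(coeffs[0])] + [
            d if m == k else zeroed(d) for m, d in enumerate(coeffs[1:], start=1)]
        views[f"detail_{k}"] = pywt.waverec2(keep, wavelet)[:ny, :nx]
    return views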
Finally, wavelet analysis also allows us to determine the relevance (or the reality) of flow structures on different scales. One may indeed apply the MRA to the velocity field and the noise field. Then, for each scale of the flow, we can compare the details and the amount of noise to see whether the details are relevant or are simply noise structures.
Once an approximation of the velocity field is known, it is useful to detect flow patterns that may be important for the dynamics of the fluid. As the (measured) velocity field is purely two-dimensional, two quantities are relevant for enhancing flow structures: the horizontal divergence $\partial_x v_x + \partial_y v_y$ and the z-component of the vorticity $\partial_x v_y - \partial_y v_x$.
The way derivatives are computed can be explained with a one-dimensional example. Let us consider the approximation on scale j of the signal f,
$$f_j(x) = \sum_n \langle f, \phi_{j,n}\rangle\, \phi_{j,n}(x), \eqno(2)$$
where the $\phi_{j,n}$ are the scaling functions; the derivative of this approximation is obtained by differentiating the $\phi_{j,n}$, which are regular enough for the Daubechies wavelets used here.
We show the computation of the curl for a simple velocity field, namely $v_x=-y$, $v_y=x$, in Fig. 5. For this solid-body rotation the vertical vorticity is uniform, $\partial_x v_y - \partial_y v_x = 2$, and we see that, except for the border effects due to the finite size of the filter, the value 2 is correctly restored.
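This test is easy to reproduce. The following check uses a plain finite-difference gradient rather than the wavelet-projected derivative of the text, so the border effects of the finite filter do not appear, but the expected uniform value is the same:

import numpy as np

# Solid-body rotation check: for vx = -y, vy = x the vertical vorticity
# dvy/dx - dvx/dy equals 2 everywhere.
n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)
vx, vy = -y, x
curl = np.gradient(vy, axis=1) - np.gradient(vx, axis=0)
assert np.allclose(curl, 2.0)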
One of the problems arising when reconstructing the velocity field comes from the presence of bins without data. Such bins produce structures in the divergence and curl fields. A simple illustration of the effect is given in Fig. 5 where the curl of a solid rotation velocity field is plotted. This figure shows the importance of the compact support of the wavelet in limiting the propagation of errors.
Let us now consider some real data taken from the Pic du Midi data set (see Roudier et al. 1999, for details). Considering the velocity field first, the effects of empty bins are not dramatic, as may be seen in Fig. 6; velocity patterns are indeed not affected much by holes. This is not the case for the divergence: we see in Fig. 7 that when the number of empty bins increases, the patterns hardly remain identifiable. Only after a strong filtering can one recover similar patterns.
The foregoing example illustrates what may be at the origin of the worrying differences found by Simon et al. (1995) between feature tracking and LCT, namely an interpolation problem emphasised by considering differentiated fields. The two methods will converge to similar results only when these effects are overcome, which means that the time- or space-averaging is large enough. Indeed, empty bins disappear either when the sampling grid is coarse enough or when the time resolution is low enough (see the end of Sect. 2.2).
In this paper we have presented the coherent structure tracking algorithm aimed at reconstructing the horizontal velocity field on the solar surface from the granule motions.
We first discussed the role of segmentation and described a way to generalise Strous's algorithm. We have shown that a segmentation that follows the shape of granules more closely is more sensitive to the splitting of large granules and that, as far as velocity measurements are concerned, the Strous criterion remains the most efficient. We then showed that the reconstruction of the velocity field is a delicate process because it is subject to many constraints. Indeed, granules do not sample the field of view uniformly, and reconstruction of the velocity field, along with its derivatives like divergence or curl, requires some interpolation. There are many ways of performing such an operation; however, classical methods, like polynomial interpolation, would propagate errors and noise everywhere. We thus selected a method based on MRA, which projects the data onto Daubechies wavelets. The finite support of these functions limits the effects of noise and error propagation: sides and regions lacking data have a limited influence. Moreover, the signal is decomposed at different scales through a filtering process; at each step the filtered field and the remaining details can be viewed and compared. This representation is particularly relevant for turbulent flows, since the relation between amplitude and scale is of crucial importance for constraining models of these flows.
We also related the minimum size of the velocity mesh grid to the time resolution. We thus found that, typically, one granule trajectory occupies a "volume'' of 1200 Mm$^2$ s. When the "space-time'' resolution does not reach this limit, many granules contribute to the velocity at one mesh point. Their mean velocity is considered as the true local velocity, but the local fluctuations around this mean give some information on the local strength of convection.
We did not discuss here the influence of the Earth's atmospheric turbulence on the determination of the velocity fields and their derivatives. Surely this is an important point: typical atmospheric distortion of images induces, in good seeing conditions, feature motions of 0.13″. For a long-lived structure, say 10 min, this means an uncertainty in the velocity as high as 220 m/s, which is quite large compared to a typical velocity of 600 m/s. Hence, atmospheric noise is a non-negligible part of the data and a careful determination of its influence is needed. This is the subject of the companion paper, to which we refer the reader (Tkaczuk et al. 2007).
Finally, although this has not been tested yet, we think that the CST algorithm can be fruitfully used to track magnetic features of the photosphere, like network bright points, or even to determine the velocities of features in the solar atmosphere.
We give in this appendix the basic background of MRA and refer the reader to textbooks for a more complete presentation (e.g. Daubechies 1992; Mallat 1999). An MRA is a sequence $(V_j)_{j\in\mathbb{Z}}$ of closed subspaces of the square-integrable functions $L^2(\mathbb{R})$ such that the five following properties are satisfied for all $j\in\mathbb{Z}$:
$V_j$ can be interpreted as the set of all possible signal approximations at the resolution $2^j$ (a resolution r is defined by the size 1/r of the smallest detail). Thus, if the smallest detail in $V_0$ has size 1, it is possible to resolve in $V_j$ details of size $2^{-j}$.
It follows from this property ($V_j \subset V_{j+1}$) that the approximation of the signal at resolution $2^{j+1}$ contains all the information necessary to determine the same signal at the lower resolution $2^j$ (plus additional details).
In other words, when j increases, the approximated signal converges to the original signal ($\overline{\bigcup_j V_j} = L^2(\mathbb{R})$). Conversely, when j (the resolution) decreases, the approximated signal converges to zero ($\bigcap_j V_j = \{0\}$): it contains less and less information.
This property ($f(x) \in V_j \Leftrightarrow f(2x) \in V_{j+1}$) defines 2 as the rate of scaling change (the ratio of two successive resolution values).
This property ($f(x) \in V_j \Rightarrow f(x - 2^{-j}n) \in V_j$ for all integers n) characterizes the invariance under discrete translations: when the signal is translated by a length proportional to $2^{-j}$, the approximations are translated by the same amount and no information is lost.
One studies a signal f of $L^2(\mathbb{R})$ by projecting it orthogonally on the collection of the $V_j$ and of their orthogonal complements $W_j$, the detail spaces defined by $V_{j+1} = V_j \oplus W_j$. This procedure can be carried out with the pyramidal algorithm presented below.
First, let us define the two filters h and g that can be deduced from the MRA. The analysis provides a function h, which is the impulse response of a $2\pi$-periodic low-pass filter H defined with the scaling function $\phi$ through the two-scale relation
$$\frac{1}{\sqrt{2}}\,\phi\!\left(\frac{x}{2}\right) = \sum_n h(n)\,\phi(x-n).$$
On the other hand, one defines the function g by
$$\frac{1}{\sqrt{2}}\,\psi\!\left(\frac{x}{2}\right) = \sum_n g(n)\,\phi(x-n),$$
g being the impulse response of the $2\pi$-periodic high-pass filter G. The filters H and G are quadrature mirror filters, linked by the relation $G(\omega) = {\rm e}^{-{\rm i}\omega}\,\overline{H(\omega+\pi)}$, which gives $g(n) = (-1)^{1-n}\,h(1-n)$ for the impulse responses. The pyramidal architecture for computing the wavelet representation can then be written as
$$f_{j+1}(n) = \sum_k h(k-2n)\,f_j(k), \qquad d_{j+1}(n) = \sum_k g(k-2n)\,f_j(k).$$
After a number J of iterations, defined by the problem at hand, we have decomposed $f_0$ into the details $d_1, d_2, \ldots, d_J$ and the last approximation $f_J$.
Let us finally mention that the reconstruction of $f_0(n)$ from the details and the last approximation is just as easy and, for orthonormal wavelets, exact. One has to iterate (starting from j=J)
$$f_{j-1}(n) = \sum_k h(n-2k)\,f_j(k) + \sum_k g(n-2k)\,d_j(k).$$
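As a compact illustration of the two iterations above, here is a minimal NumPy sketch using the Haar pair (the n=1 case mentioned in the next paragraph; periodic boundaries, our own code):

import numpy as np

sqrt2 = np.sqrt(2.0)
h = np.array([1.0, 1.0]) / sqrt2     # Haar low-pass filter
g = np.array([-1.0, 1.0]) / sqrt2    # g(n) = (-1)**(1-n) * h(1-n)

def analyse(f, h, g):
    # One pyramid step: f_next(n) = sum_k h(k-2n) f(k), and likewise with g
    # for the details.
    N = len(f)
    fn, dn = np.zeros(N // 2), np.zeros(N // 2)
    for n in range(N // 2):
        for m in range(len(h)):
            k = (2 * n + m) % N      # so that k - 2n = m
            fn[n] += h[m] * f[k]
            dn[n] += g[m] * f[k]
    return fn, dn

def synthesise(fn, dn, h, g):
    # Inverse step: f(n) = sum_k [h(n-2k) f_next(k) + g(n-2k) d_next(k)].
    N = 2 * len(fn)
    f = np.zeros(N)
    for k in range(len(fn)):
        for m in range(len(h)):
            f[(2 * k + m) % N] += h[m] * fn[k] + g[m] * dn[k]
    return f

f0 = np.array([4.0, 2.0, 5.0, 7.0])
f1, d1 = analyse(f0, h, g)
assert np.allclose(synthesise(f1, d1, h, g), f0)   # exact reconstruction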
The MRA can be generalised to two dimensions for image-processing applications: we can define a sequence of multi-resolution vector spaces and the corresponding approximations of a two-dimensional signal (Mallat 1989). Since the image under study is bounded, we choose a wavelet $\psi$ with compact support (i.e. one that vanishes outside a finite region). In that case, the filters h and g have only finitely many coefficients satisfying the MRA conditions (Daubechies 1992). The scaling function $\phi$ and the wavelet $\psi$ become more regular as the number of coefficients n increases (the case n=1 corresponds to the discontinuous Haar basis). It has been proved (Daubechies 1992) that the regularity of $\phi$ and $\psi$ increases linearly with n. By choosing n=8, we obtain a good compromise between regularity and support width.