A&A
Volume 600, April 2017
Article Number A117
Number of page(s) 12
Section The Sun
DOI https://doi.org/10.1051/0004-6361/201527695
Published online 12 April 2017

© ESO, 2017

1. SOLA helioseismic inversions in a nutshell

The solar interior is filled with waves travelling between various locations. In the convective envelope, the pressure (p) modes largely dominate the spectrum. The propagation of these waves is affected by perturbations in the plasma state parameters, by magnetic fields and, last but not least, by plasma flows. Time-distance helioseismology comprises a set of tools used to measure and analyse the wave travel times. In this study we focus on difference travel times, that is, the difference between the measured travel times of waves travelling in opposite directions. In quiet-Sun regions, the difference travel times are mostly sensitive to flows (Burston et al. 2015).

The standard time-distance helioseismic pipeline consists of the following consecutive steps: first, the spatio-temporal datacube is prepared using the tracking and mapping pipeline; this datacube is spatio-temporally filtered to retain only the waves of interest; and subsequently the travel times are measured from cross-correlations of the filtered signal at two places. These travel times are finally inverted for flows assuming a linear relation between the flow vector v^true and the measured difference travel time δτ via the sensitivity kernel K_a (usually coming from forward modelling, e.g. Birch & Gizon 2007; Burston et al. 2015):

$$\delta\tau_a(\vec{r}) = \int_\odot \mathrm{d}^2r'\,\mathrm{d}z\; \vec{K}_a(\vec{r}'-\vec{r}, z) \cdot \vec{v}^{\rm true}(\vec{r}', z) + n_a(\vec{r}), \tag{1}$$

where the position vector splits into the horizontal (r) and vertical (z) domains. The index a uniquely refers to the selection of the measurement geometry (there is a free choice of spatio-temporal filters, of the distance of the measurement points, and/or of additional spatial averaging). The realisation of the random noise n_a is not known; however, its covariance matrix may be measured from the data and used to estimate the random-noise level in the inverted flow velocities.

Equation (1) describes the forward problem, which gives the recipe for computing the (forward-modelled) travel times when the vector velocity field v is known. The usual requirement is inverse modelling, that is, the derivation of the velocity field in the Sun from the measured set of travel-time maps. To this end, various classes of methods have been employed, of which two are used most often: regularised least squares (RLS) and optimally localised averaging (OLA). The RLS method (first used in time-distance helioseismology by Kosovichev 1996) seeks the models of the solar interior that provide the best least-squares fit to the measured travel-time maps, while regularising the solution (e.g. by requiring a smooth solution). OLA, or its variant Subtractive OLA (SOLA; Pijpers & Thompson 1992), is based on explicitly constructing spatially confined averaging kernels as linear combinations of the sensitivity kernels, while simultaneously keeping control over the level of random noise in the results. A SOLA-type inversion is the principal method discussed in the current paper, using a code validated by Švanda et al. (2011) and a data-processing pipeline verified against direct surface measurements by Švanda et al. (2013).

Both methods result in estimates of the flow at a given target depth z_0. This estimate is the true velocity field smoothed by the averaging kernel, and it also contains a random-noise component v^rnd:

$$v_\alpha^{\rm inv}(\vec{r}_0; z_0) = \sum_\beta \int_\odot \mathrm{d}^2r\,\mathrm{d}z\; \mathcal{K}_\beta^\alpha(\vec{r}-\vec{r}_0, z; z_0)\, v_\beta^{\rm true}(\vec{r}, z) + v_\alpha^{\rm rnd}(\vec{r}_0; z_0), \tag{2}$$

where α = (x, y, z) denotes a component of the velocity vector v^true. The SOLA method uses a user-given target function, which provides an initial estimate of the resulting averaging kernel. The method balances the spatial misfit between the resulting averaging kernel and the required target function against the propagation of the random noise into the resulting tomographic maps.

The averaging kernel with components 𝒦_β^α can be derived from the inversion as a secondary product and requires normalisation so that its spatial integral equals unity. The components contain information about the smoothing of the flow component in the direction of the inversion (β = α) and also about the leakage (the cross-talk) from the other components (β ≠ α).

The level of the random noise may also be estimated directly from the inversion (in the case of an OLA method), so at least its root-mean-square (RMS) value is known.

In effect, the output of a SOLA inversion is a set of tomographic maps at various target depths. By construction, the results are regularly gridded (defined on a regularly spaced grid) in the horizontal domain, whereas in the vertical direction the coverage of the domain is given by the selection of the target depths and is hence usually not regularly gridded. Regularly gridded solutions are required, for instance, whenever estimates of spatial derivatives are needed; for example, when computing the divergence, the vorticity, the helicity, or the Reynolds stresses from all flow components. The aim of this study is to derive and test a method for estimating the regularly gridded 3D vector flow field from such a set of discrete tomographic maps with known averaging kernels and noise-level estimates. Simplified approaches were used recently, usually taking the target depths as true representatives of the location of the tomographic maps and interpolating between them in order to obtain an estimate of the regularly gridded velocity field (e.g. DeGrave & Jackiewicz 2015). We would like to point out that such simplified methods are often not justified, because typical helioseismic averaging kernels contain sidelobes in their sensitivity, and it is therefore difficult to assign a single depth to such a kernel. The averaging kernels also often peak at a different depth than, for example, where their centre of gravity is located, which makes the assignment of a single representative depth even more ambiguous. Our goal is to find a way of reconstructing a reasonable estimate of the vertical variations of the flow that is only sparsely covered by a set of tomographic maps, using the full structure of the known averaging kernels to properly reconstruct the estimate of the true flow.
Our technique aims to combine the information from a set of inversions with different averaging kernels, thereby to some extent resembling the super-resolution technique (e.g. Hardie et al. 1997) routinely used in imaging problems.

2. Formulation of the problem

We seek the 3D regularly gridded vector velocity field which, when convolved with the set of given averaging kernels, provides a set of flow maps that is close to the set of tomographic maps from the inversion at the given target depths. Inspired by Eq. (2), the sought relation may be written for each target depth z_d as

$$\frac{v_\alpha^{\rm inv}(\vec{r}_i; z_d)}{\sigma_{\alpha,d}} = \frac{h_x^2 h_z}{\sigma_{\alpha,d}} \sum_\beta \sum_{\vec{r}_j} \sum_z \mathcal{K}_\beta^\alpha(\vec{r}_j-\vec{r}_i, z; z_d)\, v_\beta(\vec{r}_j, z), \tag{3}$$

where σ_{α,d} is the RMS of the random noise in the map of component α at target depth z_d, and where we replaced the continuous integrations by their discrete limits:

$$\int_\odot \mathrm{d}^2r \rightarrow h_x^2 \sum_{\vec{r}_j}, \qquad \int \mathrm{d}z \rightarrow h_z \sum_z. \tag{4}$$

Note that Eq. (3) is in fact a discretised Eq. (2) without an explicitly given noise term. The constant h_x is the spacing of the discrete horizontal grid (and h_z analogously represents the spacing of the vertical grid). For simplicity, we assume that the grids in both domains are regular and equidistant. For irregular grids, our solution must either be modified accordingly, or all vectors, matrices and 3D arrays must be interpolated onto a regular grid. The symbolic summation over r_j represents the summation over all N = n_x n_y discrete values of the position vector r_j on a grid of dimensions (n_x, n_y). The index d indicates the selection of a target depth. We assume that the velocity vector v is close to the true velocity field v^true; how close the two are is discussed in Sect. 2.2.
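Because the kernel argument r_j − r_i turns the discrete sums of Eq. (3) into a cross-correlation, the forward model can be evaluated efficiently with FFTs on a periodic horizontal grid. The following sketch illustrates this (the function name and array layout are our own, not from the paper; the 1/σ weighting is omitted for clarity):

```python
import numpy as np

def forward_model(kernels, v, hx, hz):
    """Discretised forward model of Eq. (3): correlate the 3D flow
    v[beta, z, y, x] with averaging kernels
    kernels[d, alpha, beta, z, y, x] and sum over beta and z.
    A periodic horizontal domain is assumed so FFTs can be used.
    Returns maps[d, alpha, y, x]."""
    nd, na, nb, nz, ny, nx = kernels.shape
    maps = np.zeros((nd, na, ny, nx))
    Kf = np.fft.fft2(kernels, axes=(-2, -1))
    Vf = np.fft.fft2(v, axes=(-2, -1))
    for d in range(nd):
        for a in range(na):
            acc = np.zeros((ny, nx))
            for b in range(nb):
                for iz in range(nz):
                    # kernel argument r_j - r_i -> cross-correlation,
                    # i.e. the conjugated kernel spectrum (cf. Eq. (7))
                    acc += np.real(np.fft.ifft2(np.conj(Kf[d, a, b, iz]) * Vf[b, iz]))
            maps[d, a] = hx**2 * hz * acc
    return maps
```

The conjugated kernel spectrum in the product is the same conjugate that appears in Eq. (7).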

For each target depth z_d, there is a separate Eq. (3); however, the modelled velocity field v is common to all these equations. The noise level of each of these maps very likely varies; hence we introduced the inverse weighting by the RMS of the random noise, σ_{α,d}, which means that less precise inferences (with a larger RMS of the random noise) are penalised relative to more precise inferences (with a smaller RMS of the random noise). For simplicity, we first discuss the equations to be solved for one target depth and only then combine all available target depths to form the larger problem.

The factors 1/σ_{α,d} in (3) could, in principle, be removed from each side of the equation. In the following, however, we use the set of Eqs. (3) written for a set of target depths z_d, where keeping these factors simplifies the introduction of additional terms later in the paper, especially when a regularisation of the solution is studied in order to reduce the propagation of the random noise.

Equation (3) may be written in Fourier space, with the Fourier transform defined by the pair of equations

$$f(\vec{k}) = \frac{h_x^2}{(2\pi)^2} \sum_{\vec{r}_j} f(\vec{r}_j)\, \exp(-\mathrm{i}\vec{k}\cdot\vec{r}_j), \tag{5}$$

$$f(\vec{r}_j) = h_k^2 \sum_{\vec{k}} f(\vec{k})\, \exp(\mathrm{i}\vec{k}\cdot\vec{r}_j), \tag{6}$$

where k is the horizontal wave vector and h_k the spacing in the Fourier domain. Equation (3) then transforms into

$$\frac{v_\alpha^{\rm inv}(\vec{k}; z_d)}{\sigma_{\alpha,d}} = \frac{(2\pi)^2 h_z}{\sigma_{\alpha,d}} \sum_\beta \sum_z \left[\mathcal{K}_\beta^\alpha(\vec{k}, z; z_d)\right]^* v_\beta(\vec{k}, z). \tag{7}$$

The star denotes the complex conjugate. Equation (7) holds for each wave vector k and target depth z_d. The precise derivation of Eq. (7) is given in Appendix A.

2.1. Solution for the flow

For each target depth z_d, we have three Eqs. (7), one for each velocity component v_γ. All these equations, however, contain the common velocity field that is to be determined. Apart from ideal or trivial cases, in reality we expect that it will be impossible to fulfil Eq. (7) exactly for a set of different target depths z_d. By casting the problem written for a set of target depths into the matrix equation

$$\vec{y} = (2\pi)^2 h_z\, \mathsf{A}\, \vec{x}, \tag{8}$$

the use of pseudoinverse solvers allows us to find the solution x in a least-squares sense. The matrices y, A and x consist of blocks of Eq. (7), namely

$$\vec{y} = \left[\frac{v_\alpha^{\rm inv}(\vec{k}; z_d)}{\sigma_{\alpha,d}}\right], \tag{9}$$

$$\mathsf{A} = \left[\frac{\left[\mathcal{K}_\beta^\alpha(\vec{k}, z; z_d)\right]^*}{\sigma_{\alpha,d}}\right], \tag{10}$$

$$\vec{x} = \left[v_\beta(\vec{k}, z)\right], \tag{11}$$

where the rows of y and A run over the indices (α, d) of the velocity components and target depths, and the columns of A (and the rows of x) run over the indices (β, z) of the components and grid depths. The goal is to retrieve the vector x, which contains the estimate of the regularly gridded velocity field. Such a calculation may be done using standard solvers. We note that in cases where 𝒦_β^α = 0 for α ≠ β, the solution is simpler, as the matrix A consists of diagonal blocks; such a solution is equivalent to the case when no cross-talk exists. Usually, there is a (possibly small) contribution of the cross-talk, so it is wise to use this information in the reconstruction of the estimate of the true velocity field.
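Stacking Eq. (7) over components and target depths and solving the system in a least-squares sense can be sketched as follows, one wave vector k at a time (a minimal illustration with our own, hypothetical array conventions; a standard pseudoinverse solver does the work):

```python
import numpy as np

def solve_flow_at_k(Kf, sigma, maps_f, hz):
    """For one horizontal wave vector k, stack Eq. (7) over target
    depths d and components alpha and solve y = (2 pi)^2 hz A x
    (Eq. (8)) for the flow spectrum x = v[beta, z] by least squares.
    Kf[d, alpha, beta, z] : Fourier transforms of the averaging kernels at k
    sigma[d, alpha]       : RMS noise of each tomographic map (inverse weights)
    maps_f[d, alpha]      : Fourier coefficients of the tomographic maps at k
    """
    nd, na, nb, nz = Kf.shape
    # rows run over (d, alpha); columns over (beta, z), cf. Eqs. (9)-(11)
    A = (np.conj(Kf) / sigma[:, :, None, None]).reshape(nd * na, nb * nz)
    y = (maps_f / sigma).reshape(nd * na)
    x, *_ = np.linalg.lstsq((2 * np.pi) ** 2 * hz * A, y, rcond=None)
    return x.reshape(nb, nz)
```

Repeating this for every k and inverse-transforming the columns of x yields the regularly gridded flow estimate.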

2.2. Averaging kernels

In the previous section, we assumed that the sought velocity field v is close to the real flow v^true. The relation “close” can be quantified by the assumed linear relation

$$v_\alpha(\vec{r}; z_t) = \sum_\beta \int_\odot \mathrm{d}^2r'\,\mathrm{d}z\; \mathcal{K}_\beta^{{\rm new}\,\alpha}(\vec{r}'-\vec{r}, z; z_t)\, v_\beta^{\rm true}(\vec{r}', z) + v_\alpha^{\rm noise}(\vec{r}; z_t), \tag{12}$$

where v^noise corresponds to the propagated random noise and the convolution with the new averaging kernels 𝒦^new is a linear operation. In this case, z_t is a target depth on the regular vertical grid; for each grid point on the vertical grid, we have a new averaging kernel. Equation (12) is therefore an equivalent of Eq. (2) in the original SOLA problem. Following the notation introduced in Eq. (12), Eq. (2) may also be written in the symbolic operator form

$$\vec{v}^{\rm inv} = \mathcal{K} \circ \vec{v}^{\rm true} + \vec{v}^{\rm rnd}, \tag{13}$$

and Eq. (8) as

$$\vec{v}^{\rm inv} = \mathcal{A}\left[\vec{v}\right], \tag{14}$$

where 𝒜 represents the operator form of the matrix A (Eq. (10)) that also absorbs the multiplicative factor (2π)² h_z for simplicity.

We insert (12) into (14) and use (13) to obtain

$$\mathcal{A}\left[\mathcal{K}^{\rm new} \circ \vec{v}^{\rm true}\right] + \mathcal{A}\left[\vec{v}^{\rm noise}\right] = \mathcal{K} \circ \vec{v}^{\rm true} + \vec{v}^{\rm rnd}. \tag{15}$$

By comparing the terms we have

$$\mathcal{K} = \mathcal{A}\left[\mathcal{K}^{\rm new}\right], \qquad \vec{v}^{\rm rnd} = \mathcal{A}\left[\vec{v}^{\rm noise}\right]. \tag{16}$$

The first part of the equation above gives a recipe for computing the new set of averaging kernels from the old ones, while the second part explains how the realisation of the random noise propagates through the reconstruction. After performing the mathematics in detail, the new averaging kernels result from solving, for each α, k, and z_t, an equation of the same form as Eq. (8),

$$\vec{y}_\mathcal{K} = (2\pi)^2 h_z\, \mathsf{A}\, \vec{x}_\mathcal{K}, \tag{17}$$

where y_𝒦 (Eq. (18)) stacks the Fourier transforms of the original averaging kernels analogously to Eq. (9), x_𝒦 (Eq. (19)) stacks the sought new averaging kernels analogously to Eq. (11), and A is given by Eq. (10).

3. Random noise

Tomographic maps from time-distance helioseismology naturally contain some level of random noise, which originates from the random excitation of the waves by convection. The realisation of this noise propagates through our reconstruction procedure into the regularly gridded estimate of the velocity field.

3.1. Random noise-level estimation

Similarly to the determination of the averaging kernels, we could proceed by deriving the propagation of the random noise through the procedure, basically leading to the second part of Eq. (16). However, the tomographic maps entering our reconstruction come with known estimates of the random-noise level in the form of the RMS of this random component only. As we show later, this might not be enough, since we also need (and use) an estimate of the spatial power spectrum N(k) of the random-noise component (for details see Appendix B). In such a case, the use of simulated random-noise realisations seems to be a feasible way to obtain an estimate of the random noise in the reconstructed flow cubes.

The noise part of Eq. (16), written out explicitly, transforms into the system

$$\vec{y}_{\rm n} = (2\pi)^2 h_z\, \mathsf{A}\, \vec{x}_{\rm n}, \tag{20}$$

where A is given by Eq. (10) and y_n stacks the simulated noise realisations n_α(k; z_d)/σ_{α,d} analogously to Eq. (9), written explicitly for each component in Eq. (21). The values of n_α are randomly generated so that their spatial power spectrum is N(k) and their RMS equals σ_{α,d}. The estimate of the random-noise level propagating into the resulting reconstructed velocity field is then computed as the RMS of the solutions x_n over independent noise realisations (Eq. (22)).
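A Monte-Carlo noise realisation with a prescribed power spectrum and RMS, as used in the propagation just described, might be generated along these lines (a sketch; the helper name and the random-phase construction are our own assumptions, not the paper's implementation):

```python
import numpy as np

def noise_realisation(Nk, rms, rng):
    """Generate a random 2D field whose spatial power spectrum follows
    Nk (given on the FFT grid of the map) and whose RMS equals `rms`.
    The spectrum amplitude is fixed to sqrt(Nk) and the phases are
    drawn uniformly; taking the real part and rescaling enforces a
    real field with the requested RMS."""
    ny, nx = Nk.shape
    phase = np.exp(2j * np.pi * rng.random((ny, nx)))
    field = np.real(np.fft.ifft2(np.sqrt(Nk) * phase))
    field -= field.mean()          # zero-mean noise
    return field * (rms / np.std(field))
```

Feeding such realisations through the solver of Eq. (20) and taking the RMS of the results over many draws yields the noise-level estimate of Eq. (22).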

3.2. Dealing with random noise

The algorithm described in Sect. 2 in fact belongs to the class of deconvolution problems in astrophysics, which are known to be unstable in the presence of random noise. Using our synthetic tests, we found that our implementation, as described in the previous sections, boosts the random noise by approximately eleven orders of magnitude in the case of a realistic noise power spectrum, which is unacceptable for any reasonable flow reconstruction.

Thus, we require a method for suppressing the propagation of the random noise. A way out is to avoid the spatial wave numbers where the noise dominates. This can be achieved by introducing a filter in the k-space domain that, in effect, reconstructs only a subset of wave vectors and replaces the contribution of wave vectors with large wave numbers (where the noise is likely to dominate) by zeros. We found that for a small level of noise (less than a 1% contribution), when approximately the 20% smallest wave numbers were used, the reconstruction led to an acceptable solution. For any larger noise contribution, the solution was not acceptable.

Another option is to introduce an ad-hoc regularisation term to ensure a smooth solution, assuming that the random noise causes small-scale oscillations in the solution. Such a regularisation is relatively common in helioseismology. We may introduce a smoothing parameter ϵ and, thanks to keeping the factors 1/σ_{α,d} in Eq. (7), simply modify the matrix A to Ã, for instance by appending a scaled identity block,

$$\tilde{\mathsf{A}} = \begin{pmatrix} \mathsf{A} \\ \sqrt{\epsilon}\,\mathsf{1} \end{pmatrix}, \tag{23}$$

with the data vector y extended by zeros accordingly. Our ad-hoc term resembles that of the RLS inversion, as used by Couvidat et al. (2006), for example. Then, we invert for x from the equation y = (2π)² h_z Ã x. The boosting of the noise by eleven orders of magnitude mentioned above forces us to use a very strong regularisation (with ϵ of the order of 10⁴ or more), which, in effect, allows us to reconstruct only the very large-scale components of the spatial spectrum correctly, making the results extremely smooth. All the details of the flow field are lost, and we do not find such a result satisfactory. However, the regularisation may become useful when dealing with numerical stability, for example when the matrix A is close to singular.
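Appending √ϵ-scaled identity rows to A is equivalent, via the normal equations, to solving (AᴴA + ϵ1)x = Aᴴy, i.e. minimising |Ax − y|² + ϵ|x|². A minimal sketch of this Tikhonov-type smoothing (our own formulation, with the (2π)²h_z factor assumed absorbed into A):

```python
import numpy as np

def solve_regularised(A, y, eps):
    """Solve y = A x with an ad-hoc smoothing term, i.e. minimise
    |A x - y|^2 + eps |x|^2 via the normal equations
    (A^H A + eps I) x = A^H y. Larger eps damps small-scale
    oscillations (and, for very large eps, all flow details)."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + eps * np.eye(n), A.conj().T @ y)
```

For ϵ → 0 this reduces to the plain least-squares solution; large ϵ shrinks the solution norm, which is the over-smoothing described above.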

As a final attempt, we adapted a solution from image-restoration problems, namely Wiener filtering. Wiener filtering introduces something resembling a k-space filter that weights the wave numbers so that only those where the signal dominates the random noise contribute to the reconstructed image. The method requires prior knowledge of the expected power spectra of the real solar flows and of the random noise, which we have at our disposal (details in Appendix B).

Inspired by the Wiener-filtering algorithm, instead of inverting for x from y = (2π)² h_z A x for each k, we obtain an optimal estimate of x as

$$\hat{\vec{x}}(\vec{k}) = \frac{\|\mathsf{A}(\vec{k})\|^2}{\|\mathsf{A}(\vec{k})\|^2 + N(\vec{k})/S(\vec{k})} \left[(2\pi)^2 h_z\, \mathsf{A}(\vec{k})\right]^{+} \vec{y}(\vec{k}). \tag{24}$$

Note that A(k) is a matrix, whereas in the original Wiener formulation it was a scalar; we represent the norm of the matrix by the Euclidean norm. N(k) and S(k) represent the estimates of the power spectra of the random noise and the signal, respectively (for details see Appendix B).

4. Implementation and synthetic tests

We implemented the algorithm given above in the Matlab programming language. To assess the performance of the method, we constructed synthetic inversion-like tomographic maps by smoothing a numerical simulation of solar convection (Fig. 1; source data from Ustyugov 2008) with a set of averaging kernels (Fig. 2). The synthetic averaging kernels were constructed as Gaussians in both the horizontal and vertical directions, with a horizontal width of 5 Mm. In the vertical direction, the averaging kernels target depths of 1, 1.9, 2.9, 4.3, 6.2, 9.2 and 12.1 Mm, with a full width at half maximum equal to the target depth of the given kernel.
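Such synthetic Gaussian averaging kernels might be constructed as in the following sketch (the grid choices and function name are our own; the kernel is normalised to unit spatial integral, as required in Sect. 1):

```python
import numpy as np

def gaussian_kernel(x, y, z, z0, fwhm_h=5.0, fwhm_v=None):
    """Synthetic averaging kernel: a Gaussian in all directions with
    horizontal FWHM of 5 Mm and vertical FWHM equal to the target
    depth z0 (as in the synthetic tests), normalised so that its
    discrete spatial integral equals unity."""
    if fwhm_v is None:
        fwhm_v = abs(z0)
    # convert FWHM to Gaussian sigma: FWHM = 2 sqrt(2 ln 2) sigma
    s_h = fwhm_h / (2 * np.sqrt(2 * np.log(2)))
    s_v = fwhm_v / (2 * np.sqrt(2 * np.log(2)))
    X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
    K = np.exp(-(X**2 + Y**2) / (2 * s_h**2) - (Z - z0)**2 / (2 * s_v**2))
    hx = x[1] - x[0]
    hz = z[1] - z[0]
    return K / (K.sum() * hx**2 * hz)
```

One kernel per target depth (1 to 12.1 Mm, with z0 negative for depths below the surface) then defines the synthetic forward smoothing.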

Fig. 1

Simulated convective flows used for the validation of our reconstruction techniques. The units of the coordinates are in Mm.


Fig. 2

Depth sensitivity of the averaging kernels used in the construction of the seven synthetic tomographic maps.


The reconstruction was performed on a horizontal grid identical to that of the model (with a pixel size of 1.4 Mm) and on a vertical grid extending from a depth of 10 Mm to 400 km above the surface, with a vertical sampling of 200 km. For each grid point, we computed the estimate of the reconstructed flow vector, the averaging kernels for each flow component, and the estimate of the noise level for each flow component. Owing to the assumed translational invariance in the horizontal domain, the averaging kernels and noise-level estimates of a given flow component are identical for all horizontal points at the same depth.

Fig. 3

Examples of original averaging kernels (dashed) and averaging kernels after deconvolution at the same depths (solid). Only horizontally averaged kernels are plotted.


Fig. 4

Comparison of horizontal cuts through averaging kernels (along y = 0) at the target depth 1.9 Mm for the input averaging kernel (black) and the reconstructed one (red).


The new averaging kernels are, in general, narrower than the averaging kernels of the input set of tomographic maps (see Fig. 3), which is what we wanted. In Fig. 3, we only compare averaging kernels at corresponding target depths; we remind the reader that, due to the sampling of the vertical domain described above, there are in fact 53 averaging kernels in total, one for each grid point in the vertical domain. In our testing case, the overall shape of the new averaging kernels is not satisfactory compared to the original ones, as the averaging kernels of the reconstruction contain many sidelobes (including negative ones). It must be noted that in a realistic case the original averaging kernels also contain sidelobes (again including negative ones); from this point of view, it is therefore not possible to immediately say which is “worse”.

In Fig. 3, we plotted examples of horizontally integrated kernels for simplicity. The horizontally integrated kernels are independent of the noise levels because, with the Wiener-like filtering, the k = 0 component is always reconstructed. In the horizontal domain, the shape of the kernels is influenced by the level of random noise. In the ideal case (no noise present), the deconvolution proceeds approximately down to the pixel size (see Fig. 4); the kernels are close to a numerical Dirac δ function. This behaviour is exactly the same at all depths, as the input averaging kernels all had the same full width at half maximum of 5 Mm. With an increasing level of noise, the kernels become wider and approach the shape of the input kernels for large noise levels; then, in effect, little or no deconvolution of the flow in the horizontal domain takes place. We stress that our main goal was to reconstruct a reasonable estimate of the flow in the vertical domain, which the input set of tomographic maps covers only sparsely.

The total integrals of the new averaging kernels are then used to properly scale the magnitude of the flows at each target depth. This re-normalisation is necessary because Wiener-like filtering suppresses some of the Fourier components and therefore naturally introduces a scaling of resulting flow magnitudes. Consequently, we normalise new averaging kernels so that their spatial integral equals unity.
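This renormalisation step can be sketched as follows (a minimal illustration with a hypothetical helper; dividing both the kernel and the corresponding flow map by the kernel's total integral restores the flow magnitude):

```python
import numpy as np

def renormalise(kernel, flow_map, hx, hz):
    """Rescale a reconstructed averaging kernel to unit spatial
    integral and apply the inverse factor to the corresponding flow
    map, compensating for the amplitude damping introduced by the
    Wiener-like suppression of Fourier components."""
    integral = kernel.sum() * hx**2 * hz
    return kernel / integral, flow_map / integral
```

A kernel with integral I smooths a constant flow v0 into I·v0, so dividing the map by I recovers the correct magnitude.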

The reconstructed flow compared to the model is shown in Figs. 5–8. One can see that the method works extremely well in cases where no noise is present. The correlation coefficient between Figs. 5a and b is 0.8–0.9 for all components. At first glance, almost all details of the flow field seem to be correctly reconstructed (see also the first two rows of Fig. 6). The depth structure is also reconstructed very well, as can be seen in Fig. 8 (black and red lines). Small-scale oscillations of the flow are well beyond the resolution limit set by the averaging kernels, but the smoothed trend is captured relatively well. Interestingly, the two cuts (black and red lines of Fig. 8) match one another better in the depth range of 0–6 Mm than deeper down. At depths of 0–6 Mm, all the averaging kernels overlap.

We investigated this issue further and found that the best reconstruction is indeed achieved in the depth range that is “included” in several tomographic maps. This gives a recipe for selecting the target functions for SOLA inversions that may potentially lead to a reasonable estimate of the regularly gridded velocity field by our method: the target functions should be relatively broad and should overlap. Also, a larger set of different target functions provides a more robust estimate of the true velocity field.

Our reconstruction method also returns reasonable results when random noise is present in the input data. The fact that the Wiener-like approach outperforms the ad-hoc regularisation is clear from Fig. 7, where we compare example results of reconstruction methods differing in their noise treatment. Obviously, without taking care of the noise, the results are completely wrong: the reconstructed flow magnitude oscillates from pixel to pixel and the noise is boosted by almost twelve orders of magnitude. Adding an ad-hoc smoothing term improves the performance; however, very large values of the smoothing parameter are needed, and even then the results are not satisfactory: they either still contain a large fraction of the propagated random noise, or are very smooth with most of the details of the true velocity field lost. The Wiener-like approach provides an optimal solution to the problem; therefore, we use this method in the following and study its properties.

The Wiener-like approach as implemented by us allows a reasonable reconstruction of the flows in cases where the signal-to-noise ratio (S/N) is larger than 20, and a positive correlation is found even with a S/N of 10 (meaning that the inverted flow maps are polluted with 10% random noise; see Table 1). These S/N values may be understood by considering the model case of an inversion for supergranular flows. The typical magnitude of the horizontal flow components is approximately 300 m s⁻¹, and reasonable inversions have a random-noise level of 25 m s⁻¹; in such a case, the S/N equals 12. A S/N of 100 would approximately represent flow maps obtained after averaging over 100 individual supergranules.

Table 1

Correlation coefficient ρ between the input synthetic velocity field and the reconstructed flow components vx, vy and vz, with the random noise treated using the Wiener-like method, for various noise levels.

The level of random noise is estimated at each depth following the procedure described in Sect. 3.1. As an example, in Fig. 9 we plot the estimated RMS of the random noise in the reconstructed horizontal flow together with the RMS of the horizontal flow at the same depths, for the case where the S/N is 100. At all depths, the S/N of the reconstructed flow is above unity, and at both ends of the domain it approaches unity. While at the bottom boundary this is because the reconstruction is dominated by noise, at the top boundary the increase of both the noise RMS and the flow amplitude is an artefact of the normalisation by the new averaging kernel. As seen in Fig. 2, at the top of the domain there is little power in any of the original averaging kernels; hence the new averaging kernels at these depths have an arbitrary shape with little localisation. As a consequence, the reconstruction of the flow at these depths is physically meaningless.

It should be noted that the reconstruction behaves reasonably in cases of a large signal-to-noise ratio. Interestingly, this requirement conforms well with the conclusions described at the beginning of this section, where we showed that rather broad averaging kernels lead to a better reconstruction of the regularly gridded estimate of the flow. In SOLA, the localisation in the Sun (essentially the width of the averaging kernel) and the level of random noise are balanced by a trade-off parameter in the inversion cost function. Thus, the choice of a broad target function automatically leads to a lower level of random noise and hence a larger S/N for a given tomographic map. The selection of a set of overlapping target functions therefore prepares a set of tomographic maps suitable for the reconstruction of the regularly gridded estimate of the true velocity field.

Fig. 5

Effect of the random noise on the reconstruction of the regularly gridded velocity field: the case of the near-surface flows. Displayed are results with varying noise levels: a) input data; b) no noise. The other maps were reconstructed with Wiener-like filtering to deal with the random-noise boosting, with varying S/N levels: c) S/N 1000; d) S/N 100; e) S/N 20; and f) S/N 10. In each cut, the components parallel to the cut are indicated by arrows, while the perpendicular component is indicated by the colour. The colour scale of all panels is identical. The units of the coordinates are Mm. The maps of all three components at the depth of 1 Mm are displayed in Fig. 6.


Fig. 6

Maps of all components of the flow at the depth of 1 Mm for various levels of the random noise. Maps of the x component, the y component and the vertical z component of the velocity are found in the left-hand, middle and right-hand columns, respectively. The first row contains the input velocity field, followed by reconstructions with an increasing level of random noise from top to bottom. Panels with a given S/N value were reconstructed using the Wiener-like approach. The correlation coefficient over the whole cube between the input velocity field and the reconstruction is given in the titles of the panels. Colour bars are in units of km s⁻¹, and the field of view shown covers 38 Mm × 38 Mm in the horizontal directions.


Fig. 7

Testing the approaches to deal with random noise. An input datacube with a S/N of 100 (i.e. a 1% random-noise contribution; first row) was reconstructed with the basic method ignoring the noise (second row), with ad-hoc regularisation for a set of values of the regularisation parameter (third to fifth rows), and with the Wiener-like filtering approach (last row). The correlation coefficient over the whole cube between the input velocity field and the reconstruction is given in the titles of the panels. Colour bars are in units of km s⁻¹ (violet indicates fast variations from pixel to pixel), and the field of view shown covers 38 Mm × 38 Mm in the horizontal directions.


Fig. 8

Cut through the horizontal flow component at one point for the input velocity field (black) and reconstructions with varying noise levels (coloured lines). The shaded regions indicate the respective error bars. Note that the black and red lines do not have error bars.


Even the Wiener-like solution is not ideal. The prior knowledge (or an estimate) of the spatial spectra of the velocities and of the random noise poses a real challenge. They may be estimated reasonably well at the surface, for example in the way outlined in this paper; however, theory predicts that the spatial spectrum of the convective flows changes with depth. Such variability is very difficult to incorporate into the reconstruction.

In the synthetic test described above, we used model averaging kernels (Fig. 2) that have no cross-talk component, whereas the matrix A (Eq. (10)) allows us to use the information even from the cross-talk (off-diagonal) terms. The presence of cross-talk is an issue for inversions for weak quantities, such as the vertical component of the flow. In order to estimate the robustness of our method in the presence of cross-talk, we performed additional tests. First, the averaging kernels were computed in the same way as before, and we additionally added cross-talk components having the same shape as the components in the direction of the inversion. Such a situation describes the case when the cross-talk is highly correlated with the desired signal and its contribution is twice as large as the contribution of the flow component we invert for. Despite the large cross-talk, the reconstructed flow resembles the model with a correlation coefficient of ~0.4 for the horizontal components and ~0.2 for the vertical component for the case of S/N = 0.05. Second, we kept only the cross-talk components and set the averaging kernels in the direction of the inversion to zero. The flow was again recovered (with a correlation coefficient of ~0.5 with the model) for both the vertical and horizontal components.

Additional tests showed that the correlation coefficient between the recovered flow and the model is lower when the cross-talk components of the averaging kernels are similar in shape and comparable in peak magnitude to the averaging kernel in the direction of the inversion. A significant difference in peak magnitudes, or an opposite sign, actually helps our method to improve the trustworthiness of the resulting reconstruction. The performed tests point out the difficulties caused by the presence of cross-talk; however, they also show, even in these poorly representative examples, that our reconstruction method is still able to recover the true structure of the flow reasonably well.

5. Summary

We derived a method for the deconvolution of a set of tomographic maps coming from local helioseismology, which allows for the construction of a regularly gridded estimate of the 3D vector velocity field. The method was successfully tested using synthetic data. We showed that it provides reasonable estimates of the true velocity field with less smoothing than the original set of tomographic maps. To deal with the random noise, we introduced an approach similar to the Wiener filtering known from image processing.

The method can naturally be applied to real data coming from SOLA inversions, which will be the topic of a future study. It can be applied to any set of tomographic maps with a reasonable signal-to-noise ratio (larger than 50). The best usability can be expected in studies such as the investigation of the flow structure of solar supergranulation or similar features of convective flows.

Fig. 9

Level of the random noise in the reconstructed flow (blue) and the RMS of the reconstructed flow (red). Both curves are plotted for the horizontal component of the velocity and for the case with a S/N of 100. Obviously, the error increases beyond reason at the ends of the domain.


Acknowledgments

This work benefited from the student grant awarded in 2014 to M.K. by the Faculty of Mathematics and Physics, Charles University in Prague, supervised by M.Š. M.Š. acknowledges the support of the Czech Science Foundation (grant 14-04338S) and of the institute research project RVO:67985815 of the Astronomical Institute of the Czech Academy of Sciences. The help of Dominik Jančík and Sven Ubik with the visualisation of the flows is gratefully acknowledged.

References

  1. Birch, A. C., & Gizon, L. 2007, Astron. Nachr., 328, 228
  2. Burston, R., Gizon, L., & Birch, A. C. 2015, Space Sci. Rev., 196, 201
  3. Couvidat, S., Birch, A. C., & Kosovichev, A. G. 2006, ApJ, 640, 516
  4. DeGrave, K., & Jackiewicz, J. 2015, Sol. Phys., 290, 1547
  5. Hardie, R. C., Barnard, K. J., & Armstrong, E. E. 1997, Trans. Img. Proc., 6, 1621
  6. Kosovichev, A. G. 1996, ApJ, 461, L55
  7. Pijpers, F. P., & Thompson, M. J. 1992, A&A, 262, L33
  8. Švanda, M., Gizon, L., Hanasoge, S. M., & Ustyugov, S. D. 2011, A&A, 530, A148
  9. Ustyugov, S. D. 2008, in Subsurface and Atmospheric Influences on Solar Activity, eds. R. Howe, R. W. Komm, K. S. Balasubramaniam, & G. J. D. Petrie, ASP Conf. Ser., 383, 43
  10. Švanda, M., Roudier, T., Rieutord, M., Burston, R., & Gizon, L. 2013, ApJ, 771, 32

Appendix A: Derivation of Eq. (7)

In Sect. 2, we prescribed an ad hoc set of Eqs. (7) to be solved to obtain an estimate of the reconstructed 3D vector flow defined on a regular grid. In the following, we show that these equations may be properly derived by a minimisation procedure.

We define a misfit between the inverted flow map at the target depth zd and the modelled tomographic map as given by Eq. (A.1). For each target depth zd and position vector r there is a separate Eq. (A.1); however, the modelled velocity field v is common to all these equations. Note the inverse weighting by the RMS of the random noise already discussed in Sect. 2.

Equation (A.1) can be seen as a cost function, which we want to minimise with respect to the unknown velocity field v. Using standard methods of calculus, we first obtain Eq. (A.2), which holds for each zd and γ. The minimisation is performed with respect to the γ-component of velocity at the point (r1,z1). Since the equation equals zero at each of these points, we may sum over all of them in the computational domain and, for each γ, obtain Eq. (A.3). We note that the first term reiterates the normalisation condition on the averaging kernels, Eq. (A.4), where δ denotes the Kronecker delta. We solve (A.3) in Fourier space using the definition given by the pair of Eqs. (5) and (6) to obtain Eq. (A.5), which holds for each zd, each γ, and each r. In the following, we use the orthogonality of the functions exp[ik·r] and solve (A.5) in a k-by-k manner. After some algebra, we obtain Eq. (A.6), which is Eq. (7).
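The final k-by-k solve can be sketched as a small noise-weighted least-squares problem at each horizontal wavenumber. The sketch below uses our own array layout and names (maps stacked along the first axis, one averaging kernel per map and model depth); it illustrates the structure of the solution, not the exact closed form of Eq. (7):

```python
import numpy as np

def solve_per_wavenumber(v_inv_ft, kernels_ft, sigma):
    """Noise-weighted least-squares solve, one wavenumber at a time.

    v_inv_ft   : (n_maps, ny, nx) FFTs of the tomographic maps.
    kernels_ft : (n_maps, n_depths, ny, nx) FFTs of the averaging kernels.
    sigma      : (n_maps,) RMS of the random noise in each map.

    Returns the (n_depths, ny, nx) FFT of the reconstructed field.
    """
    n_maps, n_depths = kernels_ft.shape[:2]
    ny, nx = v_inv_ft.shape[1:]
    out = np.zeros((n_depths, ny, nx), dtype=complex)
    w = 1.0 / np.asarray(sigma, dtype=float)   # inverse-noise weights
    for iy in range(ny):
        for ix in range(nx):
            # small (n_maps x n_depths) system at this wavenumber
            A = kernels_ft[:, :, iy, ix] * w[:, None]
            b = v_inv_ft[:, iy, ix] * w
            out[:, iy, ix] = np.linalg.lstsq(A, b, rcond=None)[0]
    return out
```

In practice the double loop would be vectorised, but the per-wavenumber structure mirrors the derivation above: thanks to the orthogonality of exp[ik·r], the large spatial problem decouples into many small ones.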

Appendix B: Estimates of spatial power spectra

In SOLA, the cost function (Eq. (12) in Švanda et al. 2011) allows us to balance the localisation (the quality of the averaging kernel) against the random-noise level in the resulting flow map. Symbolically, it is written as Eq. (B.1), where the misfit term evaluates the localisation of the averaging kernel and μ is a freely chosen trade-off parameter. For details, we refer the reader to Pijpers & Thompson (1992). The minimisation of this cost function gives, at the end of the whole procedure, a flow map at a given depth together with an averaging kernel and an estimate of the random-noise level. By choosing the value of the trade-off parameter μ, one can easily influence the random-noise level in these maps.
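The effect of μ can be illustrated with a toy regularised least-squares problem. This is our own construction mimicking the structure of (B.1) (weights w, an averaging kernel K @ w fitted to a localised target, and ||w||**2 standing in for the noise term); it is not the actual SOLA cost function of Švanda et al. (2011):

```python
import numpy as np

def sola_weights(K, target, mu):
    """Minimise ||K @ w - target||**2 + mu * ||w||**2 over the weights w."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + mu * np.eye(n), K.T @ target)

rng = np.random.default_rng(3)
K = rng.standard_normal((40, 20))   # toy sensitivities of 20 measurements
target = np.zeros(40)
target[10] = 1.0                    # desired localised averaging kernel
```

As μ grows, the noise proxy ||w|| shrinks monotonically while the localisation misfit ||K @ w - target|| grows, which is exactly the trade-off exploited below to produce signal-dominated and noise-dominated maps.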

Fig. B.1

Estimates of the power spectra of the signal (blue) and the noise (red) obtained from the real data as a function of angular degree l. The power spectrum of the signal (flows) was normalised so that the RMS of the flow is 100 m s-1; the power spectrum of the random noise was normalised to an RMS of 10 m s-1. Such a situation therefore represents S/N = 10.

From the resulting flow map, one can compute the spatial power spectrum P(k) using standard methods. Let us consider two extreme cases. First, let us choose a large μ, so that the random-noise term multiplied by μ dominates the right-hand side of the inversion cost function (B.1). This choice causes the random-noise term to be preferentially minimised, and the resulting flow maps contain mostly the signal. The resulting spatial power spectrum P(k) then approximates the power spectrum of the flows; let us call it S(k). The drawback of this choice is usually a worse localisation in the Sun (worse-quality averaging kernels with sidelobes).

As the second extreme, let us select a small μ. Then the misfit term in the cost function (B.1) is preferred and the resulting flow map contains a large fraction of noise. It may even be noise dominated, meaning that the level of random noise is larger, by many orders of magnitude, than the expected magnitude of the real flows. The power spectrum P(k) then approximates the power spectrum of the noise, which we call N(k). The localisation in the Sun is usually good in this case.

The estimates of the power spectra of the flows S and the noise N would not be robust enough if the inversion for only a single flow map were considered. Moreover, in the case of a single map, the difference between the averaging kernels for the two kinds of inversions could be large, which would cast doubt on the validity of our approach to obtaining the power-spectra estimates. To improve the robustness, one needs to average both power spectra over a larger set of flow maps. We used 1100 individual flow maps obtained with both signal- and noise-dominated inversions to estimate the power spectra S and N. Averaging over a large sample of flow maps also allows us to tune the needed values of μ so that the difference (evaluated by eye) between the resulting averaging kernels of the noise- and signal-dominated inversions is not too large.

We believe that the methodology described here allows us to derive robust estimates of the spatial power spectra of the flows S and the noise N. Both power spectra can then easily be scaled so that their RMS values attain any desired level, simply by invoking Parseval's theorem and the linearity between the RMS of a random sample and its scaling.
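The Parseval-based rescaling amounts to one scalar factor. A minimal sketch, assuming the unnormalised numpy FFT convention (the function name is ours):

```python
import numpy as np

def scale_power_to_rms(power, rms_target, npix):
    """Rescale a spatial power spectrum P(k) = |F(k)|**2 so that the
    corresponding real-space map has the desired RMS.

    With the unnormalised FFT, Parseval's theorem gives
    rms**2 = P.sum() / npix**2, where npix is the total pixel count;
    scaling the map by a factor c scales the power by c**2.
    """
    rms_now = np.sqrt(power.sum()) / npix
    return power * (rms_target / rms_now) ** 2
```

This is the linearity invoked above: matching a desired RMS never changes the shape of S(k) or N(k), only their overall normalisation.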

Examples can be seen in Fig. B.1 for a S/N of 10. In such a situation, the noise obviously dominates the signal at large angular degrees; the situation is reversed at the supergranular scales of l ~ 120 and then again in the low-l part of the spectrum.

All Tables

Table 1

Correlation coefficient ρ between the reconstructed flow components vx, vy, and vz (with the random noise treated using the Wiener-like method) and the input synthetic velocity field, for various noise levels.

All Figures

Fig. 1

Simulated convective flows used for the validation of our reconstruction techniques. The coordinates are given in Mm.

Fig. 2

Depth sensitivity of the averaging kernels used in the construction of the seven synthetic tomographic maps.

Fig. 3

Examples of the original averaging kernels (dashed) and the averaging kernels after the deconvolution at the same depths (solid). Only horizontally averaged kernels are plotted.

Fig. 4

Comparison of horizontal cuts through the averaging kernels (along y = 0) at the target depth of 1.9 Mm for the input averaging kernel (black) and the reconstructed one (red).

Fig. 5

Effect of the random noise on the reconstruction of the regularly gridded velocity field: the case of the near-surface flows. Displayed are results with a varying noise level: a) input data; b) no noise. The other maps were reconstructed with the Wiener-like filtering to suppress the boosting of random noise, for varying S/N levels: c) S/N 1000; d) S/N 100; e) S/N 20; and f) S/N 10. In each cut, the parallel components are indicated by arrows, while the perpendicular component is indicated by the colour. The colour scale of all panels is identical. The coordinates are given in Mm. The maps of all three components at the depth of 1 Mm are displayed in Fig. 6.

Fig. 6

Maps of all components of the flow at the depth of 1 Mm for various levels of the random noise. Maps of the x component, the y component, and the vertical z component of velocity are found in the left-hand, middle, and right-hand columns, respectively. The first row contains the input velocity field, followed by reconstructions with an increasing level of random noise from top to bottom. Panels with a given S/N value were reconstructed using the Wiener-like approach. The correlation coefficient over the whole cube between the input velocity field and the reconstruction is given in the panel titles. The colour bars are in units of km s-1 and the field of view shown here covers 38 Mm × 38 Mm in the horizontal directions.

Fig. 7

Testing the approaches to deal with random noise. An input datacube with a S/N of 100 (i.e. a 1% random-noise contribution; first row) was reconstructed with the basic method ignoring the noise (second row), with the ad hoc regularisation for a set of values of the regularisation parameter (third to fifth rows), and with the Wiener-like filtering approach (last row). The correlation coefficient over the whole cube between the input velocity field and the reconstruction is given in the panel titles. The colour bars are in units of km s-1 (violet indicates fast variations from pixel to pixel) and the field of view shown here covers 38 Mm × 38 Mm in the horizontal directions.

Fig. 8

Cut through the horizontal flow component at one point for the input velocity field (black) and the reconstructions with varying noise levels (coloured lines). The shaded regions indicate the respective error bars. Note that the black and red lines do not have errors.

