A&A
Volume 621, January 2019
Article Number A73
Number of pages 17
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201834041
Published online: 11 January 2019

© ESO 2019

Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Wavelets on the sphere (Starck et al. 2015) are now standard tools in astronomy and have been widely used for purposes such as Fermi Large Area Telescope data analysis (Schmitt et al. 2010; McDermott et al. 2016), the recovery of the cosmic microwave background (CMB) intensity and polarized CMB maps (Bobin et al. 2015, 2016), string detection (McEwen et al. 2017), point source removal in CMB data (Sureau et al. 2014), the detection of CMB anomalies (Naidoo et al. 2017; Rassat et al. 2014), or stellar turbulent convection studies (Bessolaz & Brun 2011). While wavelets are well suited for representing isotropic components in an image, they are far from optimal for analyzing anisotropic features such as filamentary structures. This motivated the construction of so-called multiscale geometric decompositions such as ridgelets, curvelets (Candès & Donoho 2004; Starck et al. 2003), bandelets (Le Pennec & Mallat 2005), or shearlets (Labate et al. 2005b). Extensions to the sphere of ridgelets and curvelets were already presented in Starck et al. (2006), Chan et al. (2017) and McEwen (2015), and also for spherical vector field data sets in Starck et al. (2009) and Leistedt et al. (2017).

For a given data set, we therefore have the choice between many fixed representation spaces (such as pixel domain, harmonics, wavelets, ridgelets, curvelets), which are also called dictionaries. A dictionary is a set of functions, named atoms, and the data can be represented as a linear combination of these atoms. The dictionary can be seen as a kind of prior (Beckouche et al. 2013), and the best dictionary is the one leading to the most compact representation, in which most of the information is contained in a few coefficients. For the previously mentioned fixed dictionaries, there exist fast operators for decomposing the data into the dictionary, and fast operators for reconstructing the image from its coefficients in the dictionary (Starck et al. 2015).
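As a toy illustration of these notions (the cosine atoms below are an arbitrary choice for the sketch, not one of the dictionaries discussed here):

```python
import numpy as np

# Toy dictionary whose atoms are sampled cosines (columns of D); a signal
# built from 2 atoms has a 2-sparse code, i.e., a very compact representation.
n, n_atoms = 64, 16
t = np.arange(n)
D = np.stack([np.cos(2 * np.pi * k * t / n) for k in range(1, n_atoms + 1)], axis=1)
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

code = np.zeros(n_atoms)
code[[2, 7]] = [3.0, -1.5]              # only two nonzero coefficients
x = D @ code                            # data = linear combination of atoms

# These particular atoms are mutually orthogonal, so the analysis
# coefficients recover the sparse code exactly.
recovered = D.T @ x
```

For fixed dictionaries such as wavelets, the products by D and D.T above are replaced by fast transform operators.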

In some cases, it is not clear which dictionary is the best, or even if the existing dictionaries are good enough for a given scientific application. Therefore, new strategies were devised in the Euclidean setting to construct adaptive representations. Among them, sparse dictionary learning (DL) techniques (Engan et al. 1999; Aharon et al. 2006) have been proposed to design a dictionary directly from the data, in such a way that the data can be sparsely represented in that dictionary. DL has been used in astronomy for image denoising (Beckouche et al. 2013), stellar spectral classification (Díaz-Hernández et al. 2014), and morphological galaxy classification (Díaz-Hernández et al. 2016).

An alternative approach for adaptively choosing a dictionary is to start with a large parametrized family of dictionaries, and then to choose the parameter(s), either based on simulations or directly from the data. An example of such a parametrized family of dictionaries is the family of α-shearlets (Labate et al. 2005; Grohs et al. 2016; Voigtlaender & Pein 2017).

In this paper, we propose to extend to the sphere both adaptive representation methods, DL and α-shearlets, and we compare the performance of the two approaches. More precisely, we are concerned with adaptive sparsifying representation systems for data defined on the sphere. In Sect. 2, we present our approach for performing DL on the sphere, while Sect. 3 is devoted to our extension of the α-shearlet transform to data defined on the sphere. We present the scenarios for our comparison of the two approaches in Sect. 4; the results of this comparison are presented in Sect. 5. Finally, we conclude the paper in Sect. 6. The necessary background related to α-shearlets in the Euclidean setting is covered in Appendix A.

2. Dictionary learning on the sphere

Dictionary learning techniques were proposed in the early 2000s (Olshausen & Field 1996; Engan et al. 1999; Aharon et al. 2006) to build adapted linear representations that yield sparse decompositions of the signals of interest. Contrary to fixed dictionaries, in dictionary learning the atoms are estimated from the data (or a proxy, such as simulations or exemplars of the data), and can therefore model more complex geometrical content, which could ultimately result in sparser (and typically redundant) representations. DL techniques have proved their efficiency in many inverse problems in restoration, classification, and texture modeling (see, e.g., Elad & Aharon 2006; Mairal et al. 2008a, 2009; Peyré 2009; Zhang & Li 2010) with improved performance compared to fixed representations (see Beckouche et al. 2013 for denoising astrophysical data). A wide variety of dictionary learning techniques have been proposed to process multivariate data (Mairal et al. 2008a,b); to construct multiscale (Mairal et al. 2008b), translation-invariant (Jost et al. 2006; Aharon & Elad 2008), or hierarchical representations (Jenatton et al. 2011); to estimate coupled dictionaries (Rubinstein & Elad 2014); or to build analysis priors (Rubinstein et al. 2013). Also, online algorithms for dictionary learning have been considered (Mairal et al. 2010).

While fixed structured representations typically have fast direct and inverse transforms, dictionary learning techniques become computationally intractable even for signals of moderate size. Based on the observation that natural images exhibit nonlocal self-similarities, this computational problem is typically overcome by performing dictionary learning on patches extracted from the images that one wants to model. In this section we focus on this patch-based dictionary learning approach, and extend it for signals living on the sphere.

2.1. Sparse representation with patch-based dictionary learning

Given an n × n image with N = n² pixels, represented as a vector X ∈ ℝN, we consider square overlapping patches xij in ℝQ, with Q = q², where q is typically small; in the present work we always have q ≤ 12. Formally,

xij = Rij X, (1)

where the matrix Rij ∈ ℝQ × N extracts a patch with its upper left corner at position (i, j).
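In the Euclidean setting, the action of all the operators Rij can be sketched with numpy's sliding windows (illustrative only; the sizes and the random image are arbitrary):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Extract all overlapping q x q patches from an n x n image and flatten
# each one to a vector in R^Q, Q = q^2 (the action of every R_ij at once).
n, q = 32, 8
X = np.random.default_rng(0).normal(size=(n, n))

patches = sliding_window_view(X, (q, q))    # shape (n-q+1, n-q+1, q, q)
patches = patches.reshape(-1, q * q)        # one flattened patch per row

# R_ij applied to the vectorized image is just the slice X[i:i+q, j:j+q],
# found at row i*(n-q+1)+j of the patch matrix.
```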

From a training set 𝒯 of such patches {xij}(i, j)∈𝒯, a dictionary D ∈ ℝQ × M with M atoms is then learned such that the codes Λ = {λij}(i, j)∈𝒯 with xij ≈ Dλij are sparse. To perform the training, one typically considers the following inverse problem, or one of its variants:

min{D ∈ 𝒟, Λ ∈ 𝒞} Σ(i, j)∈𝒯 ½ ∥xij − Dλij∥₂² + μ ∥λij∥₀, (2)

where 𝒟 (respectively 𝒞) is a non-empty convex set enforcing constraints on the dictionary D (respectively the codes Λ), and μ ∥λij∥₀ is the weighted ℓ0 pseudo-norm, which enforces sparsity of the codes. To remove the scale indeterminacy in such a minimization problem – that is, if (D, Λ) is a solution, then so is (αD, α⁻¹Λ), at least if αD ∈ 𝒟 and α⁻¹Λ ∈ 𝒞 – the set 𝒟 typically constrains each atom (column) of the dictionary to lie in the unit ℓ2 ball, while 𝒞 can enforce constraints on the codes (e.g., non-negativity, as in non-negative matrix factorization). More details can be found in Starck et al. (2015).
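The sparse coding step of Eq. (2) with the ℓ0 term is typically solved greedily. A minimal sketch of Orthogonal Matching Pursuit (the coder used later in Sect. 2.3), demonstrated on an orthonormal toy dictionary where the recovery is exact:

```python
import numpy as np

def omp(D, x, k_max, tol=1e-10):
    """Sketch of Orthogonal Matching Pursuit: the sparse-coding half of
    Eq. (2), for one patch x and a fixed dictionary D with unit-norm atoms."""
    code = np.zeros(D.shape[1])
    support, residual = [], x.copy()
    for _ in range(k_max):
        if np.linalg.norm(residual) <= tol:
            break
        # select the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # jointly re-fit all selected coefficients (the "orthogonal" step)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        code[:] = 0.0
        code[support] = coeffs
        residual = x - D @ code
    return code

# With an orthonormal dictionary, OMP recovers an exactly sparse code.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))
true_code = np.zeros(16)
true_code[[2, 5, 11]] = [1.0, -2.0, 0.5]
x = Q @ true_code
code = omp(Q, x, k_max=3)
```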

2.2. Extension of patch-based dictionary learning to the sphere

To extend patch-based dictionary learning to data defined on the sphere, we first need to specify how to construct patches on the sphere. We do so by introducing local charts on the sphere. Specifically, in this work we propose to consider the HEALPix framework (Górski et al. 1999, 2005), widely used in astronomy, to construct these charts.

2.2.1. Defining patches on the sphere

HEALPix partitions the sphere into equal area pixels with curvilinear boundaries, defined hierarchically from a set of twelve base quadrilaterals (see Fig. 1). These twelve base elements (or faces) form an atlas of the sphere, and are further partitioned dyadically to obtain finer discretization levels. Consequently, each of the twelve faces is typically considered as a chart with HEALPix pixel positions mapped onto a square grid in [0, 1]×[0, 1].
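As a concrete illustration of this face-to-grid mapping, the NESTED pixel index within a face interleaves the bits of the in-face (x, y) coordinates; a pure-Python sketch follows (the exact x/y bit convention is an assumption here; healpy's own routines should be used in practice):

```python
import numpy as np

def face_grid_from_nested(nside):
    """Map NESTED pixel indices of one HEALPix face onto an nside x nside grid.
    In NESTED ordering, the index within a face interleaves the bits of the
    (x, y) coordinates inside that face (here x in the even bits, y in the
    odd bits -- a convention assumed for this sketch)."""
    def deinterleave(p):
        x = y = 0
        for b in range(p.bit_length()):
            if b % 2 == 0:
                x |= ((p >> b) & 1) << (b // 2)
            else:
                y |= ((p >> b) & 1) << (b // 2)
        return x, y

    grid = np.empty((nside, nside), dtype=int)
    for p in range(nside * nside):
        x, y = deinterleave(p)
        grid[y, x] = p          # pixel p of the face sits at grid position (x, y)
    return grid
```

The full NESTED index of a pixel is then face * nside² plus the in-face index.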

Fig. 1.

HEALPix grid (shown for Nside = 16) in orthographic projection on the left and Mollweide projection on the right. Faint lines indicate the circles of latitude. The right image also introduces the numbering of the faces used in the following illustrations.

Using these charts to perform usual Euclidean patch-based dictionary learning is straightforward, and has the main advantage of applying dictionary learning directly to the pixel values, without requiring any interpolation. This comes, however, with two drawbacks: first, this approach introduces boundary issues even when using overlapping patches on each face; second, sampling on the sphere leads to patches with local characteristics (e.g., the pixel shape varies with latitude in HEALPix). The first problem can be overcome by creating the patches based on local neighbors, as defined by HEALPix. Because of the regularity of the HEALPix sampling scheme, all pixels have eight neighbors, except for eight pixels on the sphere, located at the vertices between equatorial and polar faces, which only have seven neighbors. The second problem implies that the same signal defined continuously on the sphere, but centered at different patch centers, will likely lead to different extracted patches (e.g., for a patch in the equatorial region versus the polar caps). We do not correct for this effect, so such patches may have different sparse decompositions or different approximation errors. HEALPix is also not suited to efficiently represent band-limited signals, since only approximate quadrature rules are available to compute spherical harmonic coefficients (Doroshkevich et al. 2005).

Provided some care is taken in defining the relative position of each neighbor to a central pixel across the sphere, overlapping patches can be created – even across the twelve HEALPix faces – without any interpolation, except for the patches crossing the specific points on the HEALPix grid that only have seven neighbors. Interpolation strategies to compensate for these “missing” neighbors could be envisioned, but in this work we choose not to interpolate, which implies that for a few pixels around these points we do not construct all overlapping patches. The final covering of the map is illustrated in Fig. 2, which also includes patches randomly selected on the sphere. Once these patches are extracted, classical dictionary learning techniques can be used to learn a sparse adapted representation.

Fig. 2.

Example of our covering of the sphere with overlapping patches based on HEALPix neighborhoods, for Nside = 128 and patch width q = 8 (in our numerical experiments, Nside = 2048 and the patch width is either q = 8 or q = 12). Several randomly selected patches on the sphere are also represented in color. The gray value indicates, for each pixel, the number of overlapping patches containing it. Because the patch width is small with respect to the number of pixels per face, the number of overlapping patches varies only in small regions around the pixels that have just seven neighbors.

2.2.2. Learning a multiscale representation on the sphere

Our proposed approach for dictionary learning on the sphere can be extended to capture multiscale information as proposed in Ophir et al. (2011), namely by learning a dictionary from patches extracted from a multiscale decomposition of the data. At coarser scales, capturing meaningful information would require increasing the patch size, which would quickly lead to an intractable computational burden. To capture this information without increasing the patch size, the decomposition is subsampled instead.

In this work, we use the Starlet decomposition for data on the sphere (Starck et al. 2006), with one dictionary learned per wavelet scale. Since all scales except the last one are band-limited, subsampling can be performed without losing information by adapting the Nside parameter to the maximal multipole at the scale considered (typically dyadically decreasing, as illustrated in Table 1).
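To illustrate the scale separation that the starlet provides, here is a 1-D Euclidean analogue of the isotropic “à trous” starlet transform (a sketch only; the paper uses the spherical starlet of Starck et al. 2006, and the per-scale subsampling is not shown):

```python
import numpy as np

def starlet_1d(signal, n_scales):
    """1-D analogue of the 'a trous' starlet transform: detail scales
    w_1..w_S plus a coarse array c_S, with signal = c_S + sum_j w_j exactly
    (the reconstruction is a simple sum, by the telescoping construction)."""
    h = np.array([1, 4, 6, 4, 1]) / 16.0     # B3-spline smoothing kernel
    c = signal.astype(float)
    scales = []
    for j in range(n_scales):
        step = 2 ** j
        # insert 2^j - 1 zeros between kernel taps (the "holes")
        h_j = np.zeros(4 * step + 1)
        h_j[::step] = h
        padded = np.pad(c, 2 * step, mode="wrap")    # periodic boundary
        c_next = np.convolve(padded, h_j, mode="same")[2 * step:-2 * step]
        scales.append(c - c_next)                    # detail at scale j
        c = c_next                                   # smoother approximation
    return scales, c
```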

Table 1.

Parameters used for learning the multiscale dictionary for thermal dust data.

The resulting minimization problem for the multiscale dictionary learning problem reads

min{D(s) ∈ 𝒟, Λ(s) ∈ 𝒞} Σs=1…S Σ(i, j)∈𝒯(s) ½ ∥Rij(s) 𝒲(s) X − D(s) λij(s)∥₂² + μ(s) ∥λij(s)∥₀, (3)

where X is the signal on the sphere, 𝒲(s) extracts scale s of the wavelet transform on the sphere according to the Nside chosen for that scale, Rij(s) extracts the patch indexed by (i, j) at scale s according to neighbors on the sphere, with (i, j) ranging over the training set 𝒯(s), and S is the total number of wavelet scales. For each scale s = 1, …, S, a dictionary D(s) is therefore learned, with coefficients collected in Λ(s); the hyperparameter μ(s) is also allowed to change with the scale. Because the cost function is separable across scales, the minimization problem in Eq. (3) is equivalent to solving S dictionary learning sub-problems, one per wavelet scale.

2.3. Our algorithm for patch-based dictionary learning on the sphere

In the training phase, the joint nonconvex problems described in Eqs. (2)–(3) are typically handled by alternating sparse coding steps and dictionary update steps. Here, a sparse coding step means that one minimizes Eq. (2) (resp. Eq. (3)) with respect to Λ (resp. Λ(s)), with a fixed previously estimated dictionary. Similarly, a dictionary update step means that one minimizes Eq. (2) (resp. Eq. (3)) with respect to D (resp. D(s)), with the fixed, previously estimated codes. Standard algorithms were proposed for both sub-problems. In this work, we will use the classical dictionary learning technique K-SVD (Aharon et al. 2006) with Orthogonal Matching Pursuit (OMP; Mallat & Zhang 1993; Pati & Krishnaprasad 1993) as a sparse coder. For denoising applications, the sparse coding step will encompass both a maximal sparsity level, and an approximation threshold based on the ℓ2 norm of the residual, similar to the approach in Elad & Aharon (2006). This approach resulted in adapted sparse representations, while not being sensitive to small fluctuations below the targeted level of approximation, and in practice led to faster algorithms. The resulting multiscale dictionary learning algorithm is described in Algorithm 1, from which its variant without the multiscale transform can be obtained for S = 1 and 𝒲(1) = Id.
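The alternating scheme can be sketched as follows in numpy (a toy re-implementation of K-SVD with an OMP coder, not the actual code used in the paper; the 1-sparse synthetic training data at the end are hypothetical):

```python
import numpy as np

def omp(D, x, k):
    # minimal OMP sparse coder (unit-norm atoms assumed)
    support, code = [], np.zeros(D.shape[1])
    r = x.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        code[:] = 0.0
        code[support] = coeffs
        r = x - D @ code
    return code

def ksvd(patches, n_atoms, k, n_iter, seed=0):
    """Sketch of K-SVD training: alternate OMP sparse coding with
    per-atom rank-1 SVD updates of the dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(patches.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # sparse coding step: code each patch with the current dictionary
        codes = np.stack([omp(D, x, k) for x in patches.T], axis=1)
        # dictionary update step: refresh each atom and its coefficients
        for a in range(n_atoms):
            used = np.nonzero(codes[a])[0]
            if used.size == 0:
                continue
            # residual of the patches using atom a, with atom a removed
            E = patches[:, used] - D @ codes[:, used] \
                + np.outer(D[:, a], codes[a, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, a] = U[:, 0]                 # best rank-1 atom for E
            codes[a, used] = s[0] * Vt[0]
    return D, codes

# Hypothetical toy data: patches drawn from a 1-sparse generative model.
rng = np.random.default_rng(2)
D_true = rng.normal(size=(8, 5))
D_true /= np.linalg.norm(D_true, axis=0)
idx = rng.integers(0, 5, size=40)
patches = D_true[:, idx] * rng.normal(size=40)
D, codes = ksvd(patches, n_atoms=5, k=1, n_iter=5)
```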

The first critical choice for this dictionary learning technique is to adapt the patch size q to capture information at the scale of the patch without impacting too much the computational burden of the algorithm (q is at most 12 in this work). The maximal sparsity degree K(s) and the number of atoms M(s) should be selected so that the dictionary leads to small approximation errors, while being able to capture the important features with only a few atoms, in particular for denoising applications. The parameter ϵ(s) is the noise level expected in the denoising application at the considered wavelet scale, and the number of iterations is in practice chosen to be sufficiently large so that the average approximation error does not change with iterations. Because this problem is non-convex, it is crucial to initialize the algorithm with a meaningful dictionary; in our case, the initial dictionary is chosen to be an overcomplete discrete cosine transform (DCT) dictionary as in Elad & Aharon (2006).
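The overcomplete DCT initialization can be sketched as follows (one common construction in the spirit of Elad & Aharon 2006; the exact convention may differ from the code used in the paper):

```python
import numpy as np

def overcomplete_dct_dictionary(q, m):
    """Overcomplete 2-D DCT dictionary with m*m atoms for q*q patches,
    a common K-SVD initialization (one of several possible conventions)."""
    i = np.arange(q)
    A = np.cos(np.outer(i, np.arange(m)) * np.pi / m)   # q x m 1-D cosine atoms
    A[:, 1:] -= A[:, 1:].mean(axis=0)                   # remove DC from non-constant atoms
    D = np.kron(A, A)                                   # separable 2-D atoms, (q*q) x (m*m)
    return D / np.linalg.norm(D, axis=0)                # unit-norm columns
```

For q = 8 and m = 11, this gives a 64 × 121 dictionary, roughly twice overcomplete.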

3. Extension of α-shearlets to the sphere

3.1. Euclidean α-shearlets

Adaptive dictionaries can also be derived from a parametrized family of representations such as the α-shearlets, which generalize wavelets and shearlets and are indexed by the anisotropy parameter α ∈ [0, 1]. To each parameter α corresponds a dictionary characterized by:

  • atoms with a “shape” governed by height ≈ width^α (see Fig. A.2);

  • a directional selectivity: on scale j, an α-shearlet system can distinguish about 2^((1 − α)j) different directions (see Fig. A.3);

  • a specific frequency support for the atoms (see Fig. A.3).

A key result (Voigtlaender & Pein 2017) is that α-shearlets are almost optimal for the approximation of so-called Cβ-cartoon-like functions, a model class for natural images. More precisely, the N-term α-shearlet approximation error (that is, the smallest approximation error that can be obtained using a linear combination of N α-shearlets) for a Cβ-cartoon-like function decays at (almost) the best rate that any dictionary can achieve for this class of functions. For this to hold, the anisotropy parameter α needs to be adapted to the regularity β, namely α = 1/β. For more details, we refer to Appendix A.
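The rate can be summarized as follows (a hedged statement: C and the exponent γ of the logarithmic factor are generic constants whose exact values depend on the precise formulation, hence "almost" optimal):

```latex
\left\| f - f_N \right\|_{L^2}^2 \;\le\; C \, N^{-\beta} \, (\log N)^{\gamma}
\quad \text{for } f \text{ a } C^\beta\text{-cartoon-like function},
\qquad \text{with } \alpha = 1/\beta .
```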

In general, given a certain data set or data model, an α-shearlet system for one value of α will be better adapted to the data than the systems for other values α′. Having such a versatile, parametrized family of representation systems is therefore valuable for adapting to the variety of signals one may wish to recover.

3.2. Defining α-shearlet transforms on the sphere

In order to define the α-shearlet transform on the sphere, similarly to what was discussed for the dictionary learning approach, we need to define the charts on which the Euclidean α-shearlet transform will be applied. HEALPix faces are again an obvious candidate since these base resolution pixels can be interpreted as squares composed of Nside by Nside equally spaced pixels, although their shape is contorted in different ways on the sphere (see Fig. 1).

We could map the sphere to these twelve square faces and then take the α-shearlet transform on each of them individually. However, as for dictionary learning, this approach to the processing of HEALPix data (e.g., for the task of denoising) is bound to introduce boundary artifacts along this partition of the sphere. An example of such artifacts can be seen in the upper-left part of Fig. 18 shown in Sect. 5. Besides, contrary to patch-based dictionary learning, where the patch size typically remains small compared to a face, the growing size of the α-shearlet atoms at coarser scales can introduce large border effects.

In the following two subsections, we discuss two approaches for handling this problem. Similarly to dictionary learning in Sect. 2.2.1, we do not take into account the variation of the pixel shapes along the sphere when extending α-shearlets to the sphere.

3.2.1. The rotation-based approach

The first strategy to alleviate the block artifacts was proposed for curvelets in Starck et al. (2006). It relies on overlapping charts, obtained by considering the HEALPix faces after resampling the sphere through a small number of rotations. More precisely, for a given Euclidean α-shearlet system, a HEALPix face f, and a rotation r, the redundant coefficients are obtained by

λα, r, f = 𝒮α Hf ℛr X, (4)

where ℛr computes the resampled map by a rotation r of the sphere, Hf is a matrix extracting the pixels that belong to the HEALPix face f, and 𝒮α computes the Euclidean α-shearlet transform on this face. In practice, a bilinear interpolation is performed by the HEALPix rotation routines that are used for the resampling.

The reconstruction is performed using a partition of unity on the sphere (see Fig. 3), which is obtained from weights that are smoothly decaying from 1 in a central region of the faces to 0 at their borders and therefore mitigating border effects. Formally, the reconstruction reads

X̃ = N⁻¹ Σr ℛr⁻¹ Σf Hf⊤ M 𝒯α λα, r, f, (5)

Fig. 3.

Partition of unity for the rotation-based reconstruction. The weights smoothly decaying toward the border are presented in the top left panel and are copied to each HEALPix face in the top right panel. In the bottom left panel, resampling was first performed using a rotation and bilinear interpolation, and the image shows the weights that would be applied in the original reference coordinates. The resulting covering of the sphere using five rotations is illustrated in the bottom right panel.

where ℛr⁻¹ resamples the sphere using the inverse rotation, 𝒯α computes the inverse α-shearlet transform, M applies the weights, and the normalization matrix N is simply a pointwise multiplication with weights chosen such that N⁻¹ Σr ℛr⁻¹ Σf Hf⊤ M 1 = 1, where 1 is a vector with all entries equal to 1. An example of the weights and normalization maps used to construct this partition of unity is illustrated in Fig. 3.

Since the rotations ℛr and ℛr⁻¹ are implemented using interpolation, the identity ℛr⁻¹ℛr X = X only holds approximately. Therefore, even if the coefficients λα, r, f are obtained through Eq. (4), the reconstruction X̃ in Eq. (5) will only satisfy X̃ ≈ X, not X̃ = X. However, the error introduced by the inexact inverse rotation is often negligible, at least for sufficiently smooth signals; Sect. 5.2.4 offers further comment on this.
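The weighting and normalization can be illustrated in a 1-D periodic toy setting (a sketch only; the strip length, chart offsets, and sine-squared taper are arbitrary choices, not the paper's actual weights):

```python
import numpy as np

def taper(n, margin):
    """1-D weight profile: smooth (sine-squared) ramp 0 -> 1 over `margin`
    samples, flat at 1 in the middle, ramp 1 -> 0 at the end. The 2-D face
    weights of Fig. 3 would be the outer product of two such profiles."""
    w = np.ones(n)
    ramp = np.sin(np.linspace(0.0, np.pi / 2, margin)) ** 2
    w[:margin], w[-margin:] = ramp, ramp[::-1]
    return w

# Two overlapping "charts" on a periodic strip (a 1-D stand-in for the
# rotated faces): weight each chart's copy, accumulate, and divide by the
# total weight at every sample (the diagonal normalization N), so the
# blended reconstruction reproduces the signal exactly.
rng = np.random.default_rng(0)
L, n, margin = 96, 64, 16
signal = rng.normal(size=L)
w = taper(n, margin)

blend, total_w = np.zeros(L), np.zeros(L)
for start in (0, 48):                      # chart offsets, wrapping around
    idx = (start + np.arange(n)) % L
    blend[idx] += w * signal[idx]
    total_w[idx] += w
reconstruction = blend / total_w
```

In the real spherical setting, the interpolation performed by the rotations makes this reconstruction approximate rather than exact.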

3.2.2. The “patchwork” approach

The “patchwork” approach is another strategy to eliminate artifacts that would arise if one naively used the disjoint HEALPix faces. Contrary to the rotation-based technique, where an interpolation is performed during the resampling, the patchwork approach is based on extending the HEALPix faces using parts of the surrounding faces so as to avoid interpolation. Similar to the rotation-based approach, the six resulting extended faces (see Fig. 4) form a redundant covering of the sphere, which is beneficial for avoiding boundary artifacts. Once these six extended faces are computed, the α-shearlet transform and all further processing are performed on these faces. Of course, for the reconstruction, the last step consists in combining the redundant faces to get back a proper HEALPix map.

Fig. 4.

Left panel: twelve squares corresponding to the faces of the HEALPix framework (see Fig. 1) arranged as a net in the plane. The areas that are covered by several of the extended faces – the transition zones – are displayed in gray. The areas where pixels are “missing” are displayed in red. Right panel: six extended faces produced by the patchwork procedure. The two polar faces form the top row, followed by the four equatorial faces below. The shaded area around the transition zone of each composite face indicates the margin, which is later discarded.

Formally, the decomposition can be described as

λα, f = 𝒮α 𝒫f X, (6)

where 𝒫f is now the operator that extracts the extended face f from the HEALPix map X. Similarly, the reconstruction reads

X̃ = ℳ(𝒯α λα, 1, …, 𝒯α λα, 6), (7)

where ℳ is the operator that reconstructs a HEALPix map from data on the six extended faces.

The rest of this section explains how precisely the extended faces are obtained from the original HEALPix faces, and conversely how a HEALPix map can be obtained from data on these six extended faces. For an accompanying visual explanation of the procedure, the reader should consult Figs. 1, 4, and 5.

Fig. 5.

Detailed view of two of the six extended faces. The dark outer boundary with width cm is the margin that is discarded after the reconstruction step, and the two dark squares in the corners of the equatorial face on the right are treated likewise. The remaining part of the extended faces has a gray outer boundary of width 2ct. In conjunction with the gray squares in the corners of the equatorial face, this boundary forms the transition zone that contains the values shared with the neighboring extended faces.

Each of the six extended faces consists of an inner square with HEALPix pixels that are unique to this extended face, and a border zone with HEALPix pixels that appear in several of the extended faces. The border itself is again subdivided into an outer margin that is disregarded after the reconstruction step so that the artifacts at the boundary are cut off (not mapped to the sphere), and an inner part that forms a transition zone, where the values of neighboring faces are blended together to prevent visible discontinuities between them.

Instead of extending all twelve original faces, we combine them into six bigger composite faces and extend those. This reduces the number of additional pixels that have to be processed (when using a border of the same size), at the cost of increased memory requirements. The first two composite faces cover the bulk of the north and south polar regions, and in particular the poles themselves. Since the four faces of each polar region meet at the pole, we can arrange them to form a square around it; it only remains to clip this area to the requested size. Although there is much freedom in setting the extent of the individual composite faces, we prefer all squares to be of equal size, so that they can be processed without distinction. The remaining four composite faces are obtained by expanding the equatorial faces. Expanding each equatorial face in every direction yields an area that contains a fourth of every surrounding polar face; removing those parts from the polar areas constructed earlier truncates the latter to the same width (see Fig. 5). We thus obtain six areas of equal size that cover the sphere. Chosen this way, there is still no overlap between the polar and equatorial composite faces; therefore, we extend each face further by half the requested width of the transition zone (ct in Fig. 5). Since each face enters its neighbors' territory by that amount, this results in a transition zone of width 2ct between adjacent faces. Additionally, each face is extended by a margin (cm in Fig. 5) to avoid border artifacts.

However, to extend the equatorial faces, we have to address the problem that there are eight vertices where two faces of a polar region meet a face of the equatorial region (located on the circles of latitude θ = cos⁻¹(±2/3), depicted in Fig. 1). By arranging the twelve faces as a net in the plane – as illustrated in Fig. 4 – it becomes clear that there are gaps between the polar faces where no values exist; these areas are marked in red in Fig. 4. We need to fill those gaps in order to obtain rectangular extended faces to which we can apply the α-shearlet transform. In the end, these parts will be cut away and disregarded like the outer margin of the extension, so the filled-in values will not actually be used for the reconstruction. Nevertheless, we need to choose them with care, since otherwise we might introduce additional artifacts like the ones at the boundary.

For the sake of simplicity, we describe the situation at the edge between faces 1 and 2 (see Figs. 1, 4, and 6), which is representative of all the gaps. From the perspective of face 2, the missing square is expected to contain a rotated copy of face 1, while conversely face 1 expects a rotated copy of face 2. To produce a weighted blending of these anticipated values, we divide the empty square, interpreted as [0, 1]², along the lines 2x = y, x = y, and x = 2y into quarters, as shown in Fig. 6. On each of the two outer quarters, full weight is assigned to the face that the adjoining face expects, while the two middle quarters provide a smooth transition. All weights are normalized such that every pixel is a convex combination of the pixels of the two faces; that is, the weights are non-negative and sum to one at each pixel.
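A minimal sketch of such blending weights (the linear ramp across the two middle quarters is an assumption of this sketch; the text only fixes the dividing lines and the convexity requirement):

```python
import numpy as np

def corner_blend_weights(x, y):
    """Convex blending weights on the 'missing' corner square [0,1]^2:
    full weight to one face's data on the outer quarter below x = 2y,
    full weight to the other above y = 2x, and a smooth transition across
    the two middle quarters (one simple choice of ramp)."""
    s = y / (x + y)          # takes values 1/3, 1/2, 2/3 on the dividing lines
    w_face1 = np.clip(3.0 * (s - 1.0 / 3.0), 0.0, 1.0)
    return w_face1, 1.0 - w_face1
```

By construction the two weights are non-negative and sum to one, so every filled pixel is a convex combination of the two rotated faces.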

Fig. 6.

The “missing” square between faces 1 and 2 is divided into four triangles of equal size, separated by the lines 2x = y, x = y, and x = 2y, as seen on the left. The two images in the middle reveal how the rotated faces 1 and 2 are separately weighted along those segments. The data of face 1 have full weight (black) on the outer triangle adjacent to face 2, and no weight (white) on the other outer triangle, while the data of face 2 are treated conversely. A smooth transition is provided by the weights on the triangles in between. The sum of the weighted faces is used to fill the gap, as demonstrated in the right-most illustration.

With this process, we fill the vertex regions with values. We do not actually need to fill the whole square, but only the corner needed for the expansion (the red part in Fig. 4). Having done this, we can piece the equatorial faces together from the various parts of the six surrounding faces and two filler squares. Figure 4 shows the resulting extended faces on the right.

We have now described the operators 𝒫f appearing in Eq. (6), which assign to a given HEALPix map X the six extended faces 𝒫1(X),…,𝒫6(X). On these rectangular faces, we can then apply the usual α-shearlet transform, and do any further processing that is desired (for instance, we can denoise the six extended faces by thresholding the α-shearlet coefficients).

After the processing is done on the six extended faces, the outer margin and filler values are disregarded and the remnant is separated along the boundaries of the original faces. From these pieces, the original faces are put back together. While doing so, all pixels that were part of a transition zone are weighted, similarly to above, as a convex combination of the pixels of the (up to four) involved extended faces.

Since we use only the values provided by the HEALPix grid, and instead of interpolating between pixels use convex combinations of pixel values in the transition zones, the patchwork procedure is invertible, with Eq. (7) describing a left inverse to the “patchwork α-shearlet coefficient operator” described in Eq. (6). Thus, the patchwork-based α-shearlets form a frame. We emphasize, however, that the reconstruction procedure described in Eq. (7) is not necessarily identical to the one induced by the canonical dual frame of the patchwork-based α-shearlet frame.

4. Experiments

To evaluate α-shearlets and dictionary learning, we have selected two different simulated data sets on the sphere:

  • Thermal dust map: a full sky thermal dust map from the Planck Sky Model (100 GHz map) (Planck Collaboration XII 2016), obtained through the Planck Legacy Archive1.

  • Horizon full sky maps: a series of full sky maps from the Horizon N-body simulations describing the dark matter halo distribution between redshift 0 and 1 (Teyssier et al. 2009)2.

While in the former scenario, the signal is smooth and expected to be best represented by multiscale transforms, in the latter the signal is more discontinuous and geometrically composed of filamentary structures joining clusters, with density changing with redshift. These two simulations are therefore illustrative of different scenarios where such adaptive transforms would be useful.

To evaluate the respective denoising performance of DL and α-shearlets, we added to the thermal dust map additive white Gaussian noise with standard deviation 45 μK, which corresponds to the expected CMB level at this frequency. The resulting map can be seen in Fig. 7.

Fig. 7.

Thermal dust simulation map (at 100 GHz) without (top panel) and with (bottom panel) the additive white Gaussian noise, for evaluation of the methods. The color scale has been stretched to illustrate the challenge of recovering structures at intermediate latitudes. Units are μK.

Figure 8 displays the galactic mask used in the quantitative comparisons to separate regions of high dust amplitude from regions of lower amplitude at higher galactic latitude, along with the location of a region close to the galactic plane where the differences between the methods can be better visualized.

Fig. 8.

Left panel: galactic mask used for thermal dust quantitative evaluation, covering 70% of the sky. Right panel: region close to galactic plane where methods are inspected.

For the dark matter halo distribution, we select the first slice of the data cube and adjust the white noise standard deviation to 5, so that the filamentary structures have an amplitude similar to the noise, as can be observed in Fig. 9. This noise level does not correspond to a realistic observation; our goal here is only to evaluate how the different adaptive representations behave when extracting features embedded in Gaussian noise.

Fig. 9.

Dark matter halo distribution for the first slice, without (top panel) and with the additive white Gaussian noise added (bottom panel), for evaluation of the methods. The colorscale has been stretched to visualize filamentary structures.

In the following two subsections, we outline the precise choice of the hyperparameters that we used for the α-shearlets and for the dictionary learning based denoising, respectively.

4.1. Parameters for α-shearlets

For the two α-shearlet approaches, we used 11 values of α, uniformly spaced between 0 and 1 in steps of 0.1. We used four scales of decomposition, with either the rotation-based approach (Eq. (4)) or the patchwork approach (Eq. (6)). For the actual denoising, we performed a hard thresholding of the α-shearlet coefficients, with different detection thresholds on different scales: a 4σ detection threshold on the finest scale, which has a lower signal-to-noise ratio, a 3σ threshold on the other scales, and no thresholding on the coarse scale. The reconstruction was then performed using either Eq. (5) or (7).
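The scale-dependent hard thresholding described above can be sketched as follows. This is a minimal illustration assuming the coefficients are available as a list of per-scale NumPy arrays with known per-scale noise levels; all names are ours, not those of the actual pipeline:

```python
import numpy as np

def hard_threshold(coeffs, sigma_scales, k_fine=4.0, k_other=3.0):
    """Hard-threshold multiscale coefficients scale by scale.

    coeffs       : list of arrays, coeffs[0] = coarse scale,
                   coeffs[-1] = finest scale
    sigma_scales : noise standard deviation per scale
    The coarse scale is left untouched; the finest scale uses a
    k_fine*sigma threshold, all intermediate scales k_other*sigma.
    """
    out = [coeffs[0].copy()]                      # no thresholding on the coarse scale
    for j, c in enumerate(coeffs[1:], start=1):
        k = k_fine if j == len(coeffs) - 1 else k_other
        thr = k * sigma_scales[j]
        out.append(np.where(np.abs(c) >= thr, c, 0.0))
    return out

# Two-scale toy example: the detail scale is thresholded at 4 * sigma = 4
coeffs = [np.ones((4, 4)), np.array([[2.0, 10.0], [-5.0, 0.5]])]
den = hard_threshold(coeffs, sigma_scales=[0.0, 1.0])
```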

For the rotation-based approach, five rotations were selected as a balance between having "more uniform" weights and the computational burden of this approach. The weight maps were built using a margin and a transition zone (a smooth trigonometric variation between 0 and 1) of size .

For the patchwork approach, we set the size of both the utilized extension and the margin to , which increases the number of pixels to process by about half (53.1%). A little less than half of the added pixels are used for the sake of redundancy, and the rest is disregarded.

4.2. Dictionary learning parameters

For the thermal dust data, where information is present at several scales, we chose the multiscale dictionary learning technique. Three wavelet scales of the Starlet transform on the sphere (Starck et al. 2006) were first computed from the input simulated dust map without noise. To avoid artifacts for a non-band-limited signal, the finest wavelet scale was not computed directly through its spherical harmonic decomposition. We followed Algorithm 1 for the learning procedure, with the parameters listed in Table 1. The patch size, the number of atoms, and the maximal sparsity were selected experimentally by choosing the values that led to the lowest average approximation error during the training phase.

An example of a dictionary learned for this adaptive multiscale representation of thermal dust is shown in Fig. 10. The dictionaries have captured at various scales both directional and more isotropic structures.

Fig. 10.

Atoms learned in the multiscale dictionary learning approach: on the left, scale 3, on the right, scale 2. The dictionaries have departed from the original redundant DCT dictionary and have learned specific features related to the scale. Due to the change of the Nside parameter with the scale, the actual distance between two adjacent pixels has increased, and the atoms for scale 2 are indeed smoother than those for scale 3.

In the second scenario, because the information is localized in space, the dictionary was learned directly on patches extracted from the first slice describing the dark matter halo distribution, using a training set of 200 000 patches of size 8 × 8. As in the previous experiment, a stopping criterion was set on the approximation error (which should be less than the targeted noise level), and a maximal sparsity of 7 was set for OMP. K-SVD was then run for 100 iterations. The learned dictionary is presented in Fig. 11. In this case the atoms essentially contain high-frequency information, in contrast to the dictionary previously learned on thermal dust.
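For reference, the sparse coding step with the two stopping criteria used here (an approximation error target and a maximal sparsity) can be sketched as a textbook orthogonal matching pursuit. This is an illustrative NumPy version, not our optimized C++ implementation:

```python
import numpy as np

def omp(D, y, max_sparsity=7, err_target=0.0):
    """Textbook orthogonal matching pursuit: greedily select atoms of the
    column-normalized dictionary D until the residual norm drops below
    err_target or max_sparsity atoms have been selected."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    x = np.zeros(D.shape[1])
    while len(support) < max_sparsity and np.linalg.norm(residual) > err_target:
        # atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:                      # no further progress possible
            break
        support.append(k)
        # least-squares refit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Tiny sanity check: y is exactly one atom of an orthonormal dictionary
D = np.eye(4)
y = np.array([0.0, 3.0, 0.0, 0.0])
x = omp(D, y)
```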

Fig. 11.

Atoms learned in the dictionary learning approach, applied to the dark matter halo distribution data. The dictionary elements are composed of point-like structures and edges.

Once these dictionaries are learned, the sparse decomposition step in this representation is used for denoising. The same parameters as above were used for the sparse coding, except for the targeted approximation error, which was set to a value that a patch of pure noise would not exceed with probability 0.9545.
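This error target follows from the noise statistics: for a patch of n pixels of pure Gaussian noise with standard deviation σ, the squared ℓ2 norm divided by σ² follows a χ² distribution with n degrees of freedom, so the 0.9545 quantile of that distribution yields the desired bound. A sketch (assuming SciPy; the function name is ours):

```python
import numpy as np
from scipy.stats import chi2

def error_target(sigma, patch_size, prob=0.9545):
    """l2 error target tau such that a patch of pure Gaussian noise
    (std sigma, patch_size x patch_size pixels) satisfies
    ||noise||_2 <= tau with probability `prob`:
    ||noise||^2 / sigma^2 ~ chi^2 with n = patch_size**2 dof."""
    n = patch_size ** 2
    return sigma * np.sqrt(chi2.ppf(prob, df=n))

# Dark matter scenario: sigma = 5, 8 x 8 patches
tau = error_target(sigma=5.0, patch_size=8)
```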

5. Results

5.1. Denoising experiments

We tested our adaptive approaches in the two denoising scenarios presented in the previous section, using the parameters described in Sects. 4.1 and 4.2. For the thermal dust simulation, the full sky denoised maps for the three approaches are displayed in Fig. 12, with a zoom on a region close to the galactic plane in Fig. 13 to visually inspect the differences between the methods. Residuals on the full sphere are shown in Fig. 14, and the performance of the different approaches is quantitatively evaluated in Table 2, both on the full sky and in the regions defined by the galactic mask.

Fig. 12.

Denoised thermal dust maps for all three approaches. Top and middle panels: α-shearlet denoising with rotation-based (top) or patchwork (middle) approach, both for α = 0.6. Bottom panel: representation learned with dictionary learning. Units are in μK.

Fig. 13.

Zoom on a region close to the galactic plane to visualize the respective denoising performance of the methods. From top to bottom panels: input map, noisy map (with own colorscale), rotation-based approach with α = 0.6, patchwork approach with α = 0.6, sparse representation learned from data. All units are in μK.

Fig. 14.

Residuals for the maps displayed in Fig. 12. Units are in μK.

Table 2.

Statistics on the recovery of spherical thermal dust maps with the proposed approaches.

Similarly, for the dark matter halo distribution, the full sky denoised maps are displayed in Fig. 15 and the residuals are presented in Fig. 16. To better inspect the recovery of the filamentary structures as well as the core regions, a zoom-in was also performed for this dataset; this is shown in Fig. 17. Finally, the results are quantitatively evaluated in Table 3.

Fig. 15.

Denoised dark matter maps for all three approaches. Top and middle panels: α-shearlet denoising with rotation-based (top) or patchwork (middle) approach, both with α = 1. Bottom panel: representation learned with dictionary learning.

Fig. 16.

Amplitude of the residuals for all three approaches, for the dark matter map scenario. Top and middle panels: α-shearlet denoising with rotation-based (top) or patchwork (middle) approach, both with α = 1. Bottom panel: representation learned with dictionary learning.

Fig. 17.

Dark matter map amplitudes for all three approaches in a zoomed region. From top to bottom and left to right panels: original map, noisy map, rotation-based approach with α = 1, patchwork approach with α = 0, patchwork approach with α = 1, representation learned from data.

Table 3.

Statistics on the recovery of dark matter halo distribution with the proposed approaches.

To inspect the impact of the anisotropy parameter on the recovery of geometrical structures in the different redshift slices, we also computed, for the patchwork approach, non-linear approximation curves that display the evolution of the RMSE as a function of the threshold. This gives a more comprehensive view of the best α for different density thresholds. These non-linear approximation curves are shown in linear and log scale in Figs. 19 and 20, respectively.
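Such a non-linear approximation curve is obtained by hard-thresholding the transform coefficients at each threshold, reconstructing, and recording the RMSE. Schematically (with a generic transform/inverse pair standing in for the actual patchwork α-shearlet operators):

```python
import numpy as np

def nla_curve(signal, transform, inverse, thresholds):
    """Non-linear approximation curve: for each threshold t, keep only
    coefficients with |c| >= t, reconstruct, and record the RMSE."""
    coeffs = transform(signal)
    errors = []
    for t in thresholds:
        approx = inverse(np.where(np.abs(coeffs) >= t, coeffs, 0.0))
        errors.append(np.sqrt(np.mean((signal - approx) ** 2)))
    return np.array(errors)

# Toy orthonormal "transform": the identity (coefficients = pixel values)
sig = np.array([0.1, 2.0, -3.0, 0.05])
rmse = nla_curve(sig, lambda x: x, lambda c: c, thresholds=[0.0, 1.0, 5.0])
```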

5.2. Discussion

In the following, we discuss several questions concerning the results; in particular, we analyze the relative performance of our different approaches to sparsifying representations on the sphere.

5.2.1. Block artifacts

The first challenge in extending the representation from the Euclidean framework to data defined on the sphere was to avoid the border effects due to considering disjoint charts processed independently. Figure 18 illustrates that all our proposed redundant representations, based on different overlapping charts, are free of these block artifacts when denoising the thermal dust map. A similar result is obtained for denoising the dark matter maps.

Fig. 18.

Cartesian projection of the denoised thermal dust maps centered at the intersection of four faces. From left to right and top to bottom panels: denoising each face independently using α-shearlets with α = 1, restoration via the rotation-based approach, patchwork approach with α = 1, dictionary learning with patch width of 12. The colorscale has been stretched to visualize the artifacts seen as a cross-shape discontinuity at the boundaries of the four HEALPix faces in the upper left panel. All of our proposed approaches are free from these artifacts. Units are in μK.

5.2.2. Visual inspection

Qualitatively, Figs. 13 and 17 illustrate the different shapes captured by α-shearlets and dictionary learning atoms. In particular, for the thermal dust maps, the noise appears as curvelet-like structures for the α-shearlet approaches, while for the dictionary learning approach, the noise appears both as isotropic and as directional structures.

For the first slice of the dark matter halo distribution simulations, the dictionary learning approach visually seems to best recover the structures in the data, in particular the filamentary structures and the compact cores.

5.2.3. Which approach is best?

This is confirmed quantitatively in Tables 2 and 3, where the dictionary learning approach outperforms both α-shearlet techniques in the denoising of the thermal dust (with a multiscale approach) and of the dark matter halo distribution. For thermal dust, however, when looking at specific regions (inside or outside the galactic mask), the rotation-based approach gives the lowest residuals in the galactic region, while the learned representation gives the best results outside this region. This can be explained by the wide diversity of amplitudes in the galactic plane, which is not captured in our training set of 200 000 patches for the first wavelet scale, corresponding to only 0.4% of the total number of patches over the full sky. Improving the performance of dictionary learning in the galactic region would require either training the dictionary on a larger training set that encompasses more patches from the galactic center, or sampling the galactic region more densely than higher galactic latitudes in this training set.

5.2.4. Is the rotation-based or the patchwork approach preferable?

The rotation-based approach outperforms the patchwork approach in the thermal dust denoising scenario, but, conversely, the patchwork approach outperforms the rotation-based technique in the dark matter halo distribution scenario. The latter result is due to the bilinear interpolation performed when resampling the sphere with rotations, which leads to severe approximation errors when the signal varies greatly at the scale of a few pixels.
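The attenuation caused by (bi)linear interpolation for signals varying at the pixel scale can already be seen in one dimension: resampling a single-pixel spike halfway between grid points halves its amplitude. An illustrative sketch, unrelated to the actual spherical resampling code:

```python
import numpy as np

def linear_resample(values, positions):
    """Linearly interpolate a regularly sampled signal at fractional positions."""
    return np.interp(positions, np.arange(len(values)), values)

spike = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # signal varying at the pixel scale
# resample on a grid shifted by half a pixel
shifted = linear_resample(spike, np.arange(5) - 0.5)
# the spike's peak amplitude drops from 1.0 to 0.5 after resampling
```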

5.2.5. What is the best α-value?

Tables 2 and 3 show that for α-shearlets in the denoising of thermal dust, α = 0.6 (a system close to curvelets) gives the best performance, while for the dark matter halo distribution scenario, α = 1.0 (a system close to wavelets) performs best.

However, the second scenario displays a diversity of structures, with both high-density cores and numerous less dense filaments, whose distribution changes across the data slices corresponding to different redshifts. It would therefore be reductive to set a single best α for one of these slices from a single noise level scenario.

We therefore computed, for the patchwork approach, the non-linear approximation curves for the different slices in redshift. These curves are shown in linear and log scale in Figs. 19 and 20, respectively. They illustrate that for large threshold values, which select the dense core regions, the α = 0.9 shearlet system is most suitable. For slices 600 and 605 (higher redshift), as the threshold decreases, the best α transitions from α = 0.9 to α = 0 (very elongated shearlets). This can be understood as more and more filamentary structures being included as the threshold decreases.

Fig. 19.

Normalized non-linear approximation curves for four different slices of the dark matter distribution. For each threshold, the α value corresponding to the lowest approximation error is displayed on the top.

Fig. 20.

Normalized non-linear log-approximation curves for four different slices of the dark matter distribution. For each threshold, the α value corresponding to the lowest approximation error is displayed on the bottom.

For the lower redshift slices, on the other hand, the best values are obtained more consistently across thresholds for α = 0.9 or α = 1, because more core structures and fewer filaments are visible in the data. Overall, this illustrates how adaptable the α-shearlets are to the diverse structures in the data. Furthermore, it shows that the anisotropy parameter α can be used to characterize the different types of structure present in the data.

5.3. Computing requirements

All codes were run on the same cluster so that we could assess the relative computing time requirements of the three approaches. For the rotation-based approach, with the current Python implementation using pyFFTW3 and a transform parallelized over six cores, denoising a Nside = 2048 map using five rotations and four scales of decomposition takes about 35 min for α = 1 and 1 h for α = 0 (the most redundant transform). The time needed for the rotation-based approach scales linearly with the number of rotations. In comparison, denoising a Nside = 2048 map with the patchwork approach, using four scales of decomposition (and the same parallelization of the transform), takes about 9 min for α = 1 and 20 min for α = 0.

For the multiscale dictionary learning algorithm, the computing time for the learning phase ranged from about 2.5 h for scale 3 to about 3.5 h for scale 1, when using our C++ code with four cores for the sparse coding. This increase is due to the low value of ϵ(1) and the large value of the maximal sparsity K(1), even though the training set is smaller than for scale 3. The dictionaries can be learned in parallel, which was done in practice. For the dark matter scenario, the learning took about 65 min. Once the dictionary was learned, sparse coding of all patches typically took from 15 min (scale 1) to about 22 min (scale 3) for the thermal dust map, and 9 min for the dark matter halo distribution, using 24 cores. Overall, the two α-shearlet approaches are therefore easier to set up, with fewer data-dependent parameters to optimize, and result in faster denoising than the dictionary learning based approach.

6. Conclusions

We have proposed two new types of adaptive representations on the sphere: a patch-based dictionary learning approach, and the selection of the best representation among a parametrized family, the α-shearlets. To extend these constructions from the Euclidean setting to data defined on the sphere, we proposed the use of overlapping charts based on the HEALPix framework. For the dictionary learning technique, a possible multiscale extension was presented, in which dictionaries are learned on each scale after a subsampled wavelet decomposition on the sphere. For the α-shearlets, we proposed two ways of constructing the charts: resampling the sphere according to various rotations, associated with a partition of unity insensitive to border effects, or constructing six overlapping charts based on composite extended HEALPix faces.

We evaluated all three approaches by conducting denoising experiments on thermal dust maps, and dark matter maps. Our main findings are as follows:

  • thanks to the use of overlapping charts, all of our proposed approaches are free of the block artifacts that typically appear if one naively uses the disjoint HEALPix faces for doing denoising;

  • in both scenarios investigated, the dictionary learning approach gave the best performance by providing atoms adapted to the structure present in the images, for a given noise level;

  • the performance of the dictionary learning approach depends on setting several hyper-parameters that depend on the signal observed (multiscale or not), and on the training set. This approach therefore requires more computing and tuning time than the other approaches;

  • which of the two α-shearlet approaches performed better depended on the scenario: the rotation-based approach involves interpolation, which is detrimental for capturing signals that vary significantly on the scale of just a few pixels, but it achieved better results for the thermal dust simulations;

  • for different values of the anisotropy parameter α, the α-shearlet system is adapted to different structures (filaments, dense cores) present in the dark matter halo distribution simulation.

The respective performance of these approaches depends on the criteria used: the dictionary learning approach provided the best denoising results in both scenarios, but has a higher number of parameters to set and requires more computing time; among the α-shearlets, the rotation-based approach is best for smooth signals, but the converse is true for signals with significant variation on the scale of a few pixels. The three proposed approaches can therefore be used to process data living on the sphere, and choosing the “best” approach will depend on the scenario considered as well as the computing resources available.

Reproducible research. In the spirit of reproducible research, we make our codes for sparse representation systems on the sphere public in a common repository4. The dictionary learning and α-shearlet codes on the sphere come with tutorial Jupyter notebooks illustrating how to use them for denoising.


Acknowledgments

This work is funded by the DEDALE project, contract no. 665044, within the H2020 Framework Program of the European Commission. The authors thank the Horizon collaboration for making their simulations available.

References

Appendix A: Review of Euclidean α-shearlets

The α-shearlet family of representations generalizes wavelets and shearlets. Like shearlets – originally introduced in Labate et al. (2005) and Guo et al. (2006) – they are a directionally sensitive multiscale system in ℝ2 improving upon wavelets when it comes to handling data that is governed by directional features like edges. They are characterized by an anisotropy parameter α ∈ [0, 1], and were designed to yield optimally sparse representations for the class of Cβ-cartoon-like functions (Kutyniok & Labate 2012; Kutyniok & Lim 2011; Guo & Labate 2007; Voigtlaender & Pein 2017), a model class for natural images (Candès & Donoho 2004) as illustrated in Fig. A.1.

In the remainder of this section, we briefly explain our motivation for choosing α-shearlet systems, discuss the most important mathematical properties of α-shearlet systems, and then comment on the implementation that we used.

Fig. A.1.

Example of a cartoon-like function. Such a function f is smooth, apart from a jump discontinuity along a curve γ. Even though f might be discontinuous along γ, the boundary curve γ itself is required to be smooth.

A.1. Motivation

Before giving a formal definition of (α)-shearlet systems, it is instructive to roughly compare the operations used in their construction to the ones used for defining wavelet systems (Daubechies 1992). We recall (see, e.g., Daubechies 1992) that for a scaling function ϕ ∈ L2(ℝd) and a mother wavelet ψ ∈ L2(ℝd), the associated (discrete) wavelet system with sampling density δ > 0 is given by

𝒲(ϕ, ψ; δ) = {ϕ(• − δk) : k ∈ ℤd} ∪ {2^{dj/2} ψ(2^j • − δk) : j ∈ ℕ0, k ∈ ℤd}.

In other words, the wavelet system consists of all translates of the scaling function ϕ along the lattice δℤd, together with certain translates of the isotropically dilated mother wavelets ψj := 2^{dj/2} ψ(2^j •). Here, the wavelet ψj on the jth scale is translated along the lattice δ ⋅ 2^{−j}ℤd, which is adapted to the "size" of ψj.

It is crucial to note that even in dimension d > 1, wavelets use the isotropic dilations x ↦ 2^j x, which treat all directions in the same way. Therefore, wavelet systems are not optimally suited for representing functions governed by features with different directions. Admittedly, instead of using a single mother wavelet ψ, it is common to employ wavelet systems with finitely many mother wavelets ψ(1), …, ψ(N); usually, each ψ(j) is obtained as a certain tensor product of one-dimensional scaling functions and mother wavelets. However, such a modified wavelet system is still only able to distinguish a fixed number of directions, independent of the scale j, and therefore does not offer satisfactory directional sensitivity.

To overcome this problem, shearlets (like curvelets) use parabolic dilation matrices. More generally, α-shearlets employ the α-parabolic dilation matrices

D_j^{(α)} = diag(2^j, 2^{αj}), j ∈ ℕ0, α ∈ [0, 1],

for which α = 1/2 recovers the parabolic (shearlet) case and α = 1 the isotropic (wavelet) case. As shown in Fig. A.2, dilating a function ψ with these matrices produces functions that are more elongated along the x2-axis than along the x1-axis, where the anisotropy is more pronounced for smaller values of α and larger scales j. The support of the dilated function satisfies height ≈ 2^{−αj} ≈ width^α.
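With the common convention D_j^{(α)} = diag(2^j, 2^{αj}), the support of ψ(D_j^{(α)} •) has width ≈ 2^{−j} and height ≈ 2^{−αj}, so that height ≈ width^α. A quick numerical check of this relation (illustrative only):

```python
import numpy as np

def support_dims(j, alpha):
    """Width and height of the support of psi(D_j^(alpha) x) for a
    prototype psi supported on the unit square, assuming the convention
    D_j^(alpha) = diag(2**j, 2**(alpha * j))."""
    width = 2.0 ** (-j)              # contraction along the x1-axis
    height = 2.0 ** (-alpha * j)     # contraction along the x2-axis
    return width, height

# height ~ width^alpha holds on every scale and for every alpha in [0, 1]
for alpha in (0.0, 0.5, 1.0):
    w, h = support_dims(j=3, alpha=alpha)
    assert np.isclose(h, w ** alpha)
```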

Fig. A.2.

Effect of dilating a "prototype function" ψ (shown at the top of each column) with the matrices D_j^{(α)} to obtain ψ(D_j^{(α)} •), for different values of the scale j (going from j = 0 (top panels) to j = 2 (bottom panels)) and of the "anisotropy parameter" α ∈ [0, 1].

It is apparent from Fig. A.2 that for α < 1 and large j ∈ ℕ0, the functions ψ(D_j^{(α)} •) have a distinguished direction. More precisely, if (as in the figure) ψ oscillates along the x1-axis, then ψ(D_j^{(α)} •) is similar to a sharp jump along the x2-axis. Since we want our dictionary to be able to represent jumps along arbitrary directions, we have to allow for some way of changing the direction of these elements. The most intuitive way of achieving this is to use rotations, as was done in the construction of (second generation) curvelets (Candès & Donoho 2004). However, it was later noted in Labate et al. (2005) and Guo et al. (2006) that, from an implementation point of view, rotations have the disadvantage of not leaving the digital grid ℤ2 invariant. Therefore, instead of rotations, (α)-shearlets use the shearing matrices

S_x = (1 x; 0 1), x ∈ ℝ,

to adjust the direction of the functions ψ(D_j^{(α)} •). However, the shearing matrices S_x, x ∈ (−∞, ∞), can never produce an effect similar to a rotation by an angle θ with |θ| > 90°. Therefore, in the definition of a cone-adapted shearlet system, one only uses shearings corresponding to rotations with angle |θ| ≤ 45°, and one then uses a modified mother shearlet to cover the remaining directions.

Collecting all previously described constructs, the cone-adapted α-shearlet system with sampling density δ > 0, associated with a low-pass filter φ ∈ L2(ℝ2) and a mother shearlet ψ ∈ L2(ℝ2), is defined as

(A.1)

with , and

For brevity, let us set , and observe with this notation that

(A.2)

with .

A.2. Mathematical properties

The most basic property of α-shearlets that we are interested in is that they indeed form a (redundant) representation system for L2(ℝ2). In mathematical terms, this means that, for a suitable choice of the generators φ, ψ, the α-shearlet system forms a frame (Christensen 2016). In particular, if φ, ψ ∈ L2(ℝ2) have compact support and satisfy certain decay and smoothness conditions (see Voigtlaender & Pein 2017, Theorem 5.10 for details), then there is a "minimal sampling density" δ0 > 0 such that the α-shearlet system is indeed a frame for L2(ℝ2) for all 0 < δ ≤ δ0.

The main motivation for introducing (α)-shearlets was the need for a representation system better adapted to data governed by directional features, which are often present in natural and in astronomical images. One key result relates (α)-shearlets to C1/α-cartoon-like functions. Roughly speaking, a function f ∈ L2(ℝ2) is called a Cβ-cartoon-like function, written f ∈ ℰβ(ℝ2) (with β ∈ (1, 2]), if f = f1 + f2 ⋅ 𝟙B for certain compactly supported f1, f2 ∈ Cβ([0, 1]2), where the set B ⊂ [0, 1]2 has a boundary curve of regularity Cβ. For a more formal definition, we refer to Voigtlaender & Pein (2017, Definition 6.1).

Using this notion, one can show that the best N-term approximation error with such a frame of α-shearlets (that is, the smallest approximation error obtained by a linear combination of N α-shearlets) decays at (almost) the best rate that any dictionary Ψ can reach for Cβ-cartoon-like functions (see Voigtlaender & Pein 2017, Theorem 6.3 for a more precise formulation of this result). To obtain this optimal approximation rate, the anisotropy parameter α needs to be adapted to the regularity β of the Cβ-cartoon-like functions, namely α = 1/β. In general, for a given data set or data model, some α-shearlet systems will be better adapted to the data than other α′-shearlet systems.
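Up to logarithmic factors, the statement is that α-shearlets with α = 1/β achieve the optimal decay of the squared N-term approximation error for Cβ-cartoon-like functions. This is our paraphrase of the cited theorem; the exact logarithmic exponent is not specified here:

```latex
% Best N-term approximation by alpha-shearlets with alpha = 1/beta,
% for f in the class of C^beta cartoon-like functions:
\| f - f_N \|_{L^2}^2 \;\lesssim\; N^{-\beta} \, (\log N)^{c},
\qquad f \in \mathcal{E}^{\beta}(\mathbb{R}^2), \quad N \to \infty,
% where c = c(beta) > 0 is a constant (logarithmic correction).
```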

We close our discussion of the mathematical properties of α-shearlet systems with a brief look at their frequency concentration. To this end, assume for the moment that the "mother shearlet" ψ is concentrated in frequency on a set that is a union of two opposing "wedges" (highlighted in green in Fig. A.3). From elementary properties of the Fourier transform, one then sees that each α-shearlet has frequency support in a dilated and sheared copy of this set, namely its image under the transpose of the corresponding dilation-shearing matrix, where we denote by A^T the transpose of a matrix A. The resulting coverings of the frequency plane for different values of the anisotropy parameter α are shown in Fig. A.3.

Fig. A.3.

Frequency concentration of α-shearlets for different values of α. One sees that each "dyadic annulus" {ξ : |ξ| ≍ 2^j} is split into a number of "wedges" representing the different directions. In fact, .

Together, Figs. A.2 and A.3 show that the parameter α has three different, but related effects:

  • It affects the "shape" of the elements of the α-shearlet system. Indeed, Fig. A.2 shows that height ≈ width^α.

  • It affects the directional selectivity: as seen in Fig. A.3, on scale j, an α-shearlet system can distinguish about 2^{(1 − α)j} different directions.

  • It affects the frequency support of the elements of the α-shearlet system (see Fig. A.3).

A.3. Implementation

The git repository of our implementation of the Euclidean α-shearlet transform can be found online5, with extensive documentation6. Our software package is implemented in Python3 (Van Rossum & Drake 1995), using NumPy (van der Walt et al. 2011).

In this section, we give a rough overview of what the transform computes, and how it can be used. Our software package implements two different versions of the α-shearlet transform: a fully sampled (non-decimated) version, and a subsampled (decimated) version. For the fully sampled version, the computed coefficients are the (discrete) convolutions of the input with the low-pass filter φ and with the α-shearlet filters on the scales j = 0, …, jmax − 1, chosen as in Eqs. (A.1) and (A.2). Thus, for a given input image f ∈ ℂN × N, the resulting coefficients form a three-dimensional tensor of dimension Nα, jmax × N × N, where the integer Nα, jmax is the total number of α-shearlet filters that is used, and where each N × N component of the tensor is the discrete convolution of f with one of the α-shearlet filters. When considering jmax many scales (i.e., j = 0, …, jmax − 1) and if α < 1, then

Nα, jmax ≍ 2^{(1 − α) jmax}. (A.3)

In particular, for α = 0, N0, jmax ≍ 2^{jmax}, so that the redundancy of the fully sampled α-shearlet frame grows very quickly when the number of scales increases.

To motivate the subsampled transform, we note that according to Eq. (A.1), the α-shearlet system does not contain all translations of the generating functions. Rather, φ is shifted along the lattice δℤ2, and – as seen in Eq. (A.2) – each α-shearlet is shifted along its own sheared and dilated lattice. Effectively, this means that the full convolution is only sampled at certain points, with a sampling density that increases with the scale j. The subsampled version of the α-shearlet transform computes exactly these coefficients. Internally, this is achieved by the "frequency wrapping" approach outlined in Candès et al. (2006, Sects. 3.3 and 6), Woiselle (2010, Chapter 4), and Woiselle et al. (2011) for the case of the curvelet transform. Since each convolution is sampled along a different lattice, the subsampled transform of a given image f is a list of rectangular matrices of varying dimensions. This will become clearer in the example below. One can show for the subsampled transform that the total number M = M(α, jmax, N) of α-shearlet coefficients for an N × N image is bounded, that is, M(α, jmax, N) ≤ M0 ⋅ N2, with M0 independent of α, jmax, and N. This is in stark contrast to the fully sampled transform (at least for α < 1), where the total number of coefficients is ≈ 2^{(1 − α) jmax} ⋅ N2 (see Eq. (A.3)).

The main effect of choosing the fully sampled transform is that one obtains a translation-invariant transform (i.e., taking the transform of a shifted image is the same as shifting each component of the coefficient tensor), at the cost of increased redundancy. This increased redundancy can actually be beneficial for certain tasks like denoising, but it can greatly impact the memory footprint and the runtime: computations using the subsampled transform are usually much faster and require much less memory, but yield slightly worse results.

We close this section with a short IPython session showing how our implementation of the α-shearlet transform can be used.
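The listing of this session was lost in the conversion of this document; the following sketch is reconstructed from the descriptions below, so the module, class, and property names (in particular AlphaShearletTransform and redundancy) are assumptions, and the repository documentation should be consulted for the authoritative interface:

```
In [1]: from AlphaTransform import AlphaShearletTransform  # module/class names assumed

In [2]: trafo = AlphaShearletTransform(im.shape[1], im.shape[0],
   ...:                                [0.5] * 3, real=True,
   ...:                                subsampled=False, verbose=False)  #1

In [3]: coeff = trafo.transform(im)  #2

In [4]: trafo.indices  # -1 = low-pass, (j, l, c) = shearlet filters  #3

In [5]: trafo2 = AlphaShearletTransform(im.shape[1], im.shape[0],
   ...:                                 [0.5] * 3, real=False,
   ...:                                 subsampled=True, verbose=False)  #5

In [6]: coeff2 = trafo2.transform(im)  #6

In [7]: trafo2.indices  #7

In [8]: trafo2.redundancy  #8
```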

In the line marked with #1, we set up the α-shearlet transform object trafo. Roughly speaking, this precomputes all necessary α-shearlet filters, which are stored in the trafo object. The first two parameters of the constructor simply determine the shape of the images for which the trafo object can be used, while the third parameter determines the number of scales jmax to be used, as well as the value of the anisotropy parameter α. Passing [alpha_0] * N will construct an α-shearlet transform with N scales (plus the low-pass) and with α given by alpha_0. The verbose parameter simply determines how much additional output (like a progress bar) is displayed. The subsampled parameter determines whether the non-decimated, or the decimated transform is used. Finally, the real parameter determines whether real-valued or complex-valued α-shearlet filters are used. Essentially, real-valued filters have frequency support in the union of two opposing wedges (as shown in Fig. A.3), while for complex-valued filters, one gets two filters for each real-valued one: one complex-valued filter has frequency support in the “left” wedge, while the other one is supported in the “right” wedge.

In line #2, we use the transform() method of the constructed trafo object to compute the α-shearlet transform of im. As seen, the result is an ordinary NumPy array of dimension Nα, jmax × N1 × N2, where the input image has dimension N1 × N2, and where Nα, jmax is the total number of α-shearlet filters used by the transform.

The indices property of the trafo object (see line #3) can be used to determine with which α-shearlet filter the individual components of the coeff array are associated. The value -1 represents the low-pass filter, while a tuple of the form (j, l, c) represents the shearlet filter as in Eq. (A.1), where ι = 0 if c marks the horizontal frequency cone, and ι = 1 if c marks the vertical frequency cone.
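The bookkeeping behind the indices property can be mimicked with a few lines; the cone labels 'h' and 'v' below are hypothetical placeholders for the markers the actual implementation uses.

```python
def make_indices(shears_per_scale):
    # Hypothetical reconstruction of the `indices` bookkeeping:
    # -1 for the low-pass filter, then one (j, l, c) tuple per shearlet,
    # with 'h'/'v' as placeholder cone labels (iota = 0 resp. 1).
    indices = [-1]
    for j, n in enumerate(shears_per_scale):
        for c in ('h', 'v'):
            for l in range(-n, n + 1):  # shear parameter l
                indices.append((j, l, c))
    return indices

idx = make_indices([1, 2])
assert idx[0] == -1                  # low-pass filter comes first
assert idx[1] == (0, -1, 'h')        # scale 0, shear -1, horizontal cone
assert len(idx) == 1 + 2 * (3 + 5)   # low-pass + filters on both cones
```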

To explain the differences between the fully sampled and the subsampled transform, in line #5 we set up a subsampled transform object trafo2. The only differences to the construction of the trafo object are that we pass subsampled=True and real=False. The reason for the second change is that – at least with the current implementation – the subsampled transform can only be used with complex-valued shearlet filters. We then compute the coefficients (see line #6) just as for the fully sampled transform. We note, however, that while the coefficients of the fully sampled transform formed a single three-dimensional NumPy array, the coefficients of the subsampled transform are a list of two-dimensional NumPy arrays. The reason is that for the subsampled transform, the number of coefficients varies from scale to scale.

The indices property (see line #7) of the subsampled transform also differs from that of the fully sampled transform. The reason is that we use complex shearlets; therefore, the frequency plane is divided into four cones (top, right, bottom, and left), instead of the two cones that are used for real-valued shearlet filters.
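The two-cone structure for real-valued filters is a direct consequence of a general fact: the Fourier transform of a real-valued function is Hermitian-symmetric, so its frequency support is always point-symmetric about the origin. A few NumPy lines make this explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
psi = rng.standard_normal((8, 8))  # any real-valued filter
psi_hat = np.fft.fft2(psi)

# Hermitian symmetry: psi_hat(-xi) equals the complex conjugate of
# psi_hat(xi), so the spectrum always occupies two opposing wedges.
psi_hat_neg = np.roll(np.flip(psi_hat), (1, 1), axis=(0, 1))  # value at -xi
assert np.allclose(psi_hat_neg, np.conj(psi_hat))
```

Splitting each such symmetric pair of wedges into two one-sided, complex-valued filters is what produces the four frequency cones mentioned above.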

The main advantage of the subsampled transform is revealed in line #8: the redundancy (that is, the number of α-shearlet coefficients divided by the number of pixels of the input image) for the subsampled transform is much lower, which leads to a lower memory consumption and faster computation times. While the advantage of the subsampled transform might not be overwhelming in the given example, it becomes more pronounced if one uses a larger number of scales. For instance, if we use four scales instead of three, then the redundancy of the fully sampled transform is 41, while that of the subsampled transform is only ≈11.4.
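The redundancy figures quoted here follow directly from the definition in the text; a small helper makes the computation explicit (the subsampled coefficient sizes below are invented purely for illustration):

```python
import numpy as np

def redundancy(coeffs, image_shape):
    # Redundancy as defined in the text: total number of transform
    # coefficients divided by the number of image pixels.
    total = sum(np.asarray(c).size for c in coeffs)
    return total / (image_shape[0] * image_shape[1])

# Fully sampled: each of the K filters yields a full-size plane,
# so the redundancy equals K (e.g., 41 filters for four scales).
full = [np.zeros((64, 64)) for _ in range(41)]
assert redundancy(full, (64, 64)) == 41.0

# Subsampled: a list of 2D arrays whose sizes shrink toward coarse
# scales (the sizes are made up here), giving a much lower redundancy.
sub = [np.zeros((64 >> j, 64 >> j)) for j in [3] + [2] * 8 + [1] * 8 + [0] * 8]
assert redundancy(sub, (64, 64)) < redundancy(full, (64, 64))
```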

All Tables

Table 1.

Parameters used for learning the multiscale dictionary for thermal dust data.

Table 2.

Statistics on the recovery of spherical thermal dust maps with the proposed approaches.

Table 3.

Statistics on the recovery of dark matter halo distribution with the proposed approaches.

All Figures

Fig. 1.

HEALPix grid (visualizing Nside = 16) in orthographic projection on the left and Mollweide projection on the right. Faint lines indicate the circles of latitude. The right image also introduces the numbering of the faces used in the following illustrations.

Fig. 2.

Example of our covering of the sphere with overlapping patches based on HEALPix neighborhoods for Nside = 128 and patch width q = 8 (note that in our numerical experiments, Nside = 2048 and the patch width is either q = 8 or q = 12). Several randomly selected patches on the sphere are also represented in color. The gray value plotted at each pixel indicates the number of overlapping patches that contain it. Because the patch width is usually small with respect to the number of pixels per face, the number of overlapping patches varies in small regions around the pixels that only have seven neighbors.

Fig. 3.

Partition of unity for the rotation-based reconstruction. The weights smoothly decaying toward the border are presented in the top left panel and are copied to each HEALPix face in the top right panel. In the bottom left panel, resampling was first performed using a rotation and bilinear interpolation, and the image shows the weights that would be applied in the original reference coordinates. The resulting covering of the sphere using five rotations is illustrated in the bottom right panel.

Fig. 4.

Left panel: twelve squares corresponding to the faces of the HEALPix framework (see Fig. 1) arranged as a net in the plane. The areas that are covered by more than one of the extended faces – the transition zones – are displayed in gray. The areas where pixels are “missing” are displayed in red. Right panel: six extended faces produced by the patchwork procedure. The two polar faces form the top row, followed by the four equatorial faces below. The shaded area around the transition zone of each composite face indicates the margin, which is later discarded.

Fig. 5.

Detailed view of two of the six extended faces. The dark outer boundary with width c_m is the margin that is discarded after the reconstruction step, and the two dark squares in the corners of the equatorial face on the right are treated likewise. The remaining part of the extended faces has a gray outer boundary of width 2c_t. In conjunction with the gray squares in the corners of the equatorial face, this boundary forms the transition zone that contains the values shared with the neighboring extended faces.

Fig. 6.

“Missing” square between faces 1 and 2 is divided into four triangles of equal size, separated by the lines 2x = y, x = y, and x = 2y, as seen on the left. The two images in the middle reveal how the rotated faces 1 and 2 are separately weighted along those segments. The data of face 1 have full weight (black) on the outer triangle adjacent to face 2, and no weight (white) on the other outer triangle, while the data of face 2 are treated conversely. A smooth transition is provided by the weights on the triangles in between. The sum of the weighted faces is used to fill the gap, as demonstrated in the right-most illustration.

Fig. 7.

Thermal dust simulation map (at 100 GHz) without (top panel) and with (bottom panel) additive white Gaussian noise, for evaluation of the methods. The colorscale has been stretched to illustrate the challenge of recovering structures at intermediate latitudes. Units are in μK.

Fig. 8.

Left panel: galactic mask used for thermal dust quantitative evaluation, covering 70% of the sky. Right panel: region close to galactic plane where methods are inspected.

Fig. 9.

Dark matter halo distribution for the first slice, without (top panel) and with (bottom panel) additive white Gaussian noise, for evaluation of the methods. The colorscale has been stretched to visualize filamentary structures.

Fig. 10.

Atoms learned in the multiscale dictionary learning approach: on the left, scale 3, on the right, scale 2. The dictionaries have departed from the original redundant DCT dictionary and have learned specific features related to the scale. Due to the change of the Nside parameter with the scale, the actual distance between two adjacent pixels has increased, and the atoms for scale 2 are indeed smoother than those for scale 3.

Fig. 11.

Atoms learned in the dictionary learning approach, applied to the dark matter halo distribution data. The dictionary elements are composed of point-like structures and edges.

Fig. 12.

Denoised thermal dust maps for all three approaches. Top and middle panels: α-shearlet denoising with rotation-based (top) or patchwork (middle) approach, both for α = 0.6. Bottom panel: representation learned with dictionary learning. Units are in μK.

Fig. 13.

Zoom on a region close to the galactic plane to visualize the respective denoising performance of the methods. From top to bottom panels: input map, noisy map (with own colorscale), rotation-based approach with α = 0.6, patchwork approach with α = 0.6, sparse representation learned from data. All units are in μK.

Fig. 14.

Residuals for the maps displayed in Fig. 12. Units are in μK.

Fig. 15.

Denoised dark matter maps for all three approaches. Top and middle panels: α-shearlet denoising with rotation-based (top) or patchwork (middle) approach, both with α = 1. Bottom panel: representation learned with dictionary learning.

Fig. 16.

Amplitude of the residuals for all three approaches, for the dark matter map scenario. Top and middle panels: α-shearlet denoising with rotation-based (top) or patchwork (middle) approach, both with α = 1. Bottom panel: representation learned with dictionary learning.

Fig. 17.

Dark matter map amplitudes for all three approaches in a zoomed region. From top to bottom and left to right panels: original map, noisy map, rotation-based approach with α = 1, patchwork approach with α = 0, patchwork approach with α = 1, representation learned from data.

Fig. 18.

Cartesian projection of the denoised thermal dust maps centered at the intersection of four faces. From left to right and top to bottom panels: denoising each face independently using α-shearlets with α = 1, restoration via the rotation-based approach, patchwork approach with α = 1, dictionary learning with patch width of 12. The colorscale has been stretched to visualize the artifacts seen as a cross-shape discontinuity at the boundaries of the four HEALPix faces in the upper left panel. All of our proposed approaches are free from these artifacts. Units are in μK.

Fig. 19.

Normalized non-linear approximation curves for four different slices of the dark matter distribution. For each threshold, the α value corresponding to the lowest approximation error is displayed on the top.

Fig. 20.

Normalized non-linear log-approximation curves for four different slices of the dark matter distribution. For each threshold, the α value corresponding to the lowest approximation error is displayed on the bottom.

Fig. A.1.

Example of a cartoon-like function. Such a function f is smooth, apart from a jump discontinuity along a curve γ. Even though f might be discontinuous along γ, the boundary curve γ itself is required to be smooth.

Fig. A.2.

Effect of dilating a “prototype function” ψ (shown at the top of each column) with the matrices D_j^(α) to obtain ψ(D_j^(α) ·), for different values of the scale j (going from j = 0 (top panels) to j = 2 (bottom panels)) and of the “anisotropy parameter” α ∈ [0, 1].

Fig. A.3.

Frequency concentration of α-shearlets for different values of α. One sees that each “dyadic annulus” {ξ : |ξ| ≍ 2^j} is split into a number of “wedges” representing the different directions.

