A&A, Volume 670, February 2023
Article Number A132, 12 pages
Section: The Sun and the Heliosphere
DOI: https://doi.org/10.1051/0004-6361/202244224
Published online 16 February 2023
Open Access

© The Authors 2023

Licence: Creative Commons. Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This article is published in open access under the Subscribe-to-Open model. Subscribe to A&A to support open access publication.

1. Introduction

Research on sunspots is of great significance because sunspots are related to many solar eruption events, for instance flares (Atac 1987). With the development of ground-based solar telescopes (Cao et al. 2010; Rao et al. 2020; Rimmele et al. 2020) and adaptive optics systems (Rao et al. 2016, 2018; Zhong et al. 2020; Guo et al. 2022), the resolution of solar images is becoming higher, and more details can be observed. In high-resolution solar photosphere images, fine structures such as the umbra, penumbra, and light bridge of sunspots can be clearly observed. The umbra is the coldest part of the sunspot and therefore has the lowest intensity in the image, mainly because the strong magnetic field suppresses convection and energy exchange with the underlying material. The penumbra resembles a fibrous structure that surrounds the umbra, and its intensity lies between that of the umbra and the quiet photosphere (Wiehr et al. 1984). The light bridge consists of bright structures that intrude into the umbra and play an important role in the evolution of sunspots (Vazquez 1973). In the early days, observers used the naked eye to observe and manually record sunspots (Vaquero 2007). However, with the growing number of solar telescopes and advances in their technology, the amount of observational data has increased significantly and the resolution of solar images has become higher, so that manual sunspot recognition can no longer satisfy the demand. It is therefore necessary to develop methods that automatically recognize sunspots and segment their fine structure.

In a sunspot, the intensity is lower than that of the other structures in the photosphere. A natural idea is therefore to use an intensity threshold to distinguish a sunspot from other structures (Colak & Qahwaji 2008). Colak & Qahwaji (2008) used the mean, the standard deviation, and an empirical coefficient of the solar image intensity to obtain a threshold, and classified structures whose intensity was lower than the threshold as sunspots. Based on the intensity threshold, morphological methods have been widely applied in the field of sunspot recognition as well (Watson et al. 2009; Zhao et al. 2016). Watson et al. (2009) proposed a sunspot recognition method that is insensitive to slow variations in intensity (e.g., limb darkening). The method first performs a top-hat transform on the full-disk solar image and then selects the sunspot area according to a set intensity threshold. The method proposed by Zhao et al. (2016) first uses a large structure element to remove sunspots and noise from the solar disk to obtain a clean disk, and then subtracts the original image from the clean disk to obtain candidate sunspot areas. Finally, the recognized sunspots are obtained according to a set threshold. In addition, some methods add edge detection (Zharkov et al. 2005), region growing (Curto et al. 2008), the wavelet transform (Djafer et al. 2012), or the level-set method (Goel & Mathew 2014) to the algorithm to improve its performance. Zharkov et al. (2005) proposed a method that extracts sunspots from white-light full-disk solar images. The method first detects edges in the full-disk solar image as candidate sunspot areas and then removes incorrect areas by morphological methods to obtain the final recognition result. The method proposed by Curto et al. (2008) is based on the morphological closing and top-hat transforms, combined with region growing, to obtain the sunspot regions.
Djafer et al. (2012) proposed a sunspot recognition method based on the wavelet transform and applied it to images in the Ca II K1 line. Goel & Mathew (2014) used a level-set method called the selective binary and Gaussian function regularized level set (SBGFRLS; Zhang et al. 2010) to recognize sunspots in full-disk solar images. Unlike these sunspot-recognition algorithms for full-disk solar images, Yang et al. (2018) proposed a sunspot segmentation method for high-resolution solar images that is able to segment the fine structures of a sunspot. The method uses the level-set method to segment sunspots in high-resolution images. Moreover, it combines image binarization with an adaptive threshold (Otsu 1979) and morphological methods to extract the fine structures of sunspots (umbra, penumbra, and light bridge).

Most of these methods are designed to recognize sunspots in full-disk solar images. In a full-disk solar image, a sunspot is just a dark spot whose intensity is significantly lower than that of the quiet photosphere because the image resolution is low. It is therefore feasible to use the intensity of each pixel to recognize the sunspot. However, these methods are based on the intensity of a single pixel and are not feasible for high-resolution images for the following three reasons: (1) The intensity of the dark pixels between the granular structures is similar to that of the sunspot penumbra. (2) The intensities of the pixels in the penumbra may be similar to the intensities of the pixels in the umbra or in the quiet photosphere. (3) Umbral dots can reach an extremely high intensity within the umbra. The high-resolution method of Yang et al. (2018) uses the level set to recognize sunspots in high-resolution images, but the level-set algorithm is sensitive to the position and shape of the initial contour and to the parameter settings, which is not conducive to the automatic processing of large numbers of solar images.

To solve these problems, we propose a superpixel-based sunspot recognition and fine-structure segmentation method in this paper. Our method makes three main contributions: (1) To handle the inconsistent intensity distributions in high-resolution images (for the three reasons analyzed in the previous paragraph), we propose a fine-structure segmentation method for sunspots based on superpixel segmentation. The method oversegments the image into many superpixels and then classifies the superpixels into different fine structures. In this way, the fine-structure segmentation problem is converted from classifying pixels into classifying superpixels, which reduces the influence of pixels with inconsistent features in a local area. In addition, the search space of the algorithm is reduced from the number of pixels to the number of superpixels, which significantly reduces the required time. (2) Satisfactory results cannot be obtained by using intensity alone as the superpixel feature; we therefore add texture and spatial location information to improve the performance of the proposed method. (3) Because light bridges cannot be segmented directly from superpixels, we use a morphological method, an area threshold, and location information to obtain the light bridges from the superpixel classification results.

The following sections are organized as follows: In Sect. 2, we introduce the data we used and discuss the proposed method in detail. Experimental results of the method are presented in Sect. 3. Finally, we summarize the method in Sect. 4.

2. Method

In this section, the proposed method is discussed in detail. As shown in Fig. 1, the method consists of the following five steps: (1) preprocessing, (2) preliminary segmentation of the photosphere, (3) fine-tuning of the preliminary segmentation, (4) light-bridge extraction, and (5) postprocessing. In step (1), the image is resized and Gaussian filtered. Then, simple linear iterative clustering (SLIC; Achanta et al. 2012) segments the image into multiple oversegmented superpixels, in which the pixels of each superpixel have similar intensities. In step (2), the intensity of each superpixel is calculated and then adjusted by the texture feature of the superpixel. Subsequently, a Gaussian mixture model (GMM) is used to model the superpixel intensities. From the GMM clustering results and the average intensities of the clusters, the preliminary segmentation of the photosphere is obtained. In step (3), the preliminary segmentation results are used to construct spatial location features in order to remove the incorrectly segmented parts generated by step (2). The spatial location features further adjust the intensity of each superpixel, and the GMM is then used again to obtain the segmentation results. In step (4), we use a morphological method to extract the light bridges. Finally, some incorrectly segmented areas, such as penumbras that are not connected to umbras or small quiet-photosphere patches distributed within penumbras, are removed in step (5). We present each step of the proposed method in detail in Sects. 2.2 to 2.6. The parameter settings are discussed in Sect. 2.7.

Fig. 1. Main steps of the proposed method.

2.1. Data source

The TiO band (7057 Å) is widely used to study sunspots. We therefore used TiO band high-resolution solar images observed by the Goode Solar Telescope (GST) at Big Bear Solar Observatory (BBSO; Cao et al. 2010) as the main data source in this paper. The GST observation data in FITS format and the observation logs can be requested and downloaded. The pixel size of the data obtained from BBSO is about 0.034″, and the field of view is about 62.56″ × 62.56″. As shown in Fig. 2, the original image size is 2043 × 2043 pixels, but there are many invalid pixels at the edge, which would affect the final segmentation results. Therefore, we only kept the valid pixels and cropped the image to a size of 1840 × 1840 pixels. In addition, TiO band images obtained by the New Vacuum Solar Telescope (NVST; Liu et al. 2014) and the Educational Adaptive-optics Solar Telescope (EAST; Rao & Zhong 2022) were used in the robustness experiment (see Sect. 3.3). To facilitate the processing by our algorithm, these images were also cropped to keep only the valid pixels. The cropped images and detailed information on NVST and EAST are given in Sect. 3.3.

Fig. 2. Sample of GST TiO band data observed on 2016 January 28 at 19:32:06 UT. (a) Original image downloaded from BBSO. (b) Image with invalid pixels removed.

2.2. Preprocessing

As analyzed above, high-resolution solar images in the TiO band clearly present many details of the photosphere, and it is therefore difficult to accurately segment the fine structures of sunspots based on the features of a single pixel. Therefore, SLIC is used in this section to segment the image into multiple superpixels. SLIC has a linear time complexity and generates superpixels that adhere to the boundaries of objects.

First, we normalized the intensity range to [0, 255] and reduced the size of the image to improve the speed of the algorithm. Then, the TiO band image $I$ of size $N \times M$ was blurred by Gaussian filtering in order to smooth the image and remove noise. Next, the pixels were converted to the CIELAB color space, which has three channels: $L$ for lightness, $a$ for red-green, and $b$ for blue-yellow. The CIELAB color space is widely considered perceptually uniform for small color distances, which facilitates the creation of more accurate superpixel regions (Achanta et al. 2010). Since the TiO band solar image is an intensity image, only the $L$ channel was used. According to the set number of superpixels $K$, the initial cluster centers $C=\{(l_i, x_i, y_i)\}_{i=1}^{K}$ (where $l_i$ is the $L$-channel value of the $i$th cluster center, and $x_i$ and $y_i$ are its coordinates) are taken at intervals of $S=\sqrt{(M \times N)/K}$ pixels on the image $I$. To cluster the pixels with the most similar features into the same superpixel, SLIC uses the k-means algorithm (Hartigan & Wong 1979). Different from the traditional k-means algorithm, the k-means used in SLIC only searches a rectangular area of $2S \times 2S$ around each cluster center to reduce the computational complexity. The clustering process first calculates the distance $D$ from each cluster center to the pixels in the $2S \times 2S$ range. The distance is calculated as follows:

$$ D = \sqrt{d_1^2 + \left(\frac{d_2}{S}\right)^2 m^2}, $$(1)

where $d_1$ is the Euclidean distance of the $L$-channel values between the pixel and the cluster center, $d_2$ is the Euclidean distance of the coordinates between the current pixel and the center, and $m$ is the compactness factor. When $m$ is small, the value of $D$ is dominated by the $L$ channel (i.e., intensity), and the generated superpixels adhere more closely to object edges. When $m$ is large, the value of $D$ is dominated by the coordinates, and the generated superpixels are more compact. According to the SLIC paper, satisfactory superpixel segmentation results can be obtained by iterating the above steps up to ten times. Finally, isolated and small clusters were assigned to the nearest clusters to obtain $K'$ superpixels ($K' \le K$), and the segmentation result $P=\{p_i\}_{i=1}^{K'}$ ($p_i$ denotes a superpixel) was generated. After these steps, the average intensity of the pixels in each $p_i$ was calculated to obtain the intensity set of superpixels $Z=\{z_i\}_{i=1}^{K'}$.
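As a concrete illustration, the clustering loop can be sketched in pure NumPy. This is our simplified sketch, not the implementation used in the paper: every center searches the whole image instead of its 2S × 2S window, the CIELAB conversion and the reassignment of isolated clusters are omitted, and all function and variable names are ours.

```python
import numpy as np

def slic_sketch(img, K=16, m=10.0, n_iter=10):
    """Simplified SLIC-style clustering on a 2D intensity image.

    Returns the label map and the mean intensity Z of each superpixel.
    """
    N, M = img.shape
    S = int(np.sqrt(N * M / K))                      # grid step between centers
    ys, xs = np.meshgrid(np.arange(S // 2, N, S),
                         np.arange(S // 2, M, S), indexing="ij")
    centers = np.stack([img[ys, xs].ravel(),
                        ys.ravel().astype(float),
                        xs.ravel().astype(float)], axis=1)   # (l, y, x)
    yy, xx = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
    for _ in range(n_iter):
        # Eq. (1): D = sqrt(d1^2 + (d2 / S)^2 * m^2) for every pixel/center pair
        d1 = img[None] - centers[:, 0, None, None]
        d2 = np.sqrt((yy[None] - centers[:, 1, None, None]) ** 2 +
                     (xx[None] - centers[:, 2, None, None]) ** 2)
        labels = np.argmin(np.sqrt(d1 ** 2 + (d2 / S) ** 2 * m ** 2), axis=0)
        for k in range(len(centers)):                # move centers to members
            sel = labels == k
            if sel.any():
                centers[k] = (img[sel].mean(), yy[sel].mean(), xx[sel].mean())
    Z = np.array([img[labels == k].mean() for k in np.unique(labels)])
    return labels, Z
```

In practice a library routine such as skimage.segmentation.slic (whose compactness argument plays the role of $m$) would be applied to the resized, Gaussian-filtered image.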

An example of solar image superpixel segmentation by SLIC is shown in Fig. 3. The original input image is segmented into small patches whose internal pixels have similar features.

Fig. 3. Example of solar image superpixel segmentation by SLIC using the image in Fig. 2b. The yellow lines represent the boundaries of the superpixels.

2.3. Preliminary segmentation of the photosphere

In this step, we use the intensity value $z_i$ of each superpixel to perform the preliminary segmentation of the photosphere. Figure 4 shows that the umbra and the quiet-photosphere areas have lower and higher intensity values, respectively, and can easily be distinguished from other areas. The intensity of most penumbra areas lies between that of the umbra and the quiet photosphere, but some penumbra areas have higher intensity values (i.e., similar to the quiet photosphere) while the standard deviations of their inner pixel intensities are larger (i.e., different texture features). According to this characteristic, we designed Eq. (2) to adjust the average intensity of each superpixel. The main purpose of Eq. (2) is to reduce the intensity of the brighter penumbra areas, decrease the intensity of the umbra areas, and increase the intensity of the quiet photosphere. After the intensities are updated by Eq. (2), subsequent steps can more easily distinguish superpixels belonging to different fine-structure areas. Equation (2) is defined as follows:

$$ f(z_i) = \left\{ \begin{array}{ll} z_i + \gamma_i \times s_i \times c_1 & \gamma_i \le 0 \\ z_i - \eta_i \times s_i \times c_1 & \gamma_i > 0 \end{array} \right. , $$(2)

Fig. 4. Example of the mean and standard deviation (std) of the intensities of superpixels in different fine structures. The red arrows point to the mean and standard deviation of the intensities of superpixels belonging to different fine structures (e.g., the top red arrow points to those of a superpixel belonging to the quiet photosphere).

where $s_i$ denotes the standard deviation of the intensity values inside each superpixel, the parameter $c_1$ is discussed in Sect. 2.7, and $\gamma_i$ and $\eta_i$ are defined in Eq. (3) as follows:

$$ \begin{aligned} &\gamma_i = \frac{z_i}{\frac{1}{K^{\prime}}\sum_{j=1}^{K^{\prime}} z_j} - 1 \\ &\eta_i = \frac{s_i}{\frac{1}{K^{\prime}}\sum_{j=1}^{K^{\prime}} s_j} - 1. \end{aligned} $$(3)
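Equations (2) and (3) can be written compactly with NumPy; this is our sketch (function and argument names are ours), assuming the superpixel means and standard deviations are available as arrays:

```python
import numpy as np

def adjust_by_texture(Z, s, c1):
    """Texture-based intensity adjustment of Eqs. (2) and (3).

    Z : mean intensity per superpixel; s : standard deviation of the
    pixel intensities inside each superpixel; c1 : see Sect. 2.7.
    """
    Z, s = np.asarray(Z, float), np.asarray(s, float)
    gamma = Z / Z.mean() - 1.0   # relative brightness, Eq. (3)
    eta = s / s.mean() - 1.0     # relative texture strength, Eq. (3)
    # Eq. (2): dark superpixels (gamma <= 0) are pushed darker; bright ones
    # are darkened when strongly textured (penumbra, eta > 0) and
    # brightened when smooth (quiet photosphere, eta < 0).
    return np.where(gamma <= 0, Z + gamma * s * c1, Z - eta * s * c1)
```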

Each $z_i$ in $Z$ is updated by Eq. (2) to obtain the new superpixel intensity set $Z^{\prime} = \{z_i^{\prime}\}_{i=1}^{K^{\prime}}$. We model the intensity of each superpixel in $Z^{\prime}$ with a GMM as follows:

$$ \begin{aligned} &G(z_i^{\prime} | u_j, \sigma_j) = \sum_{j=1}^{3} \alpha_j \mathcal{N}(z_i^{\prime} | u_j, \sigma_j), \\ &\mathcal{N}(z_i^{\prime} | u_j, \sigma_j) = \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\left(-\frac{(z_i^{\prime} - u_j)^2}{2\sigma_j^2}\right), \end{aligned} $$(4)

where $z_i^{\prime}$ represents the intensity of the $i$th superpixel, and $u_j$, $\sigma_j$, and $\alpha_j$ represent the mean, standard deviation, and weight of the $j$th Gaussian component, respectively. Since we modeled the umbra, penumbra, and quiet photosphere, we used three Gaussian components. The parameters $\alpha_j$, $u_j$, and $\sigma_j$ in Eq. (4) can be obtained by iteratively executing Eq. (5). Finally, all superpixels were clustered into three clusters according to the probability of each superpixel under the different Gaussian components. Equation (5) is defined as follows:

$$ \begin{aligned} &\beta_{ij} = \frac{\alpha_j^{\mathrm{old}} \mathcal{N}(z_i^{\prime} | u_j, \sigma_j)}{\sum_{k=1}^{3} \alpha_k^{\mathrm{old}} \mathcal{N}(z_i^{\prime} | u_k, \sigma_k)} \\ &\alpha_j^{\mathrm{new}} = \frac{1}{K^{\prime}} \sum_{i=1}^{K^{\prime}} \beta_{ij} \\ &u_j^{\mathrm{new}} = \frac{\sum_{i=1}^{K^{\prime}} z_i^{\prime} \beta_{ij}}{\sum_{i=1}^{K^{\prime}} \beta_{ij}} \\ &\left(\sigma_j^{\mathrm{new}}\right)^2 = \frac{\sum_{i=1}^{K^{\prime}} (z_i^{\prime} - u_j)^2 \beta_{ij}}{\sum_{i=1}^{K^{\prime}} \beta_{ij}}. \end{aligned} $$(5)
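The EM iteration of Eq. (5) for a three-component one-dimensional GMM can be sketched as follows. This is our illustration: the deterministic percentile initialization and the fixed iteration count are our choices, and the paper does not specify its initialization.

```python
import numpy as np

def fit_gmm_1d(z, n_iter=200):
    """EM updates of Eq. (5) for a three-component 1D GMM.

    z : adjusted superpixel intensities Z'. Returns weights, means,
    standard deviations, and a hard cluster label per superpixel.
    """
    z = np.asarray(z, float)
    mu = np.percentile(z, [10, 50, 90])    # rough umbra/penumbra/quiet guesses
    var = np.full(3, z.var() / 3) + 1e-6
    alpha = np.full(3, 1.0 / 3)
    for _ in range(n_iter):
        # E step: responsibilities beta_ij of component j for superpixel i
        pdf = alpha / np.sqrt(2 * np.pi * var) * \
              np.exp(-(z[:, None] - mu) ** 2 / (2 * var))
        beta = pdf / pdf.sum(axis=1, keepdims=True)
        # M step: the weight, mean, and variance updates of Eq. (5)
        nk = beta.sum(axis=0)
        alpha = nk / len(z)
        mu = (beta * z[:, None]).sum(axis=0) / nk
        var = np.maximum((beta * (z[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-6)
    return alpha, mu, np.sqrt(var), beta.argmax(axis=1)
```

The three fitted clusters are then mapped to structures by sorting the means (darkest cluster = umbra, brightest = quiet photosphere).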

The photosphere intensities are ranked as follows: the intensity of the quiet photosphere is highest, that of the penumbra is second, and that of the umbra is lowest. Based on their intensities, we can therefore easily determine from the three GMM clusters which superpixels belong to the umbra area $P_{\mathrm{umbra}}$, the penumbra area $P_{\mathrm{penumbra}}$, and the quiet-photosphere area $P_{\mathrm{quiet}}$, where $P_{\mathrm{umbra}} \cup P_{\mathrm{penumbra}} \cup P_{\mathrm{quiet}} = P$. The result of this step is shown in Fig. 5a, where the quiet-photosphere, penumbra, and umbra areas are composed of the superpixels in $P_{\mathrm{quiet}}$, $P_{\mathrm{penumbra}}$, and $P_{\mathrm{umbra}}$, respectively.

Fig. 5. Intermediate results of the proposed method. (a) Result of the preliminary segmentation of the photosphere. (b) Spatial location feature. (c) Result of fine-tuning the preliminary segmentation.

2.4. Fine-tuning the preliminary segmentation

Figure 5a still shows some independently existing penumbras, which violate the requirement that a penumbra must exist near an umbra. In addition, some marginal penumbras are not recalled or are incorrectly recalled. Inspired by Gould et al. (2008), we computed spatial location features of the superpixels based on the results of Sect. 2.3. First, we computed the Euclidean distance between each superpixel centroid and the centroid of the nearest superpixel belonging to the umbra to obtain the spatial location feature. The spatial location feature is shown in Fig. 5b, where brighter areas represent areas farther from the umbra. After obtaining the spatial location feature, we adjusted the intensity of each superpixel according to Eq. (6), defined as:

$$ h(z_i^{\prime}) = z_i^{\prime} + (e^{\mathrm{dist}_i} - 1) \times c_2, $$(6)

where $\mathrm{dist}_i$ denotes the spatial location feature of the $i$th superpixel, and the parameter $c_2$ is introduced in Sect. 2.7. After adjusting each $z_i^{\prime}$ in $Z^{\prime}$ according to Eq. (6), we obtained a new superpixel intensity set $Z^{\prime\prime}=\{z_i^{\prime\prime}\}_{i=1}^{K^{\prime}}$. Modeling each $z_i^{\prime\prime}$ in $Z^{\prime\prime}$ with the GMM introduced in Sect. 2.3, we obtain the new superpixel sets $P_{\mathrm{umbra}}^{\prime}$, $P_{\mathrm{penumbra}}^{\prime}$, and $P_{\mathrm{quiet}}^{\prime}$ belonging to the umbra, penumbra, and quiet photosphere, respectively, where $P_{\mathrm{umbra}}^{\prime} \cup P_{\mathrm{penumbra}}^{\prime} \cup P_{\mathrm{quiet}}^{\prime} = P$. The result of this step is shown in Fig. 5c.
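The spatial adjustment of Eq. (6) can be sketched as follows (names are ours). Normalizing the nearest-umbra distances to [0, 1] is our assumption; the paper does not state the scaling of $\mathrm{dist}_i$.

```python
import numpy as np

def adjust_by_location(Zp, centroids, umbra_idx, c2):
    """Spatial-location adjustment of Eq. (6).

    Zp : superpixel intensities Z'; centroids : (K', 2) array of superpixel
    centroid coordinates; umbra_idx : indices of superpixels labeled umbra
    in the preliminary segmentation; c2 : see Sect. 2.7.
    """
    Zp = np.asarray(Zp, float)
    centroids = np.asarray(centroids, float)
    diff = centroids[:, None, :] - centroids[umbra_idx][None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)  # nearest umbra centroid
    dist = dist / (dist.max() + 1e-9)                    # assumed normalization
    return Zp + (np.exp(dist) - 1.0) * c2                # Eq. (6)
```

Superpixels far from any umbra are brightened so that the subsequent GMM pass reassigns isolated "penumbras" to the quiet photosphere.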

2.5. Extraction of the light bridge

Based on the concept that the light bridge is a structure inserted into the umbra, our method uses a morphological method and the umbra area recognized in the previous step to segment the light bridge. First, the superpixels in $P_{\mathrm{umbra}}^{\prime}$, $P_{\mathrm{penumbra}}^{\prime}$, and $P_{\mathrm{quiet}}^{\prime}$ are merged to obtain the umbra area $\mathrm{mask}_{\mathrm{umbra}}$, the penumbra area $\mathrm{mask}_{\mathrm{penumbra}}$, and the quiet-photosphere area $\mathrm{mask}_{\mathrm{quiet}}$, respectively. $\mathrm{mask}_x$ is a binary two-dimensional matrix of the same size as the image, where elements with a value of 1 mark the positions in the image belonging to the area $x$. In particular, $x$ is abbreviated to "light" and "quiet" for the light bridge and quiet photosphere, respectively. Then the umbra area $\mathrm{mask}_{\mathrm{umbra}}$ (Fig. 6a) is closed according to Eq. (7) to obtain the new umbra mask $\mathrm{mask}_{\mathrm{umbra}}^{\prime}$ with small gaps filled (Fig. 6b). Equation (7) is defined as:

$$ \mathrm{mask}_{\mathrm{umbra}}^{\prime} = (\mathrm{mask}_{\mathrm{umbra}} \oplus E) \ominus E, $$(7)

Fig. 6. Process of extracting the light bridge. (a) Umbra area segmented as described in Sect. 2.4. (b) Umbra area processed by the morphological closing operation. (c) Candidate light bridges before the area-threshold screening. (d) Candidate light bridges screened by the area threshold. (e) Location relation between light bridges and umbras. The white and gray areas represent the light bridge and the umbra, respectively. The green lines mark where light bridges intersect umbras, and the red lines mark where they do not. (f) Final segmentation result of the light-bridge area.

where $E$ denotes the morphological structure element, and $\oplus$ and $\ominus$ denote the dilation and erosion transforms, respectively. The candidate light bridges can be obtained from the difference between $\mathrm{mask}_{\mathrm{umbra}}^{\prime}$ and $\mathrm{mask}_{\mathrm{umbra}}$ (Fig. 6c). Figure 6c shows that many small areas are incorrectly recognized as light bridges. Therefore, we removed the incorrectly recognized light bridges by setting an area threshold. The results are shown in Fig. 6d. Finally, the light-bridge segmentation was finalized using the location relation between light bridges and umbras. When the intersection of a candidate light bridge and the umbra area is large enough (i.e., $\mathrm{len}_r/(\mathrm{len}_r + \mathrm{len}_g) < T_{\mathrm{len}}$, where $\mathrm{len}_r$ and $\mathrm{len}_g$ denote the pixel lengths of the red and green lines belonging to a light bridge, and $T_{\mathrm{len}}$ is the threshold), this light bridge is kept. In addition, when a candidate light bridge penetrates the umbra (as indicated by the red arrows in Fig. 6e), this light bridge is kept. The parameter settings for $E$, the area threshold, and $T_{\mathrm{len}}$ are detailed in Sect. 2.7. Through these steps, the final segmentation result of the light bridge $\mathrm{mask}_{\mathrm{light}}$ is obtained, as shown in Fig. 6f.
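A minimal sketch of the closing-and-difference step (names ours): a square structure element built from shifted masks stands in for the paper's circular $E$, and a crude total-area check replaces the per-component area threshold and the $T_{\mathrm{len}}$ shape test.

```python
import numpy as np

def dilate(mask, r):
    """Binary dilation with a (2r+1) x (2r+1) square element."""
    n, m = mask.shape
    p = np.pad(mask, r)
    out = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= p[r + dy:r + dy + n, r + dx:r + dx + m]
    return out

def erode(mask, r):
    # erosion via duality; pixels outside the image count as foreground
    return ~dilate(~mask, r)

def light_bridge_candidates(mask_umbra, r=3, min_area=5):
    """Eq. (7) followed by the mask difference: close the umbra mask,
    then keep what the closing added."""
    closed = erode(dilate(mask_umbra, r), r)   # morphological closing, Eq. (7)
    candidate = closed & ~mask_umbra
    if candidate.sum() < min_area:             # simplistic area screen
        candidate = np.zeros_like(candidate)
    return candidate
```

In practice the binary morphology routines of scipy.ndimage would be used with a circular structure element of the adaptive radius given in Sect. 2.7.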

Since the segmented light-bridge area overlaps with other areas, the segmentation results of the umbra, penumbra, and quiet photosphere can be updated according to the following equation:

$$ g(\mathrm{mask}) = \mathrm{mask} - (\mathrm{mask} \cap \mathrm{mask}_{\mathrm{light}}). $$(8)

The updated segmentation results of the umbra $\mathrm{mask}_{\mathrm{umbra}}^{\mathrm{g}}$, penumbra $\mathrm{mask}_{\mathrm{penumbra}}^{\mathrm{g}}$, and quiet photosphere $\mathrm{mask}_{\mathrm{quiet}}^{\mathrm{g}}$ are obtained by $g(\mathrm{mask}_{\mathrm{umbra}})$, $g(\mathrm{mask}_{\mathrm{penumbra}})$, and $g(\mathrm{mask}_{\mathrm{quiet}})$, respectively. The segmentation result of this step is shown in Fig. 7a.
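With boolean masks, Eq. (8) is a simple set difference (function name ours):

```python
import numpy as np

def remove_overlap(mask, mask_light):
    """Eq. (8) for boolean masks: drop the pixels claimed by the light
    bridge from another structure's mask."""
    return mask & ~mask_light
```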

Fig. 7. Results of Sects. 2.5 and 2.6. (a) Fine-structure segmentation result of Sect. 2.5. (b) Final fine-structure segmentation result after postprocessing.

2.6. Postprocessing

In this step, the results generated by the above steps are postprocessed to obtain the final segmentation results of the proposed algorithm. For images in which only the quiet photosphere lies in the field of view, the algorithm detects many incorrect fine structures because it always assumes three clusters. Intuitively, identifying images that contain only the quiet photosphere before processing would solve this problem. However, it is difficult to determine whether sunspots exist in an image before running our method: a threshold on the average image intensity (sunspots having lower intensity) is hard to set before the fine-structure segmentation has been performed. Therefore, we use an intensity ratio to remove the incorrectly detected parts in images that contain only the quiet photosphere after our fine-structure segmentation. The average intensity values $\mathrm{avg}_{\mathrm{quiet}}$ and $\mathrm{avg}_{\mathrm{umbra}}$ of the quiet-photosphere area (i.e., the area in $\mathrm{mask}_{\mathrm{quiet}}^{\mathrm{g}}$ with a value of 1) and the umbra area (i.e., the area in $\mathrm{mask}_{\mathrm{umbra}}^{\mathrm{g}}$ with a value of 1) are calculated. When $(\mathrm{avg}_{\mathrm{quiet}} - \mathrm{avg}_{\mathrm{umbra}})/\mathrm{avg}_{\mathrm{quiet}} < T_1$ (i.e., the difference between $\mathrm{avg}_{\mathrm{quiet}}$ and $\mathrm{avg}_{\mathrm{umbra}}$ is too small), the image is judged to contain only the quiet photosphere in the field of view. The parameter $T_1$ is detailed in Sect. 2.7. For an image with only the quiet photosphere, the segmentation result consists of the quiet-photosphere area alone (i.e., $\mathrm{mask}_{\mathrm{quiet}}^{\mathrm{g}}$ becomes an all-ones matrix, and the other masks become zero matrices); otherwise, the segmentation result of Sect. 2.5 is unchanged.

Some images only contain sunspots in the early stages of evolution, which only have umbra structures (i.e., pores). However, because our algorithm assumes three clusters, it may incorrectly segment part of the quiet photosphere as penumbra in this situation. To solve this problem, we used the same strategy as for images with only the quiet photosphere. The average intensity value $\mathrm{avg}_{\mathrm{penumbra}}$ of the penumbra area (i.e., the area in $\mathrm{mask}_{\mathrm{penumbra}}^{\mathrm{g}}$ with a value of 1) is calculated. When $(\mathrm{avg}_{\mathrm{quiet}} - \mathrm{avg}_{\mathrm{penumbra}})/\mathrm{avg}_{\mathrm{penumbra}} < T_2$ (i.e., the difference between $\mathrm{avg}_{\mathrm{quiet}}$ and $\mathrm{avg}_{\mathrm{penumbra}}$ is too small), we reassign the segmented penumbra area to the quiet-photosphere area (i.e., we update the quiet-photosphere area as $\mathrm{mask}_{\mathrm{penumbra}}^{\mathrm{g}} \cup \mathrm{mask}_{\mathrm{quiet}}^{\mathrm{g}}$, and $\mathrm{mask}_{\mathrm{penumbra}}^{\mathrm{g}}$ becomes a zero matrix); otherwise, the segmentation result of Sect. 2.5 is kept unchanged. The parameter $T_2$ is detailed in Sect. 2.7.
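The two intensity-ratio checks can be sketched as follows (our sketch; the dictionary layout and names are ours):

```python
import numpy as np

def postprocess_masks(img, masks, T1=0.4, T2=0.1):
    """Quiet-Sun and pore checks on the Sect. 2.5 result.

    img : the intensity image; masks : dict with boolean 'umbra',
    'penumbra', and 'quiet' masks. Mutates and returns masks.
    """
    avg = {k: (img[m].mean() if m.any() else np.nan) for k, m in masks.items()}
    if not masks['umbra'].any() or \
            (avg['quiet'] - avg['umbra']) / avg['quiet'] < T1:
        # quiet-Sun-only field of view: everything becomes quiet photosphere
        masks['quiet'] = np.ones_like(masks['quiet'])
        masks['umbra'][:] = False
        masks['penumbra'][:] = False
    elif masks['penumbra'].any() and \
            (avg['quiet'] - avg['penumbra']) / avg['penumbra'] < T2:
        # pore case: the spurious penumbra is merged into the quiet Sun
        masks['quiet'] |= masks['penumbra']
        masks['penumbra'][:] = False
    return masks
```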

Finally, we verified whether the penumbra areas segmented in the above steps were connected to umbra areas. When a penumbra area was not connected to any umbra area, it was reassigned to the quiet-photosphere area. In addition, when an isolated quiet-photosphere area was smaller than $2S \times 2S$, it was reassigned to the penumbra area. The final segmentation result after postprocessing is shown in Fig. 7b.

2.7. Parameter setting

In this section, we discuss the key parameters of the proposed method. All the experimental results shown in Sect. 3 used the uniform parameter values introduced here.

In the preprocessing step, we resized images to 0.35 times their original size. For a fair comparison of experiments (especially the running-time experiment in Sect. 3.2.2), the size of the input images of Yang et al. (2018) was the same as ours. The standard deviation and kernel size of the Gaussian filter were set to 1 and 5, respectively, to properly remove noise. To adapt to images of different sizes and spatial resolutions, the initial number of superpixels $K$ was set by the following equation:

$$ K = \frac{0.5 \times M \times 0.35 \times N \times 0.35 \times \mathrm{area}_{\mathrm{pixel}}}{0.5}, $$(9)

where the 0.5 in the denominator is the smallest area of pores reported by Tlatov et al. (2019) in millionths of the solar hemisphere (msh), $\mathrm{area}_{\mathrm{pixel}}$ is the area of a single pixel in the same unit, 0.35 is the resize ratio of the image, and the 0.5 in the numerator is an empirical parameter.

In Eq. (2), $c_1 = 1.5\exp(-b)$, where $b$ is the standard deviation of the superpixel intensities in $Z$. In Eq. (6), $c_2 = 20 \times (\mathrm{avg}_{\mathrm{penumbra}}^{\prime} / \mathrm{avg}_{\mathrm{quiet}}^{\prime})$, where $\mathrm{avg}_{\mathrm{penumbra}}^{\prime}$ and $\mathrm{avg}_{\mathrm{quiet}}^{\prime}$ are the average intensities of the penumbra and quiet-photosphere areas segmented in step (2), respectively.
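Our reading of these two scalings as code (function and argument names are ours):

```python
import numpy as np

def scaling_parameters(Z, avg_penumbra, avg_quiet):
    """c1 scales the texture adjustment of Eq. (2), c2 the spatial
    adjustment of Eq. (6).

    Z : superpixel mean intensities; avg_penumbra, avg_quiet : average
    intensities of the penumbra and quiet-Sun areas from step (2).
    """
    c1 = 1.5 * np.exp(-np.std(Z))          # b = std of superpixel intensities
    c2 = 20.0 * avg_penumbra / avg_quiet   # ratio from the step (2) result
    return c1, c2
```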

In the light-bridge extraction step, the morphological structure element $E$ and the area threshold should be smaller for data with lower spatial resolution and larger for data with higher spatial resolution to obtain satisfactory results. To address this, we used an adaptive structure-element size and area threshold instead of fixed values. The structure element $E$ is circular with radius $\mathrm{width}_{\mathrm{light}}/(2 \times \Delta)$, where $\Delta$ is the angular resolution of the image and $\mathrm{width}_{\mathrm{light}}$, the widest light bridge that can be segmented in this paper, was set to 2.5″. The area threshold was set to $(0.4 \times S \times \mathrm{width}_{\mathrm{light}})/\Delta$, where 0.4 is an empirical parameter. Since $T_{\mathrm{len}}$ is a threshold on a ratio of pixel lengths, it is not affected by the spatial resolution and was set to a fixed value of 0.15 to select the light bridges with the required shape. In the postprocessing step, the thresholds $T_1$ and $T_2$ were set to 0.4 and 0.1, respectively, to remove the incorrectly detected parts in images with only quiet photosphere or pores.
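The adaptive choices of this section can be collected in one helper. This is our sketch: argument names are ours, and the pixel area and angular resolution used in the example below are illustrative values, not the paper's.

```python
import math

def adaptive_parameters(M, N, area_pixel, delta, width_light=2.5, resize=0.35):
    """Adaptive settings of Sect. 2.7.

    M, N : original image size in pixels; area_pixel : area of one resized
    pixel in millionths of a solar hemisphere (msh); delta : angular
    resolution of the resized image in arcsec per pixel.
    """
    # Eq. (9): 0.5 msh in the denominator is the smallest pore area
    # (Tlatov et al. 2019); the 0.5 in the numerator is empirical.
    K = 0.5 * (M * resize) * (N * resize) * area_pixel / 0.5
    S = math.sqrt((M * resize) * (N * resize) / K)   # superpixel interval
    radius_E = width_light / (2.0 * delta)           # structure-element radius
    area_threshold = 0.4 * S * width_light / delta   # light-bridge area screen
    return int(K), radius_E, area_threshold
```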

3. Experimental results

In this section, we present the experimental results of the proposed algorithm. We conducted an ablation experiment, a comparison experiment, and a robustness experiment to demonstrate the performance of our algorithm. The three experiments are presented in Sects. 3.1–3.3. All experiments were conducted on the same laptop with an Intel(R) Core(TM) i5-8300H CPU and 16 GB of RAM.

3.1. Ablation experiment

The effectiveness of adding the texture feature (introduced in Sect. 2.3) and the spatial location feature (introduced in Sect. 2.4) to the proposed algorithm is verified in Sects. 3.1.1 and 3.1.2, respectively. Two TiO-band images observed on 2016 January 28 at 19:32:06 UT and on 2019 May 8 at 16:46:06 UT, downloaded from BBSO, were used in the ablation experiment.

3.1.1. Texture feature

In Fig. 8, we use two images to demonstrate the results of Sect. 2.3 with and without the texture feature (i.e., with or without Eq. (2)). The second column of Figs. 8a and b shows that many penumbra areas and some umbra areas are not recalled without the texture feature. After applying the texture feature, the incorrectly detected photosphere areas are adjusted to the correct penumbra areas, and the incorrectly detected penumbra areas are adjusted to the correct umbra areas (e.g., as indicated by the green arrows). These better results benefit from Eq. (2), which increases the difference in intensity between superpixels belonging to different fine structures, allowing the GMM to better distinguish between them.

Fig. 8.

Results of step 2 with and without the texture feature. (a) Processing results of data observed on 2016 January 28 at 19:32:06 UT with and without the texture feature. (b) Processing results of data observed on 2019 May 8 at 16:46:06 UT with and without the texture feature. In panels a and b, the first and second rows show the mask and contour of the results, and the first to third columns show the input image, results without the texture feature, and results with the texture feature, respectively. The green arrows indicate the additional umbra areas found by the algorithm with the texture feature.

3.1.2. Spatial location feature

In this section, the same data as we used in Sect. 3.1.1 are used to demonstrate the results of Sect. 2.4 (i.e., step 3) with and without the spatial location feature, respectively. In Figs. 9a and b, the second column shows the results of step 3 without any operation introduced in Sect. 2.4 (i.e., the same results as in Sect. 2.3 with the texture feature), and the third column shows the results of step 3 processed by all the operations introduced in Sect. 2.4. Fig. 9 shows that many incorrectly detected penumbra areas are correctly segmented into the quiet-photosphere areas after applying the spatial location feature. This is because the spatial location feature increases the intensity of the superpixels far from the umbra area, reducing the probability that superpixels belonging to the quiet photosphere are incorrectly classified as the penumbra area. It should additionally be noted that while incorrectly detected penumbra areas that are not connected to the umbra areas can be removed by the postprocessing step, removing the incorrectly detected penumbra area connected to the umbra area (e.g., as indicated by the green arrow in Fig. 9b) still requires the method introduced in Sect. 2.4.

Fig. 9.

Results of step 3 with and without the spatial location feature. (a) Processing results of data observed on 2016 January 28 at 19:32:06 UT with and without the spatial location feature. (b) Processing results of data observed on 2019 May 8 at 16:46:06 UT with and without the spatial location feature. In panels a and b, the first and second rows show the mask and contour of results, and the first to third columns show the input image, results without the spatial location feature, and results with the spatial location feature, respectively. The green arrow in panel b indicates an example of an incorrectly detected penumbra area that is removed by applying the spatial location feature.

3.2. Comparison experiment

In this section, the method we propose is compared with the segmentation method for high-resolution sunspot fine structure of Yang et al. (2018). We compare the results of the fine-structure segmentation of the two methods in Sect. 3.2.1, and the results of the running-time comparison are shown in Sect. 3.2.2.

3.2.1. Segmentation performance

We tested the two algorithms on six TiO-band images acquired by BBSO. The sunspots in these six images have different shapes, posing different challenges for fine-structure segmentation. The segmentation results are shown in Fig. 10. For easier inspection, we draw the contour lines of the segmentation results on the original images.
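A contour overlay of this kind can be produced with a simple mask-boundary extraction. The following sketch is our own illustration, not the authors' plotting code: it marks the mask pixels whose 4-neighbourhood is not fully inside the mask, yielding a one-pixel-wide contour that can be painted onto the original image.

```python
import numpy as np
from scipy.ndimage import binary_erosion


def mask_contour(mask):
    """One-pixel-wide contour of a boolean segmentation mask:
    the mask pixels removed by a single 4-connected erosion."""
    mask = np.asarray(mask, dtype=bool)
    return mask & ~binary_erosion(mask)
```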

Fig. 10.

Results of the fine-structure segmentation of our method and of that of Yang et al. (2018). In panels a–f, the first column is the input image. Columns 2–3 show the segmentation results generated by our algorithm. Columns 4–5 are the results generated by the algorithm of Yang et al. (2018). The purple arrows in the fourth column of panels a and d indicate the representative examples of the penumbra areas that are not recalled by Yang et al. (2018). The green arrows in the fourth column of panels a, b, and f indicate the representative examples of the incorrectly segmented umbra areas by Yang et al. (2018). The orange arrows in the fourth column of panel f indicate the representative examples of pores that Yang et al. (2018) could recall, but our method could not.

Figure 10 shows that our algorithm segments fine structures better than the algorithm of Yang et al. (2018). Specifically, our method segments the light-bridge areas more accurately. As shown in Figs. 10a–f, the results of Yang et al. (2018) contain many incorrectly segmented light-bridge areas at the edges of the umbra areas, whereas our method generates correct segmentation results for the light-bridge areas. The reason is that we use the ratio of lenr and leng to select suitable light-bridge areas, and this strategy effectively removes the incorrectly detected light bridges at the edges of the umbra areas. Our method also performs better for the penumbra and umbra areas. As indicated by the purple arrows in Figs. 10a and d, many penumbra areas are not recalled by Yang et al. (2018). Our analysis showed that in the data used in Figs. 10a and d, the sunspots occupy the majority of the field of view, which prevents the level-set algorithm used by Yang et al. (2018) from converging easily, so that a large number of penumbra areas is not recalled. This issue does not affect our method, which is based on superpixels. As indicated by the green arrows in Figs. 10a, b, and f, some umbra areas are incorrectly segmented by Yang et al. (2018). Especially in Fig. 10b, the green arrows indicate incorrectly detected umbra areas that are inserted into the detected penumbra areas like fibers. This is due to the binarization method used by Yang et al. (2018) to decide whether a pixel belongs to the umbra: when a pixel with a lower intensity belongs to the penumbra, it is incorrectly classified as umbra. Our method uses the intensity of superpixels fused with other features instead of the intensity of a single pixel, and therefore performs better in this case.

As mentioned above, our method outperforms that of Yang et al. (2018) in most situations. However, when pores are distributed around the main sunspots in the field of view, the segmentation result of Yang et al. (2018) is better. As indicated by the orange arrows in Fig. 10f, these pores are recalled by the method of Yang et al. (2018), but not by ours. This is because the intensity of the superpixels belonging to these pores is similar to that of the penumbra area (the penumbra intensity in the image shown in Fig. 10f is low), so these superpixels are first classified as penumbra. Since our method does not allow isolated penumbrae, they are then reclassified as quiet photosphere.

3.2.2. Running time

In this section, we compare the running time of our method with that of the algorithm proposed by Yang et al. (2018). The data are the same as in Sect. 3.2.1, and the results are shown in Table 1.

Table 1.

Running time of our method and that of Yang et al. (2018).

Table 1 shows that the average running time of our method (1.75 s) is shorter than that of Yang et al. (2018; 7.52 s). In addition, the stability of our method is significantly better (i.e., it has a smaller standard deviation): its running times on the entire dataset remain close to the mean value of 1.75 s and do not vary significantly. The method of Yang et al. (2018) shows low running times on some images (e.g., images 3 and 5), even lower than ours on image 5. However, its stability is not satisfactory: it shows long running times on some images (e.g., images 1 and 4) and even reaches 42.25 s on image 1. In summary, our method outperforms that of Yang et al. (2018) in both average running time and stability. The reason is that the method of Yang et al. (2018) is based on the level-set method, which has difficulty converging when the sunspot occupies most of the image or the shape of the sunspot is complex, resulting in the long running times (e.g., images 1, 4, and 6).

3.3. Robustness experiment

In this section, the robustness of our algorithm is verified. We tested the algorithm on data acquired by the NVST and EAST. The segmentation results of the fine structures are shown in Fig. 11. The contours of the segmentation results are drawn on the original images for easy viewing. The first columns of Figs. 11a–c show the original data acquired by NVST with angular resolutions of 0.0345″, 0.0369″, and 0.0296″, respectively. The first column of Fig. 11d shows the data acquired by EAST with an angular resolution of 0.12″. Our algorithm still shows satisfactory segmentation performance on data acquired by these other telescopes. In Fig. 11c, many umbra areas without penumbra areas (i.e., pores) are still correctly segmented. This benefits from the postprocessing step, which removes the incorrectly segmented penumbra areas. The spatial resolution of the image used in Fig. 11d is significantly lower than that of the other data used in this paper, but the fine-structure segmentation result is still satisfactory. The reason is that the values of the parameters that are strongly affected by the spatial resolution are set adaptively. In particular, if the size of E were kept the same as for the higher-resolution data, many incorrectly thick light bridges would be segmented.

Fig. 11.

Segmentation results for data from different solar telescopes. Panels a, b, and c show data observed by NVST and the fine-structure segmentation results of these data. Panel d shows the data observed by EAST and the fine-structure segmentation result.

In addition, we tested our method on data whose field of view contains only the quiet Sun. This experiment used two images obtained by GST at different observation times. As shown in Fig. 12, nothing is found except for the quiet photosphere. This benefits from the postprocessing step, which removes incorrectly detected structures that do not satisfy the intensity-ratio requirement.

Fig. 12.

Segmentation results for the quiet Sun. The top and bottom panels show the segmentation results of two images observed at different times.

4. Conclusions

With the development of solar observation technology, progressively more solar images with increasingly higher resolution will be obtained. In this paper, we proposed an algorithm based on superpixel segmentation to segment the fine structures of sunspots. The method extracts the intensity, texture, and spatial location information of superpixels as features, and then uses a GMM and morphological methods to segment the umbra, penumbra, and light bridge in high-resolution sunspot images. The method involves various parameters, some of which are sensitive to the data of different solar telescopes or facilities. Therefore, we designed strategies to adaptively adjust their values so that the method maintains satisfactory results in different situations. Experiments show that our method can accurately and quickly segment the fine structures of sunspots in data observed by GST. Moreover, the method is robust and shows satisfactory fine-structure segmentation results for data obtained from different solar telescopes. Therefore, our method can automatically process a huge amount of observational data and generate reproducible segmentation results of the fine structures in sunspots, which can support research in solar physics and space weather.

Our method does not specifically treat umbral dots or limb darkening. Umbral dots do not significantly affect the performance of our method because the Gaussian filter removes the influence of small structures to a certain extent. In addition, our method is based on superpixel segmentation, so the intensity of the superpixels is not significantly influenced by umbral dots. The method does not remove limb darkening because we used high-resolution images that show no obvious limb darkening. If obvious limb darkening is present in the images, it will degrade the performance of the proposed method because it changes the distribution of the intensity feature. In this situation, a limb-darkening removal method should be added to the preprocessing step.

Our method has only a limited ability to segment pores and has difficulty reaching pixel-level segmentation. In a subsequent study, we will extract more features and modify the superpixel segmentation algorithm to improve the performance of the fine-structure segmentation. In addition, our method does not consider isolated penumbrae and removes them all in the postprocessing step. We will retain isolated penumbrae in a future study.


1. The intensity of a superpixel is the average intensity of all pixels in that superpixel.
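The definition in footnote 1 amounts to a per-label mean over the SLIC label map; a short sketch (the function name is ours):

```python
import numpy as np


def superpixel_intensities(image, labels):
    """Footnote 1: the intensity of a superpixel is the average
    intensity of all pixels carrying that superpixel's label."""
    image = np.asarray(image, dtype=float)
    labels = np.asarray(labels)
    return {int(lab): image[labels == lab].mean()
            for lab in np.unique(labels)}
```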

Acknowledgments

This work was funded by the National Natural Science Foundation of China (11727805), the Frontier Research Fund of Institute of Optics and Electronics, Chinese Academy of Sciences (C21K002), and the Youth Innovation Promotion Association, Chinese Academy of Sciences (No. 2022386). We gratefully acknowledge the use of data from the Goode Solar Telescope (GST) of the Big Bear Solar Observatory (BBSO). BBSO operation is supported by NJIT and US NSF AGS-1821294 grant. GST operation is partly supported by the Korea Astronomy and Space Science Institute and the Seoul National University.

References

  1. Achanta, R., Shaji, A., Smith, K., et al. 2010, SLIC Superpixels, Tech. rep.
  2. Achanta, R., Shaji, A., Smith, K., et al. 2012, IEEE Trans. Pattern Anal. Mach. Intell., 34, 2274
  3. Atac, T. 1987, Astrophys. Space Sci., 129, 203
  4. Cao, W., Gorceix, N., Coulter, R., et al. 2010, Astron. Nachr., 331, 636
  5. Colak, T., & Qahwaji, R. 2008, Sol. Phys., 248, 277
  6. Curto, J., Blanca, M., & Martínez, E. 2008, Sol. Phys., 250, 411
  7. Djafer, D., Irbah, A., & Meftah, M. 2012, Sol. Phys., 281, 863
  8. Goel, S., & Mathew, S. K. 2014, Sol. Phys., 289, 1413
  9. Gould, S., Rodgers, J., Cohen, D., Elidan, G., & Koller, D. 2008, Int. J. Comput. Vision, 80, 300
  10. Guo, Y., Zhong, L., Min, L., et al. 2022, Opto-Electron. Adv., 200082
  11. Hartigan, J. A., & Wong, M. A. 1979, J. R. Statist. Soc. Ser. C (Appl. Statist.), 28, 100
  12. Liu, Z., Xu, J., Gu, B.-Z., et al. 2014, RAA, 14, 705
  13. Otsu, N. 1979, IEEE Trans. Syst. Man Cybernet., 9, 62
  14. Rao, C.-H., & Zhong, L. 2022, RAA
  15. Rao, C., Zhu, L., Rao, X., et al. 2016, ApJ, 833, 210
  16. Rao, C., Zhang, L., Kong, L., et al. 2018, Sci. China Phys. Mech. Astron., 61
  17. Rao, C., Gu, N., Rao, X., et al. 2020, First Light of the 1.8-m Solar Telescope-CLST
  18. Rimmele, T. R., Warner, M., Keil, S. L., et al. 2020, Sol. Phys., 295, 1
  19. Tlatov, A., Riehokainen, A., & Tlatova, K. 2019, Sol. Phys., 294, 1
  20. Vaquero, J. M. 2007, AdvSpR, 40, 929
  21. Vazquez, M. 1973, Sol. Phys., 31, 377
  22. Watson, F., Fletcher, L., Dalla, S., & Marshall, S. 2009, Sol. Phys., 260, 5
  23. Wiehr, E., Koch, A., Knölker, M., Küveler, G., & Stellmacher, G. 1984, A&A, 140, 352
  24. Yang, M., Tian, Y., & Rao, C. 2018, Sol. Phys., 293, 1
  25. Zhang, K., Zhang, L., Song, H., & Zhou, W. 2010, Image Vision Comput., 28, 668
  26. Zhao, C., Lin, G., Deng, Y., & Yang, X. 2016, PASA, 33
  27. Zharkov, S., Zharkova, V., Ipson, S., & Benkhalil, A. 2005, EURASIP J. Adv. Signal Proces., 2005, 1
  28. Zhong, L., Zhang, L., Shi, Z., et al. 2020, A&A, 637, A99


All Figures

Fig. 1. Main steps of the proposed method.

Fig. 2. Sample of GST TiO-band data observed on 2016 January 28 at 19:32:06 UT. (a) The original image downloaded from BBSO. (b) The image with invalid pixels removed.

Fig. 3. Example of the results of solar-image superpixel segmentation by SLIC using the image in Fig. 2b. The yellow lines represent the boundaries of different superpixels.

Fig. 4. Example of the mean and standard deviation (std) of the intensities of superpixels in different fine structures. The red arrows point to the mean and standard deviation of the intensities of superpixels belonging to different fine structures (e.g., the top red arrow points to the mean and standard deviation of the intensity of a superpixel belonging to the quiet photosphere).

Fig. 5. Intermediate results of the method we propose. (a) Result of the preliminary segmentation of the photosphere. (b) Spatial location feature. (c) Result of fine-tuning the preliminary segmentation.

Fig. 6. Process of extracting the light bridge. (a) Umbra area segmented as described in Sect. 2.4. (b) Umbra area processed by the morphological closing operation. (c) Candidate light bridges before threshold screening of the area. (d) Candidate light bridges screened by the area threshold. (e) Location relation between light bridges and umbrae. The white and gray areas represent the light bridge and the umbra, respectively. The green lines represent the areas in which light bridges intersect umbrae, and the red lines represent the areas in which they do not. (f) Final segmentation results of the light-bridge area.

Fig. 7. Results of Sects. 2.5 and 2.6. (a) Fine-structure segmentation result of Sect. 2.5. (b) Final result of the fine-structure segmentation after postprocessing.
