A&A
Volume 586, February 2016
Article Number: A16
Number of page(s): 9
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201526960
Published online: 20 January 2016
Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure
^{1} Dipartimento di Matematica e Informatica, Università di Ferrara, via Saragat 1, 44122 Ferrara, Italy
e-mail: alessandro.benfenati@unife.it
^{2} Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS), Università di Genova, via Dodecaneso 35, 16145 Genova, Italy
^{3} Laboratoire Lagrange, Université Côte d’Azur, Université Nice Sophia-Antipolis/Observatoire de la Côte d’Azur/CNRS, Parc Valrose, 06100 Nice, France
Received: 14 July 2015
Accepted: 5 October 2015
Aims. High-dynamic range images of astrophysical objects are difficult to restore because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources.
Methods. The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to keep under control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise.
Results. The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources’ positions are exactly known, this scheme provides us with very satisfactory results. In the case of inexact knowledge of the sources’ positions, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when the information about the binary star’s position concerns a connected region instead of isolated pixels.
Key words: methods: numerical / methods: data analysis / techniques: image processing / techniques: high angular resolution
© ESO, 2016
1. Introduction
High-dynamic range (HDR) astrophysical objects, often consisting of both very bright central stars and faint diffuse surrounding structures, are first of all difficult to detect because of the various detector noises and because of atmospheric turbulence, which, if not sufficiently corrected, does not permit the necessary long exposures.
Recent, very efficient adaptive optics (AO) systems, often called extreme AO (XAO) systems, such as the SAXO system (Petit et al. 2012) of the instrument SPHERE at the VLT (Beuzit et al. 2006), make it possible to reach the high sensitivity needed. Additional high-performance coronagraphs can allow even better detections of these faint circumstellar structures. This, however, comes at the price of a very high space variance of the off-axis point-spread function (PSF), which hampers the deconvolution of the final images.
In the present work, we focus on a new deconvolution method that permits us to benefit from the high sensitivity allowed by the XAO system and to treat the two different components of the image in a suitable manner (i.e. the very bright central stars, on the one hand, and the faint circumstellar environment, on the other hand). To make the case study realistic, we consider the central bright component to be a slightly unresolved binary star. Moreover, we compare the results of our method with those of the previously proposed HDR method of La Camera et al. (2012, 2014).
The paper is organized as follows: in Sect. 2 we detail our methods, while Sect. 3 describes the end-to-end simulations performed in order to produce realistic data. The obtained results are then presented in Sect. 4, and conclusions are summarized in Sect. 5.
2. Methods
In this section we describe the mathematical model of the image acquisition system and the variational formulation considered for image reconstruction. First, we present our proposed method, the inexact Bregman procedure, with a brief description of the scaled gradient projection (SGP) method, employed to solve the inner subproblems and tailored to the case of HDR images. Then, we briefly present the multi-component Richardson-Lucy (MCRL) method, described in La Camera et al. (2012, 2014). The core of this method is based on the well-known Richardson-Lucy (RL) algorithm (see Richardson 1972 and Lucy 1974), and it is similar to the new method we present. Therefore, we use the MCRL method as a term of comparison for the reconstructed images.
2.1. Generalities
We assume that the acquired image g ∈ R^{m} is the realization of a multi-valued random variable with expected value E[g] = Hx^{∗} + b, where x^{∗} ∈ R^{n} is a vector representing the unknown object and H ∈ ℳ_{m × n}(R) denotes the imaging matrix coming from the discretization of the (supposedly known, actually estimated) PSF K and satisfying

H^{t}1 = 1, (1)

i.e. each column of H sums to one, where 1 denotes the constant vector with 1 on all entries (1 ∈ R^{n} on the right-hand side), and b is the (almost everywhere) constant background emission due to the sky emission and to the detector. The noise perturbing the acquired image is a mixture of Gaussian and Poisson noise, the former due to the readout noise (RON) and the latter to the counting process. One can consider the RON as a Poisson process (Snyder et al. 1995), and hence the sole noise affecting the image is Poisson noise.
In image reconstruction problems, the aim is to obtain an estimate of the unknown object x^{∗}: adopting a Bayesian approach, this estimate can be computed by solving the following variational problem:

min_{x ∈ Ω} f_{0}(Hx + b;g) + β f_{1}(x), (2)

where Ω represents the constraints on the solution, such as non-negativity or flux conservation. The function f_{0} is called the fit-to-data functional: in our case, in the presence of Poisson noise, a suitable choice is the generalized Kullback-Leibler functional, known as the Csiszár I-divergence (Csiszár 1991),

f_{0}(Hx + b;g) = KL(Hx + b;g) = Σ_{i = 1}^{m} [ g_{i} log ( g_{i} / (Hx + b)_{i} ) + (Hx + b)_{i} − g_{i} ], (3)

with the convention 0 log(0) = 0, and where the ratio is defined componentwise. Since in real applications b > 0, the logarithm is well defined.
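As a concrete illustration, the generalized Kullback-Leibler fit-to-data term of Eq. (3) can be sketched as follows (a minimal NumPy version in which a dense matrix H stands in for the actual convolution operator; function and variable names are our own):

```python
import numpy as np

def generalized_kl(x, g, H, b):
    """Generalized Kullback-Leibler (Csiszar I-divergence) fit-to-data term.

    Sketch only: H is a dense matrix here, whereas in practice the
    convolution with the PSF would be applied via FFTs.  Follows the
    convention 0*log(0) = 0; b > 0 keeps the logarithm well defined.
    """
    y = H @ x + b                                        # model prediction, strictly positive
    # g*log(g/y) with the 0*log(0) = 0 convention on zero-count pixels
    terms = np.where(g > 0, g * np.log(np.where(g > 0, g / y, 1.0)), 0.0)
    return np.sum(terms + y - g)
```

By construction, the functional is non-negative and vanishes exactly when the model prediction Hx + b matches the data g.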
The functional f_{1} is called the regularization functional: it is introduced to preserve some characteristics of the solution, such as sharp edges or smooth regions. The parameter β is named the regularization parameter and it measures the trade-off between f_{0} and f_{1}. In practical applications, one needs the value that gives the best reconstruction, but this value is very difficult to estimate (see Bertero et al. 2010; Bardsley & Goldes 2009, for the Poisson case).
The problem we are facing consists in the reconstruction of two maps of a given region of an astrophysical object: the former consists of point sources, such as stars, of very high intensity, the latter of smooth structures surrounding these sources. The standard approach, i.e. simply solving Eq. (2), fails since the presence of the point sources destroys all the information about the smooth structures. Hence, the main idea is to consider the object x as the sum of two components, namely x = x_{P} + x_{E}, where x_{P} represents the point sources and x_{E} the extended source. This approach was proposed for the first time in De Mol & Defrise (2004), assuming that the positions of the point sources are known. Denoting by p_{j} the position of the jth source, we can write

x_{P} = Σ_{j = 1}^{q} c_{j} δ(p_{j}), (4)

where q is the total number of sources, c_{j} is the jth unknown intensity, and δ(p_{j}) is the delta function centred at p_{j}. Thus, x_{P} is a vector with zero entries except in the q positions corresponding to the known locations of the sources. Then, instead of computing f_{1} on the whole object x in Eq. (2), we only regularize the extended source, since the structure induced on x_{P} already works as a regularization. This requires a slight modification in the computation of f_{0}. We introduce the vector c = (c_{1},c_{2},...,c_{q})^{t} containing the intensities of the sources, and we define the matrix H̄ ∈ ℳ_{m × q}(R), H̄ = (h_{p_{1}},...,h_{p_{q}}), with h_{j} denoting the jth column of H. We are hence led to solve

min_{x_{E} ∈ Ω, c ≥ 0} KL(Hx_{E} + H̄c + b;g) + β f_{1}(x_{E}). (5)

It may happen that the restored image has a loss in contrast, even using the optimal value for β. To overcome this difficulty, we propose the use of the inexact Bregman procedure, which permits the use of an overestimation of the regularization parameter and, at the same time, allows us to obtain a contrast enhancement.
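The two-component decomposition and the forward model appearing in Eq. (5) can be sketched as follows (an illustrative NumPy version on a flattened pixel grid; all names are our own):

```python
import numpy as np

def point_component(n, positions, intensities):
    """Build the point-source component x_P: a vector that is zero
    everywhere except at the q known source positions (a sketch of
    Eq. (4) on a flattened n-pixel grid)."""
    x_p = np.zeros(n)
    x_p[positions] = intensities            # intensity c_j at pixel p_j
    return x_p

def forward_model(H, x_e, positions, c, b):
    """Expected image H x_E + H-bar c + b, where H-bar collects the
    columns h_j of H at the known source positions."""
    H_bar = H[:, positions]                 # m x q matrix of selected PSF columns
    return H @ x_e + H_bar @ c + b
```

Since H̄c = Hx_{P} by construction, this forward model coincides with H(x_{E} + x_{P}) + b, i.e. with the model of Sect. 2.1 applied to the full object.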
2.2. Inexact Bregman procedure for HDR images
The inexact Bregman procedure relies on the Bregman distance (Bregman 1967) and consists in solving Eq. (2) by an iterative procedure in which f_{1} is substituted with its Bregman distance at the current iterate. In computing this distance, we use the ε-subgradients, obtaining

D_{f_{1}}^{ξ}(x,y) = f_{1}(x) − f_{1}(y) − ⟨ξ, x − y⟩, (6)

where ξ ∈ ∂_{ε}f_{1}(y); ∂_{ε}f_{1}(y) denotes the ε-subdifferential of f_{1} at y (see Rockafellar 1970 for all the technical details), so that D_{f_{1}}^{ξ}(x,y) ≥ −ε. The whole procedure is described as follows:

set x^{0} s.t. 0 ≡ ξ^{0} ∈ ∂f_{1}(x^{0}), ε_{0} = 0, β > 0; choose positive sequences { μ_{k} } and { ν_{k} } s.t. Σ_{k} μ_{k} < ∞ and Σ_{k} ν_{k} < ∞; then, for k = 0,1,2,..., compute x^{k + 1} as an approximate solution (within the tolerance ν_{k}) of

min_{x ∈ Ω} KL(Hx + b;g) + β D_{f_{1}}^{ξ^{k}}(x,x^{k}), (7)

and update ξ^{k + 1} ∈ ∂_{ε_{k + 1}}f_{1}(x^{k + 1}) from the optimality conditions of Eq. (7).
As already stated, the main feature of this procedure consists in its regularization behaviour; moreover, it allows the use of an overestimation of the parameter β (see Sect. 4 for more details). Furthermore, this overestimation induces a contrast enhancement in the restored image.
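For a differentiable regularizer, one outer cycle of the (exact) Bregman procedure can be sketched on a toy problem. Here a least-squares data term replaces the Kullback-Leibler divergence so that each subproblem has a closed-form solution; with f_1(x) = 0.5‖x‖², the Bregman distance at x^k reduces to 0.5‖x − x^k‖²:

```python
import numpy as np

def bregman_iterations(A, g, beta, n_outer=60):
    """Sketch of the exact Bregman procedure for the differentiable
    regularizer f_1(x) = 0.5*||x||^2, with a toy least-squares data
    term 0.5*||Ax - g||^2 standing in for the KL divergence.

    Each subproblem min_x 0.5*||Ax - g||^2 + beta*0.5*||x - x^k||^2
    is a ridge system solved in closed form; the subgradient update
    xi^{k+1} = grad f_1(x^{k+1}) = x^{k+1} is implicit in the recursion.
    """
    n = A.shape[1]
    x = np.zeros(n)                  # x^0 such that 0 is in the subdifferential of f_1
    AtA, Atg = A.T @ A, A.T @ g
    for _ in range(n_outer):
        x = np.linalg.solve(AtA + beta * np.eye(n), Atg + beta * x)
    return x
```

Even with a strongly overestimated β, the outer iterates progressively approach the data fit, which is the iterative-refinement behaviour exploited in the paper.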
2.3. Scaled gradient projection
The choice of the iterative solver for the subproblems of Eq. (7) can be made among a large number of methods. We make use of the SGP method, presented in Bonettini et al. (2009). The aim of the SGP method is to solve the problem

min_{x ∈ Ω} f(x), (8)

where f is a convex, proper, and differentiable function, and Ω is a convex constraint set. Provided x^{0}, y^{0}, its iteration can be generally described as

y^{i} = P_{Ω}( x^{i} − α_{i} S_{i} ∇f(x^{i}) ), (9)
x^{i + 1} = x^{i} + θ_{i} ( y^{i} − x^{i} ), (10)

where P_{Ω} denotes the projection operator on the set Ω, θ_{i} is the line-search parameter, α_{i} is the steplength, and S_{i} denotes a diagonal, positive definite scaling matrix. The parameter θ_{i} is determined by a standard monotone line-search strategy, while the α_{i} and S_{i} updating rules are devised in order to accelerate the convergence rate. In the SGP version used within the numerical experiments, at each ith step the steplength α_{i} is determined by using the Barzilai-Borwein rules (see Barzilai & Borwein 1988; Frassoldati et al. 2008). The S_{i} matrix plays a crucial role in achieving acceleration; in order to obtain this acceleration and, at the same time, avoid an increasing computational cost, we define the diagonal entries s_{l} of S_{i} by employing a splitting of the gradient of the objective function f (Lanteri et al. 2002),

s_{l} = min ( s_{upp}, max ( s_{low}, x_{l}^{i} / V_{l}(x^{i}) ) ), (11)

where 0 < s_{low} < s_{upp} are positive constants, x_{l}^{i} denotes the lth component of the ith iterate, and the vector V(x^{i}) is such that

∇f(x^{i}) = V(x^{i}) − U(x^{i}), with V(x^{i}) > 0, U(x^{i}) ≥ 0. (12)

It is obvious that this kind of splitting always exists and is not unique.
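A single SGP step can be sketched as follows (illustrative only: the Barzilai-Borwein steplength and the monotone line search are replaced by fixed values of α and θ, and all names are our own):

```python
import numpy as np

def sgp_step(x, grad_f, project, V, alpha=1.0, theta=1.0,
             s_low=1e-6, s_upp=1e6):
    """One scaled-gradient-projection step (sketch).

    The diagonal scaling follows Eq. (11): s_l = x_l / V_l(x), clipped
    to [s_low, s_upp], where V is the positive part of the gradient
    splitting grad f = V - U.  alpha and theta stand in for the
    Barzilai-Borwein steplength and the line-search parameter.
    """
    s = np.clip(x / V(x), s_low, s_upp)       # diagonal of the scaling matrix S_i
    y = project(x - alpha * s * grad_f(x))    # scaled projected gradient point
    return x + theta * (y - x)
```

Iterating this step with a non-negativity projection converges to the constrained minimizer on simple quadratic test problems.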
2.4. Regularization functionals
We now provide more details about the implementation of the inexact Bregman procedure, with SGP as inner solver, for the regularization functionals we consider. We define f_{β}(·) = KL(H· + b;g) + β f_{1}(·). The choices made are the classical Tikhonov (T0) regularization

f_{1}(x) = (1/2) ∥ x ∥^{2} (13)

and the hypersurface (HS) potential

f_{1}(x) = Σ_{i = 1}^{n} ( ∥ A_{i}x ∥^{2} + δ^{2} )^{1/2}, (14)

where A_{i} ∈ ℳ_{2 × n}(R) is the discrete version of the gradient at the ith pixel and δ ≥ 0 is a thresholding parameter. We point out that for δ > 0 the HS potential is differentiable, while for δ = 0 we obtain the well-known total variation functional. In the following we use the HS potential.
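The HS potential of Eq. (14) can be computed on a 2-D image as follows (a sketch using forward finite differences; the paper does not specify the discretization at the boundary, so replicated borders are assumed here):

```python
import numpy as np

def hs_potential(x, delta):
    """Hypersurface (HS) potential of a 2-D image, cf. Eq. (14): the sum
    over pixels of sqrt(||gradient||^2 + delta^2), computed with forward
    finite differences and replicated (zero-difference) borders.
    delta = 0 recovers the total-variation functional."""
    dx = np.diff(x, axis=0, append=x[-1:, :])   # vertical differences
    dy = np.diff(x, axis=1, append=x[:, -1:])   # horizontal differences
    return np.sum(np.sqrt(dx**2 + dy**2 + delta**2))
```

Note that for a constant image the potential equals n·δ, consistent with Eq. (14) when every discrete gradient vanishes.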
The construction of the scaling matrix S_{i} in the SGP method requires the splitting of the gradient of the objective functional as ∇f_{β} = V_{β} − U_{β} and, since

∇f_{β}(x) = ∇KL(Hx + b;g) + β ∇f_{1}(x), (15)

we have

V_{β}(x) = V_{f_{0}}(x) + β V_{f_{1}}(x), U_{β}(x) = U_{f_{0}}(x) + β U_{f_{1}}(x), (16)

where, for the KL functional,

V_{f_{0}}(x) = H^{t}1, U_{f_{0}}(x) = H^{t} ( g / (Hx + b) ), (17)

with the ratio defined componentwise. For the Tikhonov case, we simply have U_{f_{1}} = 0 and V_{f_{1}}(x) = x, while for the HS case we follow the formulae presented in Zanella et al. (2009).
In both the Tikhonov case and the HS case, the regularization functional is differentiable, thus we can write ξ^{k} = ∇f_{1}(x^{k}) and, moreover, ε_{k} = 0 for any k.
Remark 1. From a numerical point of view, the setup of the sequence ν_{k} can be easily obtained from

ν_{k} = d / k^{χ},

where χ and d are two positive constants. With this approach, at the first iteration the tolerance is equal to d, while in the successive iterations it becomes more and more severe. A practical rule to set the value of d is to solve the very first subproblem with a mild tolerance and a standard stopping criterion, and then set d = ∥ η_{1} ∥. Hence, the only setting required is the value of χ.
The convergence property of this tailored version of the Bregman procedure, for a general class of regularization functionals, including the Tikhonov case and the HS case, can be found in Benfenati & Ruggiero (2015).
2.5. Multi-component Richardson-Lucy
As mentioned at the beginning of this section, we briefly describe here the so-called MCRL method. The approach is similar to the method presented in this paper. Again, as the name of the method itself suggests, we consider the unknown object x as the sum of the two components x = x_{P} + x_{E} and we want to solve Eq. (5). For the minimization of the functional f = f_{0}(Hx + b;g) + β f_{1}(x_{E}), we use the split-gradient method (SGM) introduced by Lanteri et al. (2002), as done with the inner solver SGP in the previous case.
With respect to the version presented in La Camera et al. (2014), we developed an updated version of the method, which implements several regularizations, in particular the Tikhonov and HS functionals described above.
Each step of MCRL consists of an RL iteration on x_{P} and of an SGM iteration on x_{E}. The algorithm is as follows:

given non-negative x_{P}^{0} and x_{E}^{0}, for k = 0,1,2,... set x^{k} = x_{P}^{k} + x_{E}^{k} and compute

x_{P}^{k + 1} = x_{P}^{k} ⊙ U_{f_{0}}(x^{k}) ⊘ V_{f_{0}},
x_{E}^{k + 1} = x_{E}^{k} ⊙ ( U_{f_{0}}(x^{k}) + β U_{f_{1}}(x_{E}^{k}) ) ⊘ ( V_{f_{0}} + β V_{f_{1}}(x_{E}^{k}) ),

where ⊙ and ⊘ denote the componentwise product and quotient;

stop the iteration according to a given stopping rule.
In the previous equations, the terms U_{f_{0}}, U_{f_{1}}, and V_{f_{1}} are defined according to the gradient splittings introduced in Sect. 2.4. In our simulations we pushed the algorithm to convergence, i.e. we stopped the iterations when the relative variation of the objective function was smaller than a given threshold.
Finally, we remark on the initialization of the two components. We choose x_{P}^{0} = Σ_{j = 1}^{q} δ(p_{j}), where the position p_{j} of each star is supposed to be known. At each iteration, because of the properties of RL, only the pixels in the positions p_{j} remain positive, while the other pixels are kept fixed to the initial zero value. In other words, the method introduces a sparsity regularization in the point-like component of the reconstructed object. We initialize x_{E}^{0} with a constant array: we do not have information about the flux of the diffuse component (only on the sum of the two), thus we simply take a flat, non-negative array. We verified in our simulations that the algorithm is able to reproduce the correct flux values for both components.
Remark 2.
Since the Bregman procedure consists of two nested cycles when the explicit solution of the inner problems of Eq. (7) is not available, we need a fast solver for the inner subproblems. The SGP algorithm provides reliable results with a lower number of iterations with respect to the MCRL method (see Sect. 4 for details); thus, we preferred SGP as the inner solver for the problems of Eq. (7).
Obviously, the Bregman scheme allows us to choose the inner solver from a wide class of methods. Another valuable option is a suitably tailored version of the alternating direction method of multipliers (ADMM; see Figueiredo & Bioucas-Dias 2010), which also permits the use of non-differentiable regularization functionals. We focused on the SGP algorithm as the inner solver since the main aim of this work lies in the study of the behaviour of the Bregman procedure, regardless of the method used in the inner cycle.
Fig. 1 From left to right: object, PSF, image, noisy image. The pixel size is ~8.74 mas, i.e. (λ/D)/6.4, and only the central 256 × 256 px part is shown here (i.e. a field of view of ~2.24 × 2.24 arcsec). Each image is normalized to its maximum and represented at the power of 0.1 for the sake of clarity.

3. Simulations: case study
The HDR case study considered here consists of AO-aided (hence high angular resolution) observations of a bright unresolved binary star with a faint circumstellar environment made of dust, with the particular aim of imaging the circumstellar structure. The dust forms at a given close-by distance from the star, so that there is a gap around the star before the start of the faint environment. We chose the very timely case of near-infrared images obtained with the help of a ground-based, 8-m class telescope equipped with a high-Strehl AO system.
We do not focus on characterizing the binarity of the central component (with a binary separation of less than half a resolution element at the central observing band, 2.17 μm). This must be addressed separately, for example by means of the super-resolution algorithm described in Anconelli et al. (2005) and applied to real data in Carbillet et al. (2013). Considering this binarity of the central component implies an additional difficulty in properly imaging the morphology of the circumstellar dust and, in particular, in reconstructing the exact shape of the very close-by and sharp transition between the hole and the dust.
The circumstellar environment is modelled as Gaussian, with a full width at half maximum (FWHM) of ~306 mas (35 px). Its central obscuration, characterizing the hole evoked before, has a diameter of ~79 mas (9 px). The pixel size is equivalent to a resolution element of the telescope at 2.17 μm (hence λ/D ≃ 56 mas), divided by 6.4. The total flux of the object is distributed into one third for each binary component and one third for the Gaussian circumstellar environment. The total considered magnitude in the observing band (with a bandwidth of 0.3 μm) is 6.
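A toy version of this object model can be sketched as follows (grid size and total flux are illustrative and do not reproduce the actual 1024 × 1024 simulation setup):

```python
import numpy as np

def model_object(n=256, fwhm_px=35.0, hole_diam_px=9.0, total_flux=3.0):
    """Toy version of the simulated object: a centred Gaussian
    circumstellar environment (FWHM ~35 px) with a circular central
    obscuration (~9 px diameter), plus a close binary with a 2 px
    separation whose components carry one third of the flux each."""
    c = n // 2
    yy, xx = np.mgrid[:n, :n]
    r2 = (xx - c) ** 2 + (yy - c) ** 2
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    env = np.exp(-r2 / (2.0 * sigma**2))
    env[r2 <= (hole_diam_px / 2.0) ** 2] = 0.0            # central hole
    env *= (total_flux / 3.0) / env.sum()                 # one third of the flux
    obj = env.copy()
    for p in ((c, c - 1), (c, c + 1)):                    # binary components
        obj[p] += total_flux / 3.0
    return obj
```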
Images in that near-infrared band, through the instrument SPHERE/IRDIS at the VLT (Beuzit et al. 2006), were then numerically simulated from a deeply detailed numerical modelling involving atmospheric and instrumental wavefront perturbations, AO corrections through the SPHERE XAO system SAXO (Petit et al. 2012), and broadband imaging (with no coronagraph). We performed the entire modelling by means of the ad hoc Software Package SPHERE (Carbillet et al. 2008), already used in a number of instrumental (Boccaletti et al. 2008; Carbillet et al. 2011) and image processing (Smith et al. 2009; Vigan et al. 2010) works involving SPHERE/IRDIS, and developed within the CAOS problem-solving environment (Carbillet et al. 2004, 2010). We also considered the fundamental noises at the level of the detector (photon noise and RON) and the sky background. The exposure time is limited by the saturation limit given by the detector characteristics, resulting here in a 23-s-only frame. Detailed parameters of the performed end-to-end simulation are given in Table 1.
Table 1. Detailed parameters of the end-to-end simulation.
The resulting object, PSF, and image are represented in Fig. 1. The blurring effect of the PSF, on the one hand, and the effect of the detector noises, on the other hand, can be clearly noticed.
4. Results
We now introduce the metrics used to evaluate our results. The first is the relative reconstruction error on the true object x^{∗},

ρ^{k} = ∥ x^{k} − x^{∗} ∥_{2} / ∥ x^{∗} ∥_{2},

where x^{k} is the restored image at the kth iteration and ∥ · ∥_{2} is the Euclidean norm. We define the optimal value β_{opt} for the regularization parameter in terms of ρ^{k}: selecting some values for β in an interval [ β_{d},β_{u} ], we chose the one that provides the restored image with the minimum reconstruction error. Actually, instead of considering the whole 1024 × 1024 image in computing ρ^{k}, we only take into account a smaller, centred portion of the x_{E} component, within the window w ranging from the 470th pixel to the 555th one in both dimensions. We denote this error ρ_{w} (for the sake of completeness, we point out that we enumerate the pixels from 1 to 1024 in both directions). As previously mentioned, in practical applications, only an estimate of β_{opt} can be obtained (see Bertero et al. 2010; Bardsley & Goldes 2009). Moreover, numerical experience (see Benfenati & Ruggiero 2015) has shown that these methods cannot provide satisfactory results for some classes of images (such as the image treated in this work) because of the hypotheses underlying these techniques.
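The windowed error ρ_w can be sketched as follows (note the shift between the paper's 1-based pixel enumeration, 470 to 555, and NumPy's 0-based slices):

```python
import numpy as np

def windowed_relative_error(x_k, x_true,
                            window=(slice(469, 555), slice(469, 555))):
    """Relative reconstruction error rho_w = ||x^k - x*||_2 / ||x*||_2,
    restricted to the centred window (pixels 470-555 in the paper's
    1-based enumeration, hence 469:555 with 0-based NumPy slices)."""
    w = window
    return np.linalg.norm(x_k[w] - x_true[w]) / np.linalg.norm(x_true[w])
```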
Another quality measurement is obtained by considering the radius of the reconstructed hole in the centre of the image (see Fig. 1). This radius is computed following the steps below, supposing that the centre of this hole is in the middle of the image:

– at each kth step, we detect the pixel P corresponding to the maximum within the window of interest w and compute its distance d from the centre; if the maximum is attained in more than one pixel, we compute all the distances of these pixels from the centre and consider only the maximal distance among these values, namely d;

– we identify the pixels lying in the circle whose radius is d and take the square containing this circle;

– via a Canny edge-detector algorithm, we identify the profile of the hole and then compute the mean radius r_{k} of the reconstructed hole.
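A rough stand-in for this radius measurement can be sketched as follows (for simplicity, a threshold-based boundary extraction replaces the Canny detector of the last step, and the search is applied to the whole image rather than to the window w):

```python
import numpy as np

def hole_radius(img, centre, threshold=0.5):
    """Rough estimate of the hole radius (sketch): mark the hole as the
    set of pixels below a fraction of the image maximum, extract its
    boundary pixels, and return the mean and standard deviation of
    their distances from the centre.  A simplified stand-in for the
    Canny-based procedure of the paper."""
    cy, cx = centre
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    hole = img < threshold * img.max()
    # boundary: hole pixels with at least one non-hole 4-neighbour
    padded = np.pad(hole, 1, constant_values=True)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = hole & ~interior
    r = np.sqrt((yy[edge] - cy) ** 2 + (xx[edge] - cx) ** 2)
    return r.mean(), r.std()
```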
We thus used two different metrics for error measurement: the first (relative reconstruction error) provides a numerical measurement of the reconstruction quality. This error measurement is classical, but it is not applicable to real cases, since x^{∗} is not available. The second measurement is suggested by the particular structure of the image treated, where the main interest lies in reconstructing the centred hole; we therefore developed a procedure to measure the radius of the hole and assumed this radius to be a quality index of the restored images. This second technique can also be applied in real case studies for classes of images presenting similar structures.
Regarding MCRL, in the case of HS regularization, the proposed method for finding the optimal value β_{opt} based on the minimization of the reconstruction error failed. Therefore, we analysed the reconstructed objects (at different values of β) and we chose the β_{opt} value that provides the smallest value of the mean radius r_{β} of the reconstructed hole.
Fig. 2 Results of the experiments. Line plots: the lines depicted represent the 513th row of the image, from pixel 470 to 555. The images shown are the images restored by SGP, MCRL, and the inexact Bregman procedure, in columnwise order. The results shown are related to the minimum ρ_{w} obtained.

4.1. Exact knowledge of the positions of the sources
A first set of experiments is performed by assuming that the exact positions of the sources are known.
The first choice for the regularization function is the Tikhonov functional, using the SGP algorithm with β_{opt} = 5 × 10^{−6}. The stopping criterion used is based on the relative difference of two successive objective function values. Setting the tolerance for this criterion to 10^{−7}, in 178 iterations the obtained image has a relative reconstruction error ρ_{w} = 13.4%, and the radius of the hole is 4.90 ± 0.91 pixels (while the exact value is 4.5 pixels). Moreover, we can compute the minimum value attained in the reconstructed hole, which in this case is equal to 2% of the maximum value, to be compared with the exact percentage (0%). In the plot on the left of Fig. 2 (first row), we show the profile line of the 513th row (the one in which the binary star lies) of the true object x^{∗} and the reconstructions of the same row obtained with the different methods. The black dashed line is related to the true object, while the red line and green line represent the SGP and MCRL reconstructions, respectively. Finally, the blue line corresponds to the image restored by the Bregman procedure. The SGP method combined with Tikhonov regularization with β_{opt} cannot provide satisfactory results regarding the edges of the hole. In fact, Tikhonov regularization cannot recognize sharp edges.
The MCRL method has a very similar behaviour to SGP, since it cannot recognize the steepness of the hole. The MCRL method provides a restored image with ρ_{w} = 11.1% and a radius of 4.77 ± 0.98 using β = 3 × 10^{−6}. Moreover, it reaches a higher minimum value in the hole (6%). On the other hand, MCRL seems to produce fewer artifacts than SGP, as one can observe in the restored image shown in Fig. 2.
The inexact Bregman procedure is set with β = 10β_{opt}, an initial tolerance of 10^{−7}, and χ = 1.5. We force the procedure to run for 100 iterations. At the 5th external iteration (with a total number of iterations for SGP of 499), we obtain the restored image with ρ_{w} = 13.2%. As evidenced by the blue profile line in the upper left plot of Fig. 2, the bottom of the hole (i.e. the value 0) is reached, while the hole’s radius is very similar to the previous case: 4.89 ± 0.91. Requiring Δ_{k} < 10^{−3}, the procedure has to be terminated at the 9th iteration, obtaining a reconstruction with ρ_{w} = 13.4%, which is slightly higher than the minimum error achieved.
Since the Tikhonov regularization cannot achieve a satisfactory reconstruction of the image because of the presence of sharp edges, we use the HS function, with δ = 10^{−4}. The results are shown in the second row of Fig. 2. When the SGP method is employed with the optimal value for β (β_{opt} = 5.97 × 10^{−3}), we obtain a restored image with ρ_{w} = 9.4% and a reconstructed radius of 4.24 ± 0.80. The minimum intensity achieved in the hole is 11.1%. This intensity is higher than that achieved by Tikhonov regularization, but one can observe, by checking the red profile line in Fig. 2, that the flatness of the hole’s bottom is well restored.
The MCRL method is used setting β = 10^{−3}. The relative reconstruction error ρ_{w} achieved after 783 iterations is 10.1%, while the restored radius is 5.02 ± 0.89. The minimum value reached in the hole is 6%, a better result than the value provided by the SGP algorithm. Moreover, the MCRL method provides a more accurate reconstruction of the top boundary of the hole, as evidenced by the line plot in Fig. 2 (second row). Although the MCRL method gives a generally satisfactory estimation of the hole in terms of depth and radius, some anomalies, similar to those in the Tikhonov case, arise in the rest of the diffuse component.
The inexact Bregman procedure is set as in the Tikhonov case, i.e. with β = 10β_{opt}, an initial tolerance of 10^{−5}, and χ = 1.001. The contrast-enhancement behaviour (see Benfenati & Ruggiero 2013) of this procedure is evident here: at the 32nd iteration (362 total iterations for SGP), we obtain a restored image with ρ_{w} = 8.2% and a minimum value achieved in the hole of 2%. On the other hand, this kind of behaviour reveals some artifacts: for example, the peak appearing in the blue profile line, which becomes evident in the restored image. These artifacts are most probably due to the diffraction of the spider arms of the telescope, which are somehow emphasized during the reconstruction process. By adopting the inexact Bregman procedure, the restored hole has a radius equal to 4.33 ± 0.89 at the 32nd iteration, while requiring Δ_{k} < 10^{−3} leads us to stop at the 10th iteration, where r_{10} = 4.40 ± 0.91 and ρ_{w} = 8.6%.
Considering the restored images related to the minimum reconstruction error, the HS regularization, as a result of its edge-preserving behaviour, appears more suitable for this type of restoration problem, since it allows us to obtain a more satisfactory reconstruction of the hole in terms of shape and depth. Moreover, the Tikhonov regularization seems to produce more artifacts in the diffuse component. Similar artifacts also tend to appear in the HS case when one lets the methods (SGP, MCRL, and Bregman) proceed for a high number of iterations.
Fig. 3 Surface plots of the restored images. First row: surface plots related to the minimum error; second row: surfaces corresponding to the images for which Δ_{k} < 10^{−3}. The regularization functions considered are HS and Tikhonov, in columnwise order.

Comparing the results obtained by SGP and MCRL with those provided by the inexact Bregman procedure, one can observe that the contrast enhancement characteristic of this kind of procedure allows us to recognize the centred hole, improving its depth and radius. In Fig. 3 we present the surface plots of the restored images in four cases: in the first row, we show the reconstructions obtained by the Bregman procedure related to the minimum ρ_{w} (with HS and Tikhonov regularizations on the left and right, respectively); in the second row, we show the restored images that satisfy Δ_{k} < 10^{−3} (with HS and Tikhonov regularizations on the left and right, respectively).
The Tikhonov images are affected by numerous artifacts: in fact, looking at the top of the hole, we can see that the surface on its edge presents a number of anomalies. We must also point out that the edge-preserving regularization provides results with some artifacts (the peaks are quite evident).
4.2. Inexact positions of the sources
In this experiment, we set the coordinates of one point source on the wrong pixel on purpose. Since the edge-preserving functional turned out to be more suitable in the previous experiments, we use this type of function here. The inexact Bregman procedure is set again by taking β = 10β_{opt} with an initial tolerance of 10^{−5} and χ = 1.001; in Fig. 4 the image with minimum reconstruction error is shown, together with the line plot of the 513th row of the true object x^{∗} and of the restored image.
Fig. 4 Results obtained by setting the coordinates of one point source to the wrong pixel. Upper panel: line plot of the 513th row of the original image, from pixel 470 to 555; the black dashed line is the original object, while the red one is related to the reconstruction. Lower panel: image of the window of interest (normalized to its maximum and represented at the power of 0.1).

Setting inexact coordinates for the position of one point source means that a very bright source is present in the diffuse component x_{E}. Hence the method emphasizes a very bright source, as is clearly evident in the large blue spot of high intensity in Fig. 4. Although we set the position of one point source to the wrong coordinates, we can still obtain some useful information. In fact, the image in Fig. 4 suggests that we have located the hole (large red spot), but that one (or more) bright source(s) are actually present, identified by the high blue peak. Hence, considering the pixels corresponding to the high-intensity spot inside the hole, one can search for other bright sources within this region.
4.3. Connected x_{P} region
In order to avoid the case described in the previous subsection, we consider that the positions of the sources p_{j}, j = 1,...,q, are known only up to a region composed of each p_{j} and its 8-connected pixels.
We tested this approach in our case study. Since the binary star has a separation of two pixels, considering the 8-connected pixels around the stars means that we are considering a region of 3 × 6 pixels in the x_{P} component. We thus expect to restore a larger hole than in the previous cases. The inexact Bregman procedure is set with β = 15β_{opt}, an initial tolerance of 10^{−5}, and χ = 1.001. In Fig. 5 the results with minimum reconstruction error are shown.
Fig. 5 Considering a larger region for the point-source component. Upper panel: line plot of the true object (black dashed line) and of the reconstruction (red line). Lower panel: image of the window of interest (normalized to its maximum and represented at the power of 0.1).

Open with DEXTER 
As expected, the restored image has a larger hole (its radius is 7.77 ± 0.86), but the position of the hole is clearly identified. The reconstruction error ρ_{w} is higher (32.2%) than in the previous cases because of the large connected region set in the x_{P} component. The positive aspect of this approach is the satisfactory identification of the hole: the image has been properly divided into its two components x_{P} and x_{E}, i.e. no bright source was included in the diffuse part. As a subsequent step, we suggest reducing the region over which the stars are spread in order to achieve a better estimate of the source positions.
5. Conclusions
Table 2 gives a summary of the final results obtained. The benefit of the approach presented in this paper is the possibility of using an overestimation of the regularization parameter β instead of searching for the optimal one, since in the presence of Poisson noise this is a very tricky task. A practical strategy consists in estimating β with one of the methods mentioned previously in the paper (e.g. Bertero et al. 2010; Bardsley & Goldes 2009), and then multiplying the obtained value (which we denote β̂) by a factor γ > 1. Four scenarios may arise by adopting this strategy:

γβ̂ ≳ β_{opt}: using β = γβ̂ (we suggest γ between 10 and 20), the Bregman procedure acts as previously shown, inducing a contrast enhancement in the restored images;

γβ̂ < β_{opt}: in this case, the regularizing behaviour of the whole procedure is not evident. The value β̂ is so small that even γβ̂ is not sufficient to trigger the regularizing behaviour, and γ must be increased;

γβ̂ ≫ β_{opt}: the restored images, at each outer iteration, present losses in detail and contrast. The regularization functional has too much influence with respect to the data-fidelity functional, and γ is too large;

γβ̂ = β_{opt}: in this case, the very first step of the Bregman procedure consists in a classical regularization procedure. The successive steps cannot provide better reconstructions (the best one, in terms of relative reconstruction error, has already been reached), but some benefit in terms of contrast enhancement may still emerge.
Taking the previous considerations into account, this kind of strategy thus simplifies the selection of β in real applications.
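The overall strategy can be sketched as the outer loop below. Here inner_solver is a hypothetical stand-in for the approximate minimization of the Kullback-Leibler term plus β times the inexact Bregman distance; the parameter values mirror the settings used in the experiments (initial tolerance 10^{−5}, χ = 1.001, β = 15β̂), and the geometric tolerance update is an assumption of this sketch, not a prescription from the paper.

```python
def inexact_bregman(y, inner_solver, beta_hat, gamma=15.0,
                    tol0=1e-5, chi=1.001, n_outer=20):
    """Outer loop of the overestimation strategy: instead of searching
    for the optimal regularization parameter, the estimate beta_hat is
    multiplied by gamma > 1 (values between 10 and 20 are suggested)."""
    beta = gamma * beta_hat
    x, p, tol = None, 0.0, tol0
    tolerances = []
    for _ in range(n_outer):
        # inner_solver approximately minimizes
        #   KL(y, H x) + beta * D_R(x; p)
        # up to the current tolerance, returning the new iterate and an
        # (inexact) subgradient p used in the next Bregman distance.
        x, p = inner_solver(y, beta, p, tol)
        tolerances.append(tol)
        tol *= chi  # geometric update of the inexactness level
    return x, tolerances
```

The four scenarios above then serve as a diagnostic: if the restored images lose detail and contrast at each outer iteration, γ is too large; if no regularizing behaviour appears, γ must be increased.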
Even with this overestimation, the restored images (with both Tikhonov and HS regularizations) present satisfactory characteristics in terms of quality and of numerical results, such as the radius and the estimated depth of the hole.
Table 2. Final results obtained.
The whole procedure proposed here relies on a prior step that determines the exact position of the source, or sources in the case of a central (possibly unresolved) binary star. This step is supposed to be performed with the help of a classical RL deconvolution, possibly pushed to super-resolution (as described before). It also assumes a signal-to-noise ratio in the central object zone of the data that is high enough to push the RL iterations far enough (towards a single-pixel reconstruction mode). If this step fails to resolve the possible binarity of the central component, it will at least reconstruct a typical elongation, which can be used as described in Sect. 4.3. If the position of one (or more) of the badly resolved components is not exact, the method we propose will feature the typical artifact described in Sect. 4.2, emphasizing the misplaced source, and one can then search again for its correct position. Hence, the inexact Bregman procedure proposed here is not only able to reconstruct the circumstellar environment, but can also help to reconstruct the central stellar component (if binary/multiple) more correctly by successive iterations of the procedure.
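As a reminder of what this preliminary localization step involves, a plain Richardson-Lucy iteration (Richardson 1972; Lucy 1974) can be sketched as below. This is a generic FFT-based implementation, not the one used in the paper, and the number of iterations (how far to push towards the single-pixel regime) is left to the user.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=500):
    """Classical RL deconvolution: x <- x * H^T(y / (H x)), with H the
    (normalized) convolution by the PSF, implemented via FFTs.
    Pushed far enough on a bright central source, the iterates tend to
    concentrate point sources towards single pixels."""
    psf = psf / psf.sum()
    otf = np.fft.rfft2(np.fft.ifftshift(psf))       # centred PSF -> OTF
    x = np.full_like(y, y.mean(), dtype=float)      # flat positive start
    for _ in range(n_iter):
        hx = np.fft.irfft2(otf * np.fft.rfft2(x), s=y.shape)
        ratio = y / np.maximum(hx, 1e-12)           # guard against /0
        x = x * np.fft.irfft2(np.conj(otf) * np.fft.rfft2(ratio),
                              s=y.shape)
    return x

# Candidate coordinates p_j are then read off the brightest pixels, e.g.:
# r, c = np.unravel_index(np.argmax(x), x.shape)
```

The coordinates found this way feed the x_{P} component of the procedure; if one of them is misplaced, the artifact of Sect. 4.2 flags it for re-examination.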
A first perspective of this work will be to verify the performance of the proposed method when the PSF is not perfectly calibrated, perfect calibration being an implicit assumption of the present paper (the so-called inverse crime case, a usual and reasonable first step when testing the intrinsic limits of a new method). Nevertheless, we do not expect drastic changes since, for the SPHERE/VLT instrument taken as a case study here, the PSF calibration procedure is performed very accurately. This procedure is numerically modelled within the ad hoc Software Package SPHERE (Carbillet et al. 2008), considering 97% of common wavefronts between the calibrated PSF and the PSF used for imaging the observed object.
A subsequent perspective will be to study the deconvolution of post-coronagraphic data, which present the tricky characteristic of a strong space variance of the PSF around the centre of the field, but which can possibly lead to higher dynamic ranges.
Acknowledgments
The referee and the editor are thanked for their remarks and questions, which permitted us to improve the readability of the paper. Nicolas Basalgète is also thanked for proofreading the revised version of the manuscript, and Mario Bertero is acknowledged for his theoretical help regarding the issue of Poisson noise vs. Gaussian noise. The idea of searching for ad hoc methods permitting the proper reconstruction of evolved objects, such as the close binary star surrounded by dust modelled in the present paper, was born during a Master training stage co-supervised by MC and Olivier Chesneau in 2011 (Belokogne 2011). The present work is dedicated to the memory of the bright scientist that Olivier was.
References
Anconelli, B., Bertero, M., Boccacci, P., & Carbillet, M. 2005, A&A, 431, 747
Bardsley, J. M., & Goldes, J. 2009, Inverse Problems, 25, 095005
Barzilai, J., & Borwein, J. M. 1988, IMA J. Num. Anal., 8, 141
Belokogne, I. 2011, How to push the limits of evolved stars observations with SPHERE/VLT (Université de Nice Sophia-Antipolis)
Benfenati, A., & Ruggiero, V. 2013, Inverse Problems, 29, 5016
Benfenati, A., & Ruggiero, V. 2015, Communications in Nonlinear Science and Numerical Simulation, 21, 210
Bertero, M., Boccacci, P., Talenti, G., Zanella, R., & Zanni, L. 2010, Inverse Problems, 26, 10500
Beuzit, J.-L., Feldt, M., Dohlen, K., et al. 2006, ESO Messenger, 125, 29
Boccaletti, A., Carbillet, M., Fusco, T., et al. 2008, in Adaptive Optics Systems, SPIE Proc., 7015, 70156
Bonettini, S., Zanella, R., & Zanni, L. 2009, Inverse Problems, 25, 015002
Bregman, L. M. 1967, USSR Computational Mathematics and Mathematical Physics, 7, 200
Carbillet, M., Vérinaud, C., Guarracino, M., et al. 2004, in Advancements in Adaptive Optics, SPIE Proc., 5490, 550
Carbillet, M., Boccaletti, A., Thalmann, C., et al. 2008, in Adaptive Optics Systems, SPIE Proc., 7015, 70156
Carbillet, M., Desiderà, G., Augier, E., et al. 2010, in Adaptive Optics Systems II, SPIE Proc., 7736, 773644
Carbillet, M., Bendjoya, P., Abe, L., et al. 2011, Exp. Astron., 30, 39
Carbillet, M., La Camera, A., Chesneau, O., et al. 2013, in Adaptive Optics for Extremely Large Telescopes, Proc. 3rd AO4ELT Conf., Firenze, Italy
Csiszar, I. 1991, Ann. Statist., 19, 2032
De Mol, C., & Defrise, M. 2004, Proc. Int. Symp. on Electromagnetic Theory, 798, 800
Figueiredo, M. A. T., & Bioucas-Dias, J. M. 2010, IEEE Transactions on Image Processing, 19, 3133
Frassoldati, G., Zanni, L., & Zanghirati, G. 2008, J. Industrial and Management Optimization, 4, 299
La Camera, A., Antoniucci, S., Bertero, M., et al. 2012, in Optical and Infrared Interferometry III, SPIE Proc., 8445, 84453D
La Camera, A., Antoniucci, S., Bertero, M., et al. 2014, PASP, 126, 180
Lanteri, H., Roche, M., & Aime, C. 2002, Inverse Problems, 18, 1397
Lucy, L. B. 1974, AJ, 79, 745
Petit, C., Sauvage, J.-F., Sevin, A., et al. 2012, in Adaptive Optics Systems III, SPIE Proc., 8447, 84471
Richardson, W. H. 1972, J. Opt. Soc. Am., 62, 55
Rockafellar, R. T. 1970, Convex Analysis (Princeton, NJ: Princeton University Press)
Smith, I., Bertero, M., Boccacci, P., Carbillet, M., & Lantéri, H. 2009, IEEE Trans. on Signal Proc., 57, 904
Snyder, D. L., Helstrom, C. W., Lanterman, A. D., White, R. L., & Faisal, M. 1995, J. Opt. Soc. Am. A, 12, 272
Vigan, A., Moutou, C., Langlois, M., et al. 2010, MNRAS, 407, 71
Zanella, R., Boccacci, P., Zanni, L., & Bertero, M. 2009, Inverse Problems, 25, 045010
All Figures
Fig. 1 From left to right: object, PSF, image, noisy image. Pixel size is ~8.74 mas, i.e. (λ/D) / 6.4, and only the central 256 × 256 px part is shown here (i.e. a field of view of ~2.24 × 2.24 arcsec). Each image is normalized to its maximum and represented at the power of 0.1 for the sake of clarity.
Fig. 2 Experiment results. Line plots: the lines depicted represent the 513th row of the image, from pixel 470 to 555. The images shown are those restored by SGP, MCRL, and the inexact Bregman procedure, in columnwise order. The results shown are those with the minimum ρ_{w} obtained.
Fig. 3 Surface plots of the restored images. First row: surface plots related to the minimum error; second row: surfaces corresponding to the images for which Δ_{k} < 10^{−3}. The regularization functions considered are HS and Tikhonov, in columnwise order.