Issue: A&A, Volume 517, July 2010
Article Number: A26
Number of pages: 13
Section: Astronomical instrumentation
DOI: https://doi.org/10.1051/0004-6361/200913822
Published online: 28 July 2010
Poisson denoising on the sphere: application to the Fermi Gamma-ray Space Telescope
J. Schmitt^{1}, J. L. Starck^{1}, J. M. Casandjian^{1}, J. Fadili^{2}, and I. Grenier^{1}
1 Laboratoire AIM, CEA/DSM-CNRS-Université Paris Diderot, CEA, IRFU, Service d'Astrophysique, Centre de Saclay, 91191 Gif-sur-Yvette Cedex, France
2 GREYC CNRS-ENSICAEN-Université de Caen, 6 Bd du Maréchal Juin, 14050 Caen Cedex, France
Received 7 December 2009 / Accepted 8 March 2010
Abstract
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. This is why we need a powerful Poisson noise removal method on the sphere which is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multiscale variance stabilizing transform on the sphere (MSVSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. The MSVSTS consists of decomposing the data into a sparse multiscale dictionary (wavelets or curvelets) and then applying a VST on the coefficients in order to obtain nearly Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multiscale transforms. Binary hypothesis testing is then carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask, and the resulting gaps have to be interpolated: an extension to inpainting is therefore proposed.
The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
Key words: methods: data analysis - techniques: image processing - gamma rays: general
1 Introduction
The Fermi Gamma-ray Space Telescope, launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky (Atwood et al. 2009). Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range from 20 MeV to more than 300 GeV. The LAT is much more sensitive than its predecessor, the EGRET telescope on the Compton Gamma Ray Observatory, and is expected to find several thousand gamma-ray sources, an order of magnitude more than EGRET detected (Hartman et al. 1999).
Even with its large effective area, the number of photons detected by the LAT outside the Galactic plane and away from intense sources is expected to be low. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed; more specifically, the image is considered a realization of an inhomogeneous Poisson process. This quantum noise makes source detection more difficult, hence the need for an efficient denoising method for spherical Poisson data.
Several techniques have been proposed in the literature to estimate Poisson intensity in 2D. A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data (Nowak & Kolaczyk 2000), independently initiated by Timmerman & Nowak (1999) and Kolaczyk (1999). Lefkimmiatis et al. (2009) proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach preprocesses the count data with a variance stabilizing transform (VST) such as the Anscombe (1948) and the Fisz (1955) transforms, applied respectively in the spatial (Donoho 1993) or in the wavelet domain (Fryzlewicz & Nason 2004). The transform reshapes the data so that the noise approximately becomes Gaussian with a constant variance. Standard techniques for independent identically distributed Gaussian noise are then used for denoising. Zhang et al. (2008) proposed a powerful method called the multiscale variance stabilizing transform (MSVST). It combines a VST with a multiscale transform (wavelets, ridgelets or curvelets), yielding asymptotically normally distributed coefficients with known variances. The choice of the multiscale method depends on the morphology of the data: wavelets represent regular structures and isotropic singularities efficiently, ridgelets are designed to represent global lines in an image, and curvelets represent curvilinear contours efficiently. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme. In Starck et al. (2009), it was shown that sources can be detected in 3D LAT data (2D + time or 2D + energy) using a specific 3D extension of the MSVST.
There is, to our knowledge, no method for Poisson intensity estimation on spherical data. It is possible to decompose the spherical data into several 2D projections, denoise each projection and recombine the denoised projections, but the projection induces drawbacks such as visual artifacts on the borders or deformation of the sources.
Within the scope of the Fermi mission, we have two main scientific objectives:
- the detection of point sources, to build the catalog of gamma-ray sources;
- the study of the Milky Way diffuse background, which is due to the interaction between cosmic rays and interstellar gas and radiation.
The aim of this paper is to introduce a Poisson denoising method on the sphere, called the multiscale variance stabilizing transform on the sphere (MSVSTS), in order to denoise the Fermi photon count maps. This method is based on the MSVST (Zhang et al. 2008) and on recent multiscale transforms on the sphere (Abrial et al. 2007, 2008; Starck et al. 2006). Section 2 recalls the multiscale transforms on the sphere which are used in this paper, and Gaussian denoising methods based on sparse representations. Section 3 introduces the MSVSTS. Section 4 applies the MSVSTS to spherical data restoration. Section 5 applies the MSVSTS to inpainting. Section 6 applies the MSVSTS to background extraction. Conclusions are drawn in Sect. 7. In this paper, all experiments were performed on HEALPix maps (Górski et al. 2005), with a pixelisation well suited to the GLAST/Fermi resolution. The performance of the method does not depend on the nside parameter. For a given data set, a small nside simply means that we do not want to investigate the finest scales. If nside is large, the number of counts per pixel will be very small, and we may not have enough statistics to extract information at the finest resolution levels; but this has no adverse effect on the solution. Indeed, the finest scales will be smoothed, since our algorithm will not detect any significant wavelet coefficients there. Hence, starting with a fine pixelisation (i.e. a large nside), our method provides a kind of automatic binning, by thresholding wavelet coefficients at scales and spatial positions where the number of counts is insufficient.
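To make the low-count regime concrete, the sketch below (an illustration, not part of the paper's pipeline; it assumes only NumPy and a hypothetical flat intensity level) draws one Poisson realization on a HEALPix-sized pixel grid. A HEALPix map with parameter nside has 12 nside^2 pixels.

```python
import numpy as np

def simulate_counts(intensity, seed=0):
    """Draw one Poisson realization of a per-pixel intensity map."""
    rng = np.random.default_rng(seed)
    return rng.poisson(intensity)

# A HEALPix map with parameter nside has 12 * nside**2 pixels.
nside = 128
npix = 12 * nside**2

# Hypothetical flat diffuse intensity of 0.05 counts/pixel: at this level
# most pixels are empty, which is the low-count regime discussed above.
intensity = np.full(npix, 0.05)
counts = simulate_counts(intensity)

print(npix)                  # 196608
print(counts.mean())         # close to the input intensity 0.05
print((counts == 0).mean())  # most pixels hold zero photons
```

At such intensities a per-pixel estimate is hopeless, which is why the multiscale machinery of the following sections is needed.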
2 Multiscale analysis on the sphere
New multiscale transforms on the sphere were developed by Starck et al. (2006). These transforms can be inverted and are easy to compute with the HEALPix pixelisation, and have been used for denoising, deconvolution, morphological component analysis and inpainting applications (Abrial et al. 2007). In this paper, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform.
2.1 Multiscale transforms on the sphere
2.1.1 Isotropic undecimated wavelet transform on the sphere
The isotropic undecimated wavelet transform on the sphere (IUWT) is a
wavelet transform on the sphere based on the spherical harmonics
transform and with a very simple reconstruction algorithm. At
scale j, we denote a_{j}(θ, φ) the scale coefficients and d_{j}(θ, φ) the wavelet coefficients, with θ denoting the longitude and φ the latitude. Given a scale coefficient a_{j}, the smooth coefficient a_{j+1} is obtained by a convolution with a low-pass filter h_{j}:

a_{j+1} = h_{j} ⋆ a_{j}.

The wavelet coefficients are defined as the difference between two consecutive resolutions:
d_{j+1} = a_{j} − a_{j+1}. A straightforward reconstruction is then given by:

a_{0}(θ, φ) = a_{J}(θ, φ) + Σ_{j=1}^{J} d_{j}(θ, φ).
Since this transform is redundant, the procedure for reconstructing an image from its coefficients is not unique and this can be profitably used to impose additional constraints on the synthesis functions (e.g. smoothness, positivity). A reconstruction algorithm based on a variety of filter banks is described in Starck et al. (2006).
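As a minimal sketch of the à trous scheme above, the following NumPy code implements a flat 2D analogue of the IUWT (the classical "starlet" transform with the B3-spline filter and periodic borders), not the spherical HEALPix implementation. It illustrates that d_{j+1} = a_{j} − a_{j+1} and that summing the coarsest scale and all bands restores the data exactly.

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # B3-spline scaling filter

def smooth(a, step):
    """Separable a-trous convolution with the B3 filter dilated by `step`.

    Periodic borders (np.roll) are used for simplicity; the spherical
    implementation handles neighbors through the HEALPix scheme instead.
    """
    out = a
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in zip((-2, -1, 0, 1, 2), B3):
            acc += w * np.roll(out, k * step, axis=axis)
        out = acc
    return out

def iuwt(image, n_scales):
    """Undecimated wavelet transform: bands d_1..d_J plus the smooth a_J."""
    a = image.astype(float)
    bands = []
    for j in range(n_scales):
        a_next = smooth(a, 2 ** j)
        bands.append(a - a_next)  # d_{j+1} = a_j - a_{j+1}
        a = a_next
    return bands, a

def iuwt_inverse(bands, a_J):
    """Straightforward reconstruction: a_0 = a_J + sum_j d_j."""
    return a_J + sum(bands)

rng = np.random.default_rng(1)
img = rng.poisson(5.0, size=(64, 64)).astype(float)
bands, a_J = iuwt(img, 4)
rec = iuwt_inverse(bands, a_J)
print(np.allclose(rec, img))  # True: the transform is exactly invertible
```

The exact invertibility follows from the telescoping sum, whatever smoothing filter is used.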
2.1.2 Curvelet transform on the sphere
The curvelet transform enables the directional analysis of an image at different scales. The data first undergo an isotropic undecimated wavelet transform on the sphere. Each scale j is then decomposed into smoothly overlapping blocks of side length B_{j}, in such a way that the overlap between two vertically adjacent blocks is a rectangular array of size B_{j} × B_{j}/2, using the HEALPix pixelisation. Finally, the ridgelet transform (Candes & Donoho 1999) is applied on each individual block. The method is best suited to the detection of anisotropic structures and smooth curves and edges of different lengths. More details can be found in Starck et al. (2006).
2.2 Application to Gaussian denoising on the sphere
Multiscale transforms on the sphere have been used successfully for
Gaussian denoising via nonlinear filtering or thresholding methods.
Hard thresholding, for instance, consists of setting all insignificant
coefficients (i.e. coefficients with an absolute value below a
given threshold) to zero. In practice, we need to estimate the noise standard deviation σ_{j} in each band j, and a coefficient w_{j} is significant if |w_{j}| ≥ κσ_{j}, where κ is a parameter typically chosen between 3 and 5. Denoting Y the noisy data and δ the thresholding operator, the filtered data X̃ are obtained by:

X̃ = Φ δ(Φ^{T} Y),   (2)

where Φ^{T} is the multiscale transform (IUWT or curvelet) and Φ is the multiscale reconstruction. κ = (κ_{1}, ..., κ_{J}) is a vector whose size is the number of bands of the multiscale transform: the thresholding operation δ thresholds all coefficients in band j with the threshold κ_{j}σ_{j}.
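The hard-thresholding step can be sketched as follows (illustrative NumPy code with made-up band noise levels; in practice the σ_{j} come from the multiscale transform of the noise model).

```python
import numpy as np

def hard_threshold(bands, sigmas, kappa=3.0):
    """Zero all band coefficients with |w| below kappa * sigma_j."""
    return [np.where(np.abs(d) >= kappa * s, d, 0.0)
            for d, s in zip(bands, sigmas)]

# Toy example: two pure-noise bands with unit and half-unit noise levels.
rng = np.random.default_rng(0)
bands = [rng.normal(0, 1.0, 1000), rng.normal(0, 0.5, 1000)]
sigmas = [1.0, 0.5]
filtered = hard_threshold(bands, sigmas, kappa=3.0)

# With kappa = 3, only about 0.27% of pure-noise coefficients survive.
survivors = sum(int((f != 0).sum()) for f in filtered)
print(survivors / 2000.0)  # a small fraction
```

Real signal coefficients, being much larger than κσ_{j}, pass the test and are kept unchanged.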
3 Multiscale variance stabilizing transform on the sphere (MSVSTS)
3.1 Principle of VST
3.1.1 VST of a Poisson process
Given Poisson data Y := (Y_{i})_{i}, each sample Y_{i} ~ P(λ_{i}) has a variance Var[Y_{i}] = λ_{i}. Thus, the variance of Y is signal-dependent. The aim of a VST T is to stabilize the data such that each coefficient of T(Y) has an (asymptotically) constant variance, say 1, irrespective of the value of λ_{i}. In addition, for the VST used in this study, T(Y) is asymptotically normally distributed. Thus, the VST-transformed data are asymptotically stationary and Gaussian.
The Anscombe (1948) transform is a widely used VST which has the simple square-root form:

T(Y) := 2 √(Y + 3/8).

We can show that T(Y) is asymptotically normal as the intensity increases. It can be shown, however, that the Anscombe VST requires a high underlying intensity to stabilize the data well (Zhang et al. 2008).
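This behaviour is easy to check numerically (an illustration assuming NumPy, not code from the paper): the variance of the Anscombe-transformed counts approaches 1 only when the intensity λ is large.

```python
import numpy as np

def anscombe(y):
    """Anscombe VST: A(y) = 2 * sqrt(y + 3/8)."""
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

rng = np.random.default_rng(0)
stabilized_var = {}
for lam in (0.1, 1.0, 10.0, 100.0):
    y = rng.poisson(lam, size=200_000)
    stabilized_var[lam] = anscombe(y).var()
    print(lam, round(stabilized_var[lam], 3))
# The stabilized variance approaches 1 only for large lam; at lam = 0.1 it is
# far below 1, which is why Anscombe fails in the low-count regime.
```

This is precisely the failure mode seen later in the Fermi experiments, where the diffuse intensity away from the plane is a fraction of a count per pixel.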
3.1.2 VST of a filtered Poisson process
Let Z_{j} := (h ⋆ Y)_{j} be the filtered process obtained by convolving (Y_{i})_{i} with a discrete filter h, and let Z denote any one of the Z_{j}. Let us define τ_{k} := Σ_{i} (h[i])^{k} for k = 1, 2, 3. In addition, we adopt a local homogeneity assumption, stating that λ_{i} = λ for all i within the support of h.
We define the square-root transform T as follows:

T(Z) := b · sgn(Z + c) |Z + c|^{1/2},

where b is a normalizing factor and c a constant. Lemma 1 proves that T is a VST for a filtered Poisson process (with a nonzero-mean filter) in that T(Y) is asymptotically normally distributed with a stabilized variance as λ becomes large (see Zhang et al. 2008, for a proof).
Figure 1: Normalized value of the stabilized variances at each scale j as a function of the underlying intensity λ.

3.2 MSVSTS
The MSVSTS consists in combining the square-root VST with a multiscale transform on the sphere.
3.2.1 MSVSTS + IUWT
This section describes the MSVSTS + IUWT, which is a combination of the square-root VST with the IUWT. The recursive scheme is:

IUWT: a_{j} = h_{j−1} ⋆ a_{j−1}, d_{j} = a_{j−1} − a_{j}
MSVSTS + IUWT: a_{j} = h_{j−1} ⋆ a_{j−1}, d_{j} = T_{j−1}(a_{j−1}) − T_{j}(a_{j}).   (7)

In (7), the filtering on a_{j−1} can be rewritten as a filtering on a_{0} := Y, i.e., a_{j} = h^{(j)} ⋆ a_{0}, where h^{(j)} = h_{j−1} ⋆ ... ⋆ h_{1} ⋆ h_{0} for j ≥ 1 and h^{(0)} = δ, where δ is the Dirac pulse (δ = 1 on a single pixel and 0 everywhere else). T_{j} is the VST operator at scale j:

T_{j}(a_{j}) = b^{(j)} sgn(a_{j} + c^{(j)}) |a_{j} + c^{(j)}|^{1/2}.
Let us define τ_{k}^{(j)} := Σ_{i} (h^{(j)}[i])^{k}. In Zhang et al. (2008), it has been shown that, to have an optimal convergence rate for the VST, the constant c^{(j)} associated with h^{(j)} should be set to:

c^{(j)} = (7 τ_{2}^{(j)}) / (8 τ_{1}^{(j)}) − τ_{3}^{(j)} / (2 τ_{2}^{(j)}).
The MSVSTS + IUWT procedure is directly invertible, as we have:

a_{0} = T_{0}^{−1} [ T_{J}(a_{J}) + Σ_{j=1}^{J} d_{j} ].   (10)
Setting b^{(j)} := sgn(τ_{1}^{(j)}) / √|τ_{1}^{(j)}|, if λ is constant within the support of the filter h^{(j)}, then we have (Zhang et al. 2008):

d_{j}[k] → N( 0, τ_{2}^{(j−1)} / (4 τ_{1}^{(j−1)2}) + τ_{2}^{(j)} / (4 τ_{1}^{(j)2}) − ⟨h^{(j−1)}, h^{(j)}⟩ / (2 τ_{1}^{(j−1)} τ_{1}^{(j)}) ),

where ⟨·, ·⟩ denotes the inner product.
This means that the detail coefficients issued from locally homogeneous parts of the signal asymptotically follow a centered normal distribution with an intensity-independent variance, which relies solely on the filter h and the current scale j. Consequently, the stabilized variances and the constants b^{(j)}, c^{(j)} and τ_{k}^{(j)} can all be precomputed. Let us define σ_{(j)}^{2}, the stabilized variance at scale j for a locally homogeneous part of the signal:

σ_{(j)}^{2} = τ_{2}^{(j−1)} / (4 τ_{1}^{(j−1)2}) + τ_{2}^{(j)} / (4 τ_{1}^{(j)2}) − ⟨h^{(j−1)}, h^{(j)}⟩ / (2 τ_{1}^{(j−1)} τ_{1}^{(j)}).

To compute the σ_{(j)}, b^{(j)}, c^{(j)} and τ_{k}^{(j)}, we only have to know the filters h^{(j)}. We compute these filters using the formula h^{(j)} = h_{j−1} ⋆ ... ⋆ h_{0}, by applying the IUWT to a Dirac pulse δ: the h^{(j)} are then the scaling coefficients of the IUWT of δ. The σ_{(j)} have been precomputed for a 6-scale IUWT (Table 1).
Table 1: Precomputed values of the variances of the wavelet coefficients.
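The precomputation described above can be sketched in 1D (an illustration: the paper's Table 1 is for the 2D spherical transform, so the numbers below differ; the B3-spline filter and the formulas for c^{(j)} and σ_{(j)}^{2} follow Zhang et al. 2008).

```python
import numpy as np

B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # 1D B3-spline scaling filter

def dilated(h, step):
    """'A trous' dilation: insert step-1 zeros between the filter taps."""
    out = np.zeros((len(h) - 1) * step + 1)
    out[::step] = h
    return out

def tau(h, k):
    return float(np.sum(h ** k))

def centered_dot(a, b):
    """Inner product of two symmetric, center-aligned filters."""
    if len(a) > len(b):
        a, b = b, a
    pad = (len(b) - len(a)) // 2
    return float(np.dot(np.pad(a, pad), b))

# h^{(j)} = h_{j-1} * ... * h_1 * h_0, with h^{(0)} a Dirac pulse.
n_scales = 6
H = [np.array([1.0])]
for j in range(n_scales):
    H.append(np.convolve(H[-1], dilated(B3, 2 ** j)))

c_consts, sigmas = [], []
for j in range(1, n_scales + 1):
    t1p, t2p = tau(H[j - 1], 1), tau(H[j - 1], 2)
    t1, t2, t3 = tau(H[j], 1), tau(H[j], 2), tau(H[j], 3)
    c_consts.append(7 * t2 / (8 * t1) - t3 / (2 * t2))  # optimal VST constant
    var = (t2p / (4 * t1p ** 2) + t2 / (4 * t1 ** 2)
           - centered_dot(H[j - 1], H[j]) / (2 * t1p * t1))
    sigmas.append(float(np.sqrt(var)))
    print(j, round(c_consts[-1], 5), round(sigmas[-1], 5))
```

Since the B3 filter sums to 1, all τ_{1}^{(j)} equal 1, and the stabilized standard deviation decreases with scale, as in Table 1.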
We have simulated Poisson images of different constant intensities λ, computed the IUWT with MSVSTS on each image and observed the variation of the normalized value of σ_{(j)} as a function of λ for each scale j (Fig. 1). We see that the wavelet coefficients are stabilized, except at the first wavelet scale, which is mostly constituted of noise. In Fig. 2, we compare the result of MSVSTS with Anscombe + wavelet shrinkage on sources of varying intensities. MSVSTS works well on sources of very low intensity, whereas Anscombe fails when the intensity is too low.
Figure 2: Comparison of MSVSTS with Anscombe + wavelet shrinkage on a single HEALPix face. Top left: sources of varying intensity. Top right: sources of varying intensity with Poisson noise. Bottom left: Poisson sources of varying intensity reconstructed with Anscombe + wavelet shrinkage. Bottom right: Poisson sources of varying intensity reconstructed with MSVSTS. 

3.2.2 MSVSTS + curvelets
As the first step of the algorithm is an IUWT, we can stabilize each resolution level as in Eq. (7). We then apply the local ridgelet transform on each stabilized wavelet band.
It is not as straightforward as with the IUWT to derive the asymptotic noise variance in the stabilized curvelet domain. In our experiments, we derived it using simulated Poisson data of constant intensity λ. After checking that the standard deviation in the curvelet bands stabilizes as the intensity level increases (which means that the stabilization works properly), we stored the standard deviation for each wavelet scale j and each ridgelet band l (Table 2).
4 Poisson denoising
4.1 MSVST + IUWT
Under the hypothesis of homogeneous Poisson intensity, the stabilized wavelet coefficients d_{j} behave like centered Gaussian variables of standard deviation σ_{(j)}. We can detect significant coefficients with binary hypothesis testing, as in Gaussian denoising.
Table 2: Asymptotic values of the variances of the curvelet coefficients.
Under the null hypothesis H_{0} of homogeneous Poisson intensity, the distribution of the stabilized wavelet coefficient d_{j}[k] at scale j and location index k can be written as:

d_{j}[k] ~ N(0, σ_{(j)}^{2}).   (13)

The rejection of the hypothesis H_{0} depends on the double-sided p-value:

p_{j}[k] = 2 × (1/(σ_{(j)} √(2π))) ∫_{|d_{j}[k]|}^{+∞} exp(−x^{2}/(2σ_{(j)}^{2})) dx.   (14)

Consequently, to accept or reject H_{0}, we compare each |d_{j}[k]| with a critical threshold κσ_{(j)}, with κ = 3, 4 or 5 corresponding to increasingly strict significance levels. This amounts to deciding that:
- if |d_{j}[k]| ≥ κσ_{(j)}, d_{j}[k] is significant;
- if |d_{j}[k]| < κσ_{(j)}, d_{j}[k] is not significant.
We define the multiresolution support M, which is determined by the set of significant coefficients detected by the hypothesis tests:

M = { (j, k) | d_{j}[k] is declared significant }.
We formulate the reconstruction problem as a convex constrained minimization problem:

min_{X} ||Φ^{T} X||_{1}, subject to X ≥ 0 and (Φ^{T} X)_{j}[k] = d_{j}[k] for all (j, k) ∈ M,

where Φ denotes the IUWT synthesis operator and Φ^{T} the IUWT decomposition operator.
This problem is solved with the following iterative scheme: the image is initialised by X^{(0)} = 0, and the iteration scheme is, for n = 0 to N_{max} − 1:

X̃ = P_{+}[ X^{(n)} + Φ P_{M} Φ^{T}(Y − X^{(n)}) ]   (17)

X^{(n+1)} = Φ ST_{λ_{n}}[ Φ^{T} X̃ ]   (18)

where P_{+} denotes the projection on the positive orthant, and P_{M} denotes the projection on the multiresolution support M:

P_{M} d_{j}[k] = d_{j}[k] if (j, k) ∈ M, and 0 otherwise,   (19)

and ST_{λ_{n}} the soft-thresholding with threshold λ_{n}:

ST_{λ}[d] = sgn(d)(|d| − λ) if |d| ≥ λ, and 0 otherwise.   (20)

We chose a threshold λ_{n} decreasing with the iterations.
The final estimate of the Poisson intensity is Λ̂ = X^{(N_{max})}. Algorithm 1 summarizes the main steps of the MSVSTS + IUWT denoising algorithm.
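The two elementary operators used in the iteration, the soft-thresholding ST_λ and the support projection P_M, can be sketched as follows (illustrative NumPy code; the linear threshold schedule shown is one possible decreasing choice, not necessarily the paper's exact schedule).

```python
import numpy as np

def soft_threshold(d, lam):
    """ST_lambda: shrink coefficients toward zero, zeroing those below lam."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

def support_projection(d, d_obs, support):
    """P_M: on the multiresolution support, keep the observed coefficients."""
    return np.where(support, d_obs, d)

# A hypothetical linearly decreasing schedule from 1 down to 0 over N_max steps.
n_max = 10
thresholds = [(n_max - n) / (n_max - 1) for n in range(1, n_max + 1)]
print(thresholds[0], thresholds[-1])

d = np.array([-2.0, 0.3, 0.1, 1.5])
print(soft_threshold(d, 0.5))  # small coefficients vanish, large ones shrink
```

In the full algorithm these operators are composed with the IUWT analysis and synthesis at every iteration, with the positivity projection applied in the image domain.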
4.2 Multiresolution support adaptation
When two sources are too close, the less intense source may not be detected because of the negative wavelet coefficients of the brightest source. To avoid this drawback, we may update the multiresolution support at each iteration. The idea is to subtract the detected sources and to run the detection on the remaining residual, so as to find the sources that may have been missed at the first detection.
At each iteration n, we compute the MSVSTS of X^{(n)}. We denote d^{(n)}_{j}[k] the stabilized coefficients of X^{(n)}. We apply a hard threshold on (d_{j}[k] − d^{(n)}_{j}[k]) with the same thresholds as in the detection step. Significant coefficients are added to the multiresolution support M.
The main steps of the algorithm are summarized in Algorithm 2. In practice, we use Algorithm 2 instead of Algorithm 1 in our experiments.
4.3 MSVST + Curvelets
Insignificant coefficients are zeroed by using the same hypothesis testing framework as for the wavelet coefficients. At each wavelet scale j and ridgelet band k, we apply a hard threshold on the curvelet coefficients, with threshold κσ_{j,k} and κ = 3, 4 or 5. Finally, a direct reconstruction can be performed by first inverting the local ridgelet transforms and then inverting the MSVST + IUWT (Eq. (10)). An iterative reconstruction may also be performed.
Algorithm 3 summarizes the main steps of the MSVSTS + Curvelets denoising algorithm.
Figure 3: Top left: Fermi simulated map without noise. Top right: Fermi simulated map with Poisson noise. Middle left: Fermi simulated map denoised with Anscombe VST + wavelet shrinkage. Middle right: Fermi simulated map denoised with MSVSTS + curvelets (Algorithm 3). Bottom left and bottom right: Fermi simulated map denoised with MSVSTS + IUWT (Algorithm 1) for two values of the detection threshold. Pictures are on a logarithmic scale.

4.4 Experiments
The method was tested on simulated Fermi data. The simulated data are the sum of a Milky Way diffuse background model and 1000 gamma-ray point sources. We based our Galactic diffuse emission model on the intensity model available at the Fermi Science Support Center (Myers 2009). This model results from a fit of the LAT photons with various gas templates as well as inverse Compton maps in several energy bands. We used a realistic point-spread function for the sources, based on Monte Carlo simulations of the LAT and accelerator tests, whose width scales approximately with energy. The positions of the 205 brightest sources were taken from the Fermi 3-month source list (Abdo et al. 2009). The positions of the 795 remaining sources follow the source distribution of the LAT 1-year Point Source Catalog (Myers 2010): each simulated source was randomly placed in a box of 5° × 1° around a LAT 1-year catalog source. We simulated each source assuming a power-law spectrum, with the spectral index given by the 3-month source list and the first-year catalog. We used an exposure corresponding approximately to one year of the Fermi all-sky survey around 1 GeV. The simulated count maps shown here correspond to photon energies from 150 MeV to 20 GeV.
Figure 3 compares the results of denoising with MSVST + IUWT (Algorithm 1), MSVST + curvelets (Algorithm 3) and Anscombe VST + wavelet shrinkage on a simulated Fermi map. Figure 4 shows one HEALPix face of the results. As expected from theory, the Anscombe method produces poor results on Fermi data, because the underlying intensity is too weak. Both wavelet and curvelet denoising on the sphere perform much better. For this application, wavelets are slightly better than curvelets. As this image contains many point sources, this result is expected: wavelets are better than curvelets at representing isotropic objects.
Figure 4: View of a single HEALPix face from the results of Fig. 3. Top left: Fermi simulated map without noise. Top right: Fermi simulated map with Poisson noise. Middle left: Fermi simulated map denoised with Anscombe VST + wavelet shrinkage. Middle right: Fermi simulated map denoised with MSVSTS + curvelets (Algorithm 3). Bottom left and bottom right: Fermi simulated map denoised with MSVSTS + IUWT (Algorithm 1) for two values of the detection threshold. Pictures are on a logarithmic scale.

5 Milky Way diffuse background study: denoising and inpainting
5.1 Method
In order to extract the Galactic diffuse emission from the Fermi photon maps, we want to remove the point sources from the Fermi image. As our HSD algorithm is very close to the MCA algorithm (Starck et al. 2004), an idea is to mask the most intense sources and to modify our algorithm so that it interpolates through the gaps, exactly as in the MCA-inpainting algorithm (Abrial et al. 2007). This modified algorithm can be called the MSVSTS-inpainting algorithm.
The problem can be reformulated as a convex constrained minimization problem:

min_{X} ||Φ^{T} X||_{1}, subject to X ≥ 0 and (Φ^{T} Π X)_{j}[k] = (Φ^{T} Π Y)_{j}[k] for all (j, k) ∈ M,

where Π is a binary mask (1 on valid data and 0 on invalid data).
Figure 5: MSVSTS-inpainting. Left: Fermi simulated map with Poisson noise and the most luminous sources masked. Right: Fermi simulated map denoised and inpainted with wavelets (Algorithm 4). Pictures are on a logarithmic scale.

The iterative scheme can be adapted to cope with a binary mask Π, which gives:

X̃ = P_{+}[ X^{(n)} + Φ P_{M} Φ^{T} Π(Y − X^{(n)}) ]   (22)

X^{(n+1)} = Φ ST_{λ_{n}}[ Φ^{T} X̃ ]   (23)

The thresholding strategy has to be adapted: for the inpainting task we need a very large initial threshold, in order to have a very smooth image at the beginning, and then to refine the details progressively. We chose an exponentially decreasing threshold λ_{n}.
5.2 Experiment
We applied this method to simulated Fermi data in which we masked the most luminous sources.
The results are shown in Fig. 5. The MSVST + IUWT + inpainting method (Algorithm 4) interpolates the missing data very well: the masked regions can no longer be seen in the inpainted map, which shows that the diffuse emission component has been correctly reconstructed.
6 Source detection: denoising and background modeling
6.1 Method
In the case of Fermi data, the diffuse gamma-ray emission from the Milky Way, due to the interaction between cosmic rays and interstellar gas and radiation, makes a relatively intense background. We have to extract this background in order to detect point sources. This diffuse interstellar emission can be modelled by a linear combination of gas templates and an inverse Compton map. We can use such a background model and incorporate background removal into our denoising algorithm.
We denote Y the data, B the background we want to remove, and d^{(b)}_{j}[k] the MSVSTS coefficients of B at scale j and position k. We determine the multiresolution support M by comparing |d_{j}[k] − d^{(b)}_{j}[k]| with κσ_{(j)}.
We formulate the reconstruction problem as a convex constrained minimization problem:

min_{X} ||Φ^{T} X||_{1}, subject to X ≥ 0 and (Φ^{T}(X + B))_{j}[k] = (Φ^{T} Y)_{j}[k] for all (j, k) ∈ M.

Then, the reconstruction algorithm scheme becomes:

X̃ = P_{+}[ X^{(n)} + Φ P_{M} Φ^{T}(Y − B − X^{(n)}) ]   (26)

X^{(n+1)} = Φ ST_{λ_{n}}[ Φ^{T} X̃ ]   (27)
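The background-corrected detection step can be sketched as follows (illustrative NumPy code with hypothetical noise levels, a zero stabilized background and a single injected source coefficient).

```python
import numpy as np

def support_with_background(d, d_bg, sigmas, kappa=3.0):
    """Mark (j, k) significant when |d_j[k] - d_j^(b)[k]| >= kappa * sigma_j."""
    return [np.abs(dj - bj) >= kappa * s
            for dj, bj, s in zip(d, d_bg, sigmas)]

rng = np.random.default_rng(2)
sigmas = [1.0, 0.5]
d_bg = [np.zeros(1000), np.zeros(1000)]  # stabilized background coefficients
d = [rng.normal(0, 1.0, 1000), rng.normal(0, 0.5, 1000)]
d[0][10] = 8.0                           # one strong source coefficient
support = support_with_background(d, d_bg, sigmas)
print(support[0][10], sum(int(m.sum()) for m in support))
```

Only coefficients that deviate significantly from the background model enter the multiresolution support, so the reconstructed image contains the sources without the diffuse component.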
The algorithm is illustrated by the theoretical study in Fig. 6: we denoise Poisson data while separating a single source, a Gaussian of standard deviation 0.01, from a background consisting of the sum of two Gaussians of standard deviation 0.1 and 0.01, respectively.
Figure 6: Theoretical test of the MSVSTS + IUWT denoising + background removal algorithm (Algorithm 5). View of a single HEALPix face. Top left: simulated background, the sum of two Gaussians of standard deviation 0.1 and 0.01, respectively. Top right: simulated source, a Gaussian of standard deviation 0.01. Bottom left: simulated Poisson data. Bottom right: image denoised with MSVSTS + IUWT and background removal.

Like Algorithm 1, Algorithm 5 can be adapted to make multiresolution support adaptation.
6.2 Experiment
We applied Algorithm 5 to simulated Fermi data. To test the efficiency of our method, we detect sources with the SExtractor routine (Bertin & Arnouts 1996) and compare the detected sources with the input source catalog to obtain the number of true and false detections. Results are shown in Figs. 7 and 8. SExtractor was applied on the first wavelet scale of the reconstructed map, with a detection threshold equal to 1, chosen to optimise the number of true detections. SExtractor makes 593 true detections and 71 false detections on the Fermi simulated map restored with Algorithm 2, out of the 1000 sources of the simulation. On noisy data, many fluctuations due to Poisson noise are detected as sources by SExtractor, which leads to a large number of false detections (more than 2000 in the case of Fermi data).
Figure 7: Top left: simulated background model. Top right: simulated gamma-ray sources. Middle left: simulated Fermi data with Poisson noise. Middle right and bottom: reconstructed gamma-ray sources with MSVSTS + IUWT + background removal (Algorithm 5) for two values of the detection threshold. Pictures are on a logarithmic scale.

Figure 8: View of a single HEALPix face from the results of Fig. 7. Top left: simulated background model. Top right: simulated gamma-ray sources. Middle left: simulated Fermi data with Poisson noise. Middle right and bottom: reconstructed gamma-ray sources with MSVSTS + IUWT + background removal (Algorithm 5) for two values of the detection threshold. Pictures are on a logarithmic scale.

6.2.1 Sensitivity to model errors
As it is difficult to model the background precisely, it is important to study the sensitivity of the method to model errors. We add a stationary Gaussian noise to the background model, we compute the MSVSTS + IUWT with threshold on the simulated Fermi Poisson data with extraction of the noisy background, and we study the percent of true and false detections with respect to the total number of sources of the simulation and the signalnoise ratio ( ) versus the standard deviation of the Gaussian perturbation. Table 3 shows that, when the standard deviation of the noise on the background model becomes of the same range as the mean of the Poisson intensity distribution ( ), the number of false detections increases, the number of true detections decreases and the signal noise ratio decreases. While the perturbation is not too strong (standard deviation <10), the effect of the model error remains low.
Table 3: Percentage of true and false detections and signal-to-noise ratio versus the standard deviation of the Gaussian noise on the background model.
7 Conclusion
This paper presented new methods for the restoration of spherical data with noise following a Poisson distribution. A denoising method was proposed, based on a variance stabilization scheme and multiscale transforms on the sphere; experiments have shown that it is very efficient for Fermi data denoising. Two spherical multiscale transforms, wavelets and curvelets, were used. We then proposed an extension of the denoising method to take missing data into account, and showed that this inpainting method can be a useful tool to estimate the diffuse emission. Finally, we introduced a new denoising method on the sphere which takes a background model into account. The simulations have shown that it is relatively robust to errors in the model, and can therefore be used for Fermi diffuse background modeling and source detection.
Acknowledgements. This work was partially supported by the European Research Council grant ERC-228261.
References
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2009, ApJS, 183, 46
Abrial, P., Moudden, Y., Starck, J.-L., et al. 2007, J. Fourier Anal. Appl., 13, 729
Abrial, P., Moudden, Y., Starck, J.-L., et al. 2008, Statistical Methodology, 5, 289
Anscombe, F. 1948, Biometrika, 35, 246
Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009, ApJ, 697, 1071
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393
Candès, E., & Donoho, D. 1999, Phil. Trans. R. Soc. Lond. A, 357, 2495
Donoho, D. 1993, Proc. Symp. Appl. Math., 47, 173
Fisz, M. 1955, Colloq. Math., 3, 138
Fryzlewicz, P., & Nason, G. 2004, J. Comp. Graph. Stat., 13, 621
Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759
Hartman, R. C., Bertsch, D. L., Bloom, S. D., et al. 1999, VizieR Online Data Catalog, 212, 30079
Kolaczyk, E. 1999, J. Amer. Stat. Assoc., 94, 920
Lefkimmiatis, S., Maragos, P., & Papandreou, G. 2009, IEEE Trans. Image Processing, 20, 20
Myers, J. 2009, LAT Background Models, http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html
Myers, J. 2010, LAT 1-year Point Source Catalog, http://fermi.gsfc.nasa.gov/ssc/data/access/lat/1yr_catalog/
Nowak, R., & Kolaczyk, E. 2000, IEEE Trans. Inf. Theory, 45, 1811
Starck, J.-L., Fadili, J. M., Digel, S., Zhang, B., & Chiang, J. 2009, A&A, 504, 641
Starck, J.-L., Elad, M., & Donoho, D. 2004, Advances in Imaging and Electron Physics, 132
Starck, J.-L., Moudden, Y., Abrial, P., & Nguyen, M. 2006, A&A, 446, 1191
Timmerman, K., & Nowak, R. 1999, IEEE Trans. Inf. Theory, 45, 846
Yamada, I. 2001, in Inherently Parallel Algorithms in Feasibility and Optimization and their Applications (Elsevier), 473
Zhang, B., Fadili, J., & Starck, J.-L. 2008, IEEE Trans. Image Processing, 17, 1093
Copyright ESO 2010