A&A, Volume 635, March 2020
Article Number: A194
Number of page(s): 14
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201937001
Published online: 02 April 2020
ORIGIN: Blind detection of faint emission line galaxies in MUSE datacubes
^{1} Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, Bd de l’Observatoire, CS 34229, 06304 Nice Cedex 4, France
e-mail: david.mary@unice.fr
^{2} Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR 5574, 69230 Saint-Genis-Laval, France
Received: 28 October 2019
Accepted: 16 January 2020
Context. One of the major science cases of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph is the detection of Lyman-alpha emitters at high redshifts. The ongoing and planned deep field observations will allow for a large sample of these sources. An efficient tool to perform blind detection of faint emitters in MUSE datacubes is a prerequisite of such an endeavor.
Aims. Several line detection algorithms exist, but their performance on the deepest MUSE exposures is hard to quantify, in particular with respect to their actual false detection rate, or purity. The aim of this work is to design and validate an algorithm that efficiently detects faint spatial-spectral emission signatures, while allowing for a stable false detection rate over the data cube and providing at the same time an automated and reliable estimation of the purity.
Methods. The algorithm implements (i) a nuisance removal part based on a continuum subtraction combining a discrete cosine transform and an iterative principal component analysis; (ii) a detection part based on the local maxima of generalized likelihood ratio test statistics obtained for a set of spatial-spectral profiles of emission lines; and (iii) a purity estimation part, where the proportion of true emission lines is estimated from the data itself: the distribution of the local maxima in the “noise only” configuration is estimated from that of the local minima.
Results. Results on simulated data cubes providing ground truth show that the method reaches its aims in terms of purity and completeness. When applied to the deep 30 h exposure MUSE datacube in the Hubble Ultra Deep Field, the algorithm allows for the confirmed detection of 133 intermediate-redshift galaxies and 248 Lyα emitters, including 86 sources with no Hubble Space Telescope counterpart.
Conclusions. The algorithm fulfills its aims in terms of detection power and reliability. It is consequently implemented as a Python package whose code and documentation are available on GitHub and Read the Docs.
Key words: methods: data analysis / techniques: imaging spectroscopy / galaxies: high-redshift / methods: statistical
© D. Mary et al. 2020
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Spectroscopic observations of galaxies at high redshift have recently been revolutionized by the Multi Unit Spectroscopic Explorer (MUSE) instrument, in operation at the VLT (Very Large Telescope) since 2014. MUSE is an adaptive optics assisted integral field spectrograph operating in the visible (Bacon et al. 2010, 2014). With its field of view of 1 × 1 arcmin^{2} sampled at 0.2 arcsec and its simultaneous spectral range of 4800–9300 Å at R ∼ 3000, MUSE produces large hyperspectral datacubes of 383 million voxels, corresponding to about 320 × 320 × 3680 pixels along the spatial (x, y) and spectral (z) axes. Its unique capability of providing three-dimensional (3D) deep field observations has been demonstrated in the early observations of the Hubble Deep Field-South (Bacon et al. 2015) and more recently in the Hubble Ultra Deep Field (HUDF, Bacon et al. 2017) and the CANDELS GOODS-South area (Urrutia et al. 2019).
Thanks to its unrivalled capabilities, MUSE has been able to increase the number of spectroscopic redshifts in these fields by an order of magnitude (see for example Inami et al. 2017). The most spectacular increase is at high redshift (z > 3), where MUSE was able to detect a large number of Lyα emitters. In the deepest exposures (10+ h), MUSE is even able to go beyond the limiting magnitude of the deepest Hubble Space Telescope (HST) exposures. For example in the HUDF, which achieves a 5σ depth of 29.5 in the F775W filter, MUSE was able to detect Lyα emitters without an HST counterpart (Bacon et al. 2017; Maseda et al. 2018). These observations have led to a breakthrough in our understanding of the high redshift Universe, which includes, for example, the discovery of Lyα emission from the circumgalactic medium around individual galaxies (Wisotzki et al. 2016, 2018; Leclercq et al. 2017) or the role of the low mass galaxies and the evolution of the faint-end Lyα luminosity function (Drake et al. 2017).
Building a large sample of low luminosity Lyα emitters at high redshift is an important objective for the near future, with the deep field observations currently being executed or planned in the context of the MUSE guaranteed time observations (GTO). The availability of an efficient tool to perform blind detection of faint emitters in MUSE datacubes is a prerequisite of such an endeavor.
Several tools have already been developed to perform blind searches of faint emitters in MUSE datacubes, such as MUSELET, a SExtractor-based method available in MPDAF (Piqueras et al. 2017); LSDCAT, a matched filtering method (Herenz & Wisotzki 2017); or SELFI, a Bayesian method (Meillier et al. 2016). These tools have been used successfully so far, for instance LSDCAT in the context of the MUSE-Wide survey (Urrutia et al. 2019) and MUSELET for the analysis of lensing clusters (e.g., Lagattuta et al. 2019). However, none of these methods currently allows for a reliable estimate of the proportion of false discoveries (or purity) actually present in the list of detected sources. Their actual performance on the deepest MUSE exposures, for which no ground truth is available, is consequently hard to quantify.
Furthermore, from our experience in the blind search of emitters in the MUSE deep fields, we have learned that, when tuned to be efficient for the faintest emitters, every detection method becomes overly sensitive to any residuals left by the imperfect continuum subtraction of bright sources and by the data reduction pipeline (e.g., instrumental signatures or sky subtraction residuals). This leads to a global inflation of the false detections, at levels that are unpredictable and fluctuate over the datacube. This effect requires either limiting the detection power by setting a threshold high enough to stay on the “safe side”, or spending a significant amount of human-expert time examining the long list of potential discoveries proposed by the algorithm.
In this context, we have developed an automated method, called ORIGIN, that addresses these methodological and computational challenges. In this paper, we present the algorithm in detail. An overview is given in Sect. 2 and a step-by-step description in Sect. 3. The application of ORIGIN to the deep 30 h exposure MUSE datacube in the HUDF, called udf10, is presented in Sect. 4. Mathematical complements, the implementation of the method into a publicly released software package and the parameter values used for the data cube udf10 are given in the Appendices. Conclusions and possible improvements are listed in the last section.
2. Overview
2.1. Notations
In the following, we denote by N_{x}, N_{y} and N_{λ} the two spatial and the spectral dimensions of the data cube. (For udf10 this corresponds to 322 × 323 × 3681 ≈ 383 million voxels.) Bold lowercase letters such as x denote vectors and bold capital letters such as X denote matrices. We also use the following notations:
– s_{i} = [s_{1}, ⋯, s_{Nλ}]^{⊤} is one spectrum in the data cube, with i ∈ {1, ⋯, N_{x} × N_{y}}.
– A collection of spectra {s_{i}} from a data cube can be stored in a matrix whose columns are the spectra. For the whole data cube, this matrix is noted S ≔ [s_{1}, ⋯, s_{Nx × Ny}].
– Σ_{i} is the covariance matrix of spectrum s_{i}. It is provided by the data reduction pipeline.
– Whitened (or standardized) vectors and matrices are denoted with a tilde. For instance: s̃ ≔ Σ^{−1/2} s.
– 0_{Nx, Ny, Nλ} and 1_{Nx, Ny} represent respectively arrays of zeros and ones of size N_{x} × N_{y} × N_{λ} and N_{x} × N_{y}.
– The symbols ⊗ and ⊘ represent respectively convolution and pointwise division.
– For a cube stored in a matrix A, the notations A(⋅, ⋅, k) and A(i, j, ⋅) represent respectively the image (“slice”) of the cube in the kth wavelength band and the spectrum of the cube at spatial position (i, j).
For simplicity, we often drop index i when the described processing is applied sequentially to all spectra.
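As an illustration of the tilde notation, whitening reduces to a per-channel division when the covariance is diagonal, which is the case when the pipeline delivers one noise variance per voxel. The sketch below is illustrative only; `whiten` is our name, not part of the ORIGIN package.

```python
import numpy as np

def whiten(s, var):
    """Return s_tilde = Sigma^{-1/2} s, assuming a diagonal covariance
    whose diagonal holds the per-channel noise variances `var`."""
    return s / np.sqrt(var)

# toy spectrum with channel-dependent noise
s = np.array([2.0, 2.0, 2.0])
var = np.array([4.0, 1.0, 0.25])
print(whiten(s, var))  # [1. 2. 4.]
```

After whitening, every channel has unit noise variance, which is what allows the later test statistics to assume n ∼ 𝒩(0, I).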
2.2. Objectives
The detection algorithm ORIGIN is aimed at detecting point-like or weakly resolved emission line sources with emission lines at unknown redshifts. The algorithm was designed with a threefold objective: (1) be as powerful as possible for detecting faint emission lines; (2) be robust to local variations of the data statistics, caused for instance by bright, extended sources, residual instrumental artifacts or different numbers of coadded exposures; (3) provide a list of detected sources that achieves a target purity, this target purity being specified by the user. These objectives led us to a detection approach in five main stages, outlined below and described step by step in Sect. 3.
2.3. Overview
The flowchart in Fig. 1 shows an overview of the algorithm. We give here a brief summary of each step (purple boxes), a detailed description of which can be found in the Sections indicated in these boxes.
Fig. 1. Overview of the ORIGIN algorithm. A session example is given in Appendix B and a list of the main parameters in Appendix C. 

2.3.1. Nuisance removal and segmentation
Spatial and spectral components that are not emission lines in the datacube S (e.g., bright sources, diffuse sources, instrumental artifacts) can drastically increase the false detection rate. The first step is thus to detect and remove such sources, called “nuisances” below. Regions of the field that are free from such sources are called “background”. Nuisance features are removed by combining a continuum removal (using a discrete cosine transform, DCT) and an iterative principal component analysis (PCA). The estimated continuum (C) and residual (R) data cubes from the DCT are used to perform a spatial segmentation. This provides a label (nuisance or background) for each spectrum. These labels are stored in a matrix (L) and are used by the PCA, which acts differently on background and nuisance spectra.
2.3.2. Test statistics
In the faint residual datacube (F), the point is now to capture the signatures of line emissions. These signatures essentially take the form of “3D blobs” corresponding to the convolution of a spectral line profile by the spatial PSF of the instrument. Several matched filtering (Herenz & Wisotzki 2017) or Generalized Likelihood Ratio (GLR) approaches (Paris et al. 2013; Suleiman et al. 2013, 2014) have been devised for that purpose. Such approaches lead to filtering the data cube with a library of possible signatures, built as spectral line profiles spread spatially by the PSF, and to normalizing the result.
The filtering and normalization process considered here is also derived from a GLR approach, but it is robust to an unknown residual continuum (see Sect. 3.2). It produces two data cubes (T^{+} and T^{−}), which correspond respectively to the intermediate test statistics obtained when looking for emission and absorption lines. When present in the data, an emission line tends to locally increase the values in T^{+}, with a local maximum close to the actual position (in x, y and z coordinates) of the line. These local maxima are used as final test statistics for line detection: each local maximum above a given threshold is identified as a line detection. It is important to note that, for simplicity, T^{−} is computed so that an absorption also appears as a local maximum in T^{−}; the final test statistics are thus obtained as the local maxima (called M^{+} and M^{−}) of T^{+} and T^{−}.
2.3.3. Purity estimation
Evaluating the purity of the detection results, that is, the fraction of true sources in the total list of detected sources, requires estimating the number of local maxima that would be larger than the threshold “by chance”, that is, in the absence of any emission line. The complexity of the data prevents us from looking for a single, reliable analytical expression of this distribution. However, the size of the data cube and the nature of our targets (emission lines) allow for the estimation of this distribution from the data – an approach advocated for instance by Walter et al. (2016) and Bacher et al. (2017). When the data contain only noise and under mild assumptions on this noise, the statistics obtained when looking for emission lines are the same as those obtained when looking for absorption lines: the local maxima M^{+} and M^{−} should have the same distribution. Hence, the number of local maxima of M^{+} that should be above any given threshold γ under the null hypothesis can be estimated from the number of local maxima found above this threshold in M^{−}. This provides an estimate of the number of false discoveries that should be expected for any threshold, and hence of the purity. In practice, this estimation is done for a grid of threshold values. The threshold corresponding to the purity desired by the user is identified, and the emission lines correspond to the local maxima of M^{+} exceeding this threshold.
2.3.4. Detection of bright emission lines
As explained in Sect. 2.3.1, the iterative PCA aims at removing all features that do not correspond to faint emission lines. This means that bright emission lines – the most easily detectable ones – can be removed by the PCA step and thus be missed by the detection process. It is consequently necessary to detect such lines before the PCA, in the DCT residual R. The procedure for detecting these lines and controlling the purity of this detection step strictly mirrors what is done for faint lines, with local maxima computed directly from (whitened versions of) R and −R instead of F and −F.
2.3.5. Line merging and source extraction
The detected bright and faint emission lines are finally merged into sources, for which several pieces of information (refined position, spectral profile and total flux) are computed and stored in the final source catalog (cf. Sect. 3.5).
3. Stepbystep method description
3.1. Nuisance removal and spatial segmentation
The strategy of ORIGIN is to track and suppress nuisance sources while preserving the targets, the line emitters. The test statistics computed from the residuals still have to be robust against unknown fluctuating flux variations under the null hypothesis, but only moderately so if the nuisance removal is efficient (cf. Sect. 3.2).
The removal of nuisance sources is performed spectrum-wise in two steps explained below: DCT and iterative PCA. The first stage, the DCT (a systematic and fast procedure), quickly removes energetic and smooth fluctuations. In contrast, the version of the iterative PCA designed for this problem can adaptively capture signatures that are much fainter and possibly very complex in shape, but it is computationally heavy. This combination makes the overall nuisance removal process efficient and tractable in a reasonable time.
3.1.1. Nuisance removal with DCT
Each spectrum s is modeled as

s = Dα + ϵ, (1)

where D is a partial DCT matrix of size N_{λ} × N_{DCT}, α is an N_{DCT} × 1 vector of decomposition coefficients, Dα is the unknown continuum and ϵ is an additive Gaussian noise with covariance matrix Σ. Maximum likelihood estimation of the continuum of spectrum s leads to a weighted least squares problem, for which the estimated coefficients are obtained by (cf. Sect. 2.1 for the ^{∼} notation):

α̂ = (D̃^{⊤} D̃)^{−1} D̃^{⊤} s̃. (2)

The estimated continuum ĉ and residual r are obtained by

ĉ = D α̂ and r = s − ĉ. (3)
These spectra are collected in continuum and residual data cubes named C and R, respectively. The parameter N_{DCT} controls the number of DCT basis vectors used in the continuum estimation. A value that is too small leaves large-scale oscillations in the residual spectrum, while a value that is too large tends to capture spectral features with small extension, like emission lines, which then become more difficult to detect in the residual. A satisfactory compromise was found here^{1} with N_{DCT} = 10. This value leaves the lines almost intact: typically, the energy of the line in the DCT residual remains close to 100% until N_{DCT} reaches several hundred, depending on the line width. The continuum subtraction with N_{DCT} = 10 is not perfect, but a large part of the work is done: for bright objects, 99% of the continuum’s energy is typically contained in the subspace spanned by the first 10 DCT modes and decreases very slightly afterward. The PCA does the rest.
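For a diagonal covariance, the weighted least squares fit of Eqs. (1)–(3) reduces to an ordinary least squares problem on whitened quantities. The following sketch illustrates this with an explicitly built DCT-II basis; `dct_basis` and `dct_continuum` are illustrative names, not the package API.

```python
import numpy as np

def dct_basis(n_lambda, n_dct):
    """First n_dct orthonormal DCT-II basis vectors (columns of D)."""
    k = np.arange(n_dct)[None, :]
    t = np.arange(n_lambda)[:, None]
    D = np.cos(np.pi * (t + 0.5) * k / n_lambda)
    D /= np.linalg.norm(D, axis=0)  # normalize each column
    return D

def dct_continuum(s, var, n_dct=10):
    """Weighted least-squares continuum estimate and residual.

    `var` holds the per-channel noise variances (diagonal covariance
    assumed); whitening both D and s turns Eq. (2) into plain lstsq."""
    D = dct_basis(len(s), n_dct)
    w = 1.0 / np.sqrt(var)
    alpha, *_ = np.linalg.lstsq(D * w[:, None], s * w, rcond=None)
    cont = D @ alpha
    return cont, s - cont
```

A narrow emission line is nearly orthogonal to the first 10 DCT modes, so it survives in the residual while the smooth continuum is removed.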
Before describing the PCA we present a segmentation step, which is required to implement the PCA.
3.1.2. Spatial segmentation
The purpose of spatial segmentation is to locate the regions where the data spectra contain nuisance sources and those where they are free from them (in which case they are labeled as “background”). This segmentation is necessary to further remove the nuisances. As mentioned before, such spectra can have residuals from continua or bright emission lines, or correspond to regions exhibiting a particular statistical behavior, caused for instance by the presence of instrumental artifacts or residuals from sky subtraction. The segmentation step relies both on the information extracted from the continuum cube and on the residual cube (which is whitened in order to account for the spectral dependence of the noise variance). In C̃, an energetic spectrum indicates the presence of a continuum. In R̃, a spectrum containing residual signatures from bright sources or artifacts tends to have higher energy than pure noise (background) spectra. For these reasons, we found that the following two test statistics are both efficient and complementary to locate nuisance sources:

t_{1}(x) ≔ log_{10} ‖x‖², t_{2}(x) ≔ ‖x‖². (4)
For a spectrum under test x, the segmentation tests are both of the form

t(x) ≷ γ, deciding “nuisance” when t(x) > γ, (5)
where t is either t_{1} or t_{2} in (4), x is either c̃ or r̃, and γ is a threshold allowing for the tuning of the sensitivity of the tests^{2}. In the data, the spectrum at spatial coordinates (i, j) in the field is considered to contain nuisances if at least one of the two tests applied to the corresponding spectra c̃ or r̃ leads to a result above the test threshold. The log_{10} function in the definition of t_{1} in (4) is there to numerically stabilize the test statistic, as the squared norm of the estimated continuum may vary by several orders of magnitude within a data cube.
In data like udf10, both distributions appear to be right-skewed, with a roughly Gaussian left tail and a much heavier, possibly irregular right tail caused by nuisance sources. Hence, the distribution of the test statistics that would be obtained with pure noise (without nuisances) is estimated by means of the left part of the empirical distribution, for which a Gaussian approximation provides a reasonable fit (see Fig. 3). This leads to an estimated distribution of the test statistics under the noise-only hypothesis which is Gaussian, with estimated mean μ̂ and estimated standard deviation σ̂. For the purpose of segmentation (or classification), the user chooses a level of error in the form of an error probability, called P_{FA} below. In order to tune the tests (5) at this target error rate, one needs to find the threshold value γ such that Pr(t > γ | noise only) = P_{FA}.
The higher the value of P_{FA}, the lower the value of γ and the larger the number of spectra wrongly classified as containing nuisances. If we denote by Φ the cumulative distribution function (CDF) of a standard normal variable, the threshold γ for a test statistic t is given by

γ(t) = μ̂_{t} + σ̂_{t} Φ^{−1}(1 − P_{FA}), (6)
where again t is either t_{1} or t_{2}. Two segmented maps are obtained by thresholding the maps of the N_{x} × N_{y} test statistics at the values γ(t_{1}) and γ(t_{2}) defined above. The nuisance regions of the two maps are merged into a single segmentation map.
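The threshold of Eq. (6) can be sketched as follows, estimating μ̂ as the mode of a histogram and σ̂ from the spread of the samples below it (for a Gaussian, the mean squared deviation of the samples below the mean equals σ²). These estimators are our illustrative choice; the ones used by ORIGIN may differ.

```python
import numpy as np
from scipy.stats import norm

def left_tail_threshold(t, pfa, bins=100):
    """Threshold gamma such that P(t > gamma | noise only) ~= pfa,
    with the null distribution fitted as a Gaussian on the left part
    of the empirical distribution (assumed free of nuisances)."""
    hist, edges = np.histogram(t, bins=bins)
    i = np.argmax(hist)
    mu = 0.5 * (edges[i] + edges[i + 1])          # mode as location estimate
    left = t[t <= mu]
    sigma = np.sqrt(np.mean((left - mu) ** 2))    # half-Gaussian spread
    return mu + sigma * norm.ppf(1.0 - pfa)
```

With strongly outlying test statistics, the histogram range should be clipped before locating the mode; the sketch omits this for brevity.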
Because MUSE data cubes are large, a PCA working on the full data cube would be suboptimal. Indeed, the repeated computation of the eigenvectors of a matrix composed of the whole cube would be computationally prohibitive. Moreover, the aim of the PCA is to capture spectral features corresponding to nuisances. Such nuisances have features that are locally similar, so a PCA working on patches smaller than the whole cube is also more efficient at removing them. When building these patches (whose size is typically one tenth of the whole data cube for udf10), care must be taken that regions identified as nuisances in the previous step are not split over two such patches. For udf10 the segmentation algorithm starts with N_{Z} = 9 rectangular patches. The nuisance zones intersecting several such patches are iteratively identified and attached to one patch, under the constraint that the final patches have a surface within a target range. The minimal and maximal surfaces allowed for the patches of udf10 are respectively S_{min} = 80^{2} and S_{max} = 120^{2} pixels^{2}. The result of the nuisance-versus-background segmentation for udf10, after merging the two maps based on t_{1} and t_{2}, is shown in Fig. 2. The figure also shows the segmentation into N_{Z} = 9 large patches.
Fig. 2. Signal-to-noise ratio (S/N) image (sum over the wavelength channels of the raw data cube divided by the standard deviation of the noise in each voxel) with, overlaid in black, the zones classified as nuisances by tests t_{1} and t_{2}. The large patches considered for the PCA removal are shown in color.

3.1.3. Nuisance removal with iterative PCA
The goal of this step is to iteratively locate and remove from the residual data cube the nuisance sources, that is, any signal that is not the signature of a faint, spatially unresolved emission line. The algorithm cleans the cube one patch at a time. In order to leave in the cleaned data cube a structure compatible with noise, the nuisances are constrained to lie outside a “noise subspace”, which is defined by the sample average of a small fraction (F_{b}) of the spectra flagged as “background” in the patch.
The algorithm works as follows (see the pseudocode in Algorithm 1). At each iteration, the algorithm classifies the spectra of the patch under consideration (Z_{i}) as background or nuisance and stores them in matrices called B and N, respectively. This classification is done by means of the test t_{2} in (4)–(5), applied to all the spectra in the patch Z_{i}. For this test a value of P_{FA} is chosen by the user, and the corresponding test threshold is computed as in (6). In practice, for relatively empty fields such as udf10, values of these error probabilities in the range [0.1, 0.2] typically provide a good tradeoff, cleaning nuisances without impacting Lyα emission lines. Figure 3 shows in black the value of the threshold for four patches Z_{i} of the udf10 data cube.
Fig. 3. Distribution of the test statistic t_{2} for four patches of the residual data cube (blue), Gaussian approximation (red) and thresholds γ(t_{2}) (in the titles, and black stems) above which all spectra of the patch are classified as nuisance. The plots for the five other patches are very similar and not shown.

From the fraction (F_{b}) of the spectra that show the lowest test statistics, a mean background is estimated and all the nuisance spectra in N are orthogonalized with respect to this background. This results in a matrix of orthogonalized spectra, of which the first eigenvector is computed. The contribution of this vector to all the spectra of the patch is removed. If some of the resulting residual spectra are still classified as nuisances, the process is repeated until no spectrum of the patch is classified as nuisance.
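The iteration above can be sketched as follows for a single patch. The names (`greedy_pca_clean`), the energy test and the precomputed threshold are illustrative simplifications; ORIGIN recomputes the t_{2} threshold as in Eq. (6).

```python
import numpy as np

def greedy_pca_clean(Z, test, gamma, f_b=0.1, max_iter=100):
    """One-patch sketch of the iterative nuisance removal.

    Z: (n_lambda, n_spec) whitened spectra of one patch.
    test: callable giving one test statistic per spectrum (e.g. t2).
    gamma: threshold above which a spectrum is flagged as nuisance.
    f_b: fraction of lowest-scoring spectra defining the mean background.
    """
    Z = Z.copy()
    for _ in range(max_iter):
        scores = test(Z)
        nuis = scores > gamma
        if not nuis.any():
            break  # every spectrum is classified as background
        # mean background from the f_b fraction with lowest scores
        n_bkg = max(1, int(f_b * Z.shape[1]))
        bkg = Z[:, np.argsort(scores)[:n_bkg]].mean(axis=1)
        # orthogonalize the nuisance spectra w.r.t. the mean background
        b = bkg / np.linalg.norm(bkg)
        N = Z[:, nuis] - np.outer(b, b @ Z[:, nuis])
        # first eigenvector of the orthogonalized nuisance set
        u, _, _ = np.linalg.svd(N, full_matrices=False)
        v = u[:, 0]
        # remove its contribution from every spectrum of the patch
        Z -= np.outer(v, v @ Z)
    return Z
```

Each pass removes one principal nuisance direction, so patches dominated by a few bright residuals converge in a handful of iterations.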
Figure 4 shows how many times each spectrum was classified as nuisance during the PCA cleaning stage. Compared with the S/N image (gray and black zones in Fig. 2), this iteration map clearly highlights the regions where bright sources and artifacts are present (e.g., the horizontal lines close to the borders in the top, and the bottom right corner). Note also that the process is relatively fast, with fewer than 40 iterations required in each patch to converge to a cleaned data cube.
Fig. 4. Iteration map of the greedy PCA for udf10. The map shows how many times each spectrum was selected in the nuisance matrix N before all spectra were considered “cleaned” (that is, were classified as background by test t_{2}).

3.2. Test statistics
3.2.1. Generalized Likelihood Ratio
For all positions (x, y, z) in the PCA residual data cube F (for “faint”), the algorithm considers subcubes f(x, y, z) of F (called f for short below), centered on position (x, y, z) and having the size of the considered target signatures. For each such subcube we formulate a binary hypothesis test between
– the null hypothesis: there is no emission line centred at position (x, y, z),
– the alternative hypothesis: there is one emission line d_{i} centred at position (x, y, z), where d_{i} is a “3D” (spatial-spectral) signature among a set of N_{s} possible line signatures.
The statistical model retained to describe these two hypotheses is important, as it should capture as reliably as possible the statistical distribution of the data. This distribution results from a long chain of processing, from the data reduction pipeline to the PCA described in the previous step, and may thus be very complex. On the other hand, an overly sophisticated model may lead to intractable processing because it requires, in a GLR approach, maximum likelihood estimation of the corresponding unknown parameters for 380 million positions in the datacube. We compared the performance of several statistical models and opted for the following, which allows for a good compromise between computational efficiency, detection power and robustness to remaining faint artifacts:

ℋ_{0} : f = a1 + n,
ℋ_{1} : f = a1 + b d_{i} + n, (7)
where n ∼ 𝒩(0, I) is the noise, assumed to be zero-mean Gaussian with identity covariance matrix under both hypotheses. The term a1, with a ∈ ℝ and 1 a vectorized cube of ones, models a possible residual and unknown nuisance flux, considered spatially and spectrally constant in the subcube f. The term b d_{i}, with b > 0 and i ∈ {1, ⋯, N_{s}}, corresponds to one of the possible emission signatures. Each signature is a spectral profile of a given width, spatially spread in each channel by the PSF in that channel. The considered signatures define the size of f. Spatially, they have the size of the PSF (25 × 25 pixels for udf10). Spectrally, N_{s} sizes are considered. For the presented udf10 analysis we used N_{s} = 3 Gaussian profiles with FWHM of 2, 6.7 and 12 spectral channels, covering respectively 5, 20 and 31 spectral channels in total.
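A signature d_{i} can be sketched as the outer product of a Gaussian spectral profile and a PSF image. For simplicity the sketch below uses a single flat placeholder "PSF" that is the same in every channel, whereas ORIGIN uses the channel-dependent instrument PSF; `make_signature` is an illustrative name.

```python
import numpy as np

def make_signature(fwhm, n_spec, psf):
    """Unit-norm 3D line signature: Gaussian spectral profile of the
    given FWHM (in channels) spread spatially by a (here channel-
    independent) PSF image. Returns an (n_spec, ny, nx) cube."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    z = np.arange(n_spec) - (n_spec - 1) / 2.0
    profile = np.exp(-0.5 * (z / sigma) ** 2)
    d = profile[:, None, None] * psf[None, :, :]
    return d / np.linalg.norm(d)

# the three profiles quoted for udf10: FWHM of 2, 6.7 and 12 channels,
# covering 5, 20 and 31 channels respectively (flat 5x5 placeholder PSF)
psf = np.ones((5, 5)) / 25.0
signatures = [make_signature(f, n, psf)
              for f, n in [(2, 5), (6.7, 20), (12, 31)]]
```

The unit normalization makes the matched-filter scores of the different profiles directly comparable.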
For binary hypothesis testing involving unknown parameters, the GLR approach (Scharf & Friedlander 1994; Kay 1998) forms the test statistics by plugging into the likelihood ratio the maximum likelihood estimates of these parameters (namely, a under ℋ_{0} and {a, b, i} under ℋ_{1}). This leads for each subcube f to a test statistic in the form of a matched filter (see Appendix A.1):

T^{+}(x, y, z) = max_{i ∈ {1, ⋯, Ns}} d̄_{i}^{⊤} f ∕ ‖d̄_{i}‖, (8)
where the superscript ^{+} refers to positive (emission) lines and d̄_{i} denotes the spatial-spectral signature d_{i} from which the mean has been subtracted. Equation (8) can be efficiently computed for all positions (x, y, z) using Algorithm 2 (computation of the test statistics). The first main loop (rows 2 to 7) processes the cube channel by channel. The second main loop (rows 8 to 26) processes the result of the first loop profile by profile, with an embedded loop processing spectrum by spectrum (rows 11 to 16). Comparisons of the scores obtained for each profile (rows 19 to 25) implement the max and min operators required for T^{+} and T^{−} (cf. Eqs. (8) above and (10) below).
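Under the assumptions above (unit-variance whitened noise, mean-subtracted signatures), the test statistics amount to correlating F with each normalized d̄_{i} and taking the maximum over profiles. The sketch below uses SciPy's correlation instead of the channel-wise loops of Algorithm 2; `glr_cubes` is an illustrative name.

```python
import numpy as np
from scipy.ndimage import correlate

def glr_cubes(F, signatures):
    """Matched-filter sketch of the emission (T+) and absorption (T-)
    test statistic cubes, from the residual cube F (nz, ny, nx) and a
    list of 3D signatures d_i."""
    scores = []
    for d in signatures:
        d_bar = d - d.mean()                 # robustness to a constant offset a
        d_bar = d_bar / np.linalg.norm(d_bar)
        scores.append(correlate(F, d_bar, mode="constant", cval=0.0))
    S = np.stack(scores)                     # (N_s, nz, ny, nx)
    return S.max(axis=0), (-S).max(axis=0)   # T+, T-
```

Computing T^{−} as the maximum of the negated scores is what makes an absorption feature show up as a local maximum, as required in Sect. 2.3.2.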
3.2.2. Local extrema
The GLR test statistics result in a cube of test statistic values, T^{+}. When a line emission is present at a position p_{0} ≔ (x_{0}, y_{0}, z_{0}), the values of T^{+} tend to increase statistically in the vicinity of p_{0}, with a local maximum at (or near) p_{0}. For this reason, detection approaches based on local maxima are often advocated, possibly after a matched-filtering step, when “blobs” of moderate extension have to be detected in random fields (e.g., Sobey 1992; Bardeen et al. 1986; Vio & Andreani 2016; Vio et al. 2017; Schwartzman & Telschow 2018). We opt for this approach and consider in the following the cube obtained by computing the three-dimensional local maxima of T^{+}, noted M^{+} and defined by:

M^{+}(x, y, z) ≔ T^{+}(x, y, z) if T^{+}(x, y, z) ≥ max_{p ∈ v} T^{+}(p), and 0 otherwise, (9)
where v collects the 26 voxels connected by faces, edges or corners to voxel (x, y, z).
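Equation (9) can be sketched with a 3 × 3 × 3 maximum filter, which compares each voxel to its 26 neighbours; the function name is ours.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(T):
    """Keep T where it is >= all 26 neighbours (Eq. 9), zero elsewhere.

    The size-3 filter window includes the voxel itself, so the
    comparison T >= filtered(T) is equivalent to comparing against
    the neighbourhood maximum."""
    is_max = T >= maximum_filter(T, size=3, mode="constant", cval=-np.inf)
    return np.where(is_max, T, 0.0)
```

Padding with −∞ lets local maxima on the cube borders be kept as well.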
An example of the resulting cube of test statistics is represented in Fig. 5, right panel. This panel shows in gray scale the maximum over the spectral index of T^{+} obtained for a region of udf10. The corresponding local maxima (nonzero values of M^{+}) are shown in red. In this panel, T^{+} and M^{+} reveal very clearly the regions where spatially unresolved emission lines are likely to be present (darker blobs of the size of the PSF, and red points). In comparison, the left panel shows the GLR cube obtained when applying Algorithm 2 after a preprocessing based solely on median filtering (instead of the DCT+PCA processing described above). Clearly, a less efficient nuisance removal leads, in the vicinity of bright sources for instance, to wild and undesired variations of the test statistics. Such a loss of efficiency in the preprocessing stage would have to be compensated for by a possibly time-consuming postprocessing stage.
Fig. 5. Comparison of two different nuisance removal algorithms on the 100 × 100 top-left region of the udf10 field. Left: median filtering with a window of 144 channels. Right: DCT + PCA. Red: local maxima M^{+} larger than three standard deviations above the mean (a threshold of 9.4 for the median filtering and 6.6 for the DCT+PCA). The grayscale ranges from 0 (white) to 16 (black).

In order to evaluate how significant the claimed detections are, an important question is that of the distribution of the local maxima under ℋ_{0}. This problem has been studied in applications such as the heights of waves in a stationary sea state (Sobey 1992), the fluctuations of the cosmic background (Bardeen et al. 1986), source detection in radio-interferometric maps (Vio & Andreani 2016; Vio et al. 2017) or brain activity regions in neuroimaging (Schwartzman & Telschow 2018), the random fields in which local maxima are considered being two-dimensional for the first three applications and three-dimensional for the last.
In all these works, the distribution of the local maxima is derived from theoretical results on Gaussian random fields; see in particular Cheng & Schwartzman (2018) for recent results and a review. We opted here for a different approach, however, because the random field T^{+}(x, y, z) is not guaranteed to be Gaussian nor smooth (owing for instance to the maximum over the spectral profiles computed in (8)). Besides, it is not isotropic, and the correlation structure is likely not stationary in the spatial dimension, owing to the varying number of exposures and to telluric spectral lines. While it might be possible to derive an accurate statistical characterization of the local maxima of MUSE data by combining results of Schwartzman & Telschow (2018) for 3D, non-isotropic, non-stationary and possibly non-Gaussian random fields, this approach would deserve a far more involved study. We present and validate here a simpler, empirical approach. This approach combines inference based on local maxima with an estimation of the distribution under ℋ_{0} directly from the data itself, as considered for instance in Walter et al. (2016) and Bacher et al. (2017), and further motivated here by the large dimension of the data cubes. For this, our approach proceeds in two steps.
– Step 1: Compute GLR scores for absorption lines under ℋ_{0}. Suppose that we wish to detect absorption (rather than emission) lines. The model is the same as (7), but this time with b < 0. The GLR leads to test statistics T^{−} defined analogously to T^{+} in (8).
As shown in Appendix A.2, under ℋ_{0}, T^{+} and T^{−} share the same distribution.
– Step 2: Statistics of the local maxima. Since T^{+} and T^{−} share the same distribution under ℋ_{0}, a training set for the statistics of the local maxima M^{+} can be obtained from the local maxima of T^{−}, noted M^{−}, defined similarly to M^{+} in Eq. (9) with T^{−} replacing T^{+}. In practice, the background region in which these statistics are estimated is obtained from the merged segmentation maps produced by tests t_{1} and t_{2} in Eq. (5).
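As an illustrative sketch (not the ORIGIN implementation), the null training set of Step 2 can be emulated on a toy Gaussian cube: the local maxima of the statistic and of its sign-flipped counterpart play the roles of M^{+} and M^{−}. All names, the toy cube size, and the 3×3×3 window are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(t, size=3):
    """Values of the local maxima of a 3D array `t` over a size^3 window."""
    is_max = t == maximum_filter(t, size=size, mode="nearest")
    return t[is_max]

rng = np.random.default_rng(0)
t_plus = rng.standard_normal((20, 20, 50))  # toy stand-in for the GLR cube T+
t_minus = -t_plus                           # sign-flipped statistic (toy T-)

m_plus = local_maxima(t_plus)    # plays the role of M+ (candidate detections)
m_minus = local_maxima(t_minus)  # plays the role of M- (H0 training set)
```

On symmetric noise, the two sets of local maxima have the same distribution, which is the property the procedure relies on.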
3.3. Purity estimation
The purity (or fidelity) is defined as the proportion of true discoveries (called S below) with respect to the total number of discoveries (R). With these notations, the number V := R − S is the number of false discoveries and the purity is
purity := S/R = 1 − V/R, (11)
where V/R is sometimes called the False Discovery Proportion (FDP). We note that when a large number of comparisons is made in multiple testing procedures (as is the case here with millions of local maxima), controlling the false alarm (or family-wise error) rate at low levels drastically increases the test threshold, which results in a substantial loss of power. In such situations it can be more efficient to control instead the False Discovery Rate (FDR, Benjamini & Hochberg 1995), defined as
FDR := E(V/R), (12)
where E(⋅) denotes expectation and, by convention, FDR = 0 if R = 0. Definition (12) shows that procedures aimed at controlling the purity and the FDR are indeed connected.
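For reference, the Benjamini–Hochberg step-up procedure mentioned above can be written in a few lines. This is the textbook procedure operating on ordinary per-test P-values, not the local-maxima variant used here; the function name is ours.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected by the BH step-up rule at FDR q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Largest k such that the k-th smallest p-value is below q*k/m.
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True      # reject the k smallest p-values
    return reject
```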
Coming back to our line detection problem, we wish to find a procedure making it possible to decide which local maxima of M^{+} should be selected so that the purity of the resulting selection is guaranteed at a level prescribed by the user. This amounts to finding the correspondence between a range of thresholds and the resulting purity.
As mentioned above, our choice is to estimate the number of false discoveries V from the data itself, namely from the statistics of the local maxima M^{−}, since their distribution is the same as that of M^{+} when only noise is present. Let us denote by N^{−} the number of voxels in the background region^{3} and by V^{−}(γ) the number of local maxima of M^{−} with values larger than a given threshold γ > 0 in that region. If M^{+} was obtained from a noise process with the same statistics as M^{−}, an estimate of the number of false discoveries obtained by thresholding the full data cube M^{+} (entailing N voxels) at a value γ is
V̂(γ) := (N/N^{−}) V^{−}(γ). (13)
Hence, if we denote by R(γ) the number of local maxima above the threshold in M^{+}, the purity can be estimated as
P̂(γ) := 1 − V̂(γ)/R(γ). (14)
This approach is very similar to the Benjamini & Hochberg procedure for controlling the FDR. The difference is that the probabilities for local maxima to exceed a given value under ℋ_{0} (the P-values) are estimated directly from M^{−} instead of relying on a theoretical distribution of the local maxima. Figure 6 shows that this procedure allows for an efficient control of the purity. The value of the threshold γ^{*} such that P̂(γ^{*}) = P^{*} (with P^{*} the purity chosen by the user) is selected, and M^{+} is thresholded at this value.
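A minimal sketch of this purity-based threshold selection, following Eqs. (13)–(14); the function and variable names are ours, not those of the ORIGIN code, and the estimate is clipped at zero for robustness.

```python
import numpy as np

def estimated_purity(m_plus, m_minus, n_vox, n_vox_bkg, gamma):
    """Estimated purity at threshold gamma, following Eqs. (13)-(14)."""
    r = np.count_nonzero(np.asarray(m_plus) > gamma)       # R(gamma)
    v_bkg = np.count_nonzero(np.asarray(m_minus) > gamma)  # V-(gamma), background
    if r == 0:
        return 1.0
    v_hat = n_vox / n_vox_bkg * v_bkg                      # Eq. (13): rescale to N voxels
    return max(0.0, 1.0 - v_hat / r)                       # Eq. (14), clipped at 0

def select_threshold(m_plus, m_minus, n_vox, n_vox_bkg, target, gammas):
    """Smallest gamma in `gammas` whose estimated purity reaches the target P*."""
    for g in sorted(gammas):
        if estimated_purity(m_plus, m_minus, n_vox, n_vox_bkg, g) >= target:
            return g
    return None
```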
Fig. 6. Comparison of the ORIGIN estimated purity (cf. Eq. (14)) versus the true purity (Eq. (11)) for the udf10 fake datacube. 

3.4. Predetection of bright emission lines
The nuisance removal via the PCA described in Sect. 3.1.3 is aimed at removing any source that is not a faint unresolved emission line. The counterpart of the efficiency of Algorithm 1 is that powerful emission lines are detected by test t_{2} in Algorithm 1, so their contribution is included in matrix N, captured by the PCA, and thus removed from the data cube. It is therefore necessary to make a predetection of bright emission lines. Such lines are easily detectable by a high emission peak in the residual S/N data cube. The detection procedure for bright emission lines mirrors that described in Sect. 3.3, simply replacing M^{+} and M^{−} by the local maxima of the residual S/N data cube and of its opposite. For udf10, the target purity of this stage is set to P^{*} = 0.95.
3.5. Line merging and source extraction
Once the detections are available, we need to group them into sources, where a given source can have multiple lines. This can be a tricky step in regions where bright continuum sources are present because such sources can lead to detections at different spatial positions despite the DCT+PCA (see Fig. 5). To solve this problem we use the information from a segmentation map (that can be provided or computed automatically on the continuum image to identify the regions of bright or extended sources, cf. Sect. 3.1.2) and we adopt a specific method for detections that are in these areas.
First, the detections are merged based on a spatial distance criterion (a parameter called tol_spat). Starting from a given detection, the detections within a distance of tol_spat are merged. Then, looking iteratively at the neighbors of the merged detections, these neighbors are merged into the group if their distance to the seed detection is less than tol_spat voxels, or if their distance on the wavelength axis is less than a second parameter, tol_spec (that is, if a line is detected at almost the same wavelength but a different spatial position). This process is repeated for all detections that are not yet merged.
Then we take all the detections that belong to a given region of the segmentation map, and if there is more than one group of lines from the previous step we compute the distances on the wavelength axis between the groups of lines. If the minimum distance in wavelength is less than tol_spec, the groups are merged.
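The merging idea above can be sketched as follows. This simplified breadth-first grouping with tol_spat and tol_spec is an illustration only, not the exact ORIGIN logic (which additionally uses the segmentation map); names and the linking rule are assumptions.

```python
import numpy as np

def merge_detections(dets, tol_spat=3, tol_spec=20):
    """Group (x, y, z) detections: link two detections if they are within
    tol_spat pixels spatially, or within tol_spec channels spectrally while
    remaining spatially close to the growing group."""
    dets = np.asarray(dets, dtype=float)
    unassigned = set(range(len(dets)))
    groups = []
    while unassigned:
        seed = unassigned.pop()
        group, frontier = [seed], [seed]
        while frontier:                       # breadth-first growth of the group
            cur = frontier.pop()
            for j in list(unassigned):
                d_spat = np.hypot(*(dets[j, :2] - dets[cur, :2]))
                d_spec = abs(dets[j, 2] - dets[cur, 2])
                if d_spat <= tol_spat or (d_spat <= 2 * tol_spat and d_spec <= tol_spec):
                    unassigned.discard(j)
                    group.append(j)
                    frontier.append(j)
        groups.append(sorted(group))
    return groups
```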
Finally, for each line we compute a detection image, obtained by summing the GLR datacube over a window centered on the detection peak wavelength with a width of 2 × FWHM, where FWHM is the width of the spectral template that provides the highest correlation peak. We also compute the corresponding spectrum by a weighted summation over the spatial extent of the line, using the detection image as weights.
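The two per-line products described above can be sketched as follows; array layout, names and the clipping of negative weights are assumptions made for this illustration.

```python
import numpy as np

def line_products(glr_cube, raw_cube, z_peak, fwhm):
    """glr_cube, raw_cube: (nz, ny, nx) arrays; z_peak: peak channel index.
    Returns the detection image (2*FWHM-wide spectral sum of the GLR cube)
    and the spectrum obtained with that image as spatial weights."""
    nz = glr_cube.shape[0]
    half = int(round(fwhm))                       # half-width of the window = FWHM
    z0, z1 = max(0, z_peak - half), min(nz, z_peak + half + 1)
    det_img = glr_cube[z0:z1].sum(axis=0)         # detection image
    w = np.clip(det_img, 0, None)                 # keep nonnegative weights
    if w.sum() > 0:
        w = w / w.sum()
    spectrum = np.tensordot(raw_cube, w, axes=([1, 2], [0, 1]))
    return det_img, spectrum
```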
4. Application to MUSE HUDF datacube
ORIGIN was initially developed for the blind search of Lyα emitters in the MUSE deep exposure of the Hubble Ultra Deep Field (HUDF). A preliminary version of the code was successfully used for the blind exploration of the entire HUDF field of view (Bacon et al. 2017). It resulted in the detection of 692 Lyα emitters (Inami et al. 2017), including 72 not detected in the HST deep broad band images (Bacon et al. 2017; Maseda et al. 2018).
Here we use the latest version of the udf10 MUSE datacube (see Fig. 1 of Bacon et al. 2017), the single 1 arcmin^{2} datacube that achieves a depth of 30 hours. In this field Inami et al. (2017) report the detection of 158 Lyα emitters, including 30 not detected in the HST deep broad band images. Compared to the previous version of the datacube of the same field used in Bacon et al. (2017), the data benefit from an improved data reduction pipeline, resulting in fewer systematics. This version is the one used in the MUSE HUDF data release II (Bacon et al., in prep.). In this section we focus on the use and performance of the algorithm, with an emphasis on the search for Lyα emitters.
4.1. Processing
The released version 1.0 of ORIGIN was used on the MUSE udf10 datacube with the parameters given in Appendix C. While most of the parameters can be used with their default values, a few need more attention: the probability of false alarm for the PCA (Sect. 3.1.3) and the minimum purity allowed (P^{*}, Sect. 3.3). A correct setting of the first parameter ensures that the signal coming from bright continuum sources and the systematics left by the data reduction pipeline are properly removed. When this probability is too low, the test threshold is too high and too few spectra are cleaned, leaving nuisances in the residual data cube. When it is too large, the test threshold is too low and the signal from some emitters can be impacted. After some trials, a value of 10% was used (see an example in Fig. 3). Note that with such a value, 7% of the emitters (the brightest ones) are removed by the PCA process, but they are recovered in the last step of the method (Sect. 3.4).
The second parameter, the target purity P^{*}, sets the detection threshold above which the local maxima of the test statistics are considered as detected lines. A purity value of 80% was selected as a compromise between the number of false and true detections. The impact and reliability of this parameter on the detection performance are discussed later in Sect. 4.3.
We also spent some time on the design of an “optimum” input spectral line dictionary (Sect. 3.2). After some trials we found that a set of three Gaussian profiles with FWHM of 2, 7 and 12 spectral pixels offers both good detection power (completeness) and an affordable computational load. A lower number of profiles degrades the detection power, while a higher number does not increase it but requires more computing power. More sophisticated profiles (e.g., asymmetric line shapes mimicking those generally found in Lyα emitters) were also considered, but were not used given their negligible impact on the performance.
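A sketch of such a spectral dictionary: three unit-norm Gaussian profiles with the FWHM values quoted above. The normalization and the 41-channel support are assumptions for this illustration.

```python
import numpy as np

def gaussian_profile(fwhm, n=41):
    """Unit-norm Gaussian line profile with the given FWHM (in channels)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    x = np.arange(n) - n // 2
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / np.linalg.norm(g)

# The three profiles retained in the paper: FWHM of 2, 7 and 12 channels.
dictionary = [gaussian_profile(f) for f in (2, 7, 12)]
```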
4.2. Results
The algorithm detects 791 emission lines belonging to 446 different sources. The algorithm assigns to each detected emission line a significance level, computed as the maximum purity at which this line can be detected (hence, the significance level of all detected lines is above P^{*}, which is 80% here).
After careful inspection by a set of experts, we confirm 248 Lyα emitters covering a broad range of redshifts (2.8–6.7) and 133 lower redshift galaxies, mostly [O II] emitters but also a few nearby Hα emitters^{4}. Table 1 gives the success rate of redshift assignment for the full sample as a function of the estimated purity. Note that the measured success rate is smaller than the expected purity. This apparent discrepancy is due to the fundamental difference between a line detection and a redshift assignment. The latter is a very complex process involving matching spectral signatures with templates, searching for multiple lines, measuring line shapes and even deblending the source from its environment. Thus, at low S/N it is not surprising that some of the real detections do not lead to a confirmed redshift.
A few representative examples of ORIGIN detections are given in Fig. 7. In the following we discuss each case.
Fig. 7. Examples of ORIGIN detections in udf10. For each detected source, we show the white light image reconstructed from the MUSE datacube (WHITE column); the corresponding HST image in the F775W filter (F775W column; the Hubble ACS broad band filter F775W has an effective wavelength of 7624 Å, close to the central wavelength of MUSE (7000 Å), and its effective band pass of 1299 Å covers a third of the MUSE band pass, 4650–9350 Å); the detection image obtained by averaging the GLR datacube over the detected wavelengths (CORR column); and the narrow band image obtained by averaging the raw datacube over the same wavelength range (NARROW BAND column). Note that the narrow band image is smoothed with a 0.6″ FWHM Gaussian kernel to visually enhance possible sources. The box size of the images is 5 arcsec. The source spectrum (smoothed with a boxcar of 5 pixels) is shown in the SPECTRUM column. The corresponding noise standard deviation is displayed in magenta (inverted on the y axis) and the green line marks the wavelength of the ORIGIN detection. The last column (LINE) displays the (unsmoothed) spectrum zoomed around the ORIGIN-detected wavelength.

The first case (ORIGIN ID 310, first row of Fig. 7) shows the detection of a Lyα emitter. The source cannot be seen in the MUSE white light image but is clearly visible in the HST deep F775W image. The corresponding ORIGIN detection peaks at 7459 Å. The pseudo narrow band detection image is obtained by averaging the datacube of the GLR test statistics described in Sect. 3.2 over a window centered on the detection peak (see Sect. 3.5). A similar image can be obtained by performing the same operation on the raw datacube. As expected, the corresponding narrow band image is noisier than the detection image. However, the smoothed narrow band image displays a clear signal, in line with the ORIGIN detection. The corresponding bright emission line seen in the source spectrum is broad and asymmetric with a tail on the red side of the peak, characteristic of the Lyα emission line. The redshift of the source is 5.13. The matched HST source has the ID 10185 in the Rafelski catalog (Rafelski et al. 2016); its magnitude is AB 28.8 in F775W and its Bayesian photometric redshift is 0.61, but with a large error bar (0.47–4.27). For such a faint source the photometric redshift is not very reliable, even in this field, which has an exquisite suite of deep images spanning a large wavelength range from UV to IR (Brinchmann et al. 2017). This demonstrates the power of a blind detection method like ORIGIN, which does not rely on prior information.
The second case (ORIGIN ID 204) shows the detection of a bright [O II] emitter at z = 1.3. The source is bright (F775W AB 24.5) and clearly visible in the MUSE white light and HST broad band images. The [O II] doublet is prominent in the spectrum. While the reconstructed narrow band image displays a well defined spatial structure that resembles the broad band image, this is not the case for the detection image, which displays a hole at the location of the galaxy center. This is due to the PCA continuum subtraction (Sect. 3.1.3), which has removed the brightest [O II] emission, leaving only the faintest [O II] lines in the outskirts of the galaxy. In this case the [O II] emission was recovered in the predetection stage in the S/N datacube (see Sect. 3.4).
The third source (ORIGIN ID 253) is a good example of the power of ORIGIN blind detection in the vicinity of a bright continuum object. ORIGIN has detected an emission line in the vicinity of the nearby (z = 0.62), bright (AB 22) spiral galaxy ID 23794 (Rafelski et al. 2016). The detection and narrow band images display a coherent structure that spatially matches a small source in the F775W image. Without the ORIGIN detection, one would have assumed that this small source in the HST image is one of the H II regions that can be seen along the spiral arms of the galaxy. The fact that the multiband HST segmentation map did not identify the source as an object distinct from the main galaxy would have strengthened this (false) assumption. However, the spectrum confirms that the ORIGIN line does not belong to the main galaxy, but is a Lyα emitter at z = 4.7 with a typical asymmetric line profile.
The next example (ORIGIN ID 334) shows the detection of an emission line in the external part of a galaxy. However, contrary to the previous case, the narrow band and detection images do not show the same structure. The narrow band image points to the galaxy core, while the detection image does not show much signal there. After inspection of the two spectra, one can show that the line detected by ORIGIN is the H10 Balmer line belonging to the spectrum of the main galaxy. This line was faint enough to be left behind by the PCA continuum subtraction. This example shows that care must be taken when analyzing detection results in the regions of bright continuum sources.
The last two examples (ORIGIN ID 376 and 329) display detections of faint Lyα emitters without HST counterpart. The sources are clearly visible in the detection images, and to some extent also in the narrow band images, but nothing is detected in any HST band. The spectrum shape is also indicative of Lyα emission. Given the depth of the HST images (AB ∼30) in the Hubble UDF, a low redshift object is excluded and the most likely solution corresponds to high redshift Lyα emitters. The reality of these detections has been confirmed by Maseda et al. (2018), who demonstrated that the Lyman-break signal can be recovered by stacking the HST broad band images of the ORIGIN-detected Lyα emitters without HST counterparts.
4.3. Purity and completeness
The purity, that is, the fraction of true detections with respect to the total number of detections, is a built-in capability of ORIGIN (see Sect. 3.3). Obviously, we would like to minimize the number of false detections in the total list of detections in order to get a purity as close as possible to 100%. On the other hand, targeting a higher purity automatically decreases the completeness, that is, the fraction of lines that are detected by the algorithm with respect to the total number of sources to be discovered. The trade-off between purity and completeness is a feature of any detection method.
The estimation of completeness is highly dependent on the sources we want to detect. For example, the completeness is different for a population of unresolved bright Lyα emitters, for a population of unresolved faint Lyα emitters, or for a population of diffuse sources with broad emission lines. Hence, a generic estimation of the completeness is not a built-in function of ORIGIN (in contrast to the estimation of the purity), nor of any detection method. A detailed study of Lyα emitter completeness in the HUDF datacubes is beyond the scope of this paper and will be presented in the upcoming DR2 paper (Bacon et al., in prep.). Nevertheless, we address the question of completeness here with a simpler approach, by injecting fake Lyα emitters into a datacube with characteristics similar to those of the HUDF datacube.
In practice we replace the signal of the udf10 datacube by random Gaussian noise with zero mean and variance equal to the udf10 variance estimate. We then generate fake Lyα emitters using typical number counts representative of the faint end of the Lyα luminosity function (Drake et al. 2017). The generated Lyα lines are asymmetric, with FWHM and skewness representative of the Lyα emitter population. The resulting spectral profiles of the Lyα emitters, assumed to be point sources, are convolved with the MUSE PSF. Finally, we add nine bright continuum sources. The resulting data cube is an idealized version of the real data cube but with the same noise characteristics. Note that this process assumes a Gaussian noise distribution, an assumption checked by Bacon et al. (2017) for the udf10 datacube.
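An illustrative way to generate such an asymmetric line on top of a Gaussian noise cube is a skew-normal profile; this particular shape and all parameter values are assumptions for the sketch, not the exact model used in the paper (and the PSF convolution is omitted).

```python
import numpy as np
from scipy.stats import skewnorm

def asym_line(n_z, z_peak, fwhm, skew=4.0, flux=1.0):
    """Skew-normal emission line on n_z channels, normalized to total `flux`."""
    scale = fwhm / 2.355                 # rough Gaussian sigma from the FWHM
    x = np.arange(n_z)
    prof = skewnorm.pdf(x, a=skew, loc=z_peak, scale=scale)
    return flux * prof / prof.sum()

# Toy cube: Gaussian noise plus one bright fake emitter at (y, x) = (5, 5).
cube = np.random.default_rng(1).normal(0.0, 1.0, (200, 10, 10))
cube[:, 5, 5] += 50.0 * asym_line(n_z=200, z_peak=100, fwhm=7.0)
```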
ORIGIN is then run on the fake datacube, varying the fluxes, locations and wavelengths of the fake Lyα emitters. The comparison of the ORIGIN detections with the input list allows us to compute the true purity and completeness.
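The comparison with the input list amounts to a catalog cross-match; a simple greedy one-to-one matching can be sketched as below, with matching radii and names that are illustrative choices rather than the values used in the paper.

```python
import numpy as np

def match_catalogs(detected, injected, r_spat=2.0, r_spec=5.0):
    """detected, injected: sequences of (x, y, z). Greedy one-to-one matching
    within a spatial radius r_spat (pixels) and a spectral radius r_spec
    (channels); returns (true purity, completeness)."""
    detected = np.asarray(detected, dtype=float)
    injected = np.asarray(injected, dtype=float)
    free = set(range(len(injected)))     # injected sources not yet matched
    n_true = 0
    for d in detected:
        for j in sorted(free):
            if (np.hypot(*(d[:2] - injected[j, :2])) <= r_spat
                    and abs(d[2] - injected[j, 2]) <= r_spec):
                free.discard(j)
                n_true += 1
                break
    purity = n_true / len(detected) if len(detected) else 1.0
    completeness = n_true / len(injected) if len(injected) else 1.0
    return purity, completeness
```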
Figure 6 compares the purity estimated by the algorithm with the true purity. It shows that the estimate provided by ORIGIN is reliable over a wide range of purity levels. Figure 8 shows completeness versus purity for two different wavelength ranges and three flux values. As expected, bright sources are fully recovered, while the algorithm achieves a lower completeness for fainter ones. Note also that, for a given flux, the completeness is lower in the red because of the impact of sky lines (the larger variance at the red end is visible in the magenta curves of Fig. 7, column SPECTRUM). Finally, note that the weak slopes of the completeness versus purity curves indicate that ORIGIN is able to achieve a relatively high completeness (with respect to the “asymptotic” one) at a fairly high purity (0.8 for instance).
Fig. 8. Estimation of completeness versus purity for the udf10 fake datacube. The values are given for three fluxes (in cgs units) and two wavelength ranges: 6000–7000 Å (solid lines) and 8000–9000 Å (dashed lines). 

4.4. Discussion
With respect to the preliminary, unreleased version of ORIGIN used on version 0.42 of the udf10 datacube in Bacon et al. (2017), we have increased the number of detected Lyα emitters by 34% (255 versus 190). Note that the difference is only 7% (174 versus 163) when we restrict the sample to the highest confidence redshifts. This was expected, as the improved performance of ORIGIN has allowed the detection of fainter Lyα emitters, while the “easiest ones” were already identified by the preliminary version of the algorithm.
We also report the detection of 86 Lyα emitters without HST counterpart. This is almost three times as many as the 30 sources published in Bacon et al. (2017). The ability to detect such sources, which are invisible in the deepest HST broad band images, is an important feature of ORIGIN.
While some of these improvements are due to the better data quality of the latest version of the datacube, most of them result from the advanced methodological features of the present version of ORIGIN with respect to the preliminary version used in Bacon et al. (2017). The most important improvements concern the continuum subtraction and the use of robust test statistics. Last but not least, ORIGIN now provides an important reliability measure, the estimated purity, a must for a robust detection method.
Although the DCT and iterative PCA perform well for the continuum subtraction (in particular better than a basic median filtering, see Sect. 3.1.3), they still leave some low level residuals that can produce spurious detections (one such example, ORIGIN ID 334, is highlighted in Sect. 4.2). As a consequence, the purity in the regions corresponding to bright continuum sources is lower than in the background region. A precise estimate of the purity in that case is difficult.
Among the many results brought by the analysis of the MUSE deep field observations is the ubiquitous presence of low surface brightness extended Lyα halos surrounding Lyα emitters (Wisotzki et al. 2016, 2018; Leclercq et al. 2017). Given that ORIGIN was designed for point source detection, one may ask whether a better strategy would have been to use a kernel larger than the PSF. Even if the total flux of the Lyα halo can be similar to or even larger than the flux of the central component, its surface brightness is approximately two magnitudes lower than that of the central component outside the PSF area (see Fig. 2 of Leclercq et al. 2017). Hence, in practice, the point-like assumption does not appear to be an important limiting factor for the blind detection of Lyα emitters.
5. Summary and conclusion
The algorithm ORIGIN is designed to be as powerful as possible for detecting faint spatial-spectral emission signatures, to allow for a stable false detection rate over the data cube, and to provide an automated and reliable estimation of the resulting purity. We have shown on realistic simulated MUSE data that the algorithm achieves these goals, and ORIGIN was applied to the deepest MUSE observations of the Hubble Ultra Deep Field (udf10). In this tiny 1′×1′ field, ORIGIN revealed a large population of Lyα emitters (255), including 86 sources without HST counterpart. These sources could not have been found without such a powerful blind detection method.
While the algorithm is mostly automated, we have presented a list of the main parameters with guidelines for tuning their default values. The algorithm is freely available to the community as a Python package on GitHub.
We have already identified points of improvement for the current version of ORIGIN and are currently working on them. The first point is the greedy PCA (Sect. 3.1.3). Although this step works quite well and is fast, it may still leave residual nuisances that increase the false detection rate in the regions of bright extended sources. Besides, we have not analyzed the behavior of this step in different acquisition settings, for instance crowded fields or fields with much lower signal-to-noise ratios than udf10. Finally, we are developing a method for automatically tuning the parameter involved in this stage.
We also know that there is room for improvement in the detection power (and completeness) of the method, by accounting for a more complex model than model (7) (leading then to GLR test statistics different from (8)). We have already devised slightly more powerful models and test statistics, but those make the total amount of computing power far too demanding, as they impose a subcube-by-subcube processing (instead of allowing for fast convolutions along the spatial and spectral dimensions). Improvements may yet be found in this direction in the future.
Finally, the estimation of the purity can also be improved. While the current estimation is efficient for data cubes with extended zones of “background”, we know that the procedure may fail for crowded data cubes where no or too few such zones exist. In such cases, the amount of data available to estimate the test statistics under the null hypothesis is insufficient, a problem which is the object of active research in statistics.
Acknowledgments
DM made extensive use of the free software GNU Octave (Eaton et al. 2018) for developing ORIGIN and is in particular thankful to the developers of the Octave packages statistics, signal and image. DM also acknowledges support from the GDR ISIS through the Projets exploratoires program (project TASTY). Part of this work was granted access to the HPC and visualization resources of the Centre de Calcul Interactif hosted by University of Nice Côte d’Azur. RB acknowledges support from the ERC advanced grant 339659-MUSICOS. This research made use of the following open-source packages for Python, and we are thankful to their developers: Astropy (Astropy Collaboration et al. 2013, 2018), Matplotlib (Hunter 2007), MPDAF (Piqueras et al. 2017), Numpy (van der Walt et al. 2011), Photutils (Bradley et al. 2019), Scipy (Jones et al. 2001).
References
 Astropy Collaboration (Robitaille, T. P., et al.) 2013, A&A, 558, A33
 Astropy Collaboration (Price-Whelan, A. M., et al.) 2018, AJ, 156, 123
 Bacher, R., Meillier, C., Chatelain, F., & Michel, O. J. J. 2017, IEEE Trans. Signal Process., 65, 3538
 Bacon, R., Accardo, M., Adjali, L., et al. 2010, SPIE Conf. Ser., 7735, 8
 Bacon, R., Vernet, J., Borisova, E., et al. 2014, The Messenger, 157, 21
 Bacon, R., Brinchmann, J., Richard, J., et al. 2015, A&A, 575, A75
 Bacon, R., Conseil, S., Mary, D., et al. 2017, A&A, 608, A1
 Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15
 Benjamini, Y., & Hochberg, Y. 1995, J. R. Stat. Soc. Ser. B (Method.), 57, 289
 Bradley, L., Sipocz, B., Robitaille, T., et al. 2019, https://doi.org/10.5281/zenodo.2533376
 Brinchmann, J., Inami, H., Bacon, R., et al. 2017, A&A, 608, A3
 Cheng, D., & Schwartzman, A. 2018, Bernoulli, 24, 3422
 Drake, A. B., Guiderdoni, B., Blaizot, J., et al. 2017, MNRAS, 471, 267
 Eaton, J. W., Bateman, D., Hauberg, S., & Wehbring, R. 2018, GNU Octave version 4.4.0 Manual: a High-level Interactive Language for Numerical Computations
 Herenz, E. C., & Wisotzki, L. 2017, A&A, 602, A111
 Hunter, J. D. 2007, Comput. Sci. Eng., 9, 90
 Inami, H., Bacon, R., Brinchmann, J., et al. 2017, A&A, 608, A2
 Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open Source Scientific Tools for Python
 Kay, S. M. 1998, Fundamentals of Statistical Signal Processing: Detection Theory, Vol. 2 (Prentice-Hall PTR)
 Lagattuta, D. J., Richard, J., Bauer, F. E., et al. 2019, MNRAS, 485, 3738
 Leclercq, F., Bacon, R., Wisotzki, L., et al. 2017, A&A, 608, A8
 Maseda, M. V., Bacon, R., Franx, M., et al. 2018, ApJ, 865, L1
 Meillier, C., Chatelain, F., Michel, O., et al. 2016, A&A, 588, A140
 Paris, S., Suleiman, R. F. R., Mary, D., & Ferrari, A. 2013, in Proc. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3947
 Piqueras, L., Conseil, S., Shepherd, M., et al. 2017, ADASS XXVI proc., in press [arXiv:1710.03554]
 Rafelski, M., Gardner, J. P., Fumagalli, M., et al. 2016, ApJ, 825, 87
 Scharf, L. L., & Friedlander, B. 1994, IEEE Trans. Signal Process., 42, 2146
 Schwartzman, A., & Telschow, F. 2018, bioRxiv [https://www.biorxiv.org/content/early/2018/06/28/358051.full.pdf]
 Sobey, R. 1992, Ocean Eng., 19, 101
 Suleiman, R., Mary, D., & Ferrari, A. 2013, Proc. ICASSP 2013
 Suleiman, R., Mary, D., & Ferrari, A. 2014, IEEE Trans. Signal Process., 62, 5973
 Urrutia, T., Wisotzki, L., Kerutt, J., et al. 2019, A&A, 624, A141
 van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Comput. Sci. Eng., 13, 22
 Vio, R., & Andreani, P. 2016, A&A, 589, A20
 Vio, R., Vergès, C., & Andreani, P. 2017, A&A, 604, A115
 Walter, F., Decarli, R., Aravena, M., et al. 2016, ApJ, 833, 67
 Wisotzki, L., Bacon, R., Blaizot, J., et al. 2016, A&A, 587, A98
 Wisotzki, L., Bacon, R., Brinchmann, J., et al. 2018, Nature, 562, 229
Appendix A: Mathematical description
A.1. Derivation of T^{+}
The Generalized Likelihood Ratio computes the ratio of the likelihoods under both hypotheses, with the unknown parameters set to their maximum likelihood estimates. For the composite model (7), this leads to
For fixed i, this problem has been considered by Scharf & Friedlander (1994). From their expression (5.15), where x, S and y correspond respectively to d_{i}, 1 and f, the GLR test statistic is
where denotes the projection on the orthogonal complement of 1:
with n_{f} = n_{x}n_{y}n_{z} (the number of voxels in f). Now note that
Hence, the GLR test statistic for fixed i (A.2) can be rewritten as
and computing the maximum over the index i as in (A.1) leads to
A.2. Statistics of T^{−}
The statistics of T^{−} can be derived using an approach similar to Proposition II.1 of Bacher et al. (2017), which we adapt here as our setting is slightly different in the considered model (7) and test statistics (8). First, as in Bacher et al. (2017), we make the hypothesis that under ℋ_{0} the noise is symmetric. We do not require the noise to be centered, as the considered GLR test statistics are invariant to an arbitrary shift. Second, note that for any subcube f,
In words, the value of the amplitude estimated when fitting a line profile to a spectrum is the opposite of the value obtained when fitting the profile to the opposite spectrum. Third, note that for any finite set of real numbers {a_{i}},
max_{i} a_{i} = −min_{i} (−a_{i}). (A.8)
Consider now M^{−} in (10). We have
where the first equality comes from (A.7) and the second from (A.8). Since under ℋ_{0} f and −f have the same distribution, the second equality above shows that T^{+} and T^{−} also share the same distribution.
Appendix B: Implementation
ORIGIN was developed in GNU Octave, with a twin version ported and optimized in Python as the Python package muse_origin. Its source code is available on GitHub^{5} under the MIT License. Complete documentation is also available on readthedocs^{6}.
As the processing of a MUSE data cube with the ORIGIN algorithm is relatively complex, it is divided into steps corresponding roughly to those described in Sect. 3. Each step produces intermediate results, which makes it possible to stop at a given point and inspect them. The outputs can be saved after each step, and a session can be reloaded to continue the processing.
To run ORIGIN, one first instantiates a muse_origin.ORIGIN object. Its main input is a MUSE data cube, which usually contains information about the FSF (if not, this information must be provided separately). The name given to this object is the session name, used as the name of the directory in which outputs are saved (by default inside the current directory, but this can be overridden with the path argument).
Here is an example:
>>> from muse_origin import ORIGIN
>>> orig = ORIGIN(CUBE, name='origtest')
INFO : Step 00 - Initialization (ORIGIN v3.2)
INFO : Read the Data Cube minicube.fits
INFO : Compute FSFs from the datacube
INFO : mean FWHM of the FSFs = 3.32 pixels
INFO : 00 Done
The processing steps described in this article can then be run on this object. For instance, the first step yields:
>>> orig.set_loglevel('INFO')
>>> orig.step01_preprocessing()
INFO : Step 01 - Preprocessing
INFO : DCT computation
INFO : Data standardizing
INFO : Std signal saved in self.cube_std ...
INFO : Compute local maximum of std cube values
INFO : Save self.cube_local_max ...
INFO : DCT continuum saved in self.cont_dct ...
INFO : Segmentation based on the continuum
INFO : Found 11 regions, threshold=1.94
INFO : Segmentation based on the residual
INFO : Found 3 regions, threshold=1.12
INFO : Merging both maps
INFO : Segmap saved in self.segmap_merged ...
INFO : 01 Done - 2.50 sec.
The contents of each step are described in more detail in the documentation^{7}, which also contains a Jupyter notebook example.
The other steps can be run with the following commands:
>>> orig.step02_areas()
>>> orig.step03_compute_PCA_threshold()
>>> orig.step04_compute_greedy_PCA()
>>> orig.step05_compute_TGLR()
>>> orig.step06_compute_purity_threshold()
>>> orig.step07_detection()
>>> orig.step08_compute_spectra()
>>> orig.step09_clean_results()
>>> orig.step10_create_masks()
>>> orig.step11_save_sources()
Each step produces various files, which are useful for analyzing the algorithm's behavior and the influence of its parameters. The most important files are the final catalogs with the list of all the detected lines (orig.Cat3_lines) and sources (orig.Cat3_sources). The last step also creates MPDAF Source files^{8}, which gather all the information for each source (GLR cube, images, masks, spectra, cf. Sect. 3.5).
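These catalogs can then be filtered programmatically, for example to keep only the most reliable detections. The sketch below uses a stand-in table; its column names ('ra', 'dec', 'lbda', 'purity') are assumptions for this example, not the exact schema produced by muse_origin.

```python
import numpy as np

# Illustrative stand-in for the final line catalog (orig.Cat3_lines).
# The column names below are assumptions for this sketch, not the exact
# schema produced by muse_origin.
lines = np.array(
    [(53.160, -27.785, 6712.5, 0.95),
     (53.171, -27.790, 8402.5, 0.62)],
    dtype=[('ra', 'f8'), ('dec', 'f8'), ('lbda', 'f8'), ('purity', 'f8')],
)

# Keep only the lines whose estimated purity exceeds a chosen threshold.
reliable = lines[lines['purity'] > 0.8]
assert len(reliable) == 1
```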
We underline that a substantial amount of time was devoted to optimizing the numerical implementation of the considered methods in order to minimize the overall computation time. Table B.1 gives the computation time of each step for the udf10 data cube described in Sect. 4. These times were obtained on a machine with 80 Intel® Xeon® Gold 6148 CPUs at 2.40 GHz, using a version of Numpy optimized with the Intel® MKL. As ORIGIN makes intensive use of linear algebra for the iterative PCA and of FFTs for the profile convolutions, using the Intel® MKL with Numpy can bring a significant performance boost. This is the case by default when using Anaconda or Miniconda, and it is also possible to install the Numpy package from Intel® with pip. Intel® also provides an mkl_fft^{9} package with a parallelized version of the FFT.
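The equivalence between direct and FFT-based convolution, which is what makes a fast (e.g., MKL-backed) FFT worthwhile for the profile convolutions, can be illustrated with a short, self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
spectrum = rng.normal(size=256)
line_profile = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)

# Convolving a spectrum with a line profile can be done either directly or
# through FFTs; both give the same result up to rounding, and the FFT route
# is where an optimized FFT library pays off on large cubes.
n = len(spectrum) + len(line_profile) - 1
direct = np.convolve(spectrum, line_profile, mode='full')
via_fft = np.fft.irfft(np.fft.rfft(spectrum, n) * np.fft.rfft(line_profile, n), n)
assert np.allclose(direct, via_fft)
```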
Computation times for each step (minutes) for the udf10 data cube described in Sect. 4.
Appendix C: ORIGIN parameters
ORIGIN parameters used for udf10 processing.
All Figures
Fig. 1. Overview of the ORIGIN algorithm. A session example is given in Appendix B and a list of the main parameters in Appendix C. 

Fig. 2. Signal to noise ratio (S/N) image (sum over the wavelength channels of the raw data cube divided by the standard deviation of the noise in each voxel) with, overlaid in black, the zones classified as nuisances by tests t_{1} and t_{2}. The large patches considered for the PCA removal are shown in color.

Fig. 3. Distribution of the test statistics t_{2} for four patches (blue), Gaussian approximation (red), and thresholds γ(t_{2}) (given in the titles and shown as black stems) above which all spectra of the patch are classified as nuisance. The plots for the five other patches are very similar and not shown.

Fig. 4. Iteration map of greedy PCA for udf10. The map shows how many times each spectrum was selected in the nuisance matrix N before all spectra were considered “cleaned” (that is, were classified as background by test t_{2}).

Fig. 5. Comparison of two different nuisance removal algorithms on the 100 × 100 top left region of the udf10 field. Left: median filtering with a window of 144 channels. Right: DCT + PCA. Red: local maxima M^{+} larger than ν, set 3σ above the mean (ν = 9.4 for the median and ν = 6.6 for the DCT+PCA). The grayscale ranges from 0 (white) to 16 (black).

Fig. 6. Comparison of the ORIGIN estimated purity (cf. Eq. (14)) versus the true purity (Eq. (11)) for the udf10 fake datacube. 

Fig. 7. Examples of ORIGIN detections in udf10. For each detected source, we show the white light image reconstructed from the MUSE datacube (WHITE column); the corresponding HST image in the F775W filter (F775W column; the Hubble ACS broadband filter F775W has an effective wavelength of 7624 Å, close to the central wavelength of MUSE (7000 Å), and its effective bandpass of 1299 Å covers a third of the MUSE bandpass, 4650–9350 Å); the detection image obtained by averaging the GLR datacube over the detected wavelengths (CORR column); and the narrow band image obtained by averaging the raw datacube over the same wavelength range (NARROW BAND column). Note that the narrow band image is smoothed with a 0.6″ FWHM Gaussian kernel to visually enhance possible sources. The box size of the images is 5 arcsec. The source spectrum (smoothed with a boxcar of 5 pixels) is shown in the SPECTRUM column. The corresponding noise standard deviation is displayed in magenta (inverted on the y axis) and the green line marks the wavelength of the ORIGIN detection. The last column (LINE) displays the (unsmoothed) spectrum zoomed around the wavelength detected by ORIGIN.

Fig. 8. Estimation of completeness versus purity for the udf10 fake datacube. The values are given for three fluxes (in cgs units) and two wavelength ranges: 6000–7000 Å (solid lines) and 8000–9000 Å (dashed lines). 
