ORIGIN: Blind detection of faint emission line galaxies in MUSE datacubes

One of the major science cases of the MUSE integral field spectrograph is the detection of Lyman-alpha emitters at high redshift. The ongoing and planned deep field observations will build a large sample of these sources. An efficient tool to perform blind detection of faint emitters in MUSE datacubes is a prerequisite of such an endeavor. Several line detection algorithms exist, but their performance on the deepest MUSE exposures is hard to quantify, in particular with respect to their actual false detection rate, or purity. The aim of this work is to design and validate an algorithm that efficiently detects faint spatial-spectral emission signatures, while allowing for a stable false detection rate over the data cube and providing at the same time an automated and reliable estimation of the purity. Results on simulated data cubes providing ground truth show that the method reaches its aims in terms of purity and completeness. When applied to the deep 30-hour exposure MUSE datacube of the Hubble Ultra Deep Field, the algorithm allows for the confirmed detection of 133 intermediate-redshift galaxies and 248 Lyman-alpha emitters, including 86 sources with no HST counterpart. The algorithm fulfills its aims in terms of detection power and reliability. It is consequently implemented as a Python package whose code and documentation are available on GitHub and Read the Docs.


Introduction
Spectroscopic observations of galaxies at high redshift have recently been revolutionized by the Multi Unit Spectroscopic Explorer (MUSE) instrument, in operation at the VLT (Very Large Telescope) since 2014. MUSE is an adaptive-optics-assisted integral field spectrograph operating in the visible (Bacon et al. 2010; Bacon et al. 2014). With its field of view of 1 × 1 arcmin² sampled at 0.2 arcsec and its simultaneous spectral range of 4800-9300 Å at R∼3000, MUSE produces large hyperspectral datacubes of 383 million voxels, corresponding to about 320 × 320 × 3680 pixels along the spatial (x, y) and spectral (z) axes. Its unique capability of providing three-dimensional (3D) deep field observations has been demonstrated in the early observations of the Hubble Deep Field-South (Bacon et al. 2015) and more recently in the Hubble Ultra Deep Field (HUDF, Bacon et al. 2017) and the CANDELS/GOODS-South area (Urrutia et al. 2018).
Thanks to its unrivalled capabilities, MUSE has been able to increase the number of spectroscopic redshifts in these fields by an order of magnitude (see for example Inami et al. 2017). The most spectacular increase is at high redshift (z > 3), where MUSE was able to detect a large number of Lyα emitters. In the deepest exposures (10+ hours), MUSE is even able to go beyond the limiting magnitude of the deepest HST exposures. For example in the HUDF, which achieves a 5σ depth of 29.5 in the F775W filter, MUSE was able to detect Lyα emitters without an HST counterpart (Maseda et al. 2018). These observations have led to a breakthrough in our understanding of the high-redshift Universe, which includes, for example, the discovery of Lyα emission from the circumgalactic medium around individual galaxies (Wisotzki et al. 2016; Leclercq et al. 2017; Wisotzki et al. 2018) or the role of low-mass galaxies and the evolution of the faint end of the Lyα luminosity function (Drake et al. 2017).
Building a large sample of low luminosity Lyα emitters at high redshift is an important objective for the near future with the upcoming deep fields observations currently executed or planned in the context of the MUSE guaranteed time observations (GTO). The availability of an efficient tool to perform blind detection of faint emitters in MUSE datacubes is a prerequisite of such an endeavor.
Several tools have already been developed to perform blind searches of faint emitters in MUSE datacubes, such as MUSELET, a SExtractor-based method available in MPDAF (Piqueras et al. 2017), LSDCAT, a matched-filtering method (Herenz et al. 2017), or SELFI, a Bayesian method (Meillier et al. 2016). These tools have been successfully used so far, for instance LSDCAT and MUSELET in the context of, respectively, the MUSE-Wide survey (Urrutia et al. 2018) and the analysis of lensing clusters (e.g., Lagattuta et al. 2019). However, none of these methods currently allows for a reliable estimate of the proportion of false discoveries (or purity) actually present in the list of detected sources. Their actual performance on the deepest MUSE exposures, for which no ground truth is available, is consequently hard to quantify.
Furthermore, from our experience in the blind search of emitters in the MUSE deep fields, we have learned that when tuned to be efficient for the faintest emitters, every detection method becomes overly sensitive to any residuals left by the imperfect continuum subtraction of bright sources and by the data reduction pipeline (e.g., instrumental signatures or sky subtraction residuals). This leads to a global inflation of the false detections, at levels that are unpredictable and that fluctuate within the datacube. This effect requires either limiting the detection power by setting a threshold high enough to stay on the 'safe side', or spending a significant amount of human-expert time examining the long list of potential discoveries proposed by the algorithm.
In this context, we have developed an automated method, called ORIGIN, that addresses these methodological and computational challenges. In this paper, we present the algorithm in detail. An overview is given in Sec. 2 and a step-by-step description in Sec. 3. The application of ORIGIN to the deep 30-hour exposure MUSE datacube in the HUDF, called udf-10, is presented in Sec. 4. Mathematical complements, the implementation of the method into a publicly released software package, and the parameter values used for the udf-10 data cube are given in the Appendices. Conclusions and possible improvements are listed in the last section.

Notations
In the following, we note N_x, N_y and N_λ the two spatial and the spectral dimensions of the data cube. (For udf-10 this corresponds to 322 × 323 × 3681 ≈ 383 million voxels.) Bold lowercase letters such as x denote vectors and bold capital letters such as X denote matrices. We also use the following notations:
- s_i is one spectrum in the data cube, with i ∈ {1, . . . , N_x × N_y}.
- A collection of spectra {s_i} from a data cube can be stored in a matrix whose columns are the spectra. For the whole data cube, this matrix is noted S := [s_1, . . . , s_{N_x × N_y}].
- Σ_i is the covariance matrix of spectrum s_i. It is provided by the data reduction pipeline.
- Whitened (or standardized) vectors and matrices are denoted with a dedicated accent.
- 0_{N_x, N_y, N_z} and 1_{N_x, N_y} represent respectively arrays of zeros and ones of size N_x × N_y × N_z and N_x × N_y.
- The symbols ⊗ and ⊘ represent respectively convolution and pointwise division.
- For a cube stored in a matrix A, the notations A(·, ·, k) and A(i, j, ·) represent respectively the image ('slice') of the cube in the k-th wavelength band and the spectrum of the cube at spatial position (i, j).
Fig. 1: Overview of the ORIGIN algorithm. A session example is given in App. B and a list of the main parameters in App. C.
For simplicity, we often drop index i when the described processing is applied sequentially to all spectra.

Objectives
The detection algorithm ORIGIN is aimed at detecting point-like or weakly resolved emission line sources with emission lines at unknown redshifts. The algorithm was designed with a threefold objective: 1) be as powerful as possible for detecting faint emission lines; 2) be robust to local variations of the data statistics, caused for instance by bright, extended sources, residual instrumental artifacts or a varying number of co-added exposures; 3) provide a list of detected sources that achieves a target purity, this target purity being specified by the user. These objectives led us to a detection approach in five main stages, outlined below and described step by step in Sec. 3.

Overview
The flowchart in Fig. 1 shows an overview of the algorithm. We give here a brief summary of each step (purple boxes), a detailed description of which can be found in the Sections indicated in these boxes.

Nuisance removal and segmentation
Spatial and spectral components of the datacube S that are not emission lines (e.g., bright sources, diffuse sources, instrumental artifacts) can drastically increase the false detection rate. The first step is thus to detect and remove such sources, called 'nuisances' below. Regions of the field that are free from such sources are called 'background'. Nuisance features are removed by combining a continuum removal (using a Discrete Cosine Transform, DCT) and an iterative Principal Component Analysis (PCA). The estimated continuum (C) and residual (R = S − C) data cubes from the DCT are used to perform a spatial segmentation. This provides for each spectrum a label (nuisance or background). These labels are stored in a matrix (L) and are used by the PCA, which acts differently on background and nuisance spectra.

Test statistics
In the faint residual datacube (F) the point is now to capture the signatures of line emission. These signatures essentially take the form of '3D blobs' corresponding to the convolution of a spectral line profile by the spatial PSF of the instrument. Several matched filtering (Herenz et al. 2017) or Generalized Likelihood Ratio (GLR) approaches (Paris et al. 2013; Suleiman et al. 2013, 2014) have been devised for that purpose. Such approaches filter the data cube with a library of possible signatures, built as spectral line profiles spread spatially by the PSF, and normalize the result. The filtering and normalization process considered here is also derived from a GLR approach, but it is robust to an unknown residual continuum (see Sec. 3.2). It produces two data cubes (T+_GLR and T−_GLR), which correspond respectively to the intermediate test statistics obtained when looking for emission and absorption lines. When present in the data, an emission line tends to increase locally the values in T+_GLR, with a local maximum close to the actual position (in x, y and z coordinates) of the line. These local maxima are used as final test statistics for line detection: each local maximum above a given threshold is identified as a line detection. Note that, for simplicity, T−_GLR is computed so that an absorption also appears as a local maximum in T−_GLR; the final test statistics are thus obtained as the local maxima (called M+ and M−) of T+_GLR and T−_GLR.

Purity estimation
Evaluating the purity of the detection results, that is, the fraction of true sources in the total list of detected sources, requires estimating the number of local maxima that would exceed the threshold 'by chance', that is, in the absence of any emission line. The complexity of the data prevents us from looking for a single, reliable analytical expression of this distribution. However, the size of the data cube and the nature of our targets (emission lines) allow for the estimation of this distribution from the data, an approach advocated for instance by Walter et al. (2016) and Bacher et al. (2017). When the data contain only noise and under mild assumptions on this noise, the statistics obtained when looking for emission lines are the same as those obtained when looking for absorption lines: the local maxima M+ and M− should have the same distribution. Hence, the number of local maxima of M+ that should be above any given threshold γ under the null hypothesis can be estimated from the number of local maxima found above this threshold in M−. This provides an estimate of the false discoveries that should be expected for any threshold, and hence of the purity. In practice, this estimation is done for a grid of threshold values. The threshold corresponding to the purity desired by the user is identified, and the emission lines correspond to the local maxima of M+ exceeding this threshold.

Detection of bright emission lines
As explained in Sec. 2.3.1, the iterative PCA aims at removing all features that do not correspond to faint emission lines. This means that bright emission lines, the most easily detectable ones, can be removed by the PCA step and hence be missed by the detection process. It is thus necessary to detect such lines before the PCA, in the DCT residual R. The procedure for detecting these lines and controlling the purity of this detection step strictly mirrors what is done for faint lines, with local maxima computed directly from (whitened versions of) R and −R instead of T+_GLR and T−_GLR.

Line merging and source extraction
The detected bright and faint emission lines are finally merged into sources, for which several quantities (refined position, spectral profile and total flux) are computed and stored in the final source catalog (cf. Sec. 3.5).

Step-by-step method description

Nuisance removal and spatial segmentation
The strategy of ORIGIN is to track and suppress nuisance sources while preserving the targets, the line emitters. The test statistics computed from the residuals still have to be robust against unknown fluctuating flux variations under the null hypothesis, but only moderately so if the nuisance removal is efficient (cf. Sec. 3.2). The removal of nuisance sources is performed spectrum-wise in two steps explained below: DCT and iterative PCA. The first stage, the DCT (a systematic and fast procedure), quickly removes energetic and smooth fluctuations. In contrast, the version of the iterative PCA designed for this problem can adaptively capture signatures that are much fainter and possibly very complex in shape, but it is computationally heavy. This combination makes the overall nuisance removal process efficient and tractable in a reasonable time.

Nuisance removal with DCT
Each spectrum s is modeled as

s = Dα + ε,   (1)

where D is a partial DCT matrix of size N_λ × N_DCT, α is a N_DCT × 1 vector of decomposition coefficients, Dα is the unknown continuum and ε is an additive Gaussian noise with covariance matrix Σ. Maximum likelihood estimation of the continuum of spectrum s leads to a weighted least squares problem, for which the estimated coefficients are obtained by (cf. Sec. 2.1 for the notation)

α̂ = (Dᵀ Σ⁻¹ D)⁻¹ Dᵀ Σ⁻¹ s.   (2)

The estimated continuum ĉ and residual r are obtained by

ĉ = D α̂,   r = s − ĉ.   (3)

These spectra are collected in continuum and residual data cubes named C and R respectively. The parameter N_DCT controls the number of DCT basis vectors used in the continuum estimation. A value that is too small leaves large-scale oscillations in the residual spectrum, while a value that is too large tends to capture spectral features with small extension like emission lines, which then become more difficult to detect in the residual. A satisfactory compromise was found here with N_DCT = 10. This value leaves the lines almost intact: typically, the energy of a line in the DCT residual remains close to 100% until N_DCT reaches several hundreds, depending on the line width. The continuum subtraction with N_DCT = 10 is not perfect, but a large part of the work is done: for bright objects, 99% of the continuum's energy is typically contained in the subspace spanned by the first 10 DCT modes, and adding more modes increases this fraction only marginally. The PCA does the rest. Before describing the PCA we present a segmentation step, which is required to implement the PCA.
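To make the continuum estimation concrete, the following minimal Python sketch fits the first N_DCT DCT modes to one spectrum by inverse-variance weighted least squares, in the spirit of Eqs. (2)-(3). It is an illustration only, not the ORIGIN implementation; the function names and the explicit orthonormal DCT-II construction are ours.

```python
import numpy as np

def dct_basis(n_lambda, n_dct):
    """First n_dct orthonormal DCT-II basis vectors, as columns of D (n_lambda x n_dct)."""
    k = np.arange(n_dct)[None, :]          # frequency index
    n = np.arange(n_lambda)[:, None]       # spectral channel index
    D = np.cos(np.pi * (n + 0.5) * k / n_lambda) * np.sqrt(2.0 / n_lambda)
    D[:, 0] /= np.sqrt(2.0)                # normalization of the constant mode
    return D

def continuum_and_residual(s, var, n_dct=10):
    """Weighted least-squares continuum estimate for one spectrum.

    s   : (n_lambda,) spectrum
    var : (n_lambda,) noise variance (diagonal of Sigma)
    """
    D = dct_basis(s.size, n_dct)
    w = 1.0 / var                          # inverse-variance weights
    A = D.T @ (w[:, None] * D)             # D^T Sigma^-1 D
    b = D.T @ (w * s)                      # D^T Sigma^-1 s
    alpha = np.linalg.solve(A, b)          # weighted LS / ML coefficients
    c = D @ alpha                          # estimated continuum
    return c, s - c                        # continuum and residual
```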

Spatial segmentation
The purpose of the spatial segmentation is to locate regions where the data spectra contain nuisance sources and regions where they are free from them (in which case they are labeled as 'background'). This segmentation is necessary to further remove the nuisances. As mentioned before, such spectra can have residuals from continuum or bright emission lines, or correspond to regions exhibiting a particular statistical behavior, caused for instance by the presence of instrumental artifacts or residuals from sky subtraction. The segmentation step relies both on the information extracted in the continuum cube C and on the residual cube R (which is whitened in order to account for the spectral dependence of the noise variance). In C, an energetic spectrum c indicates the presence of a continuum. In R, a spectrum r containing residual signatures from bright sources or artifacts tends to have higher energy than pure noise (background) pixels. For these reasons, we found that two test statistics, noted t1 and t2 in (4), are both efficient and complementary to locate nuisance sources: t1 measures the energy of the estimated continuum c (on a log10 scale) and t2 measures the energy of the whitened residual r. For a spectrum under test x, the segmentation tests are both of the form

t(x) > γ,   (5)

where t is either t1 or t2 in (4), x is either c or r, and γ is a threshold allowing for the tuning of the sensitivity of the tests. In the data, the spectrum at spatial coordinates (i, j) in the field is considered to contain nuisances if at least one of the two tests applied to the corresponding spectra c or r leads to a result above the test threshold. The log10 function in the definition of t1 in (4) is there to numerically stabilize the test statistic, as the squared norm of the estimated continuum may vary by several orders of magnitude within a data cube. In data like udf-10, both distributions appear to be right-skewed, with a roughly Gaussian left tail and a much heavier, possibly irregular right tail caused by nuisance sources. Hence, the distribution of the test statistics that would be obtained with pure noise (without nuisances) is estimated by means of the left part of the empirical distribution, for which a Gaussian approximation provides a reasonable fit (see Fig. 3). This leads to an estimated distribution of the test statistics under the noise-only hypothesis which is Gaussian, with estimated mean μ(t) and estimated standard deviation σ(t). For the purpose of segmentation (or classification), the user chooses a level of error in the form of an error probability, called P_FA^seg below. In order to tune the tests (5) at this target error rate, one needs to find the threshold value γ such that the probability that t exceeds γ under the noise-only distribution equals P_FA^seg. The higher the value of P_FA^seg, the lower the value of γ and the larger the number of spectra wrongly classified as containing nuisances. If we denote by Φ the Cumulative Distribution Function (CDF) of a standard normal variable, the threshold γ for t1 or t2 is given by

γ(t) = μ(t) + σ(t) Φ⁻¹(1 − P_FA^seg),   (6)

where again t is either t1 or t2.

Fig. 3: Distribution of the test statistics t2 for four patches of R (blue), Gaussian approximation (red) and thresholds γ(t2) (in the titles and black stems) above which all spectra of the patch are classified as nuisance for P_FA^PCA = 0.1. The plots for the five other patches are very similar and not shown.

Two segmented maps are obtained by thresholding the maps of the N_x × N_y test statistics at the values γ(t1) and γ(t2) defined above. The nuisance regions of each map are merged into a single merged segmentation map. Because MUSE data cubes are large, a PCA working on the full data cube would be sub-optimal.
Indeed, the repeated computation of the eigenvectors of a matrix composed of the whole cube would be computationally prohibitive. Moreover, the aim of the PCA is to capture spectral features corresponding to nuisances. Such nuisances have features that are locally similar, so a PCA working on patches smaller than the whole cube is also more efficient at removing them. When building these patches (whose size is typically one tenth of the whole data cube for udf-10), care must be taken that regions identified as nuisances in the previous step are not split over two such patches. For udf-10 the segmentation algorithm starts with N_Z = 9 rectangular patches. The nuisance zones intersecting several such patches are iteratively identified and attached to one patch, under the constraint that the final patches have a surface area in a target range. The minimal and maximal surface areas allowed for patches of udf-10 are respectively S_min = 80² and S_max = 120² pixels. The result of the nuisance-versus-background segmentation for udf-10, after merging the two maps based on t1 and t2, is shown in Fig. 2. The figure also shows the segmentation of the field into N_Z = 9 large patches.
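As an illustration of Eq. (6), the sketch below estimates the noise-only Gaussian parameters from the left part of the empirical distribution of a test statistic and derives the corresponding threshold. Using the median and the dispersion of the values below it is one simple choice for the left-tail fit; the actual fitting procedure used by ORIGIN may differ.

```python
import numpy as np
from scipy.stats import norm

def nuisance_threshold(t, p_fa=0.1):
    """Threshold gamma such that P(t > gamma) = p_fa under the noise-only (Gaussian) model.

    The Gaussian parameters are estimated from the left part of the empirical
    distribution only (here: median and the dispersion of the values below it),
    so that the heavy right tail caused by nuisance sources does not bias them.
    """
    mu = np.median(t)
    left = t[t <= mu]
    sigma = np.sqrt(np.mean((left - mu) ** 2))   # dispersion of the left half
    return mu + sigma * norm.ppf(1.0 - p_fa)

# Example: flag spectra as nuisance if either statistic exceeds its threshold
# nuisance = (t1 > nuisance_threshold(t1, 0.1)) | (t2 > nuisance_threshold(t2, 0.1))
```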

Nuisance removal with iterative PCA
The goal of this step is to iteratively locate and remove from R residual nuisance sources, that is, any signal that is not the signature of a faint, spatially unresolved emission line. The algorithm cleans the cube one patch at a time. In order to leave in the cleaned data cube a structure compatible with noise, the nuisances are constrained to lie outside a 'noise subspace', which is defined as the sample average of a small fraction (F_b) of the spectra flagged as 'background' in the patch.

Fig. 4: The map shows how many times each spectrum was selected in the nuisance matrix N before all spectra were considered 'cleaned' (that is, were classified as background by test t2).
The algorithm works as follows (see the pseudo-code in Algorithm 1). At each iteration, the algorithm classifies the spectra of the patch under consideration (Z_i) as background or nuisance and stores them in matrices respectively called B and N. This classification is done by means of the test t2 in (4)-(5), applied to all the spectra in the patch Z_i. For this test a value P_FA^PCA is chosen by the user, and the corresponding test threshold is computed as in (6), with P_FA^PCA replacing P_FA^seg. In practice, for relatively empty fields such as udf-10, values of P_FA^PCA and P_FA^seg in the range [0.1, 0.2] typically provide a good trade-off, cleaning nuisances without impacting Lyα emission lines. Fig. 3 shows in black the value of the threshold for four patches Z_i of the udf-10 data cube.
From the fraction (F_b%) of the spectra that show the lowest test statistics, a mean background spectrum b is estimated and all the nuisance spectra in N are orthogonalized with respect to this background. The first eigenvector of the resulting matrix of orthogonalized spectra is then computed, and the contribution of this vector is removed from all spectra of the patch. If some of the resulting residual spectra are still classified as nuisances, the process is repeated until no spectrum of the patch is classified as nuisance anymore. Fig. 4 shows how many times each spectrum was classified as nuisance during the PCA cleaning stage. Comparing with the S/N image (gray and black zones in Fig. 2), this iteration map clearly highlights regions where bright sources and artifacts are present (e.g., horizontal lines close to the borders in the top and bottom right corners). Note also that the process is relatively fast, with fewer than 40 iterations required in each patch to converge to a cleaned data cube.
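The loop below is a compact Python sketch of this greedy cleaning for a single patch; it mirrors the structure of Algorithm 1. The test statistic and its threshold are passed in as generic arguments, and the default background fraction is arbitrary; this is an illustration, not the ORIGIN implementation.

```python
import numpy as np

def iterative_pca_clean(X, test_stat, gamma, f_b=0.05, max_iter=100):
    """Greedy PCA cleaning of one patch.

    X         : (n_lambda, n_spec) whitened residual spectra of the patch (as columns)
    test_stat : function mapping X to a (n_spec,) array of t2-like statistics
    gamma     : threshold above which a spectrum is classified as nuisance
    f_b       : fraction of lowest-statistic spectra used to estimate the background
    """
    X = X.copy()
    for _ in range(max_iter):
        t = test_stat(X)
        nuis = t > gamma
        if not nuis.any():
            break                                   # every spectrum now looks like background
        # mean background from the f_b fraction of 'quietest' spectra
        n_bkg = max(1, int(f_b * X.shape[1]))
        b = X[:, np.argsort(t)[:n_bkg]].mean(axis=1)
        b /= np.linalg.norm(b)
        # orthogonalize the nuisance spectra with respect to the background direction
        N = X[:, nuis] - np.outer(b, b @ X[:, nuis])
        # first principal direction of the orthogonalized nuisance spectra
        u, _, _ = np.linalg.svd(N, full_matrices=False)
        v = u[:, 0]
        # remove its contribution from every spectrum of the patch
        X -= np.outer(v, v @ X)
    return X
```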

Generalized Likelihood Ratio
Algorithm 1: Iterative greedy PCA algorithm.
Inputs: R, F_b, P_FA^PCA; empty data cube F. Output: 'cleaned' data cube F.
For each patch Z_i:
1. Set X to the spectra of Z_i; compute B and N using t2(X) as in (4).
2. While N is not empty:
   a. Estimate b as the mean of the F_b% of the spectra having the lowest test statistics t2.
   b. Orthogonalize the spectra of N with respect to b and compute their first eigenvector.
   c. Remove the contribution of this eigenvector from all spectra in X.
   d. Recompute B and N using t2(X) as in (4).
3. Store X in the corresponding patch of F.

For all positions (x, y, z) in the PCA residual data cube F (for 'faint'), the algorithm considers sub-cubes f(x, y, z) of F (called f for short below), centered on position (x, y, z) and having the size of the considered target signatures. For each such sub-cube we formulate a binary hypothesis test between the null hypothesis, there is no emission line centred at position (x, y, z), and the alternative hypothesis, there is one emission line d_i centred at position (x, y, z), where d_i is a '3D' (spatial-spectral) signature among a set of N_s possible line signatures.
The statistical model retained to describe these two hypotheses is important, as it should capture as reliably as possible the statistical distribution of the data. This distribution results from a long chain of preprocessing, from the data reduction pipeline to the PCA described in the previous step, and may thus be very complex. On the other hand, an overly sophisticated model may lead to intractable processing because it requires, in a GLR approach, maximum likelihood estimation of the corresponding unknown parameters for 380 million positions in the datacube. We compared the performances of several statistical models and opted for the following, which allows for a good compromise between computational efficiency, detection power and robustness to remaining faint artifacts:

H0: f = a1 + n   versus   H1: f = a1 + b d_i + n,   (7)

where n ∼ N(0, I) is the noise, assumed to be zero-mean Gaussian with identity covariance matrix under both hypotheses. The term a1, with a ∈ R and 1 a vectorized cube of ones, models a possible residual and unknown nuisance flux, considered spatially and spectrally constant in the sub-cube f. The term b d_i, with b > 0 and i ∈ {1, . . . , N_s}, corresponds to one of the possible emission signatures. Each signature is a spectral profile of a given width, spatially spread in each channel by the PSF in this channel. The considered signatures define the size of f. Spatially, they have the size of the PSF (25 × 25 for udf-10). Spectrally, N_s sizes are considered. For the presented udf-10 analysis we used N_s = 3 Gaussian profiles with FWHM of 2, 6.7 and 12 spectral channels, covering respectively 5, 20 and 31 spectral channels in total. For binary hypothesis testing involving unknown parameters, the GLR approach (Scharf & Friedlander 1994; Kay 1998) forms the test statistics by plugging the maximum likelihood estimates of the unknown parameters into the likelihood ratio. This leads to the test statistic T+_GLR of Eq. (8): the sub-cube f is correlated with each mean-subtracted signature d̄_i, the result is normalized, and the maximum over the N_s signatures is taken. The superscript + refers to positive (emission) lines and d̄_i denotes the spatial-spectral signature d_i from which the mean has been subtracted.

Algorithm 2: Computation of the test statistics.
Inputs: F, N_x, N_y, N_z, N_s, dictionary of PSFs {p_i}, with i = 1, . . . , N_z, and dictionary of spectral profiles {d_i}, with i = 1, . . . , N_s. For each wavelength channel i, band i of F is convolved with p_i and stored in T; each spectral profile is then applied along the spectral axis and the results are combined.

Eq. (8) can be efficiently computed for all positions (x, y, z) using Algorithm 2. The first main loop processes the cube channel by channel. The second main loop processes the result of the first loop profile by profile, with an embedded loop processing the cube spectrum by spectrum. The comparison of the scores obtained for each profile implements the max and min operators required for T+_GLR and T−_GLR (cf. Eqs. (8) above and (10) below).
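The following Python sketch reproduces the spirit of Algorithm 2 with a simple normalized matched filter: a channel-by-channel spatial correlation with the PSF followed, for each spectral profile, by a correlation along the spectral axis, keeping the best profile at each voxel. For simplicity the PSF is taken constant over wavelength and the mean-subtraction of the 3D signatures (which gives Eq. (8) its robustness to a constant residual flux) is omitted, so this is not the exact ORIGIN statistic.

```python
import numpy as np
from scipy.ndimage import correlate, correlate1d

def glr_cubes(F, psf, profiles):
    """Matched-filter statistics for emission (T_plus) and absorption (T_minus) lines.

    F        : (Nz, Ny, Nx) faint residual cube, assumed whitened (noise ~ N(0, 1))
    psf      : (ph, pw) spatial PSF image, assumed constant over wavelength here
    profiles : list of 1D spectral line profiles (e.g., Gaussians of various FWHM)
    """
    # first loop: spatial filtering, channel by channel
    Tspat = np.empty_like(F)
    for k in range(F.shape[0]):
        Tspat[k] = correlate(F[k], psf, mode="nearest")

    # second loop: spectral filtering, profile by profile, keeping the best match
    T_plus = np.full(F.shape, -np.inf)
    T_minus = np.full(F.shape, np.inf)
    for d in profiles:
        d = np.asarray(d, dtype=float)
        norm = np.sqrt(np.sum(d**2) * np.sum(psf**2))    # norm of the separable 3D signature
        corr = correlate1d(Tspat, d, axis=0, mode="nearest") / norm
        T_plus = np.maximum(T_plus, corr)
        T_minus = np.minimum(T_minus, corr)
    return T_plus, -T_minus    # sign flipped so that absorptions also appear as maxima
```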

Local extrema
The use of local maxima as test statistics is a classical approach for detection in random fields (see the references in the next subsection). We opt for this approach and consider in the following the cube obtained by computing the three-dimensional local maxima of T+_GLR, noted M+ and defined by

M+(x, y, z) = T+_GLR(x, y, z) if T+_GLR(x, y, z) > T+_GLR(x', y', z') for all (x', y', z') ∈ v, and 0 otherwise,   (9)

where v collects the 26 voxels connected by faces, edges or corners to voxel (x, y, z). An example of the resulting cube of test statistics is represented in Fig. 5, right panel. This panel shows in gray scale the maximum over the spectral index of T+_GLR obtained for a region of udf-10. The corresponding local maxima (nonzero values of M+) are shown in red. In this panel, T+_GLR and M+ reveal very clearly regions where spatially unresolved emission lines are likely to be present (darker blobs of the size of the PSF, and red points). In comparison, the left panel shows the GLR cube obtained when applying Algorithm 2 after a preprocessing based solely on a median filtering (instead of the DCT+PCA processing described above). Clearly, a less efficient nuisance removal leads, in the vicinity of bright sources for instance, to wild and undesired variations of the test statistics. Such a loss of efficiency in the preprocessing stage would have to be compensated for by a possibly time-consuming postprocessing stage.
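A direct way to obtain the cube of local maxima of Eq. (9) is to compare each voxel with the maximum of its 26 neighbours, for instance with scipy's maximum filter; this is an illustrative sketch rather than the ORIGIN code.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(T):
    """Cube of local maxima: keep T(x, y, z) where it is the strict maximum of its
    3x3x3 neighbourhood (26 connected voxels) and set every other voxel to zero."""
    footprint = np.ones((3, 3, 3), dtype=bool)
    footprint[1, 1, 1] = False                      # exclude the centre voxel
    neigh_max = maximum_filter(T, footprint=footprint, mode="nearest")
    return np.where(T > neigh_max, T, 0.0)
```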
In order to evaluate how significant are the claimed detections, an important question is to know the distribution of the local maxima under H 0 . This problem has been studied in applications like the heights of waves in stationary sea state (Sobey 1992), the fluctuations of the cosmic background (Bardeen et al. 1986), source detection in radiointerferometric maps (Vio & Andreani 2016;Vio et al. 2017) or brain activity regions in neuroimaging (Schwartzman & Telschow 2018), the random fields in which local maxima are considered being two-dimensional for the former three applications and three-dimensional for the latter.
In all these works, the distribution of the local maxima is derived from theoretical results regarding Gaussian random fields; see in particular Cheng & Schwartzman (2018) for recent results and a review. We opted here for a different approach, however, because the random field T+_GLR(x, y, z) is not guaranteed to be Gaussian nor smooth (owing for instance to the maximum over the spectral profiles computed in (8)). Besides, it is not isotropic, and the correlation structure is likely to be non-stationary in the spatial dimension owing to the varying number of exposures and to telluric spectral lines. While it might be possible to derive an accurate statistical characterization of the local maxima of MUSE data by combining results of Schwartzman & Telschow (2018) for 3D, non-isotropic, non-stationary and possibly non-Gaussian random fields, this approach would deserve a far more involved study. We present and validate here a simpler, empirical approach. This approach combines inference based on local maxima with an estimation of the distribution under H0 directly from the data itself, as considered for instance in Walter et al. (2016) and Bacher et al. (2017), and further motivated here by the large dimension of the data cubes. Our approach proceeds in two steps.
- Step 1: Compute GLR scores for absorption lines under H0. Consider that we wish to detect absorption (rather than emission) lines. The model is the same as (7), but this time with b < 0. The GLR leads to the test statistic T−_GLR of Eq. (10), the counterpart of T+_GLR obtained when searching for negative (absorption) lines. As shown in Appendix A.2, under H0, T+_GLR and T−_GLR share the same distribution.
- Step 2: Statistics of the local maxima. Since the distributions of T+_GLR and T−_GLR are the same, a training set of test statistics for the local maxima M+ can be obtained from the local maxima of T−_GLR, noted M−, which is defined similarly to M+ in (9), with T−_GLR replacing T+_GLR. In practice, the background region in which the statistics are estimated is obtained from the merged segmentation map built from tests t1 and t2 in (5).

Purity estimation
The purity (or fidelity) is defined as the proportion of true discoveries (called S below) with respect to the total number of discoveries (R). With these notations, the number V := R − S is the number of false discoveries and the purity is

P = S/R = 1 − V/R,   (11)

where V/R is sometimes called the False Discovery Proportion (FDP). We note that when a large number of comparisons is made in multiple testing procedures (as is the case here with millions of local maxima), controlling the false alarm (or family-wise error) rate at low levels drastically increases the test's threshold, which results in a substantial loss of power. In such situations it can be more efficient to control instead the False Discovery Rate (FDR, Benjamini & Hochberg 1995), defined as

FDR = E(V/R),   (12)

where E(·) denotes expectation and, by convention, FDR = 0 if R = 0. Definition (12) shows that procedures aimed at controlling purity and FDR are indeed connected. Coming back to our line detection problem, we wish to find a procedure making it possible to decide which local maxima of M+ should be selected so that the purity of the resulting selection is guaranteed at a level prescribed by the user. This amounts to finding the correspondence between a range of thresholds and the resulting purity.
As mentioned above, our choice is to estimate the number of false discoveries V from the data itself, namely from the statistics of the local maxima M−, since their distribution is the same as that of M+ when only noise is present. Let us denote by N− the number of voxels found in the background region and by V−(γ) the number of local maxima of M− with values larger than a given threshold γ > 0 in that region. If M+ was obtained from a noise process with the same statistics as M−, an estimate of the number of false discoveries V that would be obtained by thresholding the full data cube M+ (containing N voxels) at a value γ is

V̂(γ) = (N / N−) V−(γ).   (13)

Hence, if we denote by R(γ) the number of local maxima above the threshold in M+, the purity can be estimated as

P̂(γ) = 1 − V̂(γ) / R(γ).   (14)

This approach is very similar to the Benjamini & Hochberg procedure for controlling the FDR. The difference is that the probabilities of local maxima being larger than some value under H0 (the P-values) are estimated directly from M− instead of relying on a theoretical distribution of the local maxima. Fig. 7 shows that this procedure allows for an efficient control of the purity. The value of the threshold γ* such that P̂(γ*) = P* (with P* the purity chosen by the user) is selected, and M+ is thresholded at this value.
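The sketch below turns Eqs. (13)-(14) into a purity curve over a grid of thresholds, with the local maxima of M− in the background region playing the role of the noise-only training set; the variable names and the threshold grid are illustrative, not taken from the ORIGIN code.

```python
import numpy as np

def purity_curve(m_plus, m_minus_bkg, n_vox_total, n_vox_bkg, thresholds):
    """Estimated purity P(gamma) for a grid of thresholds.

    m_plus       : values of the nonzero local maxima of M+ (full cube)
    m_minus_bkg  : values of the nonzero local maxima of M- in the background region
    n_vox_total  : N, number of voxels in the full cube
    n_vox_bkg    : N-, number of voxels in the background region
    """
    m_plus = np.sort(m_plus)
    m_minus_bkg = np.sort(m_minus_bkg)
    purity = np.empty(len(thresholds), dtype=float)
    for i, g in enumerate(thresholds):
        R = m_plus.size - np.searchsorted(m_plus, g, side="right")        # detections in M+
        Vm = m_minus_bkg.size - np.searchsorted(m_minus_bkg, g, side="right")
        V_hat = Vm * n_vox_total / n_vox_bkg                              # expected false detections
        purity[i] = 1.0 - V_hat / R if R > 0 else 1.0
    return purity

# Threshold selection: smallest gamma whose estimated purity reaches the target P*
# gammas = np.linspace(m_minus_bkg.min(), m_plus.max(), 200)
# p = purity_curve(m_plus, m_minus_bkg, N, N_minus, gammas)
# gamma_star = gammas[np.argmax(p >= 0.8)]
```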

Pre-detection of bright emission lines
The nuisance removal via the PCA described in Sec. 3.1.3 is aimed at removing any source that is not a faint unresolved emission line. The counterpart of the efficiency of Algorithm 1 is that powerful emission lines are detected by test t2 in Algorithm 1, so their contribution is included in matrix N, captured by the PCA and thus removed from the data cube. It is therefore necessary to make a pre-detection of bright emission lines. Such lines are easily detectable as high peaks in the residual data cube R. The detection procedure for bright emission lines mirrors that described in Sec. 3.3, simply replacing M+ and M− by the local maxima of the whitened cubes R and −R. For udf-10, the target purity of this stage is set to P* = 0.95.

Line merging and source extraction
Once the detections are available, we need to group them into sources, since a given source can have multiple lines. This can be a tricky step in regions where bright continuum sources are present, because such sources can lead to detections at different spatial positions despite the DCT+PCA (see Fig. 5). To solve this problem we use the information from a segmentation map (which can be provided or computed automatically on the continuum image to identify the regions of bright or extended sources, cf. Sec. 3.1.2) and we adopt a specific method for detections that fall in these areas.
First, the detections are merged based on a spatial distance criterion (parameter called tol_spat). Starting from a given detection, the detections within a distance of tol_spat are merged. Then, by looking iteratively at the neighbors of the merged detections, these neighbors are added to the group if their distance to the seed detection is less than tol_spat voxels, or if their distance along the wavelength axis is less than a second parameter, tol_spec (that is, if a line is detected at almost the same wavelength but a different spatial position). This process is repeated for all detections that are not yet merged.
Then we take all the detections that belong to a given region of the segmentation map, and if there is more than one group of lines from the previous step we compute the distances on the wavelength axis between the groups of lines. If the minimum distance in wavelength is less than tol_spec, the groups are merged.
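The purely spatial part of the first merging pass behaves like a friends-of-friends grouping, which can be sketched in a few lines with single-linkage clustering; the tol_spec rule and the segmentation-map pass described above are left out, so this is only an approximation of ORIGIN's grouping.

```python
from scipy.cluster.hierarchy import fclusterdata

def merge_spatially(xy, tol_spat=3):
    """Group detections whose (x, y) positions are chained by links shorter than
    tol_spat pixels (single-linkage clustering, i.e. friends-of-friends).

    xy : (n, 2) array of the spatial positions of the detected lines.
    Returns an integer label per detection identifying its group.
    """
    return fclusterdata(xy, t=tol_spat, criterion="distance", method="single")
```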
Finally, for each line we compute a detection image, obtained by summing the GLR datacube over a spectral window centered on the detection peak wavelength and of width 2×FWHM, where FWHM is that of the spectral template providing the highest correlation peak. We also compute the corresponding spectrum by a weighted summation over the spatial extent of the line, using the detection image as weight.
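A minimal sketch of these per-line products is given below, working on a small cut-out of the GLR and data cubes around the detection; the window handling and the clipping of negative weights are our own simplifications, not the exact ORIGIN recipe.

```python
import numpy as np

def line_products(glr_cut, data_cut, z_peak, fwhm):
    """Detection image and weighted spectrum for one detected line.

    glr_cut, data_cut : (Nz, Ny, Nx) cut-outs of the GLR cube and of the data cube
    z_peak            : spectral index of the detection peak (within the cut-out)
    fwhm              : FWHM (in channels) of the best-matching spectral template
    """
    z0 = max(0, int(round(z_peak - fwhm)))
    z1 = min(glr_cut.shape[0], int(round(z_peak + fwhm)) + 1)    # 2 x FWHM window
    det_img = glr_cut[z0:z1].sum(axis=0)                          # detection image
    w = np.clip(det_img, 0.0, None)
    w /= w.sum()                                                  # normalized positive weights
    spectrum = np.tensordot(data_cut, w, axes=([1, 2], [0, 1]))   # weighted sum over (y, x)
    return det_img, spectrum
```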

Application to MUSE HUDF datacube
ORIGIN was initially developed for the blind search of Lyα emitters in the MUSE deep exposure of the Hubble Ultra Deep Field (hereafter HUDF). A preliminary version of the code was successfully used for the blind-search exploration of the entire HUDF field of view. It resulted in the detection of 692 Lyα emitters (Inami et al. 2017), including 72 not detected in the HST deep broad-band images (Maseda et al. 2018).
Here we use the latest version of the udf-10 MUSE datacube (see Fig. 1 of Bacon et al. 2017), the single 1 arcmin² datacube that achieves a depth of 30 hours. In this field, Inami et al. (2017) report the detection of 158 Lyα emitters, including 30 not detected in the HST deep broad-band images. Compared to the previous version of the datacube of the same field used in Bacon et al. (2017), the data benefit from an improved data reduction pipeline, resulting in fewer systematics. This version is the one used in the MUSE HUDF data release II (Bacon et al., in prep.). In this section we focus on the use and performance of the algorithm, with an emphasis on the search for Lyα emitters.

Processing
The released version 1.0 of ORIGIN was used on the MUSE udf-10 datacube with the parameters given in Appendix C. While most of the parameters can be used with their default values, a few need more attention: the probability of false alarm for the PCA (P_FA^PCA, Sec. 3.1.3) and the target purity (P*, Sec. 3.3). A correct setting of the first parameter ensures that the signal coming from bright continuum sources and the systematics left by the data reduction pipeline are properly removed. When P_FA^PCA is too low, the test threshold is too high and too few spectra are cleaned, leaving nuisances in the residual data cube. When P_FA^PCA is too large, the test threshold is too low and the signal from some emitters can be impacted. After some trials, a value of 10% was used (see an example in Fig. 3). Note that with such a value, 7% of the emitters (the brightest ones) are removed by the PCA process, but they are recovered in the last step of the method (Sec. 3.4).
The second parameter, the target purity P*, sets the detection threshold above which the local maxima of the test statistics are considered as detected lines. A purity value of 80% was selected as a compromise between the number of false and true detections. The impact and reliability of this parameter on the detection performance are discussed later in Sec. 4.3.
We also spent some time on the design of an 'optimal' input spectral line dictionary (Sec. 3.2). After some trials we found that a set of 3 Gaussian profiles with FWHM of 2, 7 and 12 spectral pixels offers both good detection power (completeness) and an affordable computational load. A smaller number of profiles degrades the detection power, while a larger number does not increase it but requires more computing power. More sophisticated profiles (e.g., asymmetric line shapes mimicking those generally found in Lyα emitters) were also considered but not used, given their negligible impact on the performance.

Results
The algorithm detects 791 emission lines belonging to 446 different sources. The algorithm assigns to each detected emission line a significance level, computed as the maximum purity at which this line can be detected (hence, the significance level of all detected lines is above P * , which is 80% here).
After careful inspection by a set of experts, we confirm 248 Lyα emitters covering a broad range of redshifts (2.8-6.7) and 133 lower-redshift galaxies, mostly [O ii] emitters but also a few nearby Hα emitters. Table 1 gives the success rate of redshift assignment for the full sample as a function of the estimated purity. Note that the measured success rate is smaller than the expected purity. This apparent discrepancy is due to the fundamental difference between a line detection and a redshift assignment. The latter is a complex process involving matching spectral signatures with templates, searching for multiple lines, measuring line shapes and even deblending the source from its environment. Thus, at low S/N it is not surprising that some of the real detections do not lead to a confirmed redshift.
A few representative examples of ORIGIN detections are given in Fig. 6. In the following we discuss each case.
The first case (ORIGIN ID 310, first row of Fig. 6) shows the detection of a Lyα emitter. The source cannot be seen in the MUSE white-light image but is clearly visible in the deep HST F775W image. The corresponding ORIGIN detection peaks at 7459 Å. The pseudo-narrow-band detection image is obtained by averaging the datacube of the GLR test statistics described in Sec. 3.2 over a window centered on the detection peak (see Sec. 3.5). A similar image can be obtained by performing the same operation on the raw datacube. As expected, the corresponding narrow-band image is noisier than the detection image. However, the smoothed narrow-band image displays a clear signal, in line with the ORIGIN detection. The corresponding bright emission line seen in the source spectrum is broad and asymmetric, with a tail on the red side of the peak, characteristic of the Lyα emission line. The redshift of the source is 5.13. The matched HST source has the ID 10185 in the Rafelski catalog (Rafelski et al. 2016); its magnitude is AB 28.8 in F775W and its Bayesian photometric redshift is 0.61, but with a large error bar (0.47-4.27). For such a faint source the photometric redshift is not very reliable, even in this field, which has an exquisite suite of deep images spanning a large wavelength range from UV to IR (Brinchmann et al. 2017). This demonstrates the power of a blind detection method like ORIGIN, which does not rely on prior information.
The second case (ORIGIN ID 204) shows the detection of a bright [O ii] emitter at z=1.3. The source is bright (F775W AB 24.5) and clearly visible in the MUSE white-light and HST broad-band images. The [O ii] doublet is prominent in the spectrum. While the reconstructed narrow-band image displays a well-defined spatial structure that resembles the broad-band image, this is not the case for the detection image, which displays a hole at the location of the galaxy center. This is due to the PCA continuum subtraction (Sec. 3.1.3).

The third source (ORIGIN ID 253) is a good example of the power of ORIGIN blind detection in the vicinity of a bright continuum object. ORIGIN has detected an emission line in the vicinity of the nearby (z=0.62), bright (AB 22) spiral galaxy ID 23794 (Rafelski et al. 2016). The detection and narrow-band images display a coherent structure that spatially matches a small source in the F775W image. Without the ORIGIN detection, one would have assumed that this small source in the HST image is one of the H ii regions that can be seen along the spiral arms of the galaxy. The fact that the multi-band HST segmentation map did not identify the source as an object distinct from the main galaxy would have strengthened this (false) assumption. However, the spectrum confirms that the line detected by ORIGIN does not belong to the main galaxy, but is a Lyα emitter at z=4.7 with a typical asymmetric line profile.
The next example (ORIGIN ID 334) shows the detection of an emission line in the external part of a galaxy. However, contrary to the previous case, the narrow-band and detection images do not show the same structure. The narrow-band image points to the galaxy core, while the detection image does not show much signal there. After inspection of the two spectra, one can show that the line detected by ORIGIN is the H10 Balmer line belonging to the main galaxy spectrum. This line was faint enough to be left in the data by the PCA continuum subtraction. This example shows that care must be taken when analyzing detection results in regions with bright continuum sources.
The last two examples (ORIGIN ID 376 and 329) display the detection of faint Lyα emitters without an HST counterpart. The sources are clearly visible in the detection images, and to some extent also in the narrow-band images, but nothing is detected in any HST band. The spectral shape is also indicative of Lyα emission. Given the depth of the HST images (AB ∼ 30) in the Hubble UDF, a low-redshift object is excluded and the most likely solution is a high-redshift Lyα emitter. The reality of these detections has been confirmed by Maseda et al. (2018), who demonstrated that the Lyman-break signal can be recovered by stacking the HST broad-band images of the ORIGIN-detected Lyα emitters without HST counterparts.

Purity and completeness
The purity, that is, the fraction of true detections with respect to the total number of detections, is a built-in capability of ORIGIN (see Sec. 3.3). Obviously we would like to minimize the number of false detections in the total list of detections in order to get a purity as close as possible to 100%. On the other hand, targeting a higher purity automatically decreases the completeness, that is, the fraction of lines that are detected by the algorithm with respect to the total number of sources to be discovered. The trade-off between purity and completeness is a feature of any detection method. The estimation of completeness is highly dependent on the sources we want to detect. For example, the completeness is different for a population of unresolved bright Lyα emitters, for a population of unresolved faint Lyα emitters, or for a population of diffuse emission sources with broad emission lines. Hence, a generic estimation of completeness is not a built-in function of ORIGIN (in contrast to the estimation of the purity), nor of any detection method.

Table 1: Success rate of redshift assignment for udf-10 ORIGIN detections, as a function of purity range. Ndetect is the number of sources detected in the considered purity range; withZ (resp. noZ) is the number of successful (resp. unsuccessful) redshift assignments.

A detailed study of the completeness for Lyα emitters in the HUDF datacubes is beyond the scope of this paper and will be presented in the upcoming DR2 paper (Bacon et al., in prep.). Nevertheless, we address here the question of completeness with a simpler approach, by injecting fake Lyα emitters into a datacube with characteristics similar to those of the HUDF datacube.
In practice we replace the signal of the udf-10 datacube by random Gaussian noise with zero mean and variance equal to the udf-10 variance estimate. We then generate fake Lyα emitters using typical number counts representative of the faint end of the Lyα luminosity function (Drake et al. 2017). The generated Lyα lines are asymmetric, with FWHM and skewness representative of the Lyα emitter population. The resulting spectral profiles of the Lyα emitters, assumed to be point sources, are convolved with the MUSE PSF. Finally, we add 9 bright continuum sources. The resulting data cube is an idealized version of the real data cube but with the same noise characteristics. Note that this process assumes a Gaussian noise distribution, an assumption checked by Bacon et al. (2017) on the udf-10 datacube.
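As an illustration of this simulation set-up, the sketch below injects one fake point-like Lyα emitter into a noise cube: a skewed line profile placed at a chosen channel and spread spatially by a Gaussian PSF. The skewness value, the Gaussian PSF and the function name are our assumptions, not the exact recipe used for the paper's simulations.

```python
import numpy as np
from scipy.stats import skewnorm
from scipy.ndimage import gaussian_filter

def inject_fake_lae(cube, x, y, z, flux, fwhm_chan=3.0, skew=4.0, psf_fwhm_pix=3.5):
    """Add one fake, point-like Lya emitter to a (Nz, Ny, Nx) noise cube (in place)."""
    nz = cube.shape[0]
    zz = np.arange(nz)
    # asymmetric (skewed) line profile placed near channel z, carrying the total flux
    profile = skewnorm.pdf(zz, a=skew, loc=z, scale=fwhm_chan / 2.355)
    profile *= flux / profile.sum()
    source = np.zeros_like(cube)
    source[:, y, x] = profile
    # spread the point source spatially with a Gaussian PSF (no spectral smoothing)
    sigma_pix = psf_fwhm_pix / 2.355
    source = gaussian_filter(source, sigma=(0.0, sigma_pix, sigma_pix))
    cube += source
    return cube

# noise-only cube built from the variance estimate of the real data:
# cube = np.random.normal(0.0, np.sqrt(variance_cube))
```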
ORIGIN is then run on the fake datacube, varying the fluxes, locations and wavelengths of the fake Lyα emitters. The comparison of the sources detected by ORIGIN with the input list allows us to compute the true purity and completeness. Fig. 7 compares the purity estimated by the algorithm with the true purity. This shows that the estimate provided by ORIGIN is reliable over a wide range of purity levels. Fig. 8 shows completeness-versus-purity plots for two different wavelength ranges and three flux values. As expected, bright sources are fully recovered, while the algorithm achieves a lower completeness for fainter ones. Note also that the completeness is lower in the red for a given flux because of the impact of sky lines (the larger variance at the red end is visible in the magenta plots of Fig. 6, column SPECTRUM). Finally, note that the weak slopes of the completeness-versus-purity curves indicate that ORIGIN is able to achieve a relatively high completeness (with respect to the 'asymptotic' one) at a fairly high purity (0.8 for instance).

Discussion
With respect to the preliminary, unreleased version of ORIGIN used on version 0.42 of the udf-10 datacube in Bacon et al. (2017), we have increased the number of detections of Lyα emitters by 34% (255 versus 190). Note that the difference is only 7% (174 versus 163) when we restrict the sample to the highest-confidence redshifts. This was expected, as the improved performance of ORIGIN has allowed the detection of fainter Lyα emitters, while the 'easiest' ones were already identified with the preliminary version of the algorithm.
We also report the detection of 86 Lyα emitters without HST counterpart. This is almost three times more than the 30 sources published in Bacon et al. (2017). The ability to detect such sources, which are invisible in the deepest HST broad-band images, is an important feature of ORIGIN.
While some of these improvements are due to the better data quality of the latest version of the datacube, most of them result from the advanced methodological features of the present version of ORIGIN with respect to the preliminary version used in Bacon et al. (2017). The most important improvements concern the continuum subtraction and the use of robust test statistics. Last but not least, ORIGIN now provides an important reliability indicator, the estimated purity, a must for a robust detection method.
Although the DCT and iterative PCA do a good job of the continuum subtraction (in particular better than a basic median filtering, see Sec. 3.1.3), they still leave some low-level residuals that can produce spurious detections (such an example, ORIGIN ID 334, is highlighted in Sec. 4.2). The consequence is that the purity in the regions corresponding to bright continuum sources is lower than in the background region. A precise estimate of the purity is difficult in that case.
Among the many results brought by the analysis of the MUSE deep field observations is the ubiquitous presence of low surface brightness extended Lyα halos surrounding Lyα emitters (Wisotzki et al. 2016; Leclercq et al. 2017; Wisotzki et al. 2018). Given that ORIGIN was designed for point source detection, one can ask whether a better strategy would have been to use a kernel larger than the PSF. Even if the total flux of the Lyα halo can be similar to or even larger than the flux of the central component, its surface brightness is approximately two magnitudes lower than that of the central component outside the PSF area (see Fig. 2 of Leclercq et al. 2017). Hence, in practice, the point-like assumption does not appear to be an important limiting factor for the blind detection of Lyα emitters.

Summary and conclusion
The algorithm ORIGIN is designed to be as powerful as possible for detecting faint spatial-spectral emission signatures, to allow for a stable false detection rate over the data cube, and to provide an automated and reliable estimation of the resulting purity. We have shown on realistic simulated MUSE data that the algorithm achieves these goals, and ORIGIN was applied to the deepest MUSE observations of the Hubble Ultra Deep Field (udf-10). In this small 1 × 1 arcmin² field, ORIGIN revealed a large population of Lyα emitters (255), including 86 sources without an HST counterpart. These sources could not have been found without such a powerful blind detection method.
While the algorithm is mostly automated, we have presented a list of the main parameters with guidelines for tuning their default values. The algorithm is freely available to the community as a Python package on GitHub (https://github.com/musevlt/origin).
We have already identified points of improvement for the current version of ORIGIN and we are currently working on them. The first point is the greedy PCA (Sec. 3.1.3). Although this step works quite well and is fast, it may still leave residual nuisances that increase the false detection rate in the region of bright extended sources. Besides, we have not analyzed the behavior of this step in different acquisition settings, for instance crowded fields or fields with much lower signal-to-noise ratios than udf-10. Finally, we are developing a method for automatically tuning the P_FA^PCA parameter involved in this stage. We also know that there is room for improvement in the detection power (and completeness) of the method, by accounting for a more complex model than (7) (leading then to GLR test statistics different from (8)). We have already devised slightly more powerful models and test statistics, but those make the total amount of computing power far too demanding, as they impose a processing subcube by subcube (instead of allowing for fast convolutions along the spatial and spectral dimensions). Improvements might yet be found in this direction in the future.
Finally, the estimation of the purity can also be improved. While the current estimation is efficient for data cubes with extended zones of 'background', we know that the procedure may fail for crowded data cubes where no or too few such zones exist. In such cases, the amount of data available to estimate the test statistics under the null hypothesis is insufficient, a problem which is the object of active research in statistics. The detailed contents of each step are described in the documentation, which also contains a Jupyter notebook example.
The other steps can be run with the corresponding commands given in the session example of App. B. The ORIGIN parameters used for the udf-10 processing, together with the step and section in which each parameter is used, are listed in App. C.