A&A, Volume 628, August 2019, Article Number A3, 30 pages
Section: Extragalactic astronomy
DOI: https://doi.org/10.1051/0004-6361/201834471
Published online: 25 July 2019
Open Access

© G. de La Vieuville et al. 2019

Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Reionization is a major change of state of the universe after recombination, and many resources have been devoted in recent years to understanding this process. The formation of the first structures, stars, and galaxies marked the end of the dark ages. As these first sources assembled, the density of ionizing photons became high enough to ionize the entire neutral hydrogen content of the intergalactic medium (IGM). It has been established that this state transition was mostly completed by z ∼ 6 (Fan et al. 2006; Becker et al. 2015). However, the identification of the sources responsible for this major transition, and their relative contributions to the process, is still a matter of substantial debate.

Although quasars were initially considered as important candidates owing to their ionizing continuum, star-forming galaxies presently appear to be the main contributors to reionization (see e.g. Robertson et al. 2013, 2015; Bouwens et al. 2015a; Ricci et al. 2017). However, a large uncertainty still remains on the actual contribution of quasars, as the faint quasar population at high redshift remains poorly constrained (see e.g. Willott et al. 2010; Fontanot et al. 2012; McGreer et al. 2013). Two main signatures are currently used to identify star-forming galaxies around and beyond the reionization epoch. The first is the Lyman “drop-out” in the continuum bluewards of Lyman-alpha, resulting from the combined effect of interstellar and intergalactic scattering by neutral hydrogen. Different redshift intervals can be defined to select Lyman break galaxies (LBGs) using appropriate colour–colour diagrams or photometric redshifts. Extensive literature is available on this topic since the pioneering work of Steidel et al. (1996) (see e.g. Ouchi et al. 2004; Stark et al. 2009; McLure et al. 2009; Bouwens et al. 2015b, and references therein). The second is the detection of the Lyman-alpha line, targeting Lyman-alpha emitters (hereafter LAEs). The “classical” approach is based on wide-field narrow-band (NB) surveys targeting a precise redshift bin (e.g. Rhoads et al. 2000; Kashikawa et al. 2006; Konno et al. 2014). More recent methods make efficient use of 3D/IFU spectroscopy in pencil-beam mode with the Multi-Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope (VLT; Bacon et al. 2015), a technique presently limited to z ∼ 7 in the optical domain.

Based on LBG studies, the UV luminosity function (LF) evolves strongly at z ≥ 4, with a depletion of bright galaxies with increasing redshift on the one hand, and a steepening of the faint-end slope on the other (Bouwens et al. 2015b). This evolution is consistent with the expected evolution of the halo mass function during the galaxy assembly process. Studies of LAEs have found a deficit of strongly emitting (“bright”) Lyman-alpha galaxies at z ≥ 6.5, whereas no significant evolution is observed below z ∼ 6 (Kashikawa et al. 2006; Pentericci et al. 2014; Tilvi et al. 2014); this trend is attributed to an increase in the fraction of neutral hydrogen in the IGM, to an evolution of the parent population, or to both. LBGs and LAEs constitute two different, partly overlapping observational approaches to selecting star-forming galaxies. The prevalence of Lyman-alpha emission in well-controlled samples of star-forming galaxies is also a test of the reionization history. However, a complete and “as unbiased as possible” census of ionizing sources can only be obtained through 3D/IFU spectroscopy without any photometric preselection.

As pointed out by different authors (see e.g. Maizy et al. 2010), lensing clusters are more efficient than blank fields for detailed (spectroscopic) studies at high redshift and also to explore the faint end of the LF. In this respect, they are complementary to observations in wide blank fields, which are needed to set reliable constraints on the bright end of both the UV and LAE LF. Several recent results in the Hubble Frontier Fields (HFF; Lotz et al. 2017) fully confirm the benefit expected from gravitational magnification (see e.g. Laporte et al. 2014, 2016; Atek et al. 2014; Infante et al. 2015; Ishigaki et al. 2015; Livermore et al. 2017).

This paper presents the results obtained with MUSE (Bacon et al. 2010) at the ESO VLT on the faint end of the LAE LF, based on deep observations of four lensing clusters. The data were obtained as part of the MUSE consortium Guaranteed Time Observations (GTO) programme and first commissioning run. The final goal of our project in lensing clusters is to set strong constraints on the relative contribution of the LAE population to cosmic reionization. As shown in Richard et al. (2015) for SMACSJ2031.8-4036, Bina et al. (2016) for A1689, Lagattuta et al. (2017) for A370, Caminha et al. (2016) for AS1063, Karman et al. (2016) for MACS1149, and Mahler et al. (2018) for A2744, MUSE is ideally suited to the study of lensed background sources, in particular LAEs at 2.9 ≤ z ≤ 6.7. MUSE provides a blind survey of the background population, irrespective of whether the associated continuum is detected. This instrument is also a unique facility capable of deriving the 2D properties of “normal” strongly lensed galaxies, as recently shown by Patricio et al. (2018). An important point for this project is that MUSE allows us to reliably recover a greater fraction of the Lyman-alpha flux of LAEs than usual long-slit surveys or even NB imaging.

The precise aim of the present study is to further constrain the abundance of LAEs by taking advantage of the magnification provided by lensing clusters to build a blindly selected sample of galaxies which is less biased than current blank field samples in redshift and luminosity. By construction, this sample of LAEs is complementary to those built in deep blank fields, whether observed by MUSE or by other facilities, and makes it possible to determine in a more reliable way the shape of the LF towards the faintest levels and its evolution with redshift. We focus on four well-known lensing clusters from the GTO sample, namely Abell 1689, Abell 2390, Abell 2667, and Abell 2744. In this study we present the method and we establish the feasibility of the project before extending this approach to all available lensing clusters observed by MUSE in a future work.

In this paper we present the deepest study of the LAE LF to date, combining deep MUSE observations with the magnification provided by four lensing clusters. In Sect. 2, we present the MUSE data together with the ancillary Hubble Space Telescope (HST) data used for this project, as well as the observational strategy adopted. The method used to extract LAE sources from the MUSE cubes is presented in Sect. 3. The main characteristics of, and references for, the four lensing models used in this article are presented in Sect. 4; the present MUSE data were also used to identify new multiply imaged systems in these clusters, and therefore to further improve the mass models. The selection of the LAE sample used in this study is presented in Sect. 5. Section 6 is devoted to the computation of the LF. In this section we present the complete procedure developed for the determination of the LF based on IFU detections in lensing clusters; additional technical points and examples are given in Appendices A–D. This procedure includes novel methods for masking, effective volume integration, and (individual) completeness determination, using as far as possible the true spatial and spectral morphology of LAEs instead of a parametric approach. The parametric fit of the LF by a Schechter function, including data from the literature to complete the present sample, is presented in Sect. 7. The impact of the mass model on the faint end, and the contribution of the LAE population to the star formation rate density (SFRD), are discussed in Sect. 8. Conclusions and perspectives are given in Sect. 9.

Throughout this paper we adopt the following cosmology: ΩΛ = 0.7, Ωm = 0.3 and H0 = 70 km s−1 Mpc−1. Magnitudes are given in the AB system (Oke & Gunn 1983). All redshifts quoted are based on vacuum rest-frame wavelengths.
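As a sanity check on the luminosity computations used later in the paper, the luminosity distance for this cosmology can be evaluated with a simple numerical integration. The following is a minimal pure-Python sketch assuming a flat ΛCDM universe with the parameters above; all function names are ours, not part of the paper's pipeline.

```python
import math

# Cosmological parameters adopted in the paper
H0 = 70.0            # km s^-1 Mpc^-1
OMEGA_M = 0.3
OMEGA_L = 0.7
C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, steps=10000):
    """Comoving distance in Mpc for a flat LCDM cosmology,
    via composite trapezoidal integration of 1/E(z)."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e_z = math.sqrt(OMEGA_M * (1.0 + zi) ** 3 + OMEGA_L)
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight / e_z * dz
    return C_KM_S / H0 * integral

def luminosity_distance(z):
    """Luminosity distance D_L = (1 + z) * D_C in a flat universe, in Mpc."""
    return (1.0 + z) * comoving_distance(z)
```

For z = 3 (roughly the low-redshift end of the LAE sample), this gives a luminosity distance of about 25.4 Gpc.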

2. Data

2.1. MUSE observations

The sample used in this study consists of four different MUSE cubes of different sizes and exposure times, covering the central regions of well-characterized lensing clusters: Abell 1689, Abell 2390, Abell 2667, and Abell 2744 (resp. A1689, A2390, A2667 and A2744 hereafter). These four clusters already had well constrained mass models before the MUSE observations, as they benefited from previous spectroscopic observations. The reference mass models can be found in Richard et al. (2010; LoCuSS) for A2390 and A2667, in Limousin et al. (2007) for A1689, and in Richard et al. (2014) for the Frontier Fields cluster A2744.

The MUSE instrument has a 1′ × 1′ field of view (FoV) and a spatial pixel size of 0.2″; it covers the wavelength range from 4750 Å to 9350 Å with a 1.25 Å sampling, effectively making the detection of LAEs possible between redshifts z = 2.9 and z = 6.7. The data were obtained as part of the MUSE GTO programme and the first commissioning run (for A1689 only). All observations were conducted in the nominal WFM-NOAO-N mode of MUSE. The main characteristics of the four fields are listed in Table 1. The geometry and limits of the four FoVs are shown overlaid on the available HST images in Fig. 1.
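The quoted redshift window for LAEs follows directly from the spectral coverage: Lyman-alpha (1215.67 Å rest frame) must fall between 4750 Å and 9350 Å. A small illustrative computation:

```python
# Redshift window over which Lyman-alpha falls inside the MUSE spectral range
LYA_REST = 1215.67                        # Ly-alpha rest-frame wavelength (Angstrom)
LAMBDA_MIN, LAMBDA_MAX = 4750.0, 9350.0   # MUSE spectral coverage (Angstrom)

def lya_redshift(lambda_obs):
    """Redshift at which Lyman-alpha is observed at wavelength lambda_obs."""
    return lambda_obs / LYA_REST - 1.0

z_min = lya_redshift(LAMBDA_MIN)  # ~2.9
z_max = lya_redshift(LAMBDA_MAX)  # ~6.7
```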

Table 1.

Main characteristics of MUSE observations.

Fig. 1.

MUSE footprints overlaid on HST deep colour images. North is up and east is to the left. The images are obtained from the F775W, F625W, F475W filters for A1689, from F850LP, F814W, F555W for A2390, from F814W, F606W, F450W for A2667, and from F814W, F606W, F435W for A2744.


A1689. Observations were already presented in Bina et al. (2016) from the first MUSE commissioning run in 2014. The total exposure was divided into six individual exposures of 1100 s. A small linear dither pattern of 0.2″ was applied between each exposure to minimize the impact of the structure of the instrument on the final data. No rotation was applied between individual exposures.

A2390, A2667, and A2744. The same observational strategy was used for all three cubes: the individual pointings were divided into exposures of 1800 s. In addition to a small dither pattern of 1″, the position angle was incremented by 90° between each individual exposure to minimize the striping patterns caused by the slicers of the instrument. A2744 is the only mosaic included in the present sample. The strategy was to completely cover the multiple-image area. For this cluster, the exposure times of the four different FoVs are 3.5, 4, 4, and 5 hours, plus an additional 2 hours at the centre of the cluster (see Fig. 1 in Mahler et al. 2018 for the details of the exposure map). For A2390 and A2667, the centre of the FoV was positioned on the central region of the cluster, as shown in Table 1 and Fig. 1.

2.2. MUSE data reduction

All the MUSE data were reduced using the MUSE ESO reduction pipeline (Weilbacher et al. 2012, 2014). This pipeline includes bias subtraction, flat fielding, wavelength and flux calibration, basic sky subtraction, and astrometry. The individual exposures were then assembled to form a final data cube or a mosaic. An additional sky line subtraction was performed with the Zurich Atmosphere Purge software (ZAP; Soto et al. 2016), which uses principal component analysis to characterize the residuals of the first sky line subtraction and remove them from the cubes. Even though the line subtraction is improved by this process, the variance in the wavelength layers affected by sky lines remains higher, making source detection more difficult in these layers. For simplicity, we hereafter use the term layer to refer to the monochromatic images in the MUSE cubes.

2.3. Complementary data (HST)

For all the MUSE fields analysed in this paper, complementary deep HST data are available. They were used to support the source detection process in the cubes, and also to model the mass distribution of the clusters (see Sect. 4). A brief list of the ancillary HST data used for this project is presented in Table 2. For A1689, the data are presented in Broadhurst et al. (2005). For A2390 and A2667, a very thorough summary of all the available HST observations is presented in Richard et al. (2008), and more recently in Olmstead et al. (2014) for A2390. A2744 is part of the HFF programme, which comprises the deepest observations performed by HST on lensing clusters. All the raw data and individual exposures are available from the Mikulski Archive for Space Telescopes (MAST), and the details of the reduction are addressed in the articles cited above.

Table 2.

Ancillary HST observations.

3. Detection of the LAE population

3.1. Source detection

The MUSE instrument is very efficient at detecting emission lines (see e.g. Bacon et al. 2017; Herenz et al. 2017). By contrast, deep photometry is better suited to detecting faint objects with weak continua, with or without emission lines. To build a complete catalogue of the sources in a MUSE cube, we combined a continuum-guided detection strategy based on deep HST images (see Table 2 for the available photometric data) with a blind detection in the MUSE cubes. Many sources are detected by both approaches, and the catalogues are merged at the end of the process into a single master catalogue. The detailed method used for the extraction of sources in A1689 and A2744 can be found in Bina et al. (2016) and Mahler et al. (2018), respectively. The general method used for A2744, which contains the vast majority of sources in the present sample, is summarized below.

The presence of diffuse intra-cluster light (ICL) makes the detection of faint sources difficult in the cluster core, in particular for multiple images located in this area. A running median filter computed in a window of 1.3″ was applied to the HST images to remove most of the ICL. The ICL-subtracted images were then weighted by their inverse variance map and combined to make a single deep image. The final photometric detection was performed by SExtractor (Bertin & Arnouts 1996) on the weighted and combined deep images.
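The ICL subtraction step can be sketched as follows: subtract from each pixel the median of a box around it, so that large-scale diffuse light is removed while compact sources survive. This is a naive pure-Python illustration (the window here is in pixels, whereas the paper uses a 1.3″ window on the HST images), not the code actually used.

```python
import statistics

def subtract_running_median(image, window=7):
    """Crude ICL removal: subtract a running median computed in a
    (window x window) box around each pixel. `image` is a list of rows."""
    ny, nx = len(image), len(image[0])
    half = window // 2
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            box = [image[j][i]
                   for j in range(max(0, y - half), min(ny, y + half + 1))
                   for i in range(max(0, x - half), min(nx, x + half + 1))]
            out[y][x] = image[y][x] - statistics.median(box)
    return out
```

A flat diffuse background is removed exactly, while a point source much smaller than the window is left essentially untouched.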

For the blind detection on the MUSE cubes, the Muselet software (MUSE Line Emission Tracker, written by J. Richard) was used. This tool is based on SExtractor and detects emission-line objects in MUSE cubes. It produces a spectrally weighted, continuum-subtracted NB image for each layer of the cube. Each NB image is the weighted average of five wavelength layers, corresponding to a spectral width of 6.25 Å. These images form a NB cube, in which only the emission-line objects remain. SExtractor is then run on each of the NB images. At the end of the process, the individual detection catalogues are merged, and sources with several detected emission lines are assembled into single sources.
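The construction of the NB cube can be illustrated schematically for a single spaxel: average five consecutive layers and subtract a local continuum estimated from flanking windows. This is a simplified sketch of the idea, not Muselet itself; the function name and the unweighted averages are our own simplifications.

```python
def narrowband_spectrum(spec, half=2, cont_width=10):
    """Toy NB construction for one spaxel spectrum `spec` (one value per
    wavelength layer): average 2*half+1 layers (5 layers = 6.25 A for MUSE)
    and subtract a local continuum estimated from flanking windows."""
    nw = len(spec)
    nb = []
    for k in range(nw):
        lo, hi = max(0, k - half), min(nw, k + half + 1)
        line = sum(spec[lo:hi]) / (hi - lo)
        # Continuum estimate from windows just blueward and redward
        side = spec[max(0, lo - cont_width):lo] + spec[hi:hi + cont_width]
        cont = sum(side) / len(side) if side else 0.0
        nb.append(line - cont)
    return nb
```

In the resulting NB spectrum, a flat continuum cancels out and only emission-line layers remain positive, which is what makes the subsequent SExtractor runs sensitive to line emitters only.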

After building the master catalogue, all spectra were extracted and the redshifts of galaxies were measured. For A1689, A2390, and A2667, 1D spectra were extracted using a fixed 1.5″ aperture. For A2744, the extraction area is based on the SExtractor segmentation maps obtained from the deblended photometric detections described above. At this stage, the extracted spectra are only used for the redshift determination. The precise measurement of the total line fluxes requires a specific procedure, which is described in Sect. 3.2. Extracted spectra were manually inspected to identify the different emission lines and accurately measure the redshift.

A system of confidence levels was adopted to reflect the uncertainty in the measured redshifts, following Mahler et al. (2018), which provides examples illustrating the different cases. All the LAEs used in the present paper belong to confidence categories 2 and 3, meaning that they all have fairly robust redshift measurements. For LAEs with a single line and no detected continuum, the wide wavelength coverage of MUSE, the absence of any other line, and the asymmetry of the line profile were used to validate the identification of the Lyman-alpha emission. For A1689, A2390, and A2667, most of the background galaxies are part of multiple-image systems, and are therefore confirmed high-redshift galaxies based on lensing considerations.

In total 247 LAEs were identified in the four fields: 17 in A1689, 18 in A2390, 15 in A2667, and 197 in A2744. The important difference between the number of sources found in the different fields results from a well-understood combination of field size, magnification regime, and exposure time, as explained in Sect. 5.

3.2. Flux measurements

The flux measurement is part of the main procedure, presented in Sect. 6, developed to compute the LF of LAEs in lensing clusters observed with MUSE. It is described here because the selection of the final sample of galaxies used to build the LF partly relies on the results of the flux measurements.

For each LAE, the flux in the Lyman-alpha line was measured on a continuum-subtracted NB image containing the whole Lyman-alpha emission. For each source, we built a sub-cube centred on the Lyman-alpha emission, plus adjacent blue and red sub-cubes used to estimate the spectral continuum. The central sub-cube covers a 10″ × 10″ area, and its spectral range depends on the spectral width of the line. To determine this width and the precise position of the Lyman-alpha emission, all sources were inspected manually. The blue and red sub-cubes are centred on the same spatial position, with the same spatial extent, and are 20 Å wide in the wavelength direction. A continuum image was estimated from the average of the blue and red sub-cubes, and this image was subtracted pixel-to-pixel from the central NB image. For sources with a large full width at half maximum (FWHM), the NB image used for flux measurement can include more than 20 wavelength layers (i.e. more than 25 Å).
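The pixel-to-pixel continuum subtraction described above can be sketched as follows, with each sub-cube represented as a list of 2D layers. The function name and the exact normalization (summing the central layers and scaling the continuum accordingly) are our own illustrative choices.

```python
def line_nb_image(central, blue, red):
    """Continuum-subtracted NB image for one source.
    Each argument is a sub-cube given as a list of 2D layers (lists of rows).
    The per-layer continuum is the mean of the blue and red sub-cube means;
    it is subtracted pixel-to-pixel, scaled by the number of central layers."""
    n = len(central)
    ny, nx = len(central[0]), len(central[0][0])

    def mean_image(sub):
        # Average a sub-cube along the wavelength axis
        return [[sum(layer[y][x] for layer in sub) / len(sub)
                 for x in range(nx)] for y in range(ny)]

    cb, cr = mean_image(blue), mean_image(red)
    return [[sum(layer[y][x] for layer in central)
             - n * 0.5 * (cb[y][x] + cr[y][x])
             for x in range(nx)] for y in range(ny)]
```

For a source sitting on a flat continuum, the background cancels exactly and the NB image retains only the line flux.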

Because SExtractor with FLUX_AUTO is known to provide a good estimate of the total flux of sources, at the 5% level (see e.g. the SExtractor manual, Sect. 10.4, Fig. 8), it was used to measure the flux and the corresponding uncertainties on the continuum-subtracted images. The FLUX_AUTO routine is based on the Kron first-moment algorithm and is well suited to accounting for the extended Lyman-alpha haloes found around many LAEs (see Wisotzki et al. 2016 for the extended nature of the Lyman-alpha emission). In addition, the automated aperture properly accounts for the distorted images that are often found in lensing fields. As our sample contains faint, low surface brightness sources, and given that the NB images are not designed to maximize the signal-to-noise ratio (S/N), it is sometimes challenging to extract sources with faint or low surface brightness Lyman-alpha emission. In order to measure their flux, we forced the extraction at the position of the source: the SExtractor detection parameters were progressively loosened until a successful extraction was achieved. An extraction was considered successful when the source was recovered within a certain matching radius (rm ∼ 1″) of the original position given by Muselet. Such an offset is sometimes observed between the peak of the UV continuum and the Lyman-alpha emission in cases of high magnification. A careful inspection was needed to make sure that no errors or mismatches were introduced in the process.

Other automated alternatives to SExtractor exist to measure the line flux (see e.g. LSDCat in Herenz et al. 2017 or NoiseChisel in Akhlaghi & Ichikawa 2015 or a curve of growth approach as developed in Drake et al. 2017). A comparison between these different methods is encouraged in the future but beyond the scope of the present analysis.

4. Lensing clusters and mass models

In this work, we used detailed mass models to compute the magnification of each LAE and the source plane projections of the MUSE FoVs at various redshifts. These projections were needed for the volume computation (see Sect. 6.1). The mass models were constructed with Lenstool, using the parametric approach described in Kneib et al. (1996), Jullo et al. (2007), and Jullo & Kneib (2009). This approach relies on analytical dark-matter (DM) halo profiles to describe the projected 2D mass distribution of the cluster. Two main contributions are considered by Lenstool: one for each large-scale structure of the cluster, and one for each massive cluster galaxy. The parameters of the individual profiles are optimized through a Markov chain Monte Carlo (MCMC) minimization. The Lenstool software aims at reducing the cumulative distance between the positions of the multiple images predicted by the model and the observed ones. The presence of several robust multiple systems greatly improves the accuracy of the resulting mass model. The use of MUSE is therefore a great advantage, as it allowed us to confirm multiple systems through spectroscopic redshifts and also to discover new systems (e.g. Richard et al. 2015; Bina et al. 2016; Lagattuta et al. 2017; Mahler et al. 2018). Some of the models used in this study are based on the new constraints provided by MUSE. An example of source plane projection of the MUSE FoVs is provided in Fig. 2.

Fig. 2.

Left panel: MUSE white-light image of the A2667 field, shown with a logarithmic colour scale. Right panel: projection of the four MUSE FoVs into the source plane at z = 3.5, with the magnification map encoded in colour. All images in this figure are at the same spatial scale. In multiply imaged areas, the source plane magnification shown corresponds to the magnification of the brightest image.


Because of the large number of cluster members, the optimization of each individual galaxy-scale clump cannot be achieved in practice. Instead, a relation combining the constant mass-luminosity scaling relation described in Faber & Jackson (1976) and the fundamental plane of elliptical galaxies is used by Lenstool. This assumption allows us to reduce the parameter space explored during the minimization process, leading to more constrained mass models, whereas individual parameterization of clumps would lead to an extremely degenerate final result and therefore, a poorly constrained mass model. The analytical profiles used were double pseudo-isothermal elliptical potentials (dPIEs) as described in Elíasdóttir et al. (2007). The ellipticity and position angle of these elliptical profiles were measured for the galaxy-scale clumps with SExtractor taking advantage of the high spatial resolution of the HST images.
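The scaling of the galaxy-scale dPIE clumps with luminosity can be illustrated with the Faber-Jackson-type relations commonly used in this kind of modelling, σ ∝ L^(1/4) and r_cut ∝ L^(1/2) (constant mass-to-light ratio). The reference values below are illustrative placeholders, not the values adopted in the paper.

```python
def scaled_dpie_params(l_ratio, sigma_star=150.0, rcut_star=45.0):
    """dPIE parameters for a cluster member with luminosity L = l_ratio * L*,
    under a constant mass-to-light scaling: sigma ~ L^(1/4) (Faber-Jackson)
    and r_cut ~ L^(1/2). sigma_star (km/s) and rcut_star (kpc) are
    illustrative reference values for an L* galaxy."""
    sigma = sigma_star * l_ratio ** 0.25  # central velocity dispersion
    rcut = rcut_star * l_ratio ** 0.5     # truncation radius
    return sigma, rcut
```

With this scheme, only the two reference parameters (σ*, r_cut*) are optimized for the whole galaxy population, which is what keeps the parameter space tractable.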

Because the brightest cluster galaxies (BCGs) lie at the centre of clusters, they are subjected to numerous merging processes and are not expected to follow the same light-mass scaling relation. They are modelled separately in order to not bias the final result. In a similar way, galaxies that are close to the multiple images or critical lines are sometimes manually optimized because of the significant impact they can have on the local magnification and geometry of the critical lines.

The present MUSE survey has allowed us to improve the reference models available in previous works. Table 3 summarizes their main characteristics. For A1689, the model used is an improvement of the model of Limousin et al. (2007), previously presented in Bina et al. (2016). For A2390, the reference model is presented in Pello et al. (1991) and Richard et al. (2010), with recent improvements in Pello et al. (in prep.). For A2667, the original model was obtained by Covone et al. (2006) and was updated in Richard et al. (2010). For A2744, the gold model presented in Mahler et al. (2018) was used; as a novelty, it includes NorthGal and SouthGal, two background galaxies added to the mass model because they could have a local influence on the position and magnification of multiple images.

Table 3.

Summary of the main mass components for the lensing models used for this work.

5. Selection of the final LAE sample

To obtain the final LAE sample used to build the LF, only one source per multiple-image system was retained. The ideal strategy would be to keep the image with the highest S/N, which often coincides with the image with the highest magnification. However, for the needs of the LF determination, it is safer to keep the sources with the most reliable flux measurement and magnification determination. In practice, this means that we often chose the least distorted and most isolated image. The flux and extraction of all sources among multiple systems were reviewed manually to select the best one for inclusion in the final sample. All sources for which the flux measurement failed, or that were too close to the edge of the FoV, were removed from the final sample. One extremely diffuse and low surface brightness source (Id: A2744, 5681) was also removed, as it was impossible to properly determine its profile for the completeness estimation in Sect. 6.2.1.

The final sample consists of 156 lensed LAEs: 16 in A1689, 5 in A2390, 7 in A2667, and 128 in A2744. Of these 156 sources, four are removed at a later stage of the analysis for completeness reasons (see Sect. 6.2.2), leaving 152 to compute the LFs. The large difference in the number of detected sources between the clusters is expected, for two reasons:

  • The A2744 cube is a 2 × 2 MUSE FoV mosaic and is deeper than the three other fields: on average four hours exposure time for each quadrant, whereas all the others have two hours or less of integration time (see Table 1).

  • The larger FoV allows us to reach further away from the critical lines of the cluster, therefore increasing the probed volume as we get close to the edges of the mosaic.

This makes the effective volume of the universe explored in the A2744 cube much larger (see the end of Sect. 6.1.2) than in the three other fields combined. It is therefore not surprising to find most of the sources in this field. This volume dilution effect is most visible when looking at the projection of the MUSE FoVs in the source plane (see Fig. 2). Even though this difference is expected, we also appear to be affected by an over-density of background sources at z = 4, as shown in Fig. 3. This over-density is currently being investigated as a potential primordial group of galaxies (Mahler et al., in prep.). The complete source catalogue is provided in Table E.1, and the magnification-corrected Lyman-alpha luminosity distribution is shown in the lower panel of Fig. 3. The corrected luminosity LLyα was computed from the detection flux FLyα with

LLyα = 4π DL² FLyα / μ,    (1)

where μ and DL are the magnification and luminosity distance of the source, respectively. In this section and in the rest of this work, a flux-weighted magnification is used to better account for extended sources and for sources detected close to the critical lines of the clusters, where the magnification gradient is very strong. This magnification is computed by projecting the segmentation map of each LAE into the source plane with Lenstool, measuring a magnification for each of its pixels, and taking the flux-weighted average. A full probability distribution of the magnification, P(μ), is also computed for each LAE and used in combination with the uncertainties on FLyα to obtain a realistic luminosity distribution when computing the LFs (see Sect. 6.3). Objects with the highest magnifications are affected by the strongest uncertainties and tend to have very asymmetric P(μ), with a long tail towards high magnifications. Because of this effect, LAEs with log L < 40 should be considered with great caution.
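Equation (1) translates directly into code; the function name and unit conventions (flux in erg s⁻¹ cm⁻², luminosity distance in Mpc) are our own choices, not those of the paper's pipeline.

```python
import math

MPC_TO_CM = 3.0857e24  # one megaparsec in centimetres

def lya_luminosity(flux, mu, dl_mpc):
    """Magnification-corrected Lyman-alpha luminosity (Eq. (1)):
    L = 4 * pi * D_L^2 * F / mu, with F in erg/s/cm^2, D_L in Mpc,
    and mu the (flux-weighted) magnification. Returns erg/s."""
    dl_cm = dl_mpc * MPC_TO_CM
    return 4.0 * math.pi * dl_cm ** 2 * flux / mu
```

For instance, a detected flux of 10⁻¹⁸ erg s⁻¹ cm⁻² at z ∼ 3 (DL ≈ 25 Gpc) magnified by μ = 10 corresponds to log L ≈ 39.9, i.e. the regime where the caution above applies.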

Fig. 3.

Redshift and magnification corrected luminosity distribution of the 152 LAEs used for the LF computation (in blue). The corrected histograms in light red correspond to the histogram of the population weighted by the inverse of the completeness of each source (see Sect. 6.2). The empty bins seen on the redshift histograms are not correlated with the presence of sky emission lines.


Figure 4 compares our final sample with the sample used in the MUSE HUDF LAE LF study (Drake et al. 2017, hereafter D17). The MUSE HUDF (Bacon et al. 2017), with a total of 137 hours of integration, is the deepest MUSE observation to date. It consists of a 3 × 3 MUSE FoV mosaic, each quadrant being a 10-hour exposure, with an additional 30-hour pointing (udf-10) overlaid on the mosaic. The population selected in D17 comprises 481 LAEs found in the mosaic and 123 in the udf-10, for a total of 604 LAEs. The upper panel of the figure shows the luminosity of the two samples versus redshift. With lensing clusters, the redshift selection is less affected by luminosity bias, especially at higher redshift. The lower panel shows the normalized luminosity distributions of the two populations. The strength of the study presented in D17 resides in the large number of sources selected; however, a sharp drop is observed in their distribution at log L ∼ 41.5. Using lensing clusters, with ∼25 h of exposure time and a much smaller lens-corrected survey volume, a broader luminosity selection was achieved. As discussed in the following sections, despite a smaller number of LAEs compared to D17, the sample presented in this paper is by construction more sensitive to the faint end of the LF.

Fig. 4.

Comparison of the 152 LAEs sample used in this work with D17. Upper panel: luminosity vs. redshift; error bars have been omitted for clarity. Lower panel: luminosity distribution of the two samples, normalized using the total number of sources. The use of lensing clusters allows for a broader selection, both in redshift and luminosity towards the faint end.


6. Computation of the luminosity function

Because of the combined use of lensing clusters and spectroscopic data cubes, it is extremely challenging to adopt a parametric approach to determine the selection function. By construction, the sample of LAEs used in this paper includes sources detected under very different conditions, from intrinsically bright emitters with moderate magnification to highly magnified galaxies that could not have been detected far from the critical lines. To properly take these differences into account when computing the LF, we adopted a non-parametric approach that treats the sources individually: the 1/Vmax method (Schmidt 1968; Felten 1976). We present in this section the four steps developed to compute the LFs:

  • (i) The flux computation, performed for all detected sources. This step was already described in Sect. 3.2, as the selection of the final sample relies partly on the results of the flux measurements.

  • (ii) The volume computation for each source included in the final sample, presented in Sect. 6.1.

  • (iii) The completeness estimation using the real source profiles (both spatial and spectral), presented in Sect. 6.2.

  • (iv) The computation of the points of the differential LF, using the results of the volume computation and the completeness estimations, presented in Sect. 6.3.
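The steps above combine into the 1/Vmax estimator, in which each source i contributes 1/(Ci Vmax,i) to the number density of its luminosity bin. The following is a minimal sketch with an assumed data layout (one tuple per source), not the paper's actual code.

```python
def luminosity_function(sources, bin_edges):
    """Non-parametric 1/Vmax estimator of the differential LF.
    sources: list of (log_l, vmax, completeness) tuples, with vmax in Mpc^3
    and completeness in (0, 1]; bin_edges: increasing log L bin boundaries.
    Returns phi, the number density per Mpc^3 per dex in each bin."""
    nbins = len(bin_edges) - 1
    phi = [0.0] * nbins
    for log_l, vmax, comp in sources:
        for b in range(nbins):
            if bin_edges[b] <= log_l < bin_edges[b + 1]:
                width = bin_edges[b + 1] - bin_edges[b]  # bin width in dex
                phi[b] += 1.0 / (comp * vmax * width)
                break
    return phi
```

Weighting each source by the inverse of its individual completeness Ci is what corrects the counts for the sources missed at a similar flux and profile, as described in Sect. 6.2.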

6.1. Volume computation in spectroscopic cubes in lensing clusters

The Vmax value is defined as the volume of the survey where an individual source could have been detected. The inverse value, 1/Vmax, gives the contribution of one source to the numerical density of galaxies. Because this survey consists of several FoVs, the Vmax value for a given source must be determined from all the fields that are part of the survey, including the fields in which the source is not actually present. The volumes were computed in the source plane to avoid multiple counting of the parts of the survey that are multiply imaged. For that, we used Lenstool to project the MUSE fields onto the source plane and then used these projections to compute the volume (see Fig. 2 for an example of source plane projection). In this analysis, the volume computation was performed independently from the completeness estimation, focussing only on the spectral noise variations of the cubes.

The detectability of each LAE needs to be evaluated over the entire survey to compute Vmax. This task is not straightforward, as the detectability depends on many different factors:

  • The flux of the source: The brighter the source, the higher the chances to be detected. For a given spatial profile, brighter sources have higher Vmax values.

  • The surface brightness and line profile of the source: For a given flux, a compact source would have a higher surface brightness value than an extended one, and therefore would be easier to detect. This aspect is especially important as most LAEs have an extended halo (see Wisotzki et al. 2016).

  • The local noise level: To first approximation, it depends on the exposure time. This point is especially important for mosaics, in which the noise level differs between parts of the mosaic: the noisier parts contribute less to the Vmax values.

  • The redshift of the source: The Lyman-alpha line profile of a source may be affected by the presence of strong sky lines in the close neighbourhood. The cubes themselves have strong variations of noise level caused by the presence of those sky emission lines (see e.g. Fig. 5).

    Fig. 5.

    Evolution of the noise level with wavelength inside the A1689 MUSE cube. We define the noise level of a given wavelength layer of a cube as the spatial median of the RMS layer over a normalization factor. The noise spikes that are more prominent in the red part of the cube are caused by sky lines.


  • The magnification induced by the cluster: Where the magnification is too small, the faintest sources could not have been detected.

  • The seeing variation from one cube to another.

This shows that to properly compute Vmax, each source has to be considered individually. The easiest way to evaluate the detectability of sources is to simply mask the brightest objects of the survey, assuming that no objects could be detected behind them. This can be achieved from a white light image, using a mask generated from a SExtractor segmentation map. The volume computation can then be done on the unmasked pixels, and only where the magnification is high enough to allow the detection of the source. However, as shown in Appendix C, this technique has limitations in accounting for the 3D morphologies of real LAEs. For this reason, a method was developed to determine precisely the detectability map (referred to as the detection mask, or simply mask, hereafter) of individual sources. As the detection process in this work is based on 2D collapsed images, we adopted the same scheme to build the 2D detection masks, and from these, built the 3D masks in the source plane adapted to each LAE of the sample. Using these individual source plane 3D masks, and as previously mentioned, the volume integration was performed on the unmasked pixels only where the magnification is high enough. In the paragraphs below, we briefly summarize the method adopted to produce masks for 2D images and explain the reasons that led to the more complex method detailed in Sects. 6.1.1 and 6.1.2.

The basic idea of our method for producing masks for 2D images is to mimic the SExtractor source detection process. For each pixel in the detection image, we determine whether the source could have been detected, had it been centred on this pixel. For this pseudo-detection, we fetch the values of the brightest pixels of the source (hereafter Bp) and compare them pixel-to-pixel to the background root mean square maps (RMS maps) produced by SExtractor from the detection image. The pixels where this pseudo-detection is successful are left unmasked, and where it failed, the pixels are masked. Technical details of the method for 2D images can be found in Appendix A. The detection masks produced in this way are binary masks and show where the source could have been detected. We use the term “covering fraction” to refer to the fraction of a single FoV covered by a mask. A covering fraction of 1 means that the source could not be detected anywhere on the image, whereas a covering fraction of 0 means that the source could be detected on the entire image.
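The pseudo-detection described above can be sketched as follows. This is a minimal NumPy mock-up, not the actual pipeline: the function name and profile values are ours, the SExtractor-like test is approximated by requiring a minimum number of bright pixels above a threshold times the RMS, and the local RMS under the whole profile is approximated by the value at the central pixel.

```python
import numpy as np

def detection_mask(bp, rms_map, kappa=1.3, min_area=6):
    """Binary mask: True (unmasked) where a source with bright-pixel
    profile `bp` (sorted, brightest first) would pass a SExtractor-like
    test if it were centred on that pixel of the background RMS map."""
    # "Detected" at a pixel if at least `min_area` of the bright pixels
    # exceed kappa times the local background RMS.
    n_above = np.sum(bp[:, None, None] > kappa * rms_map[None, :, :], axis=0)
    return n_above >= min_area

rng = np.random.default_rng(0)
rms = rng.uniform(0.5, 2.0, size=(40, 40))     # mock background RMS map
bp = np.array([3.0, 2.5, 2.2, 2.0, 1.9, 1.8])  # mock bright-pixel profile
mask = detection_mask(bp, rms)
covering_fraction = 1.0 - mask.mean()          # fraction of the FoV masked
```

With a uniform RMS map the source is either detectable everywhere or nowhere, matching the covering fraction limits of 0 and 1 described above.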

This method of producing the detection masks from 2D images is precise and simple to implement when the survey consists of 2D photometric images. However, when dealing with 3D spectroscopic cubes, its application becomes much more complicated owing to the strong variations of noise level with wavelength in the cubes. Because of these variations, the detectability of a single source through the cubes cannot be represented by a single mask, duplicated on the spectral axis to form a 3D mask. An example of the spectral variations of noise level in a MUSE cube is provided in Fig. 5. These spectral variations are very similar for the four cubes. “Noise level” is used to refer to the average level of noise on a single layer. It is determined from the RMS cubes, which are created by SExtractor from the detection cube (i.e. the Muselet cube of NB images). For a layer i of the RMS cube, the noise level corresponds to the spatial median of the RMS layer over a normalization factor as follows:

noise leveli = ⟨RMSi⟩x, y / ⟨RMSmedian⟩x, y. (2)

In this equation ⟨..⟩x, y is the spatial median operator. The 2D median RMS map, RMSmedian, is obtained from a median along the wavelength axis for each spatial pixel of the RMS cube. The normalization is the spatial median value of the median RMS map. The main factor responsible for the high frequency spectral variations of noise level is the presence of sky lines affecting the variance of the cubes.
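The noise level of Eq. (2) amounts to a few array operations. The sketch below assumes the RMS cube is stored as a NumPy array with the wavelength axis first; the function name is ours.

```python
import numpy as np

def noise_levels(rms_cube):
    """Noise level of each wavelength layer of an RMS cube (Eq. (2)):
    the spatial median of each layer, normalized by the spatial median
    of the wavelength-median RMS map."""
    rms_median_map = np.median(rms_cube, axis=0)    # 2D median RMS map
    norm = np.median(rms_median_map)                # its spatial median
    return np.median(rms_cube, axis=(1, 2)) / norm  # one value per layer
```

A layer affected by a sky line (higher RMS) gets a proportionally higher noise level than the quiet layers.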

To properly account for the noise variations, the detectability of each source has to be evaluated along the spectral direction of the cubes by creating a series of detection masks from individual layers. These masks are then projected into the source plane for the volume computation. This step is the main limiting factor, as it would require an excessive amount of computation time. For a sample of 160 galaxies in four cubes, sampling the different noise levels at only ten wavelengths would require 6400 Lenstool projections. This represents more than 20 days of computation on a 60 CPU machine, and is still not representative of the actual variations of noise level with wavelength. To circumvent this difficulty, we developed a new approach that allows for a fine sampling of the noise level variations while drastically limiting the number of source plane reconstructions. A flow chart of the method described in the next sections is provided in Fig. 6.

thumbnail Fig. 6.

Flow chart of the method used to produce the 3D masks and to compute Vmax. The key points are shown in red and the main path followed by the method is indicated in blue. All the steps related to the determination of the bright pixels are shown in grey. The steps related to the computation of the S/N of each source are indicated in green. The numbered labels in light blue refer to the bullet points in Appendix D that briefly sum up all the different steps of this figure.

Open with DEXTER

6.1.1. Masking 3D cubes

The general idea of the method is to use a S/N proxy for individual sources instead of comparing their flux to the actual noise. In other words, the explicit computation of the detection mask for every source, wavelength layer, and cube is replaced by a set of pre-computed masks for every cube, covering a wide range of S/N values, in such a way that a given source can be assigned the mask corresponding to its S/N in a given layer. Two independent steps were performed before assembling the final 3D masks. First, the evolution of the S/N value of each LAE was computed through the spectral dimension of the cubes; these are referred to as the S/N curves hereafter. Second, for each cube, a series of 2D detection masks was created for an independent set of S/N values. These two steps are detailed below. The final 3D detection masks were then assembled by successively picking the 2D mask that corresponds to the S/N value of the source at a given wavelength in a given cube. This process was done for all sources individually.

For the first step, the S/N value of a given source was defined from the bright-pixel profile of the source and a RMS map, by comparing the maximum of the brightest pixels profile (max(Bp)) to the noise level of that RMS map.

For each layer of the RMS cube, we computed the S/N value the source would have had at that spectral position in the cube. We point out that this is not a proper S/N value (hence the use of the term “proxy”) as the normalization used to define the noise levels in Eq. (2) depends on the cube. For a layer i of the RMS cube, the corresponding S/Ni value is given by

S/Ni = max(Bp) / noise leveli. (3)

An example of a S/N curve defined this way is provided in Fig. 7. For a given source, this computation was done on every layer of every cube of the survey. When computing the S/N of a given source in a cube different from its parent cube, the seeing difference (see Table 1) is accounted for by introducing a convolution or deconvolution procedure to set the detection image of the LAE to the resolution of the cube considered. As a result, for each LAE, three additional images are produced. The four images (the original detection image plus the three simulated ones) are then used to measure the values of the brightest pixels in all four seeing conditions. For the deconvolution, a Python implementation of a Wiener filter, part of the scikit-image package (van der Walt et al. 2014), was used. The deconvolution algorithm itself is presented in Orieux et al. (2010), and for all these computations the PSF of the seeing is assumed to be Gaussian.
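The convolution direction of the seeing-matching step can be sketched as follows (a NumPy-only illustration under the Gaussian-PSF assumption stated above; the deconvolution direction relies on the scikit-image Wiener filter and is not reproduced here; the function name and default pixel scale are ours). Gaussian PSFs add in quadrature, so degrading to a worse seeing amounts to convolving with a Gaussian kernel of FWHM sqrt(fwhm_target² − fwhm_orig²).

```python
import numpy as np

def degrade_seeing(image, fwhm_orig, fwhm_target, pixscale=0.2):
    """Convolve `image` (observed at seeing FWHM `fwhm_orig`, in arcsec)
    with a Gaussian kernel so it matches the worse seeing `fwhm_target`.
    FFT-based and flux-conserving (kernel has unit integral)."""
    fwhm_ker = np.sqrt(fwhm_target**2 - fwhm_orig**2) / pixscale  # pixels
    sigma = fwhm_ker / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Fourier transform of a unit-flux Gaussian of stddev `sigma` pixels
    kernel_ft = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(image) * kernel_ft).real
```

The total flux is preserved while the peak surface brightness drops, which is precisely why the bright-pixel values must be remeasured in each seeing condition.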

Fig. 7.

Example of the 3D masking process. The blue solid line represents the variations of the S/N across the wavelength dimension for the source A2744-3424 in the A1689 cube. The red points over-plotted represent the 2D resampling made on the S/N curve with ∼300 points. To each of these red points, a mask with the closest S/N value is associated. The short and long dashed black lines represent the S/N level for which a covering fraction of 1 (detected nowhere) and 0 (detected everywhere) are achieved, respectively. For all the points between these two lines, the associated masks have a covering fraction ranging from 1 to 0, meaning that the source is always detectable on some regions of the field.


For the second step, 2D masks are created from a set of S/N values that encompasses all the possible values for our sample. To produce a single 2D mask, the two following inputs are needed: the list of bright pixels of the source Bp and the RMS maps produced from the detection image (in our case, the NB images produced by Muselet). To limit the number of masks produced, two simplifications were introduced, the main one being that all RMS maps of the same cube present roughly the same pattern up to a normalization factor. This is equivalent to saying that all individual layers of the RMS cube can be approximately modelled and reproduced by a properly rescaled version of the same median RMS map. The second simplification is the use of four generalized bright-pixel profiles (hereafter Bpg). To be consistent with the seeing variations, one profile is computed for each cluster, taking the median of all the individual LAE profiles computed from the detection images simulated in each seeing condition (see Fig. A.1 for an example of a generalized bright-pixel profile, also including the effect of seeing). These profiles are normalized in such a way that max(Bpg) = 1. For each value of the S/N set defined, a mask is created for each cluster from its median RMS map and the corresponding Bpg, meaning that the 2D detection masks are no longer associated with a specific source, but with a specific S/N value.

Using the definition of S/N adopted in Eq. (3), the four Bpg are rescaled to fit any S/Nj value of the S/N set and to obtain profiles that are directly comparable to the median RMS maps:

S/Nj = max(cj × Bpg) / noise level(RMSmedian), (4)

where cj is the scaling factor. According to Eq. (2), the noise level of the median RMS maps is just 1, and as mentioned above max(Bpg) = 1. We can see that the scaling factor is simply cj = S/Nj. Therefore the four sets of bright-pixels profiles S/Nj × Bpg and the corresponding median RMS maps are used as input to produce the set of 2D detection masks.

After the completion of these two steps, the final 3D detection masks were assembled for every source individually. For this purpose, a subset of wavelength values (or equivalently, a subset of layer index) drawn from the wavelength axis of a MUSE cube was used to resample the S/N curves of individual sources. For each source and each entry of this wavelength subset, the procedure fetches the value in the S/N set that is the closest to the measured value, and returns the associated 2D detection mask, effectively assembling a 3D mask. An example of this 2D sampling is provided in Fig. 7. To each of the red points resampling the S/N curve, a pre-computed 2D detection mask is associated, and the higher the density of the wavelength sampling, the higher the precision on the final reconstructed 3D mask. The important point is that to increase the sampling density, we do not need to create more masks and therefore it is not necessary to increase the number of source plane reconstructions.
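The nearest-S/N lookup that assembles the final 3D mask can be sketched as follows (a minimal NumPy version; the names and the toy mask set are ours).

```python
import numpy as np

def assemble_3d_mask(snr_curve, snr_set, mask_set):
    """For each wavelength of the (resampled) S/N curve of a source, pick
    the pre-computed 2D mask whose S/N value is closest to the source's
    S/N there. `mask_set` has shape (len(snr_set), ny, nx); the result
    stacks the selected 2D masks into a (n_wave, ny, nx) 3D mask."""
    idx = np.abs(snr_curve[:, None] - snr_set[None, :]).argmin(axis=1)
    return mask_set[idx]

# Toy example: four pre-computed masks labelled by their S/N value
snr_set = np.array([1.0, 2.0, 4.0, 8.0])
mask_set = np.arange(4)[:, None, None] * np.ones((1, 2, 2))
mask3d = assemble_3d_mask(np.array([0.9, 3.0, 7.5]), snr_set, mask_set)
```

Refining the wavelength sampling only lengthens `snr_curve`; the number of pre-computed masks (and hence of source plane reconstructions) is unchanged, which is the key property of the method.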

6.1.2. Volume integration

In the previous section we presented the construction of 3D masks in the image plane for all sources with a limited number of 2D masks. For the actual volume computation, the same was achieved in the source plane by computing the source plane projection of all the 2D masks and combining these masks with the magnification maps. Thanks to the method developed in the previous subsection, the number of source plane reconstructions only depends on the length of the S/N set initially defined and on the number of MUSE cubes used in the survey. It depends neither on the number of sources in the sample nor on the accuracy of the sampling of the S/N variations. For the projections, we used PyLenstool, which allows for an automated use of Lenstool. The reconstruction of the source plane was performed for different redshift values to sample the variation of both the shape of the projected area and the magnification. In practice, these variations are very small with redshift and we reduced the redshift sampling to z = 3.5, 4.5, 5.5, and 6.5.

In a very similar way to what is described at the end of the previous section, 3D masks were assembled and combined with magnification maps in the source plane. In addition to the closest S/N value, the procedure also looks for the closest redshift bin, in such a way that, for a given point (λk, S/Nk) of the resampled S/N curve, the redshift of the projection is the closest to the Lyman-alpha redshift corresponding to λk.

The last important aspect to take into account when computing Vmax is to limit the survey to the regions where the magnification is such that the source could have been detected. The condition is given by

μ ≥ μlim = μ × δFd/Fd, (5)

where μ is the flux weighted magnification of the source, Fd the detection flux, and δFd the uncertainty on the detection, which reflects the local noise properties. This condition simply states that μlim is the magnification that would allow for a S/N of 1, below which the detection of the source would be impossible. It is difficult to find a S/N criterion coherent with the way Muselet works on the detection images, since the images used for the flux computation are different from, and of variable spectral width compared to, the Muselet NB images. Therefore, this criterion for the computation of μlim is intentionally conservative, to avoid overestimating the steepness of the faint end slope.

To be consistent with the differences in seeing and exposure time from cube to cube, μlim is computed for each LAE and for each MUSE cube (i.e. four values for a given LAE). A source only detected thanks to a very high magnification in a shallow, bad-seeing cube (e.g. A1689) would need a much smaller magnification to be detected in a deeper, better-seeing cube (e.g. A2744). For the exposure time difference, the ratio of the median RMS values of the entire cubes is used, and for the seeing, the ratio of the squared seeing values. In other words, the limiting magnification in A2744 for a source detected in A1689 is given by

μlim, A2744 = μlim, A1689 × (⟨RMSA2744⟩x, y, λ / ⟨RMSA1689⟩x, y, λ) × (sA2744/sA1689)², (6)

where ⟨..⟩x, y, λ is the median operator over the three axes of the RMS cubes and s is the seeing. The exact same formula can be applied to compute the limiting magnification of any source in any cube. This simple approximation is sufficient for now, as only the volumes of the rare LAEs with very high magnification are dominated by the effects of the limiting magnification.
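The rescaling of the limiting magnification from one cube to another reduces to a product of ratios (a sketch of Eq. (6) as reconstructed above; the function name is ours).

```python
def mu_lim_other_cube(mu_lim_parent, rms_parent, rms_other,
                      seeing_parent, seeing_other):
    """Rescale the limiting magnification of a source from its parent
    cube to another cube: scale by the ratio of the global median RMS
    values (depth) and by the ratio of the squared seeing values."""
    return (mu_lim_parent
            * (rms_other / rms_parent)
            * (seeing_other / seeing_parent) ** 2)
```

For a deeper (lower RMS) and better-seeing cube both ratios are below unity, so μlim decreases, as expected for the A1689 → A2744 example above.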

The volume integration is performed from one layer of the source plane projected (and masked) cubes to the next, counting only pixels with μ >  μlim. For this integration, the following cosmological volume formula was used:

V = (c/H0) ∫ ω DL(z)² / (1+z)² × dz/E(z), (7)

where ω is the angular size of a pixel, DL is the luminosity distance, and E(z) is given by

E(z) = [Ωm(1+z)³ + ΩΛ]^(1/2). (8)

In practice, and for a given source, when using more than 300 points to resample the S/N curve along the spectral dimension, a stable value is reached for the volume (i.e. less than 5% of variation with respect to a sampling of 1000 points). A comparison is provided in Appendix C between the results obtained with this method and the equivalent findings when a simple mask based on SExtractor segmentation maps is adopted instead. The maximum co-volume explored between 2.9 <  z <  6.7, accounting for magnification, is about 16 000 Mpc3, distributed as follows among the four clusters: ∼900 Mpc3 for A1689, ∼800 Mpc3 for A2390, ∼600 Mpc3 for A2667, and ∼13 000 Mpc3 for A2744.
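The per-layer volume integration can be sketched as follows, under an assumed flat ΛCDM cosmology (the H0 and density parameter values below are illustrative, not necessarily those adopted in this paper; function names are ours). Note that DL/(1+z) is the comoving distance DC, so the integrand reduces to (c/H0) ω DC²/E(z).

```python
import numpy as np

# Assumed flat LCDM parameters (illustrative)
C_KMS = 299792.458   # speed of light [km/s]
H0 = 70.0            # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.3, 0.7

def e_of_z(z):
    """E(z) for a flat LCDM cosmology."""
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def _trapz(y, x):
    """Trapezoidal rule (avoids np.trapz/np.trapezoid naming changes)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def comoving_distance(z, n=512):
    """Comoving distance D_C(z) = (c/H0) * int_0^z dz'/E(z'), in Mpc."""
    zs = np.linspace(0.0, z, n)
    return (C_KMS / H0) * _trapz(1.0 / e_of_z(zs), zs)

def slice_volume(omega, z_lo, z_hi, n=256):
    """Comoving volume [Mpc^3] subtended by solid angle `omega` [sr]
    between z_lo and z_hi: dV = (c/H0) * omega * D_C(z)^2 / E(z) dz."""
    zs = np.linspace(z_lo, z_hi, n)
    dc = np.array([comoving_distance(zz) for zz in zs])
    return _trapz((C_KMS / H0) * omega * dc**2 / e_of_z(zs), zs)
```

Summing `slice_volume` over the unmasked, sufficiently magnified pixels of each source plane layer yields the Vmax of a source.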

6.2. Completeness determination using real source profiles

Completeness corrections account for the sources missed during the selection process. Applying this correction is crucial for the study of the LF. The procedure used in this article separates the contribution to incompleteness due to S/N effects across the detection area from the contribution due to masking across the spectral dimension (see Vmax in Sect. 6.1).

The 3D masking method presented in the previous section aims to map precisely the volume where a source could be detected. However, an additional completeness correction was needed to account for the fact that a source does not have a 100% chance of being detected on its own wavelength layer. In the continuity of the non-parametric approach developed for the volume computation, the completeness was determined for individual sources. To better account for the properties of sources, namely their spatial and spectral profiles, simulations were performed using their real profiles instead of parameterized realizations. Because the detection of sources was done in the image plane, the simulations were also performed in the image plane on the actual masked detection layer of a given source (i.e. the layer of the NB image cube containing the peak of the Lyman-alpha emission of the source). The mask used on the detection layer was picked using the same method as described in Sect. 6.1.1, leaving only the cleanest part of the layer available for the simulations.

6.2.1. Estimating the source profile

To get an estimate of the real source profile, we used the Muselet NB image that captures the peak of the Lyman-alpha emission (called the max-NB image hereafter). Using a method similar to that presented in Sect. 3.2, the extraction of sources on the max-NB images was forced by progressively loosening the detection criterion. The vast majority of our sources were successfully detected on the first try using the original parameters used by Muselet for the initial detection of the sample: DETECT_THRESH = 1.3 and MIN_AREA = 6.

To recover the estimated profile of a source, the pixels belonging to the source were extracted on the filtered image according to the segmentation map. The filtered image is the convolved and background-subtracted image that SExtractor uses for the detection. The use of filtered images allowed us to retrieve a background-subtracted and smooth profile for each LAE. Figure 8 presents examples of source profile recovery for three representative LAEs.
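The profile recovery from the filtered image and the segmentation map amounts to a masked crop (sketch, assuming NumPy arrays and that `label` is the SExtractor segmentation ID of the source; the function name is ours).

```python
import numpy as np

def recover_profile(filtered, segmap, label):
    """Cut the source profile out of the SExtractor filtered image:
    keep the pixels whose segmentation-map value equals `label`
    (zeroing everything else) and crop to the source bounding box."""
    sel = segmap == label
    ys, xs = np.where(sel)
    box = (slice(ys.min(), ys.max() + 1), slice(xs.min(), xs.max() + 1))
    return np.where(sel, filtered, 0.0)[box]
```

Because the filtered image is already convolved and background-subtracted, the returned stamp is the smooth, background-free profile used in the recovery simulations.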

Fig. 8.

Example of source profile recovery for three representative LAEs. Left column: detection image of the source in the Muselet NB cube (i.e. the max-NB image). Middle column: filtered image (convolved and background-subtracted) produced by SExtractor from the image in the left column. Right column: recovered profile of the source obtained by applying the segmentation map on the filtered image. The spatial scale is not the same as for the two leftmost columns. All the sources presented in this figure have a flag value of 1.


A flag was assigned to each extracted profile to reflect the quality of the extraction, based on the predefined set of parameters (detection threshold, minimum number of pixels, and matching radius) used for the successful extraction of the source. A source with flag 1 is extremely trustworthy and was recovered with the original set of parameters used for the initial automated detection of the sample. A source with flag 2 is still a robust extraction, and a source with flag 3 is doubtful and is not used for the LF computation. Of the LAEs, 95% were properly recovered with a flag value of 1. A summary of the flag values is given in Table 5. The three examples presented in Fig. 8 have a flag value of 1 and were recovered using DETECT_THRESH = 1.3, MIN_AREA = 6, and a matching radius of 0.8″. Objects with flag > 1 represent less than 5% of the total sample. For the few sources with an extraction flag above 1, several possible explanations are found, listed in order of importance as follows:

  • The image used to recover the profiles (30″) is smaller than the entire max-NB image. As the SExtractor background estimation depends on the size of the input image, this may slightly affect the detection of some objects. This is most likely the predominant reason for a flag value of two.

  • There is a small difference between the recovered and listed positions. This may be due to a change in morphology with wavelength or bandwidth. By increasing the matching radius to recover the profile, we obtained a successful extraction but also increased the value of the extraction flag.

  • The NB used does not actually correspond to the NB that led to the detection of the source. By picking the NB image that catches the maximum of the Lyman-alpha emission, we do not necessarily pick the layer with the cleanest detection. For example, the peak could fall in a very noisy layer of the cube, whereas the neighbouring layers would provide a much cleaner detection.

  • The source is extremely faint and was actually detected with relaxed detection parameters or manually detected.

Table 5.

Summary of the extraction flag values for sources in the different lensing fields (see text for details).

We checked that we did not include LAEs that were merely expected at a certain position as part of a multiple-image system; that is to say, we did not select the noisiest images of multiple-image systems.

6.2.2. Recovering mock sources

Once a realistic profile was obtained for all LAEs, source recovery simulations were conducted. For this step, the detection process was exactly the same as that initially used for the sample detection. However, since we limited the simulations to the max-NB images (see Sect. 6.2.1) and not the entire cubes, we did not need to use the full Muselet software. To save computation time, we only ran SExtractor on the max-NB images, using the same configuration files as Muselet, to reproduce the initial detection parameters. In this section, the set of parameters was also DETECT_THRESH = 1.3 and MIN_AREA = 6.

To create the mock images, we used the masked max-NB images. Each source profile was randomly injected many times on the corresponding detection max-NB image, avoiding overlapping. After running the detection process on the mocks, the recovered sources were matched to the injected sources based on their position. The completeness values were derived by comparing the number of successful matches to the number of injected sources. The process was repeated 40 times to derive the associated uncertainties.
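The injection-recovery loop can be sketched as follows. This is a toy NumPy version: the real procedure runs SExtractor with the Muselet configuration and avoids overlaps between injected sources, whereas this sketch uses a simple threshold test and ignores overlaps; all names are ours.

```python
import numpy as np

def completeness(profile, image, rms_map, n_inject=200,
                 kappa=1.3, min_area=6, seed=0):
    """Toy injection-recovery completeness estimate: inject the recovered
    source `profile` at random positions of the (masked) detection layer
    `image` and count how often it passes a simple SExtractor-like test
    (at least `min_area` pixels above `kappa` times the local RMS)."""
    rng = np.random.default_rng(seed)
    py, px = profile.shape
    ny, nx = image.shape
    n_found = 0
    for _ in range(n_inject):
        y = rng.integers(0, ny - py)
        x = rng.integers(0, nx - px)
        stamp = image[y:y + py, x:x + px] + profile
        if np.sum(stamp > kappa * rms_map[y:y + py, x:x + px]) >= min_area:
            n_found += 1
    return n_found / n_inject
```

Repeating the loop with different seeds yields the scatter used as the completeness uncertainty.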

The completeness results obtained for each source of the sample are shown in Fig. 9. The average completeness value over the entire sample is 0.74 and the median value is 0.90. The values are this high because we used masked NB images, effectively performing the source recovery simulations on the cleanest part of the detection layer only. As seen in this figure, there is no well-defined trend between completeness and detection flux. At a given flux, a compact source detected on a clean layer of the cube has a higher completeness than a diffuse source with the same flux detected on a layer affected by a sky line. Four LAEs with a flag value of 3 or with a completeness value of less than 10% are not used for the computation of the LFs in Sect. 6.3.

Fig. 9.

Completeness value for LAEs vs. their detection flux. Colours indicate the detection flags. We note that only the incompleteness owing to S/N on the unmasked regions of the detection layer is plotted in this graph (see Sect. 6.2).


A more common approach to estimating the completeness would be to perform heavy Monte Carlo (MC) simulations for each of the cubes in the survey to obtain a parameterized completeness (see Drake et al. 2017 for an example). The classical approach consists of injecting sources with parameterized spatial and spectral morphologies and retrieving the completeness as a function of redshift and flux. This method is extremely time consuming, in particular for IFUs, where the extraction process is lengthy and tedious. The main advantage of computing the completeness based on the real source profile is that it allows us to accurately account for the different shapes and surface brightnesses of individual sources. And because the simulations are done on the detection image of the source in the cubes, we are also more sensitive to the noise increase caused by sky lines. As seen in Fig. 10, apart from the obvious flux–completeness correlation, it is difficult to identify correlations between completeness and redshift or sky lines. This tends to show that the profile of the sources is a dominant factor when it comes to estimating the completeness properly. The same conclusion was reached in D17 and in Herenz et al. (2019). A non-parametric approach to completeness is therefore better suited in the case of lensing clusters, where a proper parametric approach is almost impossible to implement because of the large number of parameters to take into account (e.g. spatial and spectral morphologies including distortion effects, lensing configuration, and cluster galaxies).

Fig. 10.

Completeness (colour bar) of the sample as a function of redshift and detection flux. Each symbol indicates a different cluster. The light grey vertical lines indicate the positions of the main sky lines. There is no obvious correlation in our selection of LAEs between the completeness and the position of the sky lines.


6.3. Determination of the luminosity function

To study the possible evolution of the LF with redshift, the 152 LAE population has been subdivided into several redshift bins: z1 : 2.9 <  z <  4.0, z2 : 4.0 <  z <  5.0, and z3 : 5.0 <  z <  6.7. In addition to these three LFs, the global LF for the entire sample zall : 2.9 <  z <  6.7 was also determined. For a given redshift and luminosity bin, the following expression to build the points of the differential LFs was used:

Φ(Li) = (1/Δ log Li) × Σj 1/(Cj Vmax, j), (9)

where Δ log Li corresponds to the width of the luminosity bin in logarithmic space, j is the index corresponding to the sources falling in the bin indexed by i, and Cj stands for the completeness correction of the source j.
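The expression above reduces to a weighted sum per bin (a minimal sketch; the function name is ours).

```python
import numpy as np

def lf_point(vmax, comp, dlogl):
    """One differential LF point (Eq. (9)): sum over the sources j falling
    in luminosity bin i of 1/(C_j * Vmax_j), divided by the bin width
    dlogl in logarithmic luminosity."""
    vmax = np.asarray(vmax, dtype=float)
    comp = np.asarray(comp, dtype=float)
    return np.sum(1.0 / (comp * vmax)) / dlogl
```

A half-complete source thus counts twice as much as a fully complete one with the same Vmax.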

To properly account for the uncertainties affecting each LAE, MC iterations are performed to build 10 000 catalogues from the original catalogue. For each LAE in the parent catalogue, a random magnification is drawn from its P(μ), and random flux and completeness values are also drawn, assuming a Gaussian distribution with a width fixed by their respective uncertainties. A single value of the LF is obtained at each iteration following Eq. (9). The distribution of LF values obtained at the end of the process is used to derive the average in linear space and to compute asymmetric error bars. The MC iterations are well suited to account for LAEs with poorly constrained luminosities. This happens for sources close to, or even on, the critical lines of the clusters. Drawing random values from their probability density and uncertainties for magnification and flux results in a luminosity distribution (see Eq. (1)), which allows these sources to have a diluted contribution spread across several luminosity bins.
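The MC error propagation can be sketched as follows. This is a toy NumPy version: the flux-to-luminosity conversion is reduced to log10(flux/μ) in arbitrary units, the per-source P(μ) is stood in by an array of magnification samples, and all names are ours.

```python
import numpy as np

def lf_mc(flux, dflux, mu_samples, comp, dcomp, vmax, bins,
          n_iter=2000, seed=0):
    """Per iteration: draw a flux and a completeness for each source from
    Gaussians, draw a magnification from the per-source sample array
    `mu_samples` (shape (n_src, n_samples)), histogram 1/(C * Vmax) into
    luminosity bins, and divide by the bin widths (Eq. (9))."""
    rng = np.random.default_rng(seed)
    flux, dflux = np.asarray(flux), np.asarray(dflux)
    comp, dcomp = np.asarray(comp), np.asarray(dcomp)
    vmax = np.asarray(vmax)
    n_src = len(flux)
    dlogl = np.diff(bins)
    phis = np.empty((n_iter, len(dlogl)))
    for it in range(n_iter):
        f = rng.normal(flux, dflux)
        mu = mu_samples[np.arange(n_src),
                        rng.integers(0, mu_samples.shape[1], n_src)]
        c = np.clip(rng.normal(comp, dcomp), 1e-3, 1.0)
        logl = np.log10(f / mu)        # luminosity in arbitrary units
        w = 1.0 / (c * vmax)           # 1/(C_j * Vmax_j) weights
        phis[it] = np.histogram(logl, bins=bins, weights=w)[0] / dlogl
    # mean LF and asymmetric 68% interval over the iterations
    return phis.mean(axis=0), np.percentile(phis, [16, 84], axis=0)
```

A source with a broad P(μ) spreads its 1/(C Vmax) contribution across neighbouring luminosity bins over the iterations, which is the dilution effect described above.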

For the estimation of the cosmic variance, we used the cosmic variance calculator presented in Trenti & Stiavelli (2007). Lacking other options, a single compact geometry made of the union of the effective areas of the four FoVs is assumed and used as input for the calculator. The blank field equivalent of our survey is an angular area of about 1.2′ × 1.2′; given that a MUSE FoV is a square of 1′ on a side, our survey is therefore roughly equivalent to a bit more than a single MUSE FoV in a blank field. The computation is done for all the bins, as the value depends on the average volume explored in each bin as well as on the intrinsic number of sources. The uncertainty due to cosmic variance on the intrinsic counts of galaxies in a luminosity bin typically ranges from 15% to 20% for the global LF and from 15% to 30% for the LFs computed in redshift bins. For log(L) ≲ 41, the total error budget is dominated by the MC dispersion, which is mainly caused by objects with poorly constrained luminosities jumping from one bin to another during the MC process. The larger the bins, the smaller this effect, because a given source is less likely to jump outside of a larger bin. For 41 ≲ log(L) ≲ 42, the Poissonian uncertainty is slightly larger than the cosmic variance but does not completely dominate the error budget. Finally, for 42 ≲ log(L), the Poissonian uncertainty is the dominant source of error, owing to the small volume and therefore the small number of bright sources in the survey.

The data points of the derived LFs and the corresponding error bars are listed in Table 6. These LF points provide solid constraints on the shape of the faint end of the LAE distribution. In the following sections, we elaborate on these results and discuss the evolution of the faint end slope as well as the implications for cosmic reionization.

Table 6.

Luminosity bins and LF points used in Fig. 13.

7. Parametric fit of the luminosity function

The differential LFs are presented in Fig. 11 for the four redshift bins. Some points of the LF, shown as empty squares, are considered unreliable and are presented for comparison purposes only; they are therefore not used in the subsequent parametric fits. An LF value is considered unreliable when it is dominated by the contribution of a single source with either a small Vmax or a low completeness value, owing to luminosity and/or redshift sampling. These unreliable points are referred to as “incomplete” hereafter. The remaining points are fitted with a straight line as a visual guide; the corresponding 68% confidence regions are represented as shaded areas. For z3, the exercise is limited owing to the large uncertainties and the lack of constraints on the bright end. The measured mean slopes for the four LFs are as follows: for zall, for z1, for z2, and for z3. These values are consistent with no evolution of the mean slope with redshift.

Fig. 11.

Luminosity function points computed for the four redshift bins. Each LF was fitted with a straight dotted line, and the shaded areas are the 68% confidence regions derived from these fits. For clarity, the confidence area derived for zall is not shown, and a slight luminosity offset is applied to the LF points for z1 and z3.


In addition, because the integrated value of each LF is of great interest regarding the constraints it can provide on the sources of reionization, the different LFs were fitted with the standard Schechter function (Schechter 1976), using the formalism described in Dawson et al. (2007). The Schechter function is defined as

Φ(L) dL = Φ* (L/L*)^α exp(−L/L*) d(L/L*), (10)

where Φ* is a normalization parameter, L* a characteristic luminosity that defines the position of the transition from the power law to the exponential law at high luminosity, and α is the slope of the power law at low luminosity. In logarithmic scale the Schechter function is written as

Φ(log L) d log L = ln(10) Φ* (L/L*)^(α+1) exp(−L/L*) d log L. (11)

This function represents the number density per logarithmic luminosity interval. The fits were done using the Python package Lmfit (Newville et al. 2014), which is specifically dedicated to nonlinear optimization and provides robust estimations of confidence intervals. We define an objective function, accounting for the strong asymmetry of the error bars, which is then minimized in the least-squares sense using the default Levenberg–Marquardt method provided by the package. The results of this first minimization are then passed to an MCMC process4 that uses the same objective function. The uncertainties on the three parameters of the Schechter function (α, L*, Φ*) are recovered from the resulting individual posterior distributions. The minimization in the least-squares sense is an easy way to fit our data but is not guaranteed to give the most probable parameterization of the LFs. A more robust method would be a maximum-likelihood approach. However, because of the non-parametric approach used in this work to build the points of the LF, taking into account the specific complexity of the lensing regime, the implementation of a maximum-likelihood approach such as those developed in D17 or in Herenz et al. (2019) could not be envisaged.
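A minimal sketch of such a fit is shown below. It uses scipy rather than the Lmfit package named in the text, and the way the asymmetric error bars enter the objective (picking the upper or lower error bar depending on the sign of the residual) is an assumption about the objective function, not a description of the paper's exact implementation. The data are synthetic, drawn from a known Schechter function so that the machinery can be checked:

```python
import numpy as np
from scipy.optimize import least_squares

def log_schechter(logL, log_phi_star, log_L_star, alpha):
    """Schechter function per dex (Eq. (11)), returned as log10(Phi)."""
    x = 10.0 ** (logL - log_L_star)
    return np.log10(np.log(10.0) * 10.0**log_phi_star
                    * x**(alpha + 1.0) * np.exp(-x))

def residuals(p, logL, log_phi, err_lo, err_hi):
    """Asymmetric-error objective (illustrative): use the upper or
    lower error bar depending on the sign of the residual."""
    r = log_phi - log_schechter(logL, *p)
    sigma = np.where(r > 0, err_hi, err_lo)
    return r / sigma

# Synthetic, noiseless data drawn from a known Schechter function:
logL = np.linspace(40.5, 43.5, 12)
truth = (-3.0, 42.8, -1.8)              # (log phi*, log L*, alpha)
log_phi = log_schechter(logL, *truth)
err_lo = err_hi = np.full_like(logL, 0.1)

fit = least_squares(residuals, x0=(-2.5, 42.5, -1.5),
                    args=(logL, log_phi, err_lo, err_hi))
# on noiseless data, fit.x recovers the input parameters
```

In practice the best-fit vector would then seed an MCMC exploration of the same objective to obtain posterior distributions for the three parameters.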

Because of the use of lensing clusters, the volume of the Universe explored is smaller than in blank field surveys. The direct consequence is that we are not very efficient at probing the transition area around L* and the high luminosity regime of the LF. Instead, the lensing regime is more efficient at selecting faint and low luminosity galaxies and is therefore much more sensitive to the slope parameter. To properly determine the three best-fit parameters, additional data are needed to constrain the bright end of the LFs. To this aim, previous LFs from the literature are combined into a single average LF with the same luminosity bin size as the LFs derived in this work. This last point is important to ensure that the fits are not dominated by the literature data points, which are more numerous and have smaller bin sizes and uncertainties. In this way we determine the three Schechter parameters while properly sampling the covariance between them.
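The rebinning of literature points onto the coarser bin grid used in this work can be sketched as follows. The function name, the averaging-in-linear-space choice, and the data values are illustrative assumptions, not the paper's actual compilation:

```python
import numpy as np

def rebin_lf(logL_pts, log_phi_pts, bin_edges):
    """Average literature LF points falling in each (coarser) luminosity
    bin. Averaging is done in linear density space, then converted back
    to log; bins containing no points are dropped."""
    centers, values = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (logL_pts >= lo) & (logL_pts < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            values.append(np.log10(np.mean(10.0 ** log_phi_pts[sel])))
    return np.array(centers), np.array(values)

# Hypothetical fine-grained literature points, rebinned to Delta logL = 0.5:
lit_logL = np.array([42.1, 42.3, 42.6, 42.8, 43.1])
lit_logphi = np.array([-3.1, -3.3, -3.6, -3.8, -4.3])
edges = np.arange(42.0, 43.51, 0.5)
centers, values = rebin_lf(lit_logL, lit_logphi, edges)
```

Matching the bin size in this way keeps the literature points from outweighing the lensed points in the fit simply by being more numerous.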

The choice of the precise data sets used for the Schechter fits is expected to have a significant impact on the results, including possible systematic effects. To estimate the extent of this effect and its contribution to the uncertainties, different series of data sets were used to fit the LF, among those available in a given redshift interval (see Fig. 13). The recovered best-fit parameters are found to be consistent within their error bars in all cases.

In addition, the error bars do not account for the error introduced by the binning of the data. To further test the robustness of the slope measurement and to recover more realistic error bars, different binnings were tested for the construction of the LF. The exact same fit process was applied to the resulting LFs. The confidence regions derived from these tests are shown in Fig. 12 for z1 and z3. The bins used hereafter to build the LFs are identified in this figure as black lines. We estimate that these bins are amongst the most reliable possibilities, and in the following they are referred to as the “optimal” bins. They were determined in such a way that each bin is properly sampled in both redshift and luminosity and has a reasonable level of completeness. Figure 12 shows that α is very stable for z1 and that all the posterior distributions are very similar. Because we are able to probe luminosity regimes far below L*, and thanks to the increased statistics, the effect of binning on the measured slope is negligible for zall. As redshift increases, the effect of binning on the measured slope increases as a consequence of lower statistics and higher uncertainties. For z2 the LF is affected by a small overdensity of LAEs at z ∼ 4, resulting in a higher dispersion of the faint end slope value when testing different binnings. It was ensured that the optimal binning allowed this fit to be consistent with the fit made for zall: in both cases the points at 41.5 ≲ log L ≲ 42, affected by the same sources at z ∼ 4, are treated as a small overdensity with respect to the Schechter distribution. Finally, for z3, the lack of statistics seriously limits the binning possibilities to test. The only viable options are the two presented in the right panel of Fig. 12: in both cases the quality of the fit is poor compared to the other redshift bins, but the measured slopes are consistent within their own error bars.

Fig. 12.

68% confidence areas derived for the Schechter parameters when testing different binnings. Left panel: results for 2.9 <  z <  4.0; right panel: results for 5.0 <  z <  6.7. The legends indicate, from left to right, log(L)min, log(L)max, and the number of bins considered for the fit between these two limits. The black lines show the results obtained with the optimal bins adopted in this work.


The LF points from the literature used to constrain the bright end are taken from Blanc et al. (2011) and Sobral et al. (2018) for zall and z1; Dawson et al. (2007), Zheng et al. (2013), and Sobral et al. (2018) for z2; and Ouchi et al. (2010), Santos et al. (2016), Konno et al. (2018), and Sobral et al. (2018) for z3. The goal is to extend our own data towards the highest luminosities using available high-quality data with enough overlap to check the consistency with the present data set. The best fits and the literature data sets used for the fits are shown in Fig. 13 as solid lines and lightly coloured diamonds, respectively. The dark red coloured regions indicate the 68% and 95% confidence areas of the Schechter fits. The best-fit Schechter parameters are listed in Table 7. In addition, this table contains the results obtained when the exact same method of LF computation is applied to the sources of A2744 as an independent data set. This is done to assess the robustness of the method and to see whether or not the addition of low volume and high magnification cubes adds significant constraints on the faint end slopes. All four fits made using the complete sample are summarized in Fig. 14, which shows the evolution of the confidence regions of α, Φ*, and L* with redshift.

Fig. 13.

Luminosity functions and their respective fits for the four redshift bins considered in this study. The red and grey squares represent the points derived in this work; the grey squares are considered incomplete and are not used in the fits. The literature points used to constrain the bright end of the LFs are shown as lightly coloured diamonds. The black points represent the results obtained by Cassata et al. (2011), which were not used for the fits. The purple squares represent the points derived using the Vmax method in D17 and are shown for comparison only. The best Schechter fits are shown as solid lines, and the 68% and 95% confidence areas as dark red coloured regions.


Table 7.

Results of the fit of the Schechter function in the different redshift intervals.

Fig. 14.

Evolution of the Schechter parameters with redshift. The contours plotted correspond to the limits of the 68% confidence areas determined from the results of the fits.


Table 7 shows that the results are very similar for z1 and z3 when considering A2744 only or the full sample. For zall and z2 the recovered slopes exhibit a small difference, at the ≲2σ level. This difference is caused by a single source with 40.5 ≲ log L ≲ 41, which has a high contribution to the density count. When adding more cubes and sources, the contribution of this LAE is averaged down because of the larger volume and the contribution of other LAEs. This argues in favour of a systematic underestimation of the cosmic variance in this work. Using the results of cosmological simulations to estimate a proper cosmic variance is beyond the scope of this paper. For the highest redshift bin, even though the same slope is measured when using only the LAEs of A2744, the analysis can only be pushed down to log L = 41 (instead of log L = 40.5 for the other redshift bins or when using the full sample). This shows the benefit of increasing the number of lensing fields to avoid a sudden drop in completeness at high redshift. The effect of increasing the number of lensing fields will be addressed in a future article in preparation. In the following, only the results obtained with the full sample are discussed.

The values measured for L* are in good agreement with the literature (e.g. log(L*) = 43.04 ± 0.14 in Dawson et al. (2007) for z ≃ 4.5, in Santos et al. (2016) for z ≃ 5.7 with a fixed value of α = −2.0, and in Hu et al. (2010) for z ≃ 5.7 with a fixed value of α = −2.0), and these values tend to increase with redshift. This is not a surprise, as this parameter is most sensitive to the data points from the literature used to fit the Schechter functions. Given the large degeneracy, and therefore the large uncertainty, affecting the normalization parameter Φ*, a direct comparison with previous studies is difficult and not particularly relevant. Regarding the α parameter, the Schechter analysis reveals a steepening of the faint end slope with increasing redshift, which in itself means an increase in the observed number of low luminosity LAEs with respect to the bright population with redshift. However, this is a ∼1σ trend that can only be seen in the light of the Schechter analysis, with a solid anchoring of the bright end, and cannot be seen using only the points derived in this work (see e.g. Fig. 11).

Taking advantage of the unprecedented level of constraints on the low luminosity regime, the present analysis has confirmed a steep faint end slope, varying from at 2.9 <  z <  4 to at 5 <  z <  6.7. The result for the lower redshift bin is not consistent with the value measured using the maximum-likelihood technique in D17. At higher redshift, the slopes measured in D17 are upper limits, which are consistent with all the values in Table 7. The points in purple in Fig. 13 are those derived with the Vmax method in this same study. A systematic difference can be seen, increasing towards lower luminosity for zall, z1, and z2. This difference, taken at face value, could be evidence for a systematic underestimation of the cosmic variance both in this work and in D17. This aspect clearly requires further investigation in the future. Faint end slope values of for z = 5.7 and for z ∼ 5.7 ( for z ∼ 6.6) were found in Santos et al. (2016) and Konno et al. (2018), respectively. These values are reasonably consistent with our measurement for z3. Here again, the comparison with the literature is limited, as the faint end slope is often fixed (see e.g. Dawson et al. 2007; Ouchi et al. 2010) or the luminosity range probed is not adequate, leading to poor constraints on α.

From Fig. 13, we see that the Schechter function provides a relatively good fit for zall, z1, and z2. The overdensity in number counts at z ∼ 4 for 41.5 ≲ log L ≲ 42 is indeed seen as an overdensity with respect to the Schechter distribution. For z3, however, the fit is not as good, with one point well above the 1σ confidence area. The final goal of this work is not the measurement of the Schechter slope in itself, but to provide a solid constraint on the shape of the faint end of the LF. Furthermore, it is not certain that such a low luminosity population is expected to follow a Schechter distribution. Some studies have already explored the possibility of a turnover in the LF of UV-selected galaxies (e.g. Bouwens et al. 2017; Atek et al. 2018), and the same possibility cannot be excluded for the LAE population. For the specific needs of this work, it remains convenient to adopt a parametric form, as it makes the computation of proper integrations with correct error propagation easier (see Sect. 8) and facilitates the comparison with previous and future works. Regarding integrated LFs, any reasonable deviation from the Schechter form is of little consequence as long as the fit is representative of the data. In other words, as long as no large extrapolation towards low luminosity is made, our Schechter fits provide a good estimation of the integrated values.

8. Discussion and contribution of LAEs to reionization

In this section, before proceeding to the integration of the LFs and the constraints and implications for reionization, we discuss the uncertainties introduced by the use of lensing. As part of the HFF programme, several good quality mass models were produced and made publicly available by different teams using different methodologies. The uncertainties introduced by the use of lensing fields when measuring the faint end of the UV LF are discussed in detail in Bouwens et al. (2017) and Atek et al. (2018) through simulations. A more general discussion of the reasons why mass models of the same lensing cluster may differ from one another can be found in Priewe et al. (2017). Finally, a thorough comparison of the mass reconstructions produced by different teams, using different methods, from simulated lensing clusters and HST images is presented in Meneghetti et al. (2017). The uncertainties are of two types:

  • The large uncertainties for high magnification values. This aspect is well treated in this work through the use of P(μ), which allows any source to have a diluted and very asymmetric contribution to the LF over a large luminosity range. This aspect was already addressed in Sect. 5.

  • The possible systematic variation from one mass model to another. This aspect is more complex as it has an impact on both the individual magnification of sources and on the total volume of the survey.

Figure 15 illustrates the variation of individual magnifications from one mass model to another, using the V4 models produced by the GLAFIC team (Kawamata et al. 2016; Kawamata 2018), Sharon & Johnson (Johnson et al. 2014), and Keeton, which are publicly available on the HFF website5. Since we are restricted to the HFF, this comparison can only be done for the LAEs of A2744. The figure shows the Lyman-alpha luminosity histograms obtained when alternatively using the individual magnifications provided by these three additional models. The bin size is Δ log L = 0.5, which is the bin size used in this work for the LFs at z1, z2, and z3. For log L >  40.5 the highest dispersion is of the order of 15%. This shows that even though there is a dispersion in the magnifications predicted by the four models, the underlying luminosity population remains roughly the same, which is the most important point for the needs of the LF.

Fig. 15.

Comparative Lyman-alpha luminosity histograms obtained using the magnification resulting from different mass models. The grey area represents the completeness limit of this work.


Figure 10 of Atek et al. (2018) shows an example of the variations of the volume probed with rest-frame UV magnitude using different mass models for the lensing cluster MACS1149. This evolution is very similar for the models derived by the Sharon and Keeton teams and, in the worst case, implies a factor of ∼2 difference among the models compared in this figure. These important variations are largely caused by the lack of constraints on the mass distribution outside of the multiple image area: a small difference in the outer slope of the mass density affects the overall mass of the cluster and therefore the total volume probed. However, unlike other lensing fields from the HFF programme, A2744 has an unprecedented number of good lensing constraints at various redshifts thanks to the deep MUSE observations. These constraints were shared between the teams and are included in all the V4 models used for comparison in this work. The four resulting mass models are robust and coherent, at the state of the art of what can be achieved with current facilities. It has also been shown by Meneghetti et al. (2017), based on simulated cluster mass distributions, that the methodologies employed by the CATS (the CATS model for A2744 is the model presented in Mahler et al. 2018) and GLAFIC teams are among the best at recovering the intrinsic mass distribution of galaxy clusters. To test for a possible systematic error on the survey volume, the area of the source plane reconstruction of the MUSE FoV at z = 4.5 is compared using the same four models as in Fig. 15. The areas are (1.23′)2, (1.08′)2, (1.03′)2, and (0.94′)2 using the mass models of Mahler, GLAFIC, Keeton, and Sharon, respectively. The strongest difference is observed between the models of Mahler and Sharon and corresponds to a relatively small difference of only 25%.

Given the complex nature of the MUSE data combined with the lensing cluster analysis, precisely assessing the effect of a possible total volume bias is nontrivial and beyond the scope of this paper. From this discussion it seems clear that the use of lensing fields introduces an additional uncertainty on the faint end slope. However, the luminosity limit below which this effect becomes dominant remains unknown, as the existing simulations (Bouwens et al. 2017; Atek et al. 2018) were only done for the UV LF, for which the data structure is much simpler.

In order to estimate the contribution of the LAE population to cosmic reionization, its SFRD was computed. From the best-fit parameters derived in the previous section, the integrated luminosity density ρLyα was estimated. The SFRD produced by the LAE population can then be estimated using the prescription of Kennicutt (1998), assuming case B recombination (Osterbrock & Ferland 2006):

SFRDLyα [M⊙ yr−1 Mpc−3] = ρLyα [erg s−1 Mpc−3] / (1.05 × 10^42). (12)

This equation assumes an escape fraction of the Lyman-alpha photons (fLyα) of 1 and therefore provides a lower limit on the SFRD. Uncertainties on this integration were estimated with MC iterations, by perturbing the best-fit parameters within their rescaled error bars, neglecting the correlations between the parameters. The values obtained for SFRDLyα and ρLyα are presented in Table 7 for a lower limit of integration of log(L) = 40.5, which corresponds to the lowest luminosity points used to fit the LFs (i.e. no extrapolation towards lower luminosities). A value of log(L) = 44 is used as the upper limit for all integrations; this upper limit has virtually no impact on the final result because the LF drops steeply at high luminosity.
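The integration itself can be sketched as follows. The Schechter parameters used here are placeholders, not the paper's best-fit values, and the 1.05 × 10^42 factor is the commonly used Kennicutt (1998) case-B conversion with fLyα = 1:

```python
import numpy as np
from scipy.integrate import quad

def luminosity_density(log_phi_star, log_L_star, alpha,
                       logL_min=40.5, logL_max=44.0):
    """rho_Lya = integral of L * Phi(logL) dlogL between the two limits,
    with Phi(logL) the Schechter function per dex (Eq. (11))."""
    def integrand(logL):
        x = 10.0 ** (logL - log_L_star)
        phi = np.log(10.0) * 10.0**log_phi_star * x**(alpha + 1.0) * np.exp(-x)
        return phi * 10.0**logL       # weight by luminosity
    rho, _ = quad(integrand, logL_min, logL_max)
    return rho                        # erg s^-1 Mpc^-3

# Placeholder Schechter parameters (not the paper's best-fit values):
rho_lya = luminosity_density(-3.0, 42.8, -1.8)
sfrd = rho_lya / 1.05e42  # M_sun yr^-1 Mpc^-3, assuming f_Lya = 1
```

Raising the upper limit beyond log(L) = 44 changes the result by far less than a percent, illustrating why the choice of upper limit is inconsequential.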

We show in Fig. 16 the results obtained using different lower limits of integration, and how they compare to previous studies of both the LBG and LAE LFs. The yellow area corresponds to the 1σ and 2σ SFRD needed to fully reionize the universe, estimated from the cosmic ionizing emissivity derived in Bouwens et al. (2015a). The cosmic emissivity was derived using a clumping factor of 3, and the conversion to UV luminosity density was done assuming log(ξion fescp) = 24.50, where fescp is the escape fraction of UV photons and ξion is the Lyman-continuum photon production efficiency. Finally, the conversion to SFRD was done with the following relation: SFRD [M⊙ yr−1 Mpc−3] = ρUV/(8.0 × 10^27) (see Kennicutt 1998; Madau et al. 1998). Because all the slopes are above α = −2 (for α <  −2 the integral of the Schechter parameterization diverges), the integrated values increase relatively slowly when decreasing the lower luminosity limit. On the same plot, the SFRD computed from the integration of the LFs derived in Bouwens et al. (2015b) are shown in darker grey for two limiting magnitudes: MUV = −17, which is the observation limit, and MUV = −13, which is thought to be the limit of galaxy formation (e.g. Rees & Ostriker 1977; Mac Low & Ferrara 1999; Dijkstra et al. 2004).

Fig. 16.

Evolution of the SFRD with redshift with different lower limits of integration. The limit log L = 38.5 corresponds to a 2 dex extrapolation with respect to the completeness limit in this work. Our results (in red/brown) are compared to SFRD in the literature computed for LBGs (in light grey) and from previous studies of the LAE LF (in green/blue). For the clarity of the plot, a small redshift offset was added to the points with Linf = 38.5. The darker grey points correspond to the SFRD derived from the LFs in Bouwens et al. (2015b) for a magnitude limit of integration of MUV = −17 corresponding to the observation limit, and MUV = −13. The points reported by Cassata et al. (2011) are corrected for IGM absorption. The yellow area corresponds to the 1σ and 2σ estimations of the total SFRD corresponding to the cosmic emissivity derived in Bouwens et al. (2015a).


From this plot, and with fLyα = 1, we see that the observed LAE population alone is not enough to fully reionize the universe at z ∼ 6, even with a large extrapolation of 2 dex down to log L = 38.5. However, a straightforward comparison is dangerous: an escape fraction fLyα ≳ 0.5 would be roughly enough to match the cosmic ionizing emissivity needed for reionization at z ∼ 6. Moreover, in this comparison, we implicitly assumed that the LAE population has the same properties (log(fescpξion) = 24.5) as the LBG population in Bouwens et al. (2015b). A recent study of the typical values of ξion and its scatter for typical star-forming galaxies at z ∼ 2 by Shivaei et al. (2018) has shown that ξion is highly uncertain as a consequence of galaxy-to-galaxy variations in the stellar population and UV dust attenuation, while most current estimates at high redshift rely on (too) simple prescriptions from stellar population models. When no evolution in fLyα is introduced, the SFRD obtained from LAEs remains roughly constant as a function of redshift with no extrapolation, and slightly increases with redshift when using Linf = 38.5. Figure 16 shows in green/blue the SFRDLyα values derived in previous studies of the LAE LF, namely Ouchi et al. (2008; hereafter O08), Cassata et al. (2011; hereafter C11), and D17. In C11, a basic correction for IGM absorption was performed, assuming fLyα varying from 15% at z = 3 to 50% at z = 6 and using a simple radiative transfer prescription from Fan et al. (2006). This correction can easily explain the clear trend of increasing SFRD with redshift and the discrepancy with our points at higher redshift. At lower redshifts, the IGM correction is smaller and the points are in relatively good agreement. The points in O08 result from a full integration of the LFs with a slope fixed at α = −1.5 and are in reasonable agreement for all redshift domains. The two higher redshift points derived in D17 are inconsistent with our measurements.
This is not a surprise as the slopes derived in D17 are systematically steeper and inconsistent with this work.

The use of an IFU (MUSE) in D17, in Herenz et al. (2019; hereafter H19), and in this survey ensures that we better recover the total flux, even though we may still miss the faintest parts of the extended Lyman-alpha haloes (see Wisotzki et al. 2016). This is not the case for NB (e.g. O08) or slit-spectroscopy (e.g. C11) surveys, in which a systematic loss of flux is possible for spatially extended sources or broad emission lines because of the limited aperture of the slits or the limited spectral width of the NB filters. It is noted in H19 that the 3.2 <  z <  4.55 LF estimates in C11 tend to be lower than most literature estimates (including those in H19). One possible explanation would be a systematic loss of flux, resulting in a systematic shift of the derived LF towards lower luminosities. Interestingly, when assuming point-like sources to compute the selection function, H19 recovers the results of C11 very well for this redshift domain. It is also interesting that, as luminosity decreases, the LF estimates from C11 become more and more consistent with the points and Schechter parameterization derived in this work. For z3, the C11 LF is even fully consistent with the Schechter parameterization across the entire luminosity domain (see Fig. 13). The following line of thought could explain the concordance of this work with the C11 estimates at lower luminosity and higher redshift: at lower luminosity and higher redshift, a higher fraction of the detected LAEs are point-like sources, making the C11 LFs more consistent with our values; at higher luminosity and lower redshift, more extended LAEs are detected, and a more complex correction is needed to obtain a realistic LF estimate.

The second advantage of using an IFU is linked to the selection of the LAE population. O08 used a NB photometric selection of sources with spectroscopic follow-up to confirm the LAE candidates. This results in an extremely narrow redshift window and is likely to lead to a lower completeness of the sample because of the two-step selection process. The studies by D17 and H19 adopt the same approach as this work: a blind spectroscopic selection of sources. In addition, as shown in Fig. 4 and stated in Sect. 7 when discussing the differences in slope between A2744 alone and the full sample, the use of highly magnified observations allows for a more complete source selection at increasing redshift. The sample used in the present work could therefore have a higher completeness level than previous studies.

To summarize the above discussion, the observational strategy adopted in this study by combining the use of MUSE and lensing clusters has allowed us to

  • Reach fainter luminosities, providing better constraints on the faint end slope of the LF, while still taking advantage of the previous studies to constrain the bright end;

  • Recover a greater fraction of flux for all LAEs;

  • Cover a large window in redshift and flux;

  • Reach a higher level of completeness, especially at high redshift.

A steepening of the faint end slope is observed with redshift, in line with what is usually expected. This trend can be explained by a higher proportion of low luminosity LAEs observed at higher redshift, owing to the lower dust content at higher redshift. On the other hand, the density of neutral hydrogen is expected to increase across the 5 <  z <  6.7 interval, reducing the escape fraction of Lyman-alpha photons, a trend affecting LAEs in different ways depending on the large-scale structure. While an increase of SFRD with redshift is observed, the evolution of the observed SFRDLyα is also affected by fLyα. In the literature, the expected evolution of fLyα is an increase with redshift up to z ∼ 6−7, followed by a sudden drop at higher redshift (see e.g. Clément et al. 2012; Pentericci et al. 2014). For z <  6, the increase of fLyα is generally explained by the reduced amount of dust at higher redshift. At z ∼ 6−7 and above, we start to probe the reionization era: owing to the increasing amount of neutral hydrogen and the resonant nature of the Lyα line, the escape fraction is expected to drop at some point. It has been suggested in Trainor et al. (2015) and Matthee et al. (2016) that the escape fraction decreases with increasing SFRD. This would only increase the significance of the observed trend, as the points with the higher SFRD would then receive a larger correction.

Furthermore, the derived LFs and the corresponding SFRD values could be affected by bubbles of ionized hydrogen, especially in the last redshift bin. In our current understanding of the phenomenon, reionization is not a homogeneous process (Becker et al. 2015; Bosman et al. 2018). It could be that the expanding areas of ionized hydrogen develop faster in the vicinity of large structures with a high ionising flux, leaving other areas of the universe practically untouched. There is increasing observational evidence of this effect (see e.g. Stark et al. 2017). It was shown in Matthee et al. (2015), using a simple toy model, that an increased amount of neutral hydrogen in the IGM could produce a flattening of the faint end of the LF. This same study also concluded that the clustering of LAEs has a large impact on the individual escape fractions, which makes it difficult to estimate a realistic correction, as the escape fraction should be estimated on a source-to-source basis.

As previously discussed, it is neither certain nor expected that the LAE population alone is enough to reionize the universe at z ∼ 6. However, the LBG and LAE populations have roughly the same level of contribution to the total SFRD at face value. Depending on the intersection between the two populations, the observed LAEs and LBGs together could produce enough ionizing flux to maintain the ionized state of the universe at z ∼ 6.

This question of the intersection is crucial in the study of the sources of reionization. Several authors have addressed the prevalence of LAEs among LBGs, showing that the fraction of LAEs increases for low luminosity UV galaxies up to z  ∼  6, whereas the LAE fraction strongly decreases towards z  ∼  7 (see e.g. Stark et al. 2010; Pentericci et al. 2011). The important point, however, is to precisely determine the contribution of the different populations of star-forming galaxies within the same volume, a problem that requires the use of 3D/IFU spectroscopy. As a preliminary result, we estimate that ∼20% of the sample presented in this study has no detected counterpart in the deep images of the HFFs. A similar analysis is being conducted on the deepest MUSE observations of the Hubble Ultra Deep Field (Maseda et al. 2018).

9. Conclusions

The goal of this study was to set constraints on the sources of cosmic reionization by studying the LAE LF. Taking advantage of the capabilities of the MUSE instrument and using lensing clusters as a tool to reach lower luminosities, we blindly selected a population of 156 spectroscopically identified LAEs behind four lensing clusters, with 2.9 <  z <  6.7 and magnification-corrected luminosities 39 ≲ log L ≲ 43.

Given the complexity of combining the spectroscopic data cubes of MUSE with gravitational lensing, and because each source needs an appropriate treatment to properly account for its magnification and representativity, the computation of the LF required a careful implementation, including some original developments. To this end, a specific procedure was developed with the following new methods. First, a precise Vmax computation for sources found behind lensing clusters, based on the creation of 3D masks. This method allows us to precisely map the detectability of a given source in the MUSE spectroscopic cubes. The masks are then used to compute the cosmological volume in the source plane, and the method could easily be adapted to blank-field surveys. Second, a completeness determination based on simulations using the real profiles of the sources. Instead of a heavy parametric approach based on Monte Carlo source injection and recovery simulations, which is not ideally suited for lensed galaxies, this method uses the real profile of each source to estimate its individual completeness. It is faster, more flexible, and better accounts for the specificities of individual sources in both the spatial and spectral dimensions.

After applying this procedure to the LAE population, the Lyman-alpha LF was built in different redshift bins using 152 of the 156 detected LAEs; four LAEs were removed because their contributions were not trustworthy. Because of the observational strategy, this study provides the most reliable constraints to date on the shape of the faint end of the LFs and, therefore, a more precise measurement of the integrated SFRD associated with the LAE population. The results and conclusions can be summarized as follows:

  • The LAE population found behind the four lensing clusters was split into four redshift bins: 2.9 <  z <  6.7, 2.9 <  z <  4.0, 4.0 <  z <  5.0, and 5.0 <  z <  6.7. Because of the lensing effect, the volume of universe probed is greatly reduced in comparison to blank-field studies. The estimated average volumes of universe probed in the four redshift bins are ∼15 000 Mpc3, ∼5000 Mpc3, ∼4000 Mpc3, and ∼5000 Mpc3, respectively.

  • The LAE LF was computed in the four redshift bins. By construction of the sample, the derived LFs efficiently probe the low-luminosity regime, and the data from this survey alone provide solid constraints on the shape of the faint end of the observed LAE LFs. No significant evolution of the shape of the LF with redshift is found using these points alone. These results must be taken with caution given the complex nature of the lensing analysis on the one hand, and the small effective volume probed by the current sample on the other. Our results argue towards a possible systematic underestimation of cosmic variance in the present and other similar works.

  • A Schechter fit of the LAE LF was performed by combining the LAE LF computed in this analysis with data from previous studies to constrain the bright end. A steep faint-end slope was measured, with values varying with redshift between the 2.9 <  z <  4 and 5 <  z <  6.7 bins.

  • The SFRDLyα values were obtained as a function of redshift by integrating the corresponding Lyman-alpha LF, and compared to the levels needed to ionize the universe as determined in Bouwens et al. (2015a). No assumptions were made regarding the escape fraction of the Lyman-alpha photons, so the SFRDLyα values derived in this work correspond to the observed values. Because of the well-constrained LFs and a better recovery of the total flux, we estimate that the present results are more reliable than those of previous studies. Even though the LAE population undoubtedly contributes a significant fraction of the total SFRD, it remains unclear whether this population alone is enough to ionize the universe at z ∼ 6. The results depend on the actual escape fraction of Lyman-alpha photons.

  • The LAEs and the LBGs have a similar level of contribution at z ∼ 6 to the total SFRD level of the universe. Depending on the intersection between the two populations, the union of both the LAE and LBG populations may be enough to reionize the universe at z ∼ 6.
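The luminosity-weighted integration of a Schechter LF and its conversion into a SFRD, as used in the bullets above, can be sketched numerically. This is an illustration only: the Schechter parameters below are hypothetical placeholders rather than the fitted values of this work, and the conversion assumes the Kennicutt (1998) calibration combined with a case-B Lyα/Hα ratio of 8.7, i.e. SFR = L(Lyα)/(8.7 × 1.26 × 10^41) M⊙ yr⁻¹.

```python
import numpy as np

def schechter(L, L_star, phi_star, alpha):
    """Schechter luminosity function phi(L) in Mpc^-3 per unit luminosity."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

def sfrd_lya(L_star, phi_star, alpha, log_L_min, log_L_max=44.0, n=100_000):
    """Observed Lya SFRD: integrate L * phi(L) dL over the luminosity range,
    then convert assuming SFR = L(Lya) / (8.7 * 1.26e41) [Msun/yr]."""
    L = np.logspace(log_L_min, log_L_max, n)
    f = L * schechter(L, L_star, phi_star, alpha)
    rho_lya = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(L))  # trapezoid rule
    return rho_lya / (8.7 * 1.26e41)  # Msun yr^-1 Mpc^-3

# Hypothetical parameters, for illustration only (not the fitted values):
sfrd = sfrd_lya(L_star=10**42.7, phi_star=1e-3, alpha=-1.8, log_L_min=39.0)
```

With a steep faint-end slope, lowering the integration limit log_L_min significantly increases the integrated SFRD, which is why the choice of lower limit matters so much in the comparisons above.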

Through this work, we have shown that the capabilities of the MUSE instrument make it an ideal tool to determine the LAE LF. Being an IFU, MUSE allows for a blind survey of LAEs, homogeneous in redshift, with a better recovery of the total flux compared to classical slit facilities. The selection function is also better understood than for NB imaging.

About 20% of the present LAE sample have no identified photometric counterpart, even in the deepest surveys to date, i.e. the HFFs. This is an important point to keep in mind, as it is a first element of an answer regarding the intersection between the LAE and LBG populations; further investigation is needed to better quantify this intersection. Extending the method presented in this paper to other lensing fields should also make it possible to improve the determination of the Lyman-alpha LF and to make the constraints on the sources of reionization more robust.


Footnotes

1. The complete catalogue of MUSE sources detected by G. Mahler in A2744 is publicly available at http://muse-vlt.eu/science/a2744/.

2. Publicly available as part of the python MPDAF package (Piqueras et al. 2017): http://mpdaf.readthedocs.io/en/latest/muselet.html.

3. Python module written by G. Mahler, publicly available at http://pylenstool.readthedocs.io/en/latest/index.html.

4. Lmfit uses the emcee algorithm as implemented in the emcee Python package (see Foreman-Mackey et al. 2013).

Acknowledgments

We thank the anonymous referee for their critical review and useful suggestions. This work has been carried out thanks to the support of the OCEVU Labex (ANR-11-LABX-0060) and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the “Investissements d’Avenir” French government programme managed by the ANR. Partially funded by the ERC starting grant CALENDS (JR, VP, BC, JM), the Agence Nationale de la Recherche bearing the reference ANR-13-BS05-0010-02 (FOGHAR), and the “Programme National de Cosmologie et Galaxies” (PNCG) of CNRS/INSU, France. GdV, RP, JR, GM, JM, BC, and VP also acknowledge support by the Programa de Cooperacion Cientifica – ECOS SUD Program C16U02. NL acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 669253), ABD acknowledges support from the ERC advanced grant “Cosmic Gas”. LW acknowledges support by the Competitive Fund of the Leibniz Association through grant SAW-2015-AIP-2, and TG acknowledges support from the European Research Council under grant agreement ERC-stg-757258 (TRIPLE). Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 060.A-9345, 094.A-0115, 095.A-0181, 096.A-0710, 097.A-0269, 100.A-0249, and 294.A-5032. Also based on observations obtained with the NASA/ESA Hubble Space Telescope, retrieved from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration 2013). All plots in this paper were created using Matplotlib (Hunter 2007).

References

Appendix A: Method to create a mask for a 2D image

In this section we describe the generic method used to create a mask from the detection image of one given source. The goal is to produce a binary mask, or detection mask, that indicates where the source could have been detected. The details on how this generic method can be adapted to produce masks for spectroscopic cubes can be found in Sect. 6.1. The method relies on the detection process itself: for each pixel of the detection image, it checks whether the object would have been detected had it been centred on that pixel. This is done by comparing the local noise to the signal of the brightest pixels of the input source.

The method is based on SExtractor. To perform the source detection, SExtractor uses a set of parameters, the most important of which are DETECT_THRESH and MIN_AREA. The first corresponds to a detection threshold and the second to a minimal number of connected pixels. SExtractor works on a convolved and background-subtracted image called the filtered image. A source is detected only if at least MIN_AREA neighbouring pixels are DETECT_THRESH times above the background RMS map (shortened to RMS map) produced from the detection image. This RMS map is the noise map of the background image, also computed by SExtractor. The comparison between the filtered image and the RMS map is done pixel by pixel, meaning that filtered[x,y] is compared to RMS[x,y].
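As an illustration, the detection condition just described (at least MIN_AREA connected pixels lying DETECT_THRESH times above the local RMS) can be sketched as follows. This is a simplified stand-in, not the actual SExtractor implementation, which additionally handles filtering, background estimation, and deblending; the function name is ours.

```python
import numpy as np

def sextractor_like_detect(filtered, rms, detect_thresh=2.0, min_area=6):
    """Return True if any 8-connected group of >= min_area pixels satisfies
    filtered > detect_thresh * rms (simplified detection condition)."""
    above = filtered > detect_thresh * rms
    visited = np.zeros_like(above, dtype=bool)
    ny, nx = above.shape
    for x0, y0 in zip(*np.nonzero(above)):
        if visited[x0, y0]:
            continue
        # flood-fill the 8-connected component containing (x0, y0)
        stack, size = [(x0, y0)], 0
        visited[x0, y0] = True
        while stack:
            x, y = stack.pop()
            size += 1
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    i, j = x + dx, y + dy
                    if (0 <= i < ny and 0 <= j < nx
                            and above[i, j] and not visited[i, j]):
                        visited[i, j] = True
                        stack.append((i, j))
        if size >= min_area:
            return True
    return False
```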

The detection mask computation is based on the same two parameters, DETECT_THRESH and MIN_AREA. From the filtered image, the procedure selects only the MIN_AREA brightest pixels of the source (we call this list of values Bp) and compares these to the RMS map. The bright pixel profiles of our LAE sample are shown in Fig. A.1 for illustration purposes. This list contains all the information related to the spatial features of the input source needed by the method. The adopted criterion is close to that applied by SExtractor for the detection, even though it is not, strictly speaking, the same:

  • For each pixel [x,y] of the RMS map, a list of nine RMS pixels is created; the list contains the central RMS pixel and the eight connected neighbouring RMS pixel values. We call this list local_noise[x,y].

  • From the Bp list that contains the brightest pixels of the input source, min(Bp) is determined and only this value is used for the comparison to local_noise. The following criterion is adopted: if any value in local_noise[x,y] fulfils the condition local_noise[x,y] < min(Bp)/DETECT_THRESH, the pixel [x,y] remains unmasked; in all other cases, the central pixel is masked. This criterion is somewhat looser than that used by SExtractor, as the comparison is only done for min(Bp) and not for all the pixels. However, assuming that the noise within a small area does not vary too drastically, the two criteria remain very close: if min(Bp) fulfils the condition, it is very likely that the other bright pixels, which all have higher signal values, also fulfil it somewhere in the nine-pixel area.

  • The operation is performed for each pixel of the RMS map.
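Since a pixel remains unmasked when the lowest of the nine local RMS values is strictly below min(Bp)/DETECT_THRESH (the rule illustrated in Fig. A.2), the whole mask reduces to a 3×3 minimum over the RMS map. A minimal numpy sketch, with names of our own choosing rather than the paper's code:

```python
import numpy as np

def detection_mask(rms_map, bright_pixels, detect_thresh=2.0):
    """Binary detection mask: True where the source could NOT be detected.

    A pixel stays unmasked if any of the nine RMS values in its 3x3
    neighbourhood is strictly below min(Bp) / DETECT_THRESH.
    """
    limit = min(bright_pixels) / detect_thresh
    ny, nx = rms_map.shape
    padded = np.pad(rms_map, 1, mode="edge")
    # stack the nine shifted views: centre pixel + 8 connected neighbours
    shifts = [padded[i:i + ny, j:j + nx] for i in range(3) for j in range(3)]
    local_min = np.min(shifts, axis=0)
    # masked where even the quietest neighbouring pixel is too noisy
    return local_min >= limit
```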

Fig. A.1. Individual bright pixel profiles of all LAEs computed in the seeing condition of A2744 (top) and A1689 (bottom). We note that these are not spatial profiles, as two consecutive pixels may not be adjacent on the image. Only the MIN_AREA first pixels are necessary to compute a mask (MIN_AREA = 6 was used in this work).

An example of application is given in Fig. A.2. In both cases, the lowest value of the bright pixel list, min(Bp) = 6, is compared to the nine pixels in the area outlined by the red square. Using DETECT_THRESH = 2, the central pixel is masked if no value in the red area is strictly less than min(Bp)/DETECT_THRESH = 3. Conversely, a single pixel in the red area strictly less than 3 is enough for the central pixel to remain unmasked, which is the case for three pixels in the example on the right.

Fig. A.2. Illustration of the criterion used to create the mask. The grid represents part of an RMS map. To determine whether the central pixel [x,y] is masked or not, the bright pixel values shown on the bottom left are used; in this example, only the MIN_AREA-th pixel value (=6) is compared with the local noise. Considering the central pixel [x, y], the comparison to the local noise is only done for the nine adjacent pixels (i.e. the red square). The values used for the detection threshold and the minimal area in this example are 2 and 4, respectively. On the left, none of the pixels in the red area have values that are strictly less than min(Bp)/DETECT_THRESH = 3, which results in the central pixel being masked. On the right panel, three pixels fulfil the condition and the central pixel is not masked.

An example of the RMS map, filtered image, and mask produced for a given source is provided in Fig. A.3. The RMS and filtered maps are directly produced by SExtractor. The bright pixels determined on the filtered image are compared to the RMS map to produce the mask, according to the method presented above.

Fig. A.3. Left panel: example of an RMS map produced from one slice of the A2744 cube. The large-scale patterns are due to the different exposure times for the different parts of the mosaic; in the deepest part of this field, the noise is reduced because of a longer integration time. Middle panel: filtered image centred on one of the faint LAEs in the A2744 field. The brightest pixels Bp were defined from this image. The size of the field is ∼10″. Right panel: mask produced by this method for the source shown in the middle panel; the masked pixels are shown in white. The mask patterns closely follow the RMS map.

This method can be used to simulate the detectability of a given source in an image completely independent of the one it was detected in. This is useful, for example, for a survey consisting of different and independent FoVs. In that situation, the differences in seeing conditions have to be accounted for when measuring the bright pixel profile of the source, which can be achieved through convolution or deconvolution of the original image of the source. An example of how the seeing affects the determination of the bright pixel profiles is shown in Fig. A.1.
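The seeing adaptation mentioned above (degrading the source image to match a field with worse seeing) can be sketched as a Gaussian convolution whose kernel FWHM is the quadrature difference of the two seeings. This is an illustration under the assumptions of Gaussian PSFs and a hypothetical pixel scale of 0.2″; the deconvolution case (towards better seeing) is not handled here.

```python
import numpy as np

def _gauss_kernel(sigma):
    """Normalized 1D Gaussian kernel sampled out to 4 sigma."""
    radius = max(1, int(4 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return kernel / kernel.sum()

def match_seeing(image, fwhm_orig, fwhm_target, pixscale=0.2):
    """Degrade an image from seeing fwhm_orig to fwhm_target (arcsec),
    assuming Gaussian PSFs, by convolving with a Gaussian kernel whose
    FWHM is the quadrature difference of the two seeings."""
    if fwhm_target <= fwhm_orig:
        raise ValueError("this sketch only handles degradation (convolution)")
    fwhm_pix = np.sqrt(fwhm_target**2 - fwhm_orig**2) / pixscale
    kernel = _gauss_kernel(fwhm_pix / 2.3548)  # FWHM -> sigma
    # separable convolution: rows first, then columns
    out = np.apply_along_axis(np.convolve, 1, image, kernel, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, kernel, mode="same")
    return out
```

The degraded image can then be fed to the bright-pixel measurement exactly as if the source had been observed in the other field.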

Appendix B: Mask examples using median RMS maps

In this section we illustrate the results obtained when applying the method presented in Appendix A to the different cubes, for LAEs detected with different S/N values. A sample of representative masks is presented in Fig. B.1. These masks were used for masking the 3D cubes during the volume computation. They were created with the method described in Sect. 6.1.1, using a median RMS map for each data cube and a median bright pixel profile rescaled according to the actual S/N of the source. The S/N values used to build the masks increase from left to right. We note that, in this case, this is not a real S/N but a proxy (see Sect. 6.1.1 for details).

We see that at lower S/N values, the masks efficiently capture the instrumental patterns. At higher S/N values, these patterns disappear, and only the bright galaxies and the edges of the FoVs remain masked. For A2744, the masks account very efficiently for the differences in exposure time across the mosaic: the central quadrant, being the deepest, is mostly unmasked, whereas the upper right quadrant, being the shallowest, is only unmasked for the highest S/N values.
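The association of a source with one of the precomputed masks can be sketched as a nearest-value lookup on the S/N grid; the names below are ours, for illustration only.

```python
import numpy as np

def pick_mask(sn_value, sn_grid, masks):
    """Return the precomputed 2D mask whose grid S/N value is closest
    to the S/N proxy of the source in the current wavelength slice."""
    idx = int(np.argmin(np.abs(np.asarray(sn_grid) - sn_value)))
    return masks[idx]
```

Precomputing the masks on a fixed S/N grid keeps the number of mask computations bounded, regardless of the number of sources and wavelength slices.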

Fig. B.1. Representative examples of masks obtained in the different fields for different S/N values. The masked pixels are shown in white. For each field, the S/N values used to build the mask increase from left to right.

Appendix C: Comparison of the different volume computation methods

In this section we compare the Vmax values obtained with the method adopted in this study to those from a classical integration based on a unique mask. Figure C.2 presents the comparison between the two methods. The first (on the y-axis) is the one used in this project, based on 3D masks following the noise variations through the MUSE cubes. The second (on the x-axis) uses a mask generated from a unique SExtractor segmentation map, replicated across the spectral dimension. An example of such a mask is provided in Fig. C.1; it mostly masks the brightest sources and haloes in the image. Comparing this mask to the masks presented in Fig. B.1, we see that they are completely different. Whereas the 3D masks adopted in this paper are able to follow the differences in exposure time while encoding the instrumental noise patterns, the simple mask imposes a unique pattern for all sources, irrespective of their S/N values. This results in the following effects, seen in Fig. C.2. First, a unique mask translates into a unique Vmax value for a large number of sources, as only the lensing effects then play a role in the determination of Vmax; this corresponds to the vertical pattern on the right-hand side of Fig. C.2. Second, the adaptive mask method yields systematically lower Vmax values. More interestingly, for sources in A1689, A2390, and A2667, the differences are less pronounced (or even insignificant for some sources) than for sources in the A2744 mosaic.

Fig. C.1. Mask of the A2744 FoV, created from a MUSE white light image of the cluster using a SExtractor segmentation map. The masked pixels are shown in white. This type of mask mostly masks the brightest sources and haloes.

To explain the first point, it is important to understand that when using a single mask, the only factor that can influence the Vmax value is the limiting magnification μlim (see Sect. 6.1.2). A source with a higher μlim value ends up with a smaller Vmax, as the area of the survey with large magnification is smaller. For the bright sources of the sample, the computed μlim may fall below the lowest magnification reached in the survey; for those sources, the volume was integrated over the entire survey area. With the 3D mask method, μlim still plays a role, but it is no longer the only factor affecting the final volume, and the local noise level is properly taken into account.

To explain the second point, and to illustrate the systematic difference between the two methods, consider a faint source detected in one of the deepest parts of the A2744 mosaic. When comparing this source to the noise level in the rest of the mosaic, the quadrants with shorter integration times end up completely masked. As for the three other cubes, their contribution is zero, as they have even shorter integration times. In that case, only a small portion of the mosaic contributes significantly to Vmax, resulting in a low value. In contrast, all sources detected in A1689, A2390, or A2667 could have been detected anywhere in the A2744 mosaic. Because the A2744 FoV accounts for 80% of the total volume, only μlim affects the final contribution of A2744, and the contribution of the smaller fields is not as significant. This explains the correlation between the two methods for sources detected in the three shallower fields.

Fig. C.2. Comparison of the results of the Vmax computation using the average mask obtained from a unique SExtractor segmentation map (x-axis) and the 3D masks adopted in this paper, following the evolution of noise through the MUSE cubes (y-axis). See text for details.

Appendix D: Detailed procedure for volume computation in lensed MUSE cubes

In this appendix, we provide an overview and a brief description of all the steps needed to compute Vmax; the details are given in the main text. The goal of this section is to provide a synthetic view of the method. The numbered notes below refer to the steps listed in Fig. 6 as follows:

  • (0) The NB cubes consist of all the NB images produced by Muselet; all LAEs were detected on these NB images. Details on these NB images are provided in Sect. 3.1.

  • (1.1) Background RMS maps are produced separately by SExtractor and assembled into an RMS cube. The RMS cubes are cubes of noise, used to track the spectral evolution of the noise level through the cubes.

  • (1.2) Median of the RMS cubes along the spectral axis. One median RMS image is obtained per cube; these are used to mimic the 2D SExtractor detection process.

  • (1.3) Set of S/N values designed to encompass all possible values in the LAE sample. The definition used for S/N is provided in Eq. (3).

  • (1.4) Using a generalized bright-pixels profile (see Fig. A.1) and the median RMS maps, a 2D detection mask is built for each value of the S/N set and for each cube; the method is described in Appendix A.

  • (1.5) Redshift values used to sample the evolution of the source plane projections and magnification maps.

  • (1.6) Source plane projections of the set of 2D masks, combined with magnification maps for the different redshifts.

  • (1.7) For each LAE, the final 3D survey masks are assembled from the set of source plane projections. The procedure browses the S/N curve (see Fig. 7) and picks the pre-computed 2D source plane projection corresponding to the correct S/N value and the appropriate redshift (details in Sects. 6.1.1 and 6.1.2).

  • (1.8) Minimal magnification allowing the detection of a given LAE in its parent cube. This first value is computed from the error on the flux measurement, which is indicative of the local noise level (see the definition in Eq. (5)).

  • (1.9) A rescaled limiting magnification (see the definition in Eq. (6)) is computed for each LAE and for the three additional cubes, to account for the differences in both seeing and exposure time. All the details about the limiting magnification are given in Sect. 6.1.2. For each LAE, the four μlim values are used to restrict the volume computation to the areas of the source plane projection with a magnification high enough to allow the detection of this LAE.

  • (1.10) Volume of the survey where a given source could have been detected. For one LAE, this volume is computed from the source plane projected 3D masks, on the pixels with a high enough magnification.

  • (2.1) For each LAE, the NB image containing the maximum of its Lyman-alpha emission is selected; the cleanest detection was obtained on this slice of the NB cube.

  • (2.2) Filtered map produced with SExtractor. See Appendix A for details.

  • (2.3) From the original filtered map produced for each LAE in its parent cube, three additional images are produced, matched through convolution or deconvolution to the resolution of the three cubes the LAE does not belong to.

  • (2.4) Individual bright-pixel profiles are retrieved for the four different seeing conditions from the filtered images and the three additional images produced in the previous step. The bright-pixel profiles contain the information related to the spatial profile of the LAEs.

  • (2.5) The four generalized bright-pixel profiles are the medians of the individual bright-pixel profiles computed for each seeing condition (see Fig. A.1). These generalized profiles are used to limit the number of masks computed and to simplify the production of the 3D masks.

  • (3.1) The noise level in cubes is an average measure of noise in a given slice of a cube. It is defined in Eq. (3) and an example is provided in Fig. 5.

  • (3.2) Combining the definition of noise levels and the individual bright-pixels profiles, the evolution of S/N for individual sources is computed through the cubes with Eq. (4) (see Sect. 6.1.1 and Fig. 7).
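Steps (1.7)–(1.10) ultimately reduce to a volume sum over the unmasked source-plane pixels whose magnification exceeds μlim. A minimal sketch (names are ours; in practice the per-slice pixel volumes follow from the adopted cosmology):

```python
import numpy as np

def vmax_from_masks(masks, mu_maps, dv_per_pixel, mu_lim):
    """Source-plane volume where a given LAE is detectable.

    masks        : list of 2D boolean arrays per redshift slice
                   (True = masked, i.e. source not detectable there)
    mu_maps      : matching source-plane magnification maps
    dv_per_pixel : comoving volume of one source-plane pixel in each
                   slice [Mpc^3]
    mu_lim       : limiting magnification of the LAE in this cube
    """
    vmax = 0.0
    for mask, mu, dv in zip(masks, mu_maps, dv_per_pixel):
        detectable = (~mask) & (mu >= mu_lim)
        vmax += detectable.sum() * dv
    return vmax
```

The total Vmax of a source is then the sum of this quantity over the four cubes, each with its own rescaled μlim.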

Appendix E: Additional table

Table E.1. Main characteristics of the 152 LAEs used to build the LFs.

All Tables

Table 1. Main characteristics of MUSE observations.

Table 2. Ancillary HST observations.

Table 3. Summary of the main mass components for the lensing models used for this work.

Table 5. Summary of the extraction flag values for sources in the different lensing fields (see text for details).

Table 6. Luminosity bins and LF points used in Fig. 13.

Table 7. Results of the fit of the Schechter function in the different redshift intervals.

Table E.1. Main characteristics of the 152 LAEs used to build the LFs.

All Figures

Fig. 1. MUSE footprints overlaid on HST deep colour images. North is up and east is to the left. The images are obtained from the F775W, F625W, F475W filters for A1689, from F850LP, F814W, F555W for A2390, from F814W, F606W, F450W for A2667, and from F814W, F606W, F435W for A2744.
Fig. 2. Left panel: MUSE white light image of the A2667 field represented with a logarithmic colour scale. Right panel: projection of the four MUSE FoVs in the source plane at z = 3.5, combined with the magnification map encoded in the colour. All images in this figure are at the same spatial scale. In the case of multiply imaged areas, the source plane magnification values shown correspond to the magnification of the brightest image.
Fig. 3. Redshift and magnification corrected luminosity distribution of the 152 LAEs used for the LF computation (in blue). The corrected histograms in light red correspond to the histogram of the population weighted by the inverse of the completeness of each source (see Sect. 6.2). The empty bins seen on the redshift histograms are not correlated with the presence of sky emission lines.
Fig. 4. Comparison of the sample of 152 LAEs used in this work with D17. Upper panel: luminosity vs. redshift; error bars have been omitted for clarity. Lower panel: luminosity distribution of the two samples, normalized using the total number of sources. The use of lensing clusters allows for a broader selection, both in redshift and luminosity towards the faint end.
Fig. 5. Evolution of the noise level with wavelength inside the A1689 MUSE cube. We define the noise level of a given wavelength layer of a cube as the spatial median of the RMS layer over a normalization factor. The noise spikes that are more prominent in the red part of the cube are caused by sky lines.
Fig. 6. Flow chart of the method used to produce the 3D masks and to compute Vmax. The key points are shown in red and the main path followed by the method is indicated in blue. All the steps related to the determination of the bright pixels are shown in grey. The steps related to the computation of the S/N of each source are indicated in green. The numbered labels in light blue refer to the bullet points in Appendix D that briefly sum up all the different steps of this figure.
Fig. 7. Example of the 3D masking process. The blue solid line represents the variations of the S/N across the wavelength dimension for the source A2744-3424 in the A1689 cube. The red points over-plotted represent the 2D resampling made on the S/N curve with ∼300 points. To each of these red points, a mask with the closest S/N value is associated. The short and long dashed black lines represent the S/N level for which a covering fraction of 1 (detected nowhere) and 0 (detected everywhere) are achieved, respectively. For all the points between these two lines, the associated masks have a covering fraction ranging from 1 to 0, meaning that the source is always detectable on some regions of the field.
Fig. 8. Example of source profile recovery for three representative LAEs. Left column: detection image of the source in the Muselet NB cube (i.e. the max-NB image). Middle column: filtered image (convolved and background-subtracted) produced by SExtractor from the image in the left column. Right column: recovered profile of the source obtained by applying the segmentation map on the filtered image. The spatial scale is not the same as for the two leftmost columns. All the sources presented in this figure have a flag value of 1.
Fig. 9. Completeness value for LAEs vs. their detection flux. Colours indicate the detection flags. We note that only the incompleteness owing to S/N on the unmasked regions of the detection layer is plotted in this graph (see Sect. 6.2).
Fig. 10. Completeness (colour bar) of the sample as a function of redshift and detection flux. Each symbol indicates a different cluster. The light grey vertical lines indicate the main sky lines. There is no obvious correlation in our selection of LAEs between the completeness and the position of the sky lines.
Fig. 11. Luminosity function points computed for the four redshift bins. Each LF was fitted with a straight dotted line and the shaded areas are the 68% confidence regions derived from these fits. For the clarity of the plot, the confidence area derived for zall is not shown and a slight luminosity offset is applied to the LF points for z1 and z3.
Fig. 12. Areas of 68% confidence derived for the Schechter parameters when testing different binnings. Left panel: results for 2.9 <  z <  4.0; right panel: results for 5.0 <  z <  6.7. The legends on the plots indicate, from left to right, log(L)min, log(L)max, and the number of bins considered for the fit between these two limits. The black lines show the results obtained from the optimal bins adopted in this work.
Fig. 13. Luminosity functions and their respective fits for the 4 different redshift bins considered in this study. The red and grey squares represent the points derived in this work, where the grey squares are considered incomplete and are not used in the different fits. The literature points used to constrain the bright end of the LFs are shown as lightly coloured diamonds. The black points represent the results obtained by Cassata et al. (2011), which were not used for the fits. The purple squares represent the points derived using the Vmax method in D17 and are only shown for comparison. The best Schechter fits are shown as a solid line and the 68% and 95% confidence areas as dark red coloured regions, respectively.
Fig. 14. Evolution of the Schechter parameters with redshift. The contours plotted correspond to the limits of the 68% confidence areas determined from the results of the fits.
Fig. 15. Comparative Lyman-alpha luminosity histograms obtained using the magnification resulting from different mass models. The grey area represents the completeness limit of this work.
Fig. 16. Evolution of the SFRD with redshift with different lower limits of integration. The limit log L = 38.5 corresponds to a 2 dex extrapolation with respect to the completeness limit in this work. Our results (in red/brown) are compared to SFRD in the literature computed for LBGs (in light grey) and from previous studies of the LAE LF (in green/blue). For the clarity of the plot, a small redshift offset was added to the points with Linf = 38.5. The darker grey points correspond to the SFRD derived from the LFs in Bouwens et al. (2015b) for a magnitude limit of integration of MUV = −17 corresponding to the observation limit, and MUV = −13. The points reported by Cassata et al. (2011) are corrected for IGM absorption. The yellow area corresponds to the 1σ and 2σ estimations of the total SFRD corresponding to the cosmic emissivity derived in Bouwens et al. (2015a).
thumbnail Fig. A.1.

Individual bright pixel profiles of all LAEs computed in the seeing condition of A2744 (top) and A1689 (bottom). We note that these are not spatial profiles as two consecutive pixels may not be adjacent on the image. Only the MIN_AREA-th first pixels are necessary to compute a mask (MIN_ARE = 6 was used in this work).

Fig. A.2.

Illustration of the criterion used to create the mask. The grid represents part of an RMS map. To determine whether the central pixel [x, y] is masked or not, the bright pixel values shown on the bottom left are used; in this example, only the MIN_AREA-th brightest pixel value (= 6) is compared with the local noise. Considering the central pixel [x, y], the comparison to the local noise is only done for the 9 adjacent pixels (i.e. the red square). The values used for the detection threshold and the minimal area in this example are 2 and 4, respectively. In the left panel, none of the pixels in the red area have values strictly less than min(Bp)/DETECT_THRESH = 3, which results in the central pixel being masked. In the right panel, three pixels fulfil the condition and the central pixel is not masked.

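The masking criterion of Fig. A.2 can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name, argument layout, and edge handling are hypothetical, and only the decision rule described in the caption is reproduced.

```python
import numpy as np

def is_masked(rms_map, x, y, bright_pixels, detect_thresh, min_area):
    """Decide whether pixel (x, y) of an RMS map is masked.

    A pixel is kept (not masked) if at least one pixel of its 3x3
    neighbourhood has an RMS value strictly below
    min(Bp) / DETECT_THRESH, where Bp are the MIN_AREA brightest
    pixel values of the filtered source image.
    """
    # The MIN_AREA-th brightest pixel sets the threshold.
    bp = sorted(bright_pixels, reverse=True)[:min_area]
    threshold = min(bp) / detect_thresh
    # 3x3 neighbourhood around (x, y), clipped at the map edges.
    x0, x1 = max(x - 1, 0), min(x + 2, rms_map.shape[0])
    y0, y1 = max(y - 1, 0), min(y + 2, rms_map.shape[1])
    neighbourhood = rms_map[x0:x1, y0:y1]
    # Masked only if no neighbouring RMS value is below the threshold.
    return not np.any(neighbourhood < threshold)
```

With DETECT_THRESH = 2, MIN_AREA = 4 and a 4th-brightest pixel value of 6, as in the figure, the threshold is 3: a central pixel surrounded only by RMS values above 3 is masked, while a single neighbour below 3 keeps it unmasked.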
Fig. A.3.

Left panel: example of an RMS map produced from one slice of the A2744 cube. The large-scale patterns are due to the different exposure times of the different parts of the mosaic; in the deepest part of this field, the noise is reduced because of the longer integration time. Middle panel: filtered image centred on one of the faint LAEs in the A2744 field. The brightest pixels Bp were defined from this image. The size of the field is ∼10″. Right panel: mask produced by this method for the source shown in the middle panel; the masked pixels are shown in white. The mask patterns closely follow the RMS map.

Fig. B.1.

Representative examples of masks obtained in the different fields for different S/N values. The masked pixels are shown in white. For each field, the S/N values used to build the mask increase from left to right.

Fig. C.1.

Mask of the A2744 FoV, created from a MUSE white light image of the cluster using a SExtractor segmentation map. The masked pixels are shown in white. This type of mask is mainly effective at masking the brightest sources and their haloes.

Fig. C.2.

Comparison of the results of the Vmax computation using the average mask obtained from a single SExtractor segmentation map (x-axis) and the 3D masks adopted in this paper, which follow the evolution of the noise through the MUSE cubes (y-axis). See text for details.

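For context on the quantity compared in Fig. C.2, the classical Vmax estimator builds each luminosity bin from the inverse of the maximum volume over which each source could have been detected (in a lensed field, the volume in which the magnified source remains above the flux limit):

```latex
\Phi(L_i)\,\Delta L \;=\; \sum_{j}\frac{1}{V_{\max,j}}
```

where the sum runs over the sources j falling in the bin centred on Li. The choice of mask enters through Vmax,j, which is why the two masking strategies are compared directly at this level.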
