A&A, Volume 635, March 2020, Article Number A90 (21 pages)
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201936890
Published online: 13 March 2020
Open Access

© A. Berdeu et al. 2020

Licence Creative Commons
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. General context

The pertinence of single-field spectroscopy has been demonstrated in numerous fields in astrophysics, such as stellar classification, chemical analysis, velocity measurement, magnetic field probing, and so on. In recent decades, a requirement has emerged to complement this method with increased sampling on extended objects. While the use of long-slit spectrographs is possible by scanning the field of view in one direction, this technique is time consuming and requires precise control of the position in the field being sampled.

An integral field spectrograph (IFS) combines this spatial sampling of an object with its spectral information resolved in different points of the field of view in a single exposure to obtain 2D + λ data. This new technique became available with the rise of large detectors. Indeed, while the 2D + λ information is three-dimensional, the detectors remain bi-dimensional. The technique consists in using an integral field unit (IFU) coupled with a dispersing element to “unfold” the 3D spatiospectral information onto the 2D surface of a sensor of sufficient size. Different unfolding techniques with dedicated optical components have been deployed in different kinds of IFSs: fibre bundles, lenslet arrays, and image slicers.

Firstly, Vanderriest (1980) proposed the use of an IFU composed of fibre bundles, eventually coupled with a lenslet array to maximise the flux entering in the cores of the fibres, to conjugate positions in the field of view with a long slit that disperses their spectrum. Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) implements this design (Drory et al. 2015).

Secondly, to improve the spatial resolution and limit the light loss, Courtes (1982) proposed the concept of the Traitement Intégral des Galaxies par l’Étude de leurs Raies (TIGER, Bacon et al. 1995), which uses a grism behind an IFU composed of a lenslet array. Many IFSs are based on this principle, such as for example the Optically Adaptive System for Imaging Spectroscopy (OASIS), the Spectrographic Areal Unit for Research on Optical Nebulae (SAURON, Bacon et al. 2001), the SuperNovae Integral Field Spectrograph (SNIFS, Lantz et al. 2004), and the OH-Suppressing Infrared Integral Field Spectrograph (OSIRIS, Larkin et al. 2006).

Thirdly, to go beyond the limited spectral resolution of the lenslet arrays, recent instruments based on image-slicing IFUs, as proposed by Bowen (1938), were developed, such as the Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI, Eisenhauer et al. 2003), the Multi Unit Spectroscopic Explorer (MUSE, Bacon et al. 2006), and the Near-Infrared Spectrograph (NIRSpec, Bagnasco et al. 2007).

These instruments provide a new window onto many astrophysical phenomena, such as planets and comets of the solar system, stellar jets, stellar population dynamics, protostars, galactic X-ray sources, the nuclei of galaxies, active nuclei, interacting galaxies (active galactic nuclei (AGNs) and starbursts), radiogalaxies, quasar environments, gravitational lenses, and so on (Vanderriest et al. 1994).

With the development of extreme adaptive optics (Jovanovic et al. 2015), in the last few years we have witnessed the results of a new generation of IFSs, such as those incorporated into the Project 1640 IFS (Hinkley et al. 2011, Hale Telescope), the Gemini Planet Imager (GPI, Macintosh et al. 2008, Gemini South Telescope), the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE, Beuzit et al. 2019, Very Large Telescope; VLT), and the Coronagraphic High Angular Resolution Imaging Spectrograph (CHARIS, Groff et al. 2016, Subaru Telescope). These are based on lenslet arrays that limit the non-common path aberrations compared to slicer concepts. Coupled with a coronagraph, they are dedicated to high-contrast and high-resolution imaging of the surroundings of stars. These IFSs acquire a large number of monochromatic images simultaneously, which are then used to retrieve the spectral features of substellar companions, but also to calibrate and then to reduce the impact of the speckle noise that is induced by the stellar leakages through the coronagraph. Such instruments consequently represent a powerful technique to detect and characterise exoplanets (Vérinaud et al. 2006) by achieving higher contrasts (Sparks & Ford 2002; Marois et al. 2014).

A common point of all these IFSs is their global optical scheme: the target object is sampled at specific points in the field of view, each point producing a spectrum that is dispersed onto the sensor. The raw data consequently consist of hundreds to tens of thousands of spectra rearranged compactly on one or several 2D detectors. Dedicated codes are therefore needed to reduce this large amount of 2D data into a spatiospectral 2D + λ datacube exploitable for astrophysical analyses.

In this paper we describe PIC (Projection, Interpolation, and Convolution), a reduction technique that estimates the 2D + λ object of interest based on a forward model of the instrument. The reduction performed by PIC amounts to solving the inverse problem of retrieving the 2D + λ object given the instrument model and the data. Such an “inverse-problems” approach is fairly general and can readily be adapted to any kind of IFS. In addition, PIC automatically detects damaged pixels and copes with them in a consistent way. Compared to existing methods, PIC produces 2D + λ reduced data of better quality with almost no artefacts. As shown by our work, this improved quality is also beneficial to post-processing methods.

Section 2 details the motivation behind the proposed method and presents the mathematical formalism of the forward model of an IFS instrument as well as the detector model. The inversion uses adequately chosen regularisations and constraints as well as a robust penalisation to identify on-the-fly defective pixels missed during the calibration or cosmic rays hitting the detector during the science acquisitions.

In Sect. 3, PIC is applied to SPHERE-IFS data for two typical kinds of target: extended objects (Ganymede and Io, Sect. 3.1) and coronagraphic data (dedicated to the detection and characterisation of exoplanets and discs around HR 8799 and RY Lup; Sect. 3.2). Comparisons with the reductions performed with the SPHERE pipeline as well as angular differential imaging (ADI) post-processing on the reduced datacubes (Sect. 3.5) highlight the benefits of the proposed approach.

Appendices A–D focus on the tuning of the method with the available SPHERE calibration files associated with the reduced science data. Special care is taken in the detector characterisation (Appendix A) and in the chromatic model of the spectral dispersion (Appendix B).

2. Proposed method

2.1. Motivation of the inverse-problems approach

Beyond calibration considerations, the main feature of data-reduction algorithms dedicated to IFSs is the mapping of the raw data measured on the 2D sensor to the targeted 2D + λ hyperspectral cube. Different approaches are implemented depending on the instrument.

A straightforward extraction consists in a direct read of the raw data. Indeed, whatever the optical design of the IFS (fibre bundle, slicer, or lenslet array), the spectra are generally dispersed along an axis of the 2D sensor. After a wavelength calibration step, it is possible to link, via the so-called “pixel association map” (SPHERE, Pavlov et al. 2008; Mesa et al. 2015; GPI, Perrin et al. 2016), or “pixel table” (SINFONI, Modigliani et al. 2007; MUSE, Weilbacher et al. 2012), each spectrum on the 2D sensor to a given position in the 2D + λ hyperspectral cube. Each spectrum is then independently extracted by integrating pixels in small segments perpendicular to the dispersion direction.

While this direct mapping is conceptually easy to grasp and computationally fast, it suffers from several limitations. In particular, pixels flagged as defective in the calibration step are replaced by interpolating the values of their neighbours on the sensor, which introduces spurious correlations in the reconstructed hyperspectral cube. Furthermore, many pixels not detected as defective during calibration are corrupted and lead to aberrant values in the reconstructed cube. In addition, the method does not account for the spectral cross-talk-induced overlapping of neighbouring spectra. And finally, in order to produce a hyperspectral cube, it is necessary to re-interpolate spatial locations on a regular rectangular grid and according to a common spectral sampling. The interpolation operations introduce spatial and spectral correlations due to the linear combination of the spatially and spectrally closest measurements. These spatial and spectral correlations are generally ignored by the downstream processing methods.

For spectrographs with high spectral resolution such as Elodie (Baranne et al. 1996) and SpeX (Cushing et al. 2004), the general approach for the extraction of the spectra is the optimal method proposed by Horne (1986). Most of the IFS reduction pipelines (TIGER and OASIS, Bacon et al. 1995, and their successors, SAURON and SNIFS, Bacon et al. 2001; MaNGA, Law et al. 2016; and OSIRIS, Lockhart et al. 2019), also implement this technique. It is based on a chromatic model of the transverse profile of each spectrum. Combining this line-spread function with the measurement errors to weight the pixels, it solves the first previously mentioned problem by avoiding the interpolation of the defective pixels. It is also possible to use additional weighting coefficients to partially correct the effect of the cross-talk among neighbouring spectra (SPHERE, Pavlov et al. 2008). Nonetheless, this approach assumes that the pixel variance is known and that the defective pixels are correctly identified. Furthermore, this model of adjacent 1D line-spread functions does not account for the spectral response of the instrument, which is along the direction of the spectral dispersion.

For the GPI instrument, Draper et al. (2014) proposed an improvement of this method, which was further refined by Brandt et al. (2017) for the CHARIS instrument. The general method comprises a χ²-based extraction of the spectra, previously proposed by Zimmerman et al. (2011) for the calibration of the Project 1640 IFS. Each spectrum is described by a super-resolved model combining wavelength-dependent point spread functions (PSFs) found by weighted least-squares fitting, with weights given by the inverse variance of each pixel. By setting their weights to zero, this approach can correctly deal with defective pixels. As the individual spectrum model can be computed with a wavelength sampling common to all spectra, this method also solves the spectral interpolation issue and performs a deconvolution by the instrument line spread function. Finally, the spectral cross-talk is reduced by running the extraction a second time on data cleaned using the model obtained at the first estimation. Nonetheless, by extracting the spectra separately, this method does not address the previously raised problem of spatial interpolations, which are still needed to obtain a datacube on a regular grid.
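To make the idea concrete, a per-spectrum weighted least-squares extraction can be sketched as follows. This is a minimal NumPy illustration written for this text (the paper's own code is in Matlab); the function name, array shapes, and the normal-equations solver are our own choices, not part of any instrument pipeline:

```python
import numpy as np

def extract_spectrum(y, A, w):
    """Weighted least-squares fit of a single micro-spectrum.

    y : (npix,)       pixel values in the spectrum's footprint
    A : (npix, nwav)  monochromatic PSF profiles sampled on those pixels
    w : (npix,)       inverse-variance weights (0 for defective pixels)

    Returns the fitted flux at each wavelength, obtained by solving the
    normal equations (A^T W A) c = A^T W y of the weighted problem.
    """
    Aw = A * w[:, None]               # apply the diagonal weights to the model
    return np.linalg.solve(Aw.T @ A, Aw.T @ y)
```

Setting the weight of a flagged pixel to zero simply removes it from the fit, without any interpolation of its value, which is the key advantage over the direct-mapping extraction.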

In this work, we implement a method that goes a step further by jointly reconstructing the whole 2D + λ datacube using a general framework based on an inverse-problems approach to reduce IFS 2D raw data. In this approach, the reconstructed datacube x is the minimum of a global cost function:

x̂ = arg min_{x ∈ 𝔻} [ 𝒟(y, Mx, W) + ℛ(x) ],    (1)

where 𝒟(y, Mx, W) is the data-fidelity term that penalises how “far” the modelled intensity Mx, which is predicted by the linear model of the instrument M for a given object x, is from the data y according to the confidence on the data, which itself is characterised by the precision matrix W. Further, ℛ(x) denotes the regularisation term implemented to enforce spatial and spectral priors on the hyperspectral cube x, and 𝔻 is the admissible domain for the spatiospectral cube x to reconstruct (e.g. x ≥ 0). In terms of notation, the object x ∈ ℝ^(n1×n2×nλ) is a datacube in arbitrary units defined by the value xθ,ℓ at each discrete element of this volume; each element is referred to as a voxel in the following. Each voxel has a 2D angular position of θ = (θ1,θ2) ∈ ⟦1, n1⟧×⟦1, n2⟧ and a wavelength index of ℓ ∈ ⟦1, nλ⟧. The raw data, denoted by y ∈ ℝ^(m1×m2), are the intensities in analog-digital units (adu), the same units as given by the detector, sampled at the 2D positions of the sensor pixels yk indexed by k = (k1,k2) ∈ ⟦1, m1⟧×⟦1, m2⟧.
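The structure of Eq. (1) can be sketched in code as below. This is a minimal NumPy illustration with a flattened cube x and a generic dense model matrix M (the paper's implementation is in Matlab and handles the feasible domain 𝔻 inside the bound-constrained optimiser); all names here are illustrative:

```python
import numpy as np

def data_fidelity(y, mx, w):
    """D(y, Mx, W): weighted sum of squared residuals for a diagonal
    precision matrix W whose diagonal holds the weights w."""
    r = y - mx                       # residuals on the sensor
    return float(np.sum(w * r**2))

def cost(x, y, M, w, regulariser):
    """Global cost of Eq. (1): data fidelity plus regularisation R(x).
    Constraints (e.g. positivity) are left to the optimiser."""
    return data_fidelity(y, M @ x, w) + regulariser(x)
```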

2.2. Forward model

The critical point of any inverse-problems approach is the forward model M that predicts the data y for a given object x, or, in the present case, that links the pixelated intensities measured by the sensor to the 2D + λ hyperspectral cube of the object. Figure 1 is the general optical scheme of an IFS, exemplified using the SPHERE instrument (SPHERE-IFS, Claudi et al. 2008), presented in Fig. 1b. It emphasises the different steps to convert the coloured image of the object presented in Figs. 1a,d into the spectral projection on the 2D sensor of each of its spatial elements, presented in Figs. 1c,g:

  • C :  ℝ^(n1×n2×nλ) → ℝ^(n1×n2×nλ), a convolution operator: the entrance of the instrument is composed of an IFU array that performs a convolution C of the input signal by the entrance pupil of its sub-elements, as illustrated in Fig. 1e;

  • I :  ℝ^(n1×n2×nλ) → ℝ^(nh×nλ), a sampling operator: the integration of the signal by the IFU induces a loss of its spatial information, performing its interpolation I on the positions of the nh hexagonal lenslets, as illustrated in Fig. 1f;

  • P :  ℝ^(nh×nλ) → ℝ^(m1×m2), a sparse matrix projection operator: the sampled coloured signal is dispersed over the sensor onto neighbouring spectra following the IFU array geometry modelled by a projection matrix P. As the elementary monochromatic PSF of a single lenslet impacts only a portion of the detector, which is kept as small as possible to reduce inter-spectra cross-talk, this projection matrix is sparse.

Fig. 1.

Forward model Mx = PICx of an integral field spectrograph with the example of the SPHERE-IFS instrument (b). The telescope forms an image of the targeted object x (a,d) on the instrument that projects a spectrum of each of its spatial elements onto the sensor (c,g). The low-crosstalk BIGRE design of the IFU (b) proposed by Antichi et al. (2009) is based on two hexagonal lenslet arrays whose effect is first modelled as a convolution C by their pupil shape, averaging the light received by a single lenslet (d → e). An interpolation I then accounts for the sampling performed by the first array of lenslets of the IFU (red) at the lenslet positions (green dots) (e → f). Finally, their spectrum is dispersed by prisms on the sensor, corresponding to a sparse matrix P linking the different wavelengths seen by a given lenslet to the location of their projection on the detector (f → g). Photo Credit: Lucasfilm Ltd.


In the present work, C is a convolution by the hexagonal pupil shape of the lenslets and is performed by a simple componentwise multiplication in the Fourier space. Here, I is a linear interpolation of the Cartesian grid of the hyperspectral cube on the lenslet positions, assumed to be identical to the spectra positions on the sensor given by the dispersion model. Finally, for a given interpolated spectrum sampled at the wavelengths of the reconstructed hyperspectral cube, the projection matrix P consists in a linear combination of axisymmetric Gaussian patterns whose position and size depend on the considered lenslet and wavelength. Considering a super-resolved model such as the one proposed by Brandt et al. (2017) would be straightforward in our framework but would require further calibration steps that are not available in the European Southern Observatory (ESO) calibration framework of SPHERE, as discussed in Appendix B. Thus, the global optical forward model of an IFS is linear and is synthesised by the following equation, using a compact matrix notation:

Mx = PICx.    (2)

This gives the name of the proposed method, referred to as PIC in the following. These operators are tuned from calibration exposures in different calibration steps described in Appendix B for the SPHERE instrument.
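The chain M = PIC can be sketched as follows. This is an illustrative NumPy sketch under simplifying assumptions (dense arrays stand in for the sparse operators I and P, and the per-wavelength pupil convolution is written as a componentwise product in Fourier space, as in the text); all names and shapes are our own:

```python
import numpy as np

def convolve_pupil(x, pupil_otf):
    """C: per-wavelength convolution by the lenslet pupil shape, done as
    a componentwise multiplication in Fourier space.
    x : (n1, n2, nwav) hyperspectral cube; pupil_otf : (n1, n2)."""
    X = np.fft.fft2(x, axes=(0, 1))
    return np.real(np.fft.ifft2(X * pupil_otf[:, :, None], axes=(0, 1)))

def forward(x, pupil_otf, I, P):
    """Mx = PICx: convolve (C), sample on the lenslet positions (I),
    then disperse each sampled spectrum onto the sensor (P)."""
    c = convolve_pupil(x, pupil_otf).ravel()
    return P @ (I @ c)
```

In the real instrument model, I and P are stored as sparse matrices (P links each lenslet and wavelength to a small Gaussian footprint on the detector), which keeps the forward model tractable despite the size of the cube.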

We would like to mention here that one could use this model to inject artificial planet signals directly in the raw data rather than in the reduced hyperspectral cubes in order to provide an end-to-end characterisation of the IFS performance that includes the reduction and post-processing algorithms.

2.3. Detector model

Beyond the effect due to the optical elements of the IFS, the recorded raw signal yraw presents additional contributions. Indeed, in practice, each pixel of the detector records only an affine transform of the modelled signal [Mx]k corrupted by additive noise:

yraw,k = gk [Mx]k + bk + ϵk,    (3)

where gk ≃ 1 is a correction factor accounting for the variability of the quantum efficiency and of the electronic gain, as well as for the optical throughput (e.g. vignetting or masking by dust particles); bk is a bias term accounting for a constant offset set by the detector and for all the background signals that are proportional to the integration time: the dark current of the sensor, the thermal background of the instrument, and the contribution of the sky; and ϵk is a centred additive noise of variance vk, which accounts for the detector read-out noise of variance σk² at null flux and for the photon noise of the incident flux. Assuming that the forward model Mx correctly predicts the intensity on the sensor in adu, the variance of the photon noise is proportional to the incident flux and the total pixel variance is

vk = σk² + ηk (gk [Mx]k + b̄k),    (4)

where ηk, in adu, accounts for the unit conversion from the number of photons to adu, and b̄k is the bias of the detector (see Eq. (A.1)) at pixel k.

Like any other inverse approach, the proposed method relies on precise modelling of the recorded raw signal yraw to trustfully retrieve all the information contained in the data. It is therefore mandatory to have good knowledge of the gain g, the bias b, and the noise parameters ηk and σk² for each pixel k. These parameters are estimated from calibration exposures in different calibration steps described in Appendix A for the SPHERE instrument.

After the calibration step, to perform the data reduction, the raw data are preprocessed, leading to the calibrated data y considered in Eq. (1):

yk = (yraw,k − bk) / gk.    (5)
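The detector model and the preprocessing step can be sketched as below. Note that the exact variance model depends on the calibration of Appendix A; the form used here (read-out variance plus photon noise proportional to the incident flux) is our reading of the text, and all function names are illustrative:

```python
import numpy as np

def simulate_raw(mx, g, b, sigma2, eta, rng):
    """Eq. (3): y_raw = g * Mx + b + noise, with a per-pixel variance
    combining read-out noise (sigma2, at null flux) and photon noise
    assumed proportional to the incident flux (factor eta, in adu)."""
    v = sigma2 + eta * np.clip(g * mx + b, 0.0, None)   # assumed variance model
    return g * mx + b + rng.standard_normal(mx.shape) * np.sqrt(v)

def preprocess(y_raw, g, b):
    """Eq. (5): calibrated data y = (y_raw - b) / g, pixel by pixel."""
    return (y_raw - b) / g
```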

2.4. Robust penalisation for the data fidelity

The role of the data fidelity term 𝒟(y,Mx,W) in Eq. (1) is to ensure that the retrieved object x produces via the forward model M an intensity that is as “close” as possible to the calibrated data y. In this section, we describe what we mean by “close”.

The precision matrix W is the inverse of the covariance of the errors due to noise and approximations of the model (see e.g., Mugnier et al. 2004). Assuming uncorrelated noise, this matrix is diagonal and is comprised of coefficients (or weights) w that are the inverse of the pixel variance: wk = 1/vk. These weights represent the confidence on each measured pixel k. Nonetheless, during the instrument calibration, some defective pixels, such as those that are hot or dead, can be identified and discarded. Their precision is set to wk = 0, that is, their variance is considered infinite. We cope with these discarded pixels by introducing a mask δ defined by:

δk = 0 if pixel k is flagged as defective, δk = 1 otherwise,    (6)

and rewrite the weighting coefficients w as:

wk = δk / vk,    (7)

where the variance vk of the kth pixel is given by Eq. (4).
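Eqs. (6) and (7) together amount to a masked inverse-variance weighting, which can be written in one line (an illustrative NumPy sketch):

```python
import numpy as np

def weights(v, flagged):
    """Eqs. (6)-(7): inverse-variance weights, with pixels flagged as
    defective given delta_k = 0, i.e. an infinite effective variance."""
    delta = np.where(flagged, 0.0, 1.0)
    return delta / v
```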

Owing to the high number of photons per pixel in a frame, independent but non-homogeneous Gaussian noise can be assumed. The data fidelity is then defined as the sum of the weighted square residuals between the data and the predicted intensity:

𝒟(y, Mx, W) = Σk wk (yk − [Mx]k)².    (8)

Nonetheless, this method is not robust to outliers. Indeed, some defective pixels that have an erratic behaviour are missed during the sensor calibration. In addition, energetic particles may hit the detector during an exposure and impact several pixels.

To reduce the incidence of these erroneous measurements, robust penalisation approaches have been developed (Hogg 1979; Huber 1996). The general idea is to replace the quadratic penalisation of the residuals of Eq. (8) by a function ρ that is approximately quadratic around zero and that grows sub-quadratically for large values. Adapting Eq. (8) to account for a robust penalisation leads to:

𝒟(y, Mx, W) = Σk sk² ρ( √wk (yk − [Mx]k) / sk ),    (9)

where sk is a scaling factor on the residuals that characterises how “far” from a normal Gaussian distribution the whitened residual statistics can be.

With sk = 1 and ρ(r) = r2, Eq. (9) is equivalent to the standard least-squares cost of Eq. (8).

Several robust penalisation functions ρ exist in the literature for which the equivalent weight wρ of a least-squares minimisation can be computed (Holland & Welsch 1977). In the following, the Cauchy function is used:

ρ(r) = (γ²/2) ln(1 + (r/γ)²),    (10)

with γ = 2.385.
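For reference, the Cauchy penalty and its equivalent least-squares weight wρ(r) = ρ′(r)/r can be written as below (an illustrative NumPy sketch; the constant γ = 2.385 is the value quoted in the text):

```python
import numpy as np

GAMMA = 2.385  # tuning constant of the Cauchy penalisation

def rho_cauchy(r):
    """Cauchy penalty: ~ r^2 / 2 near zero, logarithmic growth for
    large residuals, hence sub-quadratic for outliers."""
    return 0.5 * GAMMA**2 * np.log1p((r / GAMMA)**2)

def w_cauchy(r):
    """Equivalent weight rho'(r)/r: close to 1 for inliers and tending
    to 0 for outliers, which is what allows undetected defective pixels
    to be down-weighted and later flagged by thresholding."""
    return 1.0 / (1.0 + (r / GAMMA)**2)
```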

Such a robust penalisation is pertinent in the case of IFS imaging since according to our forward model, a given voxel xθ, ℓ has an impact on several regions of the detector. It is unlikely that these regions are all polluted by many defective pixels. This redundancy coupled with the robust penalisation avoids most artefacts due to outliers.

2.5. Regularisations and constraints

There is no guarantee that the model M defined in Sect. 2.2 is a well-conditioned and full-rank matrix. This implies that minimising Eq. (9) may be an ill-posed and ill-conditioned problem leading to noise amplification. Introducing the regularisation function ℛ(x) in Eq. (1) is a means of coping with these issues by accounting for some prior knowledge about the observed object x (Titterington 1985). Two kinds of regularisations are implemented in the proposed inversion method.

A first regularisation favours sharp-edged objects to control the spatial continuity and smoothness of the reconstruction. Enforcing such an edge-preserving smoothness is done by encouraging the sparsity of spatial gradients (Rudin et al. 1992; Charbonnier et al. 1997). In addition, it can be expected that the edges of the reconstructed objects are co-localized according to the wavelength. Similarly, we are more likely to have a full spectrum in a given position than fragmented spectra spread on neighbouring positions. This prior about the spectral behaviour is implemented by means of a structured norm (Blomgren & Chan 1998):

ℛ2D(x) = Σθ √( Σℓ ( [∇1x]θ,ℓ² + [∇2x]θ,ℓ² ) + ϵ² ),    (11)

where ∇1 and ∇2 correspond to finite difference operators along the first and second spatial dimensions, respectively, and ϵ >  0 is a threshold corresponding to the smallest gradient at a sharp edge, ensuring that this hyperbolic approximation of the Euclidean norm is differentiable at zero. Because of the summation over ℓ under the square root, hyperspectral images with edges co-localised at all wavelengths are favoured.

A second regularisation ensures that the reconstructed spectra are smooth. Indeed, with the limited spectral resolution of the instruments, no extremely thin emission or absorption lines are expected to be reconstructed. This also prevents the apparition of the high-frequency noise mentioned by Draper et al. (2014) and Brandt et al. (2017) in the super-resolved reconstructed spectra. We chose to implement this spectral regularisation by the following quadratic penalty:

ℛλ(x) = Σθ,ℓ [∇λx]θ,ℓ²,    (12)

where ∇λ is the finite difference operator along the spectral dimension.

The two chosen regularisations are combined in the term ℛ(x) in Eq. (1) with respective hyperparameters μ2D and μλ to weight the priors relative to the data fidelity term:

ℛ(x) = μ2D ℛ2D(x) + μλ ℛλ(x).    (13)
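The two priors can be sketched as below (an illustrative NumPy sketch: the edge handling of the finite differences and the function names are our own choices):

```python
import numpy as np

def r_2d(x, eps):
    """Edge-preserving norm of Eq. (11). The spatial gradients are
    summed over the wavelength axis under the square root, so edges
    co-localised at all wavelengths are favoured. x: (n1, n2, nwav)."""
    d1 = np.diff(x, axis=0, append=x[-1:, :, :])   # gradient, 1st spatial axis
    d2 = np.diff(x, axis=1, append=x[:, -1:, :])   # gradient, 2nd spatial axis
    return float(np.sum(np.sqrt(np.sum(d1**2 + d2**2, axis=2) + eps**2)))

def r_lambda(x):
    """Quadratic spectral smoothness of Eq. (12)."""
    return float(np.sum(np.diff(x, axis=2)**2))

def regulariser(x, mu_2d, mu_lam, eps):
    """Eq. (13): weighted combination of the two priors."""
    return mu_2d * r_2d(x, eps) + mu_lam * r_lambda(x)
```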

Combined with the robust penalisation, the chosen regularisation helps to identify corrupted pixels that were not yet detected as such. Indeed, to obtain a good fit with an outlier, the estimated hyperspectral cube would necessarily involve a value at a given spatial location and wavelength that significantly differs from its spatial and spectral neighbours. Such a situation is penalised by the regularisation in order to reject outliers and obtain a more meaningful estimation.

In addition to the “loose” constraints imposed by the regularisation, two “tight” constraints are applied to x. Firstly, the spectra can only have positive intensities. Secondly, to ensure that the whole spectral extension is accounted for by the forward model, the reconstructed wavelengths go slightly beyond the instrument spectral sensitivity range. Thus, the extreme wavelengths are forced to be null. Combined with the previously defined regularisations, this forces the spectra to smoothly drop to zero at their edges.

In summary, the feasible domain 𝔻 is:

𝔻 = { x ∈ ℝ^(n1×n2×nλ) | xθ,ℓ ≥ 0, ∀(θ,ℓ); xθ,1 = xθ,nλ = 0, ∀θ }.    (14)

2.6. Reconstruction algorithm

After the calibration of the detector (to obtain a preliminary list of defective pixels and determine ηk, gk, and bk for each pixel k) and of the instrument (to build M = PIC), the reduction algorithm consists in recovering x by minimising the objective function defined in Eq. (1) and updating the factors δk. This is implemented by the iterative Algorithm 1.

Strictly speaking, as Eq. (7) shows, the weights w in Eq. (9) depend on the current value of x during the optimisation of the cost function of Eq. (1), which increases its complexity and raises potential convergence issues. To overcome this limitation, the reconstruction of x is subdivided into a loop of nit optimisations. At a given iteration of the loop, the weighting coefficients are assumed to be constant, computed via Eq. (7) from the forward model given by the solution of the previous iteration, ym = Mx.

At the initialisation, for the first estimation of the weighting coefficients w, the predicted model is approximated by a median filter of the preprocessed data y to diminish the influence of the defective pixels. If some of them have been detected during the calibration, the map δ is initialised with the flagged pixels δcal. Otherwise, it is set to δk = 1, for all k.

From the third loop onwards (i ≥ 3), the map of discarded pixels δ is updated using a threshold wth on the equivalent weight wρ given by the robust penalisation on the residuals of the previous loop.
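This outer loop can be sketched as follows. It is a structural sketch of the alternation described above, not a transcription of Algorithm 1: the inner solver, the variance model, and the robust masking are passed in as callables, and the initial model is simplified (the text uses a median filter of the preprocessed data):

```python
import numpy as np

def reduce_loop(y, forward, solve, v_model, robust_mask, delta0, n_it):
    """Alternate between minimising Eq. (1) at frozen weights and
    refreshing the weights of Eq. (7) from the previous solution.
    From the third loop onwards, the defective-pixel map is updated
    from the robust equivalent weights of the residuals."""
    ym = np.copy(y)                 # stand-in for the median-filtered data
    delta = np.copy(delta0)
    x = None
    for i in range(n_it):
        w = delta / v_model(ym)     # Eq. (7), weights frozen this iteration
        x = solve(y, w, x)          # inner minimisation of Eq. (1)
        ym = forward(x)             # refresh the model for the next weights
        if i >= 2:                  # update the discarded-pixel map
            delta = delta * robust_mask(y, ym)
    return x, delta
```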

3. Results

SPHERE (Beuzit et al. 2019) is a second-generation VLT instrument optimised for high-contrast imaging. Its IFS is based on a new optical concept proposed by Antichi et al. (2009) to overcome the limitations affecting a 3D spectrograph working in the diffraction-limited regime (Claudi et al. 2011). BIGRE is an evolution of the classical TIGER design, optimised for seeing-limited conditions in the SAURON instrument (Bacon et al. 1995, 2001). The micro-lenses composing the lenslet array of the IFU are smaller than the full width at half maximum (FWHM) of the projected PSF of the telescope, and therefore provide resolved images of the PSF and the speckle patterns produced by the stellar leakage in the coronagraphic mode of the instrument. To limit the cross-talk between the lenslets, the BIGRE design improves the TIGER concept by adding a second lenslet array, as sketched in Fig. 1b.

The BIGRE SPHERE-IFS provides low-resolution spectroscopy (R ∼ 50) in the near-infrared (0.95−1.7 microns), an ideal spectral range for the detection of exoplanets and the characterisation of their features. Its IFU is composed of 145 × 145 lenslets and covers a field of view of about 1.8 × 1.8 square arcseconds (Claudi et al. 2008; Dohlen et al. 2016). The spatial dimension of the raw data taken on the Hawaii-2RG sensor is equal to m1 × m2 = 2048 × 2048 pixels.

This section presents the results of the proposed PIC method on representative datasets of the SPHERE-IFS instrument compared to the reduction obtained with the SPHERE reduction pipeline (Delorme et al. 2017) used to date. Two situations are considered: the direct imaging of extended objects, and the coronagraphic mode of SPHERE dedicated to exoplanet detection and characterisation. The output cubes of the SPHERE reduction pipeline are composed of n1 × n2 × nλ = 291 × 291 × 39 voxels. The proposed reduction pipeline provides cubes of n1 × n2 × nλ = 300 × 300 × nλ voxels with nλ = 44 for the YH-mode and nλ = 47 for the YJ-mode, which represents a ratio of approximately eight between the pitch of the angular position θ of the lenslets in the reduced cube and the pitch of their position k on the sensor in terms of pixel. An important component of our method is the calibration of the parameters of the forward model. We describe the methodology to estimate those parameters in Appendices A and B.

The code is implemented in Matlab™ and uses the open-source GlobalBioIm framework (Soubies et al. 2019). The optimisation problem of Eq. (1) is solved by the VMLM-B algorithm (Thiébaut 2002), a limited-memory quasi-Newton method with BFGS updates (Nocedal 1980) that handles bound constraints. The version of the code used in this paper can be found at https://doi.org/10.5281/zenodo.3585632. All the data and reductions presented in this paper can be downloaded at https://doi.org/10.5281/zenodo.3585656.

3.1. Extended-object imaging

Even though SPHERE-IFS is mainly used in its coronagraphic mode for exoplanet searches, it is also suitable for the observation of extended objects (see e.g., Hanus et al. 2017; King et al. 2019). To highlight the pertinence of the proposed method for the imaging of such targets, we reduced observations of two Jovian moons with the SPHERE reduction pipeline (Delorme et al. 2017) and the proposed PIC method: Ganymede (60.A-9372 – 1 Feb. 2015, 298th out of 300 acquisitions) and Io (60.A-9357 – 6 Dec. 2014, 2nd out of 8 acquisitions), both acquired in YH mode. The results are given in Figs. 2 and 3.

Fig. 2.

Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Ganymede (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩λ of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1018 nm and λ = 1556 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f).


Fig. 3.

Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Io (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩λ of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1068 nm and λ = 1353 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f).


For each reduced cube, spectra are extracted along three positions θ of interest. For comparison, the transmission tatm of the Earth’s atmosphere, computed via the software “HITRAN on the Web” proposed by Rothman et al. (2013), is given in green along with the extracted spectra.

Several remarks can be made about the colour coding in Figs. 2a,b and 3a,b. The colour intensity reflects the spectral average ⟨x⟩λ, the colour hue indicates the wavelengths that deviate from this projection, and the colour saturation represents the magnitude of this deviation.

The colour variations at Ganymede’s surface shown in Figs. 2a,b suggest that the reconstructed cubes contain information on spatial variations of the surface spectrum. While smooth regions are favoured, the proposed regularisation does not prevent spatial structures with small contrasts and spectral variations from being restored.

This spectral diversity can be seen in the extracted spectra in Fig. 2g. The black spectrum is close to the average spectrum of the reduced cube. The red (resp. blue) spectrum is extracted in an area where the spectrum is attenuated (resp. enhanced) at shorter wavelengths and enhanced (resp. attenuated) at longer wavelengths. This diversity is also visible in the two extracted slices at λ = 1018 nm in Figs. 2c,d and λ = 1556 nm in Figs. 2e,f, which show very different spatial structures. These spectral contrasts are less pronounced on Io’s surface.

In terms of reconstruction quality, the cubes reduced by the SPHERE reduction pipeline exhibit vertical artefacts that are visible in the spectral projections in Figs. 2a and 3a and on the slices in Figs. 2c and 3c. At low fluxes, horizontal artefacts are also visible in the Io cube in Fig. 3e.

In addition to these large-scale structures, the reduced cubes produced by the SPHERE pipeline are corrupted by many artefacts due to defective pixels, most of them leading to dark voxels, as visible in the slices in Figs. 2c,e and 3c,e. These artefacts introduce strong distortions in the extracted spectra, as visible in the black spectra in Figs. 2g and 3g. A defective pixel on the sensor plane impacts several wavelengths in the cube reduction. This can also lead to misinterpretations, as in Fig. 3g where the absorption bands of the Earth’s atmosphere appear shifted from their actual positions, for instance at ∼1100 nm in the blue spectrum and at ∼1350 nm in the red spectrum. With the proposed calibration and regularised reconstruction, these artefacts are corrected in the reduced cubes, as visible in Figs. 2a,c,e and 3a,c,e. The striped pattern has disappeared and no strong artefacts are visible. Moreover, qualitatively, the spatial regularisation reduces the noise in the reconstruction and improves the contrast of the geological formations, even at very low flux, as in Figs. 3e,f.

3.2. Coronagraphic imaging

SPHERE-IFS is mainly used for high-contrast observation with a coronagraph. To directly image an extrasolar planet, the light from its host star must be reduced by several orders of magnitude. Associated with dedicated detection algorithms, SPHERE, like other instruments of its generation such as GPI and CHARIS, allows the spectral characterisation of the imaged exoplanets at low resolution (R ∼ 30−50) and typically achieves planet-to-star contrast ratios down to 10−6 at 0.2″ angular separation (see e.g., Currie et al. 2015; Macintosh et al. 2015; Ruffio et al. 2018; Mesa et al. 2019).

To illustrate the present method when SPHERE-IFS operates in its coronagraphic mode, we considered two datasets: one on the HR 8799 star that hosts a system of exoplanets (Marois et al. 2008) and that was observed in YH mode (095.C-0298 – 4 Jul. 2015, 38th out of 56 acquisitions), and one on the RY lup star that hosts a bright transition disc (Langlois et al. 2018) and was observed in YJ mode (097.C-0865 – 15 Apr. 2016, 50th out of 80 acquisitions). A comparison of the cubes reduced with the SPHERE reduction pipeline and with the proposed method is given in Figs. 4 and 5.

Fig. 4.

Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the HR 8799 dataset from a single IFS exposure. The proposed method removes artificial background structures. a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1068 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d).


Fig. 5.

Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the RY lup dataset from a single IFS exposure. The proposed sky background estimation removes the host star remnant and the proposed reconstruction algorithm is robust to aberrant measurements (see Fig. 6). a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1165 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d).


The reduced HR 8799 cube is dominated by the speckles produced by the stellar leakage. The coloured spectral projection ⟨x⟩λ emphasises that this is a chromatic effect, with the speckle structure scaling with the wavelength from the shorter wavelengths represented in blue to the longer wavelengths represented in red. Without further processing, no exoplanet is visible. By contrast, the reduced RY lup cube is dominated by the signal of the accretion disc, which is visible at all wavelengths as a well-contrasted structure. Contrary to the speckles, the location of an object imaged by the IFS does not scale with the wavelength, and the disc thus appears in white.

Concerning the reduction quality, conclusions similar to those drawn for extended objects apply. The HR 8799 cube presents the same striped artefacts, which are corrected by the proposed method. The significant noise of the RY lup cube, due to the short exposure time, is smoothed by the regularisation of the proposed method.

The zooms of Figs. 5a,c given in Figs. 5e,f highlight the necessity to correct the “remnant signal”, as presented in Fig. A.2. This unwanted signal, a ghost of a previous high-dynamic acquisition persisting on the sensor, is subtracted during the sky background correction step, producing the artificial dark spot visible in the cube reduced by the standard method. With the proposed method, this region is correctly reconstructed.

These same zooms show the effectiveness of the robust penalisation in discarding defective pixels or pixels hit by energetic particles, based on the spatial and spectral regularisations of the reconstructed cube. Indeed, as visible in Figs. 6a,e, such a particle leaves a long trace on the sensor. Looking at the equivalent weights wρ in Figs. 6c,g, these aberrant measurements are clearly identified as detector pixels with very low statistical weights, below the threshold wth, and can be discarded by the reduction procedure. They are identified in blue in Figs. 6d,h. In addition, defective pixels missed during the calibration are flagged in red among the already discarded pixels shown in green.
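The mechanism can be illustrated with a generic M-estimator weighting; this sketch uses a Cauchy-type ρ function and hypothetical names (`equivalent_weights`, `w_th`), not the exact penalisation implemented in PIC:

```python
import numpy as np

def equivalent_weights(residuals, sigma, c=3.0):
    """Cauchy-type equivalent weights: ~1 for residuals consistent with the
    noise level sigma, tending to 0 for outliers (illustrative rho choice)."""
    r = residuals / (c * sigma)
    return 1.0 / (1.0 + r ** 2)

def flag_outliers(residuals, sigma, known_bad, w_th=0.1):
    """Flag detector pixels whose equivalent weight falls below w_th,
    keeping them separate from the already-known defective pixels."""
    w = equivalent_weights(residuals, sigma)
    new_bad = (w < w_th) & ~known_bad
    return w, known_bad | new_bad, new_bad
```

A pixel crossed by an energetic particle has a residual of many σ, so its weight collapses towards zero and it is automatically excluded from the fit.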

Fig. 6.

Robust defective-pixel identification on the sensor region corresponding to the region of interest of Figs. 5 and 7b. In addition to the known defective pixels (green), the robust penalisation can flag additional unusable pixels in a given acquisition that are blinking (red) or hit by an energetic particle (blue). a,b: raw data (a) and intensity (b) predicted by the model (colour bar: adu). c: equivalent weights obtained by the robust penalisation (cubic stretch). d: new map of the unconsidered pixels. e–h: zooms on (a–d).


The comparison between the data in Figs. 6a,e and the intensity predicted by the model in Figs. 6b,f shows that coupling the redundancy of the hyperspectral information on an IFS field with adequate regularisation allows the estimation of the signal even under high levels of noise.

3.3. Statistics of the residuals

The reconstructed model compared to the raw data is further detailed in Fig. 7 for the two datasets. The zooms in Figs. 7d,h,l,f,j,n correspond to regions of interest previously identified in the calibration because of their high density of defective pixels (Fig. A.4d) or because of the presence of a dust particle (Fig. A.4c).

Fig. 7.

Residuals of the model on the HR 8799 (on the left column) and the RY lup (on the right column) datasets (square root stretch, colour bar: adu). a,b: raw data. c–f: zooms on the regions framed in (a,b). g–j: model prediction on the same regions of interest. k–n: residuals on the same regions of interest. o,p: statistics of the residuals. For comparison, Gaussian profiles of standard deviation σ = 47 adu (HR 8799) and σ = 23 adu (RY lup) are plotted. The region of interest framed in orange in (b) is the region detailed in Fig 6.


Despite the numerous discarded pixels in the raw data flagged in Figs. 7d,f, the algorithm predicts a model, as shown in Figs. 7h,j, where no structure is visible in the residuals displayed in Figs. 7l,n, except for the defective pixels and the pixels masked by a dust particle.

The zooms in Figs. 7c,g,k,e,i,m focus on regions closer to the coronagraph mask where the flux is the highest. No structure is visible in the residuals of Fig. 7m. By contrast, at the limit of the coronagraph mask in Fig. 7k, vertical blue and red structures can still be discerned. This is caused by a slight lateral shift of the spectra in the raw data compared to the positions predicted by the calibrated forward model. This phenomenon is also visible in the reduction of the star PSF acquisitions (not presented here). In these two cases, the object inducing these shifts is a star, that is, an unresolved and coherent source producing stronger coherent cross-talk effects that are not totally suppressed by the BIGRE design, as described by Antichi et al. (2009). These effects cannot be estimated from the calibration acquisitions or accounted for by the forward model, which considers only incoherent cross-talk.

Figures 7o,p compare the histogram of the raw data (red), the histogram of the residuals between the data and the model prediction (blue), and a Gaussian profile (dashed black). The statistics of the raw data are correctly matched by the statistics of the model prediction. The residuals have approximately Gaussian distributions.
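Such a check of near-Gaussian behaviour can be sketched numerically (hypothetical helper; the paper itself only reports histograms with Gaussian overlays):

```python
import numpy as np

def residual_stats(residuals):
    """Sample standard deviation of the residuals and the fractions lying
    within 1, 2, 3 sigma (a Gaussian gives ~0.683, ~0.954, ~0.997)."""
    r = np.ravel(np.asarray(residuals, float))
    sigma = r.std()
    fracs = [float(np.mean(np.abs(r) < k * sigma)) for k in (1, 2, 3)]
    return sigma, fracs
```

Departures of these fractions from the Gaussian values would reveal heavy tails, that is, residual outliers not absorbed by the robust penalisation.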

3.4. Image registration and pixel scale calibration

In coronagraphic mode, the centre of the field of view is given by the position of the masked star. In order to determine the star position behind the coronagraph’s mask, specific acquisitions are obtained by applying a periodic phase pattern on the deformable mirror of the instrument. This induces four first-order diffraction patterns of the masked star as presented in Fig. 8b. The star is at the centre of the square formed by these four satellite images, as presented by the black dots and dashed lines on the figure.

Fig. 8.

Study of the chromatic scaling of the diffraction images of the host star for the two datasets. a: coloured projection of the HR 8799 reconstructed hyperspectral cube (cubic root stretch of the intensity). b: slice of the HR 8799 reconstructed hyperspectral cube (λ = 1486 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). The five black dots represent the fitted positions of the four images of the star and of the centre of the fitted square (dashed lines). c: linear fits of the square diagonals according to the wavelength for each dataset at two different times in the night with the proposed PIC method and, for the first time in the night, with the SPHERE reduction pipeline (RP, Delorme et al. 2017). d: fitted positions of the squares at each wavelength compared to the estimation of the star position. Only the wavelengths whose position falls inside the grey circle of 0.4 voxels radius are used to fit the linear laws.


With the SPHERE reduction pipeline, the centre of the field of view is determined after the reduction of all the acquisitions. Once the centre is known, all the reduced cubes are re-interpolated to centre the star on a given pixel. With the proposed method, this shift between the centre of the field and the centre of the IFS can be taken into account by the forward model, by adapting the sampling positions of the interpolation operator I in the cube x. In this way, the reduced cube is by construction centred on the host star, which avoids any further interpolation of the reduced data that would imply a slight loss in spatial resolution and stronger noise correlations.

In the proposed method, the centre of the field of view is determined from the centres of the square diffraction patterns fitted at each wavelength of the cube. To be robust to outliers, we first take the centroid of all the square centres, and then iteratively take the centroid of the subset of square centres lying within 0.4 voxels (≃2.6 mas) of the previous centroid. The results are shown in Fig. 8d for the two datasets and for two different acquisitions in the same night. Most of the centres are within 0.25 voxels (≃1.7 mas) of the estimated centre. No noticeable difference can be seen in the estimated parameters according to the dataset or the acquisition date.
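This iterative robust centroid can be sketched as follows (hypothetical implementation of the procedure just described):

```python
import numpy as np

def robust_centre(centres, radius=0.4, n_iter=10):
    """Start from the centroid of all per-wavelength square centres, then
    repeatedly recompute the centroid of the subset lying within `radius`
    voxels of the current estimate, so that outliers are discarded."""
    centres = np.asarray(centres, float)        # shape (n_lambda, 2)
    keep = np.ones(len(centres), dtype=bool)
    centre = centres.mean(axis=0)
    for _ in range(n_iter):
        new_keep = np.linalg.norm(centres - centre, axis=1) < radius
        if not new_keep.any():
            break
        keep = new_keep
        centre = centres[keep].mean(axis=0)
    return centre, keep
```

With a tight cluster of centres plus one outlier, the outlier is dropped after the first iteration and the estimate converges to the cluster centroid.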

Being a diffraction-driven phenomenon, these square patterns should scale with the wavelength. This can be seen from the four radial rainbow-coloured features in the coloured projection ⟨x⟩λ in Fig. 8a. To study this scaling law, the square best fitting the four spots in the reconstructed cube is estimated at each wavelength. The slope ξ of the linear law s ≈ ξλ that best fits the diagonal s as a function of the wavelength is given in Fig. 8c for the two datasets (blue and red) and for each acquisition date in the observation night (squares and circles). Table 1 summarises the fitted parameters for the two datasets and their two acquisition dates. The standard deviation of the chromatic slope is 0.02 voxel μm−1 for all datasets. This linear law is used to estimate a relative error on the spectral calibration, given by the sample standard deviation σλ of the relative residuals of the fit computed over all spectral channels ℓ.
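The zero-intercept fit and the derived calibration error can be sketched as follows (the definition of the relative residuals as (sℓ/ξ − λℓ)/λℓ is an assumption of this sketch):

```python
import numpy as np

def chromatic_slope(wavelengths, diagonals):
    """Closed-form least-squares slope xi of the zero-intercept law
    s ~ xi * lambda, and a relative spectral-calibration error taken as
    the sample standard deviation of (s_l / xi - lambda_l) / lambda_l."""
    lam = np.asarray(wavelengths, float)
    s = np.asarray(diagonals, float)
    xi = (lam * s).sum() / (lam * lam).sum()   # argmin of sum (s - xi*lam)^2
    rel = (s / xi - lam) / lam
    return xi, rel.std(ddof=1)
```

For a perfectly linear diagonal law the relative error vanishes; measurement noise on the fitted squares translates directly into σλ.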

Table 1.

Estimation of the chromatic slope ξ of the linear law linking the diagonal s of the diffracted square pattern with the wavelength λ and the corresponding relative error on the spectral calibration σλ for the different datasets and their acquisition dates with the proposed method and, for the first acquisition date, with the SPHERE reduction pipeline (RP, Delorme et al. 2017).

As for the centre determination, no noticeable difference can be seen in the estimated parameters according to the dataset or the acquisition dates.

The differences in the chromatic slope are due to the choice of smaller pixels in the reconstructions of the proposed method compared to the SPHERE reduction pipeline. We note that this does not mean that the spatial resolution of the reduced datacubes differs between the SPHERE reduction pipeline and PIC. Indeed, the SPHERE-IFS instrument resolution is fixed and is given by the pitch of the lenslets in the array. With the chosen reduction parameters, there are roughly two voxels per lenslet.

The small standard deviations on the estimated chromatic slope (≈0.02 voxel μm−1 for all datasets) reflect the good spectral calibration of both PIC and the SPHERE pipeline. The residuals in Fig. 8c were used to estimate the relative spectral calibration errors given in Table 1 that are well under 1%. This supports the quality of the wavelength calibration procedure.

Nevertheless, the residuals in Fig. 8c worsen around λ = 1.4 μm. This corresponds to the main atmospheric absorption band in Figs. 2g and 3g. As a consequence, the signal-to-noise ratio available to estimate the positions of the four diffraction patterns is lower, which degrades the square estimation.

On the spectral range edges, in addition to the limited reconstructed flux, the wavelengths are extrapolated beyond the calibration laser wavelengths, as mentioned in the description of the model calibration in Appendix B. This may explain why the residuals are also worse at the shortest and longest wavelengths.

3.5. Post-processing examples

As seen above, residual starlight is present in the form of speckles which limit the detection of faint point-source objects. Such residuals can be suppressed by adopting different strategies based for example on the polarisation of the residual speckle pattern (differential polarimetric imaging; DPI) or assuming that it scales with the wavelength (spectral differential imaging; SDI) or that it is stable in time (angular differential imaging; ADI). These techniques are used in dedicated post-processing algorithms such as cADI (Marois et al. 2006), LOCI (Lafreniere et al. 2007), KLIP (Soummer et al. 2012), ANDROMEDA (Cantalloube et al. 2015), or PACO (Flasseur et al. 2018), which are used to detect the faint signal of the exoplanets hidden in this speckle noise.

We chose to run a Template Locally Optimized Combination of Images (TLOCI) detection algorithm (Marois et al. 2013, 2014) on the hyperspectral cubes of the 46 best dates (among the 56 acquisitions) reduced with the SPHERE reduction pipeline and the proposed method.

TLOCI is an evolved version of the LOCI algorithm that is based on ADI where the intrinsic field-of-view rotation during the acquisitions is used to separate the residual light from the star masked by the coronagraph and the light coming from off-axis sources, that is, the objects of interest, for each wavelength of the reduced hyperspectral cube. Further, TLOCI uses a template for the spectrum of the sought-after point sources in the hyperspectral cube to combine images along the spectral dimension.
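For intuition, the ADI principle underlying TLOCI can be reduced to its classical form, cADI (Marois et al. 2006); the sketch below handles one spectral channel and, for simplicity, only field rotations that are multiples of 90° (a real pipeline uses interpolation-based derotation):

```python
import numpy as np

def cadi_sketch(frames, angles_deg):
    """Classical ADI for one spectral channel: estimate the quasi-static
    stellar halo as the temporal median (fixed on the detector), subtract it,
    derotate each residual frame to a common sky orientation, and average.
    Off-axis sources, which rotate with the field, survive the median."""
    frames = np.asarray(frames, float)              # (n_t, ny, nx)
    halo = np.median(frames, axis=0)
    derot = [np.rot90(f - halo, k=int(round(a / 90.0)) % 4)
             for f, a in zip(frames, angles_deg)]
    return np.mean(derot, axis=0)
```

A purely static halo cancels exactly in this scheme, whereas a rotating companion adds up coherently after derotation; TLOCI refines the halo estimate with locally optimised frame combinations and a spectral template.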

Figure 9 gives the comparison between the results obtained with the SPHERE reduction pipeline and PIC. The global detection maps presented in Figs. 9e,f combine the S/N maps obtained at all wavelengths as well as the S/N derived for the detection of each planet as given in Table 2. If σ+ is the list of the nλ positive S/N of a planet or a position in the detection map, the global S/N is computed as follows:

S/N = √nλ ⟨σ+⟩λ, (15)

Fig. 9.

Comparison between a TLOCI post-processing of the SPHERE reduction pipeline (Delorme et al. 2017) (left column) and the proposed PIC method (right column) on the HR 8799 dataset. The contrast of the planet closest to the star is increased by the PIC method and several false positives are avoided. a–d: monochromatic images of the post-processed hyperspectral cube and planet detection S/N (square root stretch of the intensity, colour bar: arbitrary unit) at λ = 972 nm (a,b) and λ = 1636 nm (c,d). e,f: maps of planet detection S/N combining all wavelengths (red circles: true positives, blue boxes: false positives with the SPHERE pipeline, colour bar: S/N).


Table 2.

Comparison between the S/N of the innermost planet at large wavelengths for the SPHERE reduction pipeline (RP, Delorme et al. 2017) and the proposed method (PIC).

where ⟨ ⋅ ⟩λ denotes spectral averaging.
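For independent spectral channels with comparable noise, summing the signals and adding the variances yields the combination S/N = √nλ ⟨σ+⟩λ; a sketch assuming this standard rule (the exact form of Eq. (15) may differ):

```python
import numpy as np

def combined_snr(snr_per_channel):
    """Combine per-wavelength S/N values of a candidate, keeping the n
    positive ones (sigma_plus): S/N_glob = sqrt(n) * mean(sigma_plus)."""
    s = np.asarray(snr_per_channel, float)
    s_plus = s[s > 0]
    return float(np.sqrt(s_plus.size) * s_plus.mean()) if s_plus.size else 0.0
```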

At short wavelengths, as in Figs. 9a,b, the planet fluxes are too low to be seen. The suppression of the vertically striped artefacts with the proposed method leads to better speckle removal by the TLOCI algorithm. This is especially striking close to the host star at longer wavelengths, as in Figs. 9c,d, where the S/N of the innermost planet is strongly increased. This S/N gain is observed at all wavelengths except the shortest; see Table 2. In addition, the improved datacube reduction limits the artefacts of post-processing algorithms such as TLOCI, removing potential false detections (blue squares) and enlarging the usable area of the field of view.

The reductions on the RY lup dataset are compared by averaging the 80 temporal acquisitions after rotating them to compensate for the time-varying field rotation. These averaged reduced data are presented in Fig. 10. As a consequence of the imposed angular correction, the speckle pattern rotates and is averaged out while the accretion disc remains fixed and is enhanced. The coloured projections show that the disc is globally white without notable spectral variations. As previously noted, the edge of the field of view is better reconstructed with the proposed method (absence of outliers).

Fig. 10.

Coloured projection of the post-processing of the SPHERE reduction pipeline (Delorme et al. 2017) (a) and of the proposed PIC method (b) on the RY lup dataset (cubic root stretch of the intensity).


4. Discussion and conclusions

Here, we propose a new algorithm named PIC designed to reduce the raw data of an IFS in order to produce a 2D + λ cube. Based on an inverse problem approach implementing a robust penalisation, the proposed method directly reconstructs the full hyperspectral cube.

Our method amounts to solving an inverse problem, which provides a number of advantages compared to previous methods. The direct object extraction avoids a posteriori interpolation of the extracted spectra onto a Cartesian spatial grid or a common wavelength sampling. Because the spectral cross-talk between neighbouring spectra is described by the forward model, it is reduced, and the spectral response of the instrument is deconvolved during the reduction. This framework also provides a natural and consistent way to deal with corrupted measurements (e.g. defective pixels or cosmic particles) so as to avoid artefacts in the result.

Contrary to other methods, the proposed algorithm explicitly introduces a spatiospectral regularisation. Our results demonstrate that this regularisation improves the quality of the reduced data by limiting the incidence of the noise and of outlier pixels which are automatically detected during the reduction.

In addition to this new reduction algorithm, we also describe an improved calibration procedure for the SPHERE-IFS instrument. A model of the statistics of the noise in the raw data provided by the sensor is obtained from the calibration files. Defective pixels are also flagged to be discarded in the reduction procedure. Polynomial laws for the positions and the dispersion are robustly fitted for each lenslet, providing a model that is as close as possible to the instrument physics. An autocalibration step is added to correct for possible shifts between the calibration and the science acquisitions.

The reconstructed spectra, in particular from the calibration observations presented in Appendix B, show a slight improvement of the spectral resolution due to the spectral deconvolution performed by the proposed data-reduction method. The results on extended objects show that such a regularised method and careful calibration enhances the spatial contrast while reducing the impact of defective pixels along the spectral dimension. Some artefacts visible in some SPHERE reductions in the form of large-scale structures (vertical bands) are also suppressed.

Applied to coronagraphic acquisitions, the method provides reconstructed hyperspectral cubes of improved quality and with fewer artefacts, which in turn lead to detection maps with better separation between sources (like exoplanets) and background speckle. In the quest to detect very faint exoplanets by direct imaging, the proposed reconstruction may contribute to improving the achievable contrast.

The forward model could be further refined while keeping the same general structure, for instance to account for non-Gaussian microlens PSFs. Another refinement would be to account for the small spatial shifts of the spectra in the area surrounding the coronagraph mask. However, this would require a more complex calibration procedure, with a step where a refinement is computed directly on the science data. Finally, the method should be tested and validated routinely on a large amount of data.

This work can be considered as a preliminary building block from which more accurate and more complete reconstruction algorithms can be built. Currently, only the IFS instrument is accounted for in the forward model. The next step is to include the telescope response via its chromatic PSF. This would allow an end-to-end deconvolution of the reduced cube directly from the raw data. Our model could also be included in detection or imaging algorithms to separate the stellar leakages from the off-axis objects in high-contrast imaging.

Acknowledgments

This work was performed within the framework of the LABEX PRIMES (ANR-11-LABX-0063) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). It has been supported by the project DIAGHOLO. This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CeSAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon/CRAL, and supported by a grant from Labex OSUG@2020, within the program “Investissements d’Avenir” (ANR-10-LABX-56). All the reductions are based on public data provided by the ESO Science Archive Facility and acquired for different programs: 60.A-9372 (Fletcher L., 1 Feb. 2015), 60.A-9357 (Carter J., 6 Dec. 2014), 095.C-0298 (Beuzit J.-L., et al., 4 Jul. 2015) and 097.C-0865 (Beuzit J.-L., et al., 15 Apr. 2016). The authors would like to thank Gratton R., Dohlen K., and Delorme P. for their fruitful discussions on the IFS principle and on the SPHERE reduction pipeline. We are grateful to Carter J. and Guiot P. for their end user’s view on the reduction of Io and Ganymede observations, as well as to Vigan A. for having shared the data of Fig. C.2e. We are also thankful to the referee for their pertinent feedback.

References

  1. Antichi, J., Dohlen, K., Gratton, R. G., et al. 2009, ApJ, 695, 1042 [NASA ADS] [CrossRef] [Google Scholar]
  2. Bacon, R., Adam, G., Baranne, A., et al. 1995, A&AS, 113, 347 [NASA ADS] [Google Scholar]
  3. Bacon, R., Copin, Y., Monnet, G., et al. 2001, MNRAS, 326, 23 [NASA ADS] [CrossRef] [Google Scholar]
  4. Bacon, R., Bauer, S., Boehm, P., et al. 2006, Ground-based and Airborne Instrumentation for Astronomy, 6269, 62690J [CrossRef] [Google Scholar]
  5. Bagnasco, G., Kolm, M., Ferruit, P., et al. 2007, Cryogenic Optical Systems and Instruments XII, 6692, 66920M [CrossRef] [Google Scholar]
  6. Baranne, A., Queloz, D., Mayor, M., et al. 1996, A&AS, 119, 373 [NASA ADS] [CrossRef] [EDP Sciences] [MathSciNet] [PubMed] [Google Scholar]
  7. Beuzit, J. L., Vigan, A., Mouillet, D., et al. 2019, A&A, 631, A155 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  8. Blomgren, P., & Chan, T. 1998, IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, 7, 304 [CrossRef] [Google Scholar]
  9. Bowen, I. S. 1938, ApJ, 88, 113 [NASA ADS] [CrossRef] [Google Scholar]
  10. Brandt, T. D., Rizzo, M. J., Groff, T. D., et al. 2017, J. Astron. Telesc. Instrum. Syst., 3, 048002 [NASA ADS] [CrossRef] [Google Scholar]
  11. Cantalloube, F., Mouillet, D., Mugnier, L. M., et al. 2015, A&A, 582, A89 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  12. Charbonnier, P., Blanc-Feraud, L., Aubert, G., & Barlaud, M. 1997, Trans. Image Process., 6, 298 [NASA ADS] [CrossRef] [MathSciNet] [PubMed] [Google Scholar]
  13. Claudi, R. U., Turatto, M., Gratton, R. G., et al. 2008, in Ground-based and Airborne Instrumentation for Astronomy II, Proc. SPIE, 7014, 70143E [CrossRef] [Google Scholar]
  14. Claudi, R. U., Giro, E., Anselmi, U., et al. 2011, in Optical Design and Engineering IV, Proc. SPIE, 8167, 81671S [Google Scholar]
  15. Courtes, G. 1982, Int. Astron. Union Colloq., 67, 123 [CrossRef] [Google Scholar]
  16. Currie, T., Cloutier, R., Brittain, S., et al. 2015, ApJ, 814, L27 [NASA ADS] [CrossRef] [Google Scholar]
  17. Cushing, M. C., Vacca, W. D., & Rayner, J. T. 2004, PASP, 116, 362 [NASA ADS] [CrossRef] [Google Scholar]
  18. Delorme, P., Meunier, N., Albert, D., et al. 2017, in SF2A-2017: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, eds. C. Reylé, P. Di Matteo, F. Herpin, et al., 347 [Google Scholar]
  19. Dohlen, K., Vigan, A., Mouillet, D., et al. 2016, Ground-based and Airborne Instrumentation for Astronomy VI, 9908, 99083D [Google Scholar]
  20. Draper, Z. H., Marois, C., Wolff, S., et al. 2014, Ground-based and Airborne Instrumentation for Astronomy V, 9147, 91474Z [Google Scholar]
  21. Drory, N., MacDonald, N., Bershady, M. A., et al. 2015, AJ, 149, 77 [NASA ADS] [CrossRef] [Google Scholar]
  22. Eisenhauer, F., Abuter, R., Bickert, K., et al. 2003, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, 4841 [Google Scholar]
  23. Flasseur, O., Denis, L., Thiébaut, É., & Langlois, M. 2018, A&A, 618, A138 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  24. Groff, T. D., Chilcote, J., Kasdin, N. J., et al. 2016, Ground-based and Airborne Instrumentation for Astronomy VI, 9908, 99080O [Google Scholar]
  25. Hanus, J., Marchis, F., Viikinkoski, M., Yang, B., & Kaasalainen, M. 2017, A&A, 599, A36 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  26. Hinkley, S., Oppenheimer, B. R., Zimmerman, N., et al. 2011, PASP, 123, 74 [NASA ADS] [CrossRef] [Google Scholar]
  27. Hogg, R. V. 1979, Am. Stat., 33, 108 [Google Scholar]
  28. Holland, P. W., & Welsch, R. E. 1977, Commun. Stat. Theory Methods, 6, 813 [CrossRef] [Google Scholar]
  29. Horne, K. 1986, PASP, 98, 609 [NASA ADS] [CrossRef] [Google Scholar]
  30. Huber, P. 1996, Robust Statistical Procedures: Second Edition, CBMS-NSF Regional Conference Series in Applied Mathematics (Society for Industrial and Applied Mathematics) [Google Scholar]
  31. Huber, P. J. 2011, Robust Statistics (Springer) [Google Scholar]
  32. Jovanovic, N., Martinache, F., Guyon, O., et al. 2015, PASP, 127, 890 [NASA ADS] [CrossRef] [Google Scholar]
  33. King, O. R., Fletcher, L. N., & Ligier, N. 2019, Compositional Mapping of Europa with VLT/SPHERE, EPSC-DPS2019-596-1, 13 [Google Scholar]
  34. Lafreniere, D., Marois, C., Doyon, R., Nadeau, D., & Artigau, E. 2007, ApJ, 660, 770 [NASA ADS] [CrossRef] [Google Scholar]
  35. Lagarias, J. C., Reeds, J. A., Wright, M. H., & Wright, P. E. 1998, SIAM J. Optim., 9, 112 [Google Scholar]
  36. Langlois, M., Pohl, A., Lagrange, A.-M., et al. 2018, A&A, 614, A88 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
  37. Lantz, B., Aldering, G., Antilogus, P., et al. 2004, Optical Design and Engineering, 5249 [Google Scholar]
  38. Larkin, J., Barczys, M., Krabbe, A., et al. 2006, Ground-based and Airborne Instrumentation for Astronomy, 6269, 62691A
  39. Law, D. R., Cherinka, B., Yan, R., et al. 2016, AJ, 152, 83
  40. Lockhart, K. E., Do, T., Larkin, J. E., et al. 2019, AJ, 157, 75
  41. Macintosh, B. A., Graham, J. R., Palmer, D. W., et al. 2008, Adaptive Optics Systems, 7015, 701518
  42. Macintosh, B., Graham, J. R., Barman, T., et al. 2015, Science, 350, 64
  43. Marois, C., Lafreniere, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, ApJ, 641, 556
  44. Marois, C., Macintosh, B., Barman, T., et al. 2008, Science, 322, 1348
  45. Marois, C., Correia, C., Véran, J.-P., & Currie, T. 2013, Proc. Int. Astron. Union, 8, 48
  46. Marois, C., Correia, C., Galicher, R., et al. 2014, in Adaptive Optics Systems IV, Int. Soc. Opt. Photon., 9148, 91480U
  47. Mesa, D., Gratton, R., Zurlo, A., et al. 2015, A&A, 576, A121
  48. Mesa, D., Langlois, M., Garufi, A., et al. 2019, MNRAS, 488, 37
  49. Modigliani, A., Hummel, W., Abuter, R., et al. 2007, ArXiv e-prints [arXiv:astro-ph/0701297]
  50. Mugnier, L. M., Fusco, T., & Conan, J.-M. 2004, J. Opt. Soc. Am. A, 21, 1841
  51. Nocedal, J. 1980, Math. Comput., 35, 773
  52. Pavlov, A., Möller-Nilsson, O., Feldt, M., et al. 2008, in Advanced Software and Control for Astronomy II, Proc. SPIE, 7019, 701939
  53. Perrin, M. D., Ingraham, P., Follette, K. B., et al. 2016, in Ground-based and Airborne Instrumentation for Astronomy VI, Proc. SPIE, 9908, 990837
  54. Rothman, L., Gordon, I., Babikov, Y., et al. 2013, J. Quant. Spectr. Rad. Transf., 130, 4 (HITRAN2012 special issue)
  55. Rudin, L. I., Osher, S., & Fatemi, E. 1992, Physica D, 60, 259
  56. Ruffio, J.-B., Mawet, D., Czekala, I., et al. 2018, AJ, 156, 196
  57. Soubies, E., Soulez, F., McCann, M. T., et al. 2019, Inverse Probl., 35, 104006
  58. Soummer, R., Pueyo, L., & Larkin, J. 2012, ApJ, 755, L28
  59. Sparks, W. B., & Ford, H. C. 2002, ApJ, 578, 543
  60. Thiébaut, E. 2002, Proc. SPIE, 4847, 174
  61. Titterington, D. M. 1985, A&A, 144, 381
  62. Vanderriest, C. 1980, PASP, 92, 858
  63. Vanderriest, C., Bacon, R., Georgelin, Y., LeCoarer, E., & Monnet, G. J. 1994, Instrumentation in Astronomy VIII, 2198
  64. Vérinaud, C., Hubin, N., Kasper, M., et al. 2006, Proc. SPIE, 6272, 62720M
  65. Vigan, A., Gry, C., Salter, G., et al. 2015, MNRAS, 454, 129
  66. Weilbacher, P. M., Streicher, O., Urrutia, T., et al. 2012, Software and Cyberinfrastructure for Astronomy II, 8451, 84510B
  67. Zimmerman, N., Brenner, D., Oppenheimer, B. R., et al. 2011, PASP, 123, 746

Appendix A: Detector calibration

Each SPHERE-IFS scientific acquisition comes with a set of calibration files described in Pavlov et al. (2008). These files are split into two subsets: one dedicated to the calibration of the sensor and the other dedicated to the calibration of the lenslet array, that is, the spectral dispersion. In each configuration, a set of na acquisitions is performed for each of the nt exposure times τ ∈ ℝnt. The calibration files that we use in our calibration of the sensor are the following:

  • detector flat-field acquisitions yf performed with an integrating sphere producing a uniform light in the optical system after the lenslet array for different light sources (lasers or broadband sources);

  • background acquisitions ybg performed without any flux of interest reaching the sensor. The recorded signal is then mainly due to the thermal background. These measurements are performed in three different configurations: (a) ybg/c ∈ ℝm1 × m2 × nt × na with part of the instrument optical path closed (i.e. same optical path as for the flat-field acquisitions, without the lenslet array); (b) ybg/o ∈ ℝm1 × m2 × nt × na with opened optical path (i.e. complete optical path of the instrument from the entrance focus, including the lenslet array); (c) ybg/s ∈ ℝm1 × m2 × nt × na with the shutter of the instrument opened, the telescope pointing at a dark region of the sky to record the thermal emission of the atmosphere.

In the following, we describe how these different images are combined in order to estimate the parameters that characterise the detector. We first introduce notations for generic operations on images:

  • ai is the empirical mean of a along the dimension indexed by the letter i,

  • 𝒱i(a) is the empirical variance of a along the dimension indexed by the letter i,

  • ℳi(a) is the median of a along the dimension indexed by the letter i,

  • 𝒜i(a) = ℳi(|a−ℳi(a)|) is the median absolute deviation of a along the dimension indexed by the letter i. It is often used as a robust estimator of the standard deviation, see Huber (2011).
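These operators translate directly into NumPy. The sketch below (our own illustrative code, not part of the pipeline) implements the median absolute deviation 𝒜 and its use as a robust scale estimate; the 1.4826 Gaussian consistency factor is an addition of ours, the text itself uses the unscaled deviation:

```python
import numpy as np

def mad(a, axis=None):
    """Median absolute deviation A(a) = M(|a - M(a)|) along `axis`."""
    med = np.median(a, axis=axis, keepdims=True)
    return np.median(np.abs(a - med), axis=axis)

def mad_sigma(a, axis=None):
    """Robust standard-deviation estimate: for Gaussian data,
    sigma ~ 1.4826 * MAD (consistency factor 1 / Phi^-1(3/4))."""
    return 1.4826 * mad(a, axis=axis)
```

Unlike the empirical standard deviation, this estimate barely moves when a small fraction of pixels is defective, which is what makes it attractive as a robust scale in the calibration.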

The calibration procedures are illustrated using the two datasets HR 8799 and RY lup introduced in the main text.

A.1. Bias calibration

The bias term bk in Eq. (3) is the sum of different contributions:

bk = ybk + ydck + ythk + yskyk,    (A.1)

with yb the bias of the detector, ydc the dark current, yth the thermal emission of the instrument, and ysky the thermal emission of the sky. All these contributions but the detector bias yb are proportional to the exposure time. Hence, the bias in the raw data follows an affine law in the exposure time. The coefficients of this law can be estimated from the background calibration data, modelled by:

ybgk,a,t = ybk + fbgk τt + ϵk,a,t,    (A.2)

where fbg is the background flux in adu s−1 accounting for the dark current and the thermal emission, τt is the exposure time, and ϵk,a,t is the noise, assumed to be independent.

Due to the high level of instrumental thermal emission, the term fbgk τt is predominant even at short exposure times and the constant bias is negligible. In the following, we therefore assume that yb = 0. The estimated flux f̂bgk at a given pixel k is then obtained by a simple linear fit of the averaged background calibration data qbg = ⟨ybg⟩a ∈ ℝm1 × m2 × nt, weighted by their empirical precision (∀k, t):

f̂bgk = (Σt wk,t qbgk,t τt) / (Σt wk,t τt2), with wk,t = 1/𝒱a(ybg)k,t.    (A.3)
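Because the fit goes through the origin (yb = 0), it has a closed form per pixel. A minimal sketch, assuming averaged backgrounds q of shape (m1, m2, nt) and weights w set to the empirical precisions (array names and shapes are our own conventions):

```python
import numpy as np

def fit_background_flux(q, tau, w):
    """Weighted least-squares slope of q ~ f * tau (zero intercept),
    solved independently for every pixel.
    q: (m1, m2, nt) averaged backgrounds; tau: (nt,) exposure times;
    w: (m1, m2, nt) weights, e.g. inverse empirical variances."""
    num = np.sum(w * q * tau, axis=-1)
    den = np.sum(w * tau**2, axis=-1)
    return num / den
```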

Finally, in Eq. (5), the term bk is estimated by f̂bg/ok τexp or by f̂bg/sk τexp, with τexp the exposure time of the science acquisition, depending on whether the sky contribution must be removed or not.

Figure A.1 shows the estimated flux for the RY lup dataset with the shutter opened. It is very similar to the estimated flux with the shutter closed (not presented here), meaning that the thermal background is mainly due to the internal components of the instrument. The figure also highlights some regions of the sensor where dust masks the thermal emission, such as in Figs. A.1b,c, and other regions with a high density of anomalous pixels, such as in Fig. A.1d.

Fig. A.1. Estimation of the temporal background flux on the RY lup dataset, with the shutter opened (cubic stretch, colour bar: adu s−1). b–e: regions of interest of (a).

A.2. Remnant signal

During a night-time calibration procedure, it happens that the sky background ybg/s is polluted by a remnant signal from a previous acquisition fpa, as seen in Figs. A.2a,b. This is typically the case in high-contrast imaging mode when the target star is decentred in order to measure the off-axis PSFs and the spectral energy distribution of the star just before a sky background is acquired, as shown in Fig. A.2c on RY lup.

Fig. A.2. Illustration of the remnant signal suppression from the sky background acquisitions (RY lup dataset, colour bar: adu s−1). a,b: averaged sky background fluxes ⟨fbg/s⟩a. c: previous acquisition flux fpa. d: corrected sky background flux. e: estimated remnant signal in (c).

As several sky background acquisitions are performed, the remnant signal flux decreases while the background flux remains constant. These two contributions are iteratively split under the following assumptions:

  • ∀a ∈ ⟦1, na − 1⟧, βa ≥ βa+1 ≥ 0, that is, the remnant amplitude decreases from one acquisition to the next;

  • ∀(k, a), fbg/sk,a ≈ fbgk + βa fpak, that is, each sky acquisition is the sum of a constant background and a scaled remnant of the previous acquisition,

where the ≈ sign indicates that some discrepancy is expected because of noise.

Algorithm 2 implements a simple iterative method to estimate the background with a reduced contribution of the remnant signal. Figures A.2d,e show that this algorithm is effective and that the two contributions can be correctly identified. The remnant artefact is strongly attenuated in the estimated background flux while the remnant signal contribution is not polluted by an offset due to the background.

Algorithm 2 Remnant-free background flux estimate

1:                                          ⊳Initial background estimate

2:  for i from 1 to nrs do

3:                                        ⊳Update foreground estimate

4:    for a from 1 to na do                                             ⊳Loop over calibration backgrounds

5:                            ⊳Least-squares estimate of βa

6:                    ⊳Apply constraints

7:    ∀(k, a),                                ⊳Subtract remnant signal

8:                                              ⊳Update background estimate

9:    ∀k,                                          ⊳Apply constraints

10:  return
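A toy version of this procedure is sketched below. The initialisation with the last (presumably least contaminated) frame and the use of the previous-acquisition flux as a fixed remnant template are our own simplifying assumptions; the monotonicity constraint on the βa follows the text:

```python
import numpy as np

def split_remnant(f_sky, f_prev, n_iter=10):
    """Iteratively separate a decaying remnant of a previous acquisition
    from constant sky-background frames (sketch of Algorithm 2).
    f_sky: (na, ny, nx) sky background fluxes; f_prev: (ny, nx) previous
    acquisition flux. Model: f_sky[a] ~ f_bg + beta[a] * template."""
    f_bg = np.maximum(f_sky[-1], 0.0)       # init: last frame (assumed least contaminated)
    t = np.maximum(f_prev, 0.0)             # fixed remnant template (assumption)
    tt = np.sum(t * t)
    beta = np.zeros(f_sky.shape[0])
    for _ in range(n_iter):
        for a in range(f_sky.shape[0]):
            r = f_sky[a] - f_bg             # residual w.r.t. current background
            beta[a] = max(np.sum(r * t) / tt, 0.0)   # least-squares estimate of beta_a
        beta = np.minimum.accumulate(beta)  # enforce beta_a >= beta_{a+1} >= 0
        f_bg = np.maximum(                  # background update, remnant subtracted
            np.median(f_sky - beta[:, None, None] * t, axis=0), 0.0
        )
    return f_bg, beta
```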

A.3. Gain calibration

The factor gk in Eq. (3) is a gain converting the signal of interest [Mx]k at pixel k into digital values. This gain accounts for the quantum efficiency, the optical throughput, and the electronic gain. The pixelwise gain is estimated from flat-field acquisitions yf obtained under a flux that is made as uniform as possible over the whole sensor. These acquisitions must first be pre-corrected for the background corresponding to the situation where the shutter is closed:

qfk,t = ⟨yf⟩a,k,t − f̂bg/ck τt.    (A.4)

To account for the fact that the flat-field calibrations were acquired under different illuminations, the estimated gain is then given by fitting the averaged (background-compensated) flat-field calibrations as a linear function of their median intensity, weighted by their empirical precisions (∀k, t):

ĝk = (Σt wk,t qfk,t ℳk(qf)t) / (Σt wk,t (ℳk(qf)t)2), with wk,t = 1/𝒱a(yf)k,t.    (A.5)

Fig. A.3. Estimation of the flat-field correction factor on the RY lup dataset. b–e: regions of interest of (a).

Figure A.3 shows the gain calibrated on the RY lup dataset. Contributions from the optics and from the electronics can be identified. The vertical stripes are due to the gain differences between the 32 reading channels of the sensor. The curved horizontal lines are probably artefacts due to parallel plates on the optical path. Large darker areas are attributable to dust particles on the optics. The dust particles directly lying on the sensor, already visible in the calibrated background flux such as in Figs. A.1b,c, are now clearly visible, as shown in the zooms of Figs. A.3b,c.

A.4. Defective pixels identification

The different calibration files can be used to form a first estimate of the map of defective pixels, noted δcal in Algorithm 1. Figure A.4 represents the discarded pixels for the RY lup dataset as well as the reason for their flagging.

Fig. A.4. Identified defective pixels in the RY lup dataset. White: pixels out of the lenslet array field. Green: pixels flagged by the median filter on the dark acquisitions. Red: pixels flagged by their anomalous variance in the dark acquisitions. Blue: pixels flagged by the median filter on the flat-field acquisitions. Dark blue: pixels flagged by their anomalous variance in the flat-field acquisitions. Grey: unflagged pixels. b–e: regions of interest of (a).

The spectral flat-field, which is discussed in more detail in Appendix B and in Fig. C.1a, is first used to identify the active area of the sensor (the grey area of Fig. A.4); outside of this area, the pixels are discarded. In addition, we also discard the first four and the last four columns and rows of the sensor. The remaining active area corresponds to the grey square region in the figure.

The defective pixels are then identified by analysing the statistics of the different calibration datasets yc ∈ ℝm1 × m2 × na × nt and their model ⟨yc⟩a ≈ pc uc (∀a), with a pixel-wise map pc and exposure-wise factors uc respectively given by pbg = f̂bg and ubgt = τt for the background acquisitions, and by pf = ĝ and uft = ℳk(qf)t for the flat-field acquisitions, where we recall that qf is the background-compensated flat-field of Eq. (A.4). For each of these three sets, a map of defective pixels is built as presented in Algorithm 3.

Algorithm 3 Defective pixel identification on a set (yc, pc, uc)

1:                                ⊳Difference with a 5 × 5 median filter

2:                       ⊳Threshold on the tolerance

3:                                      ⊳Count of the discarded pixels

4:                              ⊳Residuals of the linear fit

5:                             ⊳Weighted cost of the residuals

6:                                             ⊳Discard already flagged pixels

7:                                                 ⊳Init. of the map on the statistics of yc

8:  for the nw worst ck do                                                       ⊳Discarding nw worst statistics

9:                                                               ⊳δv update

10:                                                 ⊳Global map on v

11: return δcal

A first set δa of nw defective pixels is built by forming a residual map, obtained by subtracting from pc its local median in a 5 × 5 pixel neighbourhood, and by flagging as defective the pixels whose absolute value in this residual map is greater than a tolerance αa multiplied by its median absolute deviation. In Fig. A.4, such defective pixels are marked in green if they are found via the background calibration files (shutter opened or closed) and in blue if they are found via the detector flat-field acquisitions.
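This median-filter step can be sketched with SciPy as follows (an illustrative, simplified version: the 5 × 5 neighbourhood and the MAD threshold follow the text, while the tolerance value used in the test is arbitrary):

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_defective(p, alpha=5.0):
    """Flag pixels deviating from their 5x5 local median by more than
    alpha times the median absolute deviation of the residual map
    (a sketch of the first step of Algorithm 3)."""
    r = p - median_filter(p, size=5)            # residual map
    mad = np.median(np.abs(r - np.median(r)))   # robust scale of the residuals
    return np.abs(r) > alpha * mad
```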

In a second step, the statistics of the residuals of the linear fit of the pixel-wise map pc to the averaged acquisitions ⟨yc⟩a, according to the exposure-wise factors uc and weighted by their empirical variance, are analysed. The nw pixels with the worst costs, that is to say with an anomalous variance, are discarded among the remaining pixels not yet flagged as defective (nw corresponds to the number of defective pixels previously detected). This gives a second map δv of defective pixels. In Fig. A.4, such defective pixels are marked in red if they are found via the background calibration files (shutter opened or closed) and in dark blue if they are found via the detector flat-field acquisitions.

The maps δa and δv built for the three sets are then combined to obtain the final map of discarded pixels δcal. This method clearly identifies the pixels impacted by the presence of dust particles lying on the sensor, as shown in blue in the zooms of Figs. A.4b,c.

As one would expect, looking at the red-flagged pixels, it appears that some of the pixels with an anomalous variance cluster around defective pixels. This is because, at high fluxes, defective pixels overflow and corrupt surrounding pixels. This justifies the choice of discarding the nw worst variances. Some areas, such as the one of Fig. A.4d, have a high density of discarded pixels.

The zooms of Figs. A.4b–e must be compared with the ones of Figs. A.1b–e and A.3b–e: some of the discarded pixels would have been hardly identifiable in the estimated background fluxes or in the estimated gains.

A.5. Noise variance estimation

In this section, we describe how to estimate the variance vk of the additive noise used in Eq. (4) on each pixel k, namely the variance at null flux v0,k (the so-called readout noise) and ηk, which contains the conversion from the number of photons to adu in the Poisson noise term of this equation.

As the different calibration images are not sufficient to estimate these parameters independently for each pixel k, they are assumed to be constant on the whole sensor:

∀k, v0,k = v0 and ηk = η.    (A.6)

This approximation holds if all the pixels behave similarly and if gk ≃ 1 for all k.

The coefficients are estimated using all the calibration files yc ∈ {ybg/o, ybg/c, yf}. An affine law is fitted to their median empirical variances qc: qc ≈ η pc + v0 1, where pc is the median flux in the calibration images yc. Here, η pc is consequently the variance associated with the Poisson noise induced by the median flux. Weights wc, corresponding to the inverse of the square of the median absolute deviation of the empirical variances, are used to weight each sample of the fit:

qct = ℳk(𝒱a(yc))t,    (A.7)

wct = 1/(𝒜k(𝒱a(yc))t)2,    (A.8)

(η̂, v̂0) = argminη,v0 Σc Σt wct (qct − η pct − v0)2.    (A.9)

Results obtained on HR 8799 and RY lup calibration files are given in Fig. A.5 and in Table A.1.
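The weighted affine fit described above reduces to a weighted linear regression of the median variances against the median fluxes. A sketch, with p, q and w the fluxes, variances and weights gathered over all calibration sets (flattened into 1-D arrays, a convention of ours):

```python
import numpy as np

def fit_noise_model(p, q, w):
    """Weighted affine fit q ~ eta * p + v0; returns (eta, v0).
    p: median fluxes, q: median empirical variances, w: weights
    (e.g. inverse squared median absolute deviations)."""
    W = np.sum(w)
    pm = np.sum(w * p) / W          # weighted mean flux
    qm = np.sum(w * q) / W          # weighted mean variance
    eta = np.sum(w * (p - pm) * (q - qm)) / np.sum(w * (p - pm) ** 2)
    v0 = qm - eta * pm              # intercept: variance at null flux
    return eta, v0
```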

Fig. A.5. Estimation of the noise model for the HR 8799 (a) and RY lup (b) datasets. The model is fitted on the median fluxes and the median variances of the different calibration files: the dark currents with the shutter closed (red) and opened (green) and the flat-fields (blue). The error bars are obtained by the median absolute deviation on all the sensor pixels for each acquisition set. The grey law is fitted without taking the errors into account. The black law is fitted by accounting for the errors. The red and blue insets are zooms on regions of low incident fluxes.

Table A.1. Estimation of the coefficients (η, v0) for the two datasets, when accounting for the median absolute deviation of the empirical variances (weighted least squares) or not (standard least squares).

Accounting for the median absolute deviation of the empirical variance improves the estimation: the estimated line (in black) is less affected by values with large uncertainty than an unweighted least-squares fit (grey line).

It appears that the smaller errors on the low values help to counterbalance the lever-arm effect of the large fluxes, which are noisier. In the case of the RY lup dataset, the weighted fit reduces the influence of an outlier point around a flux of 2750 adu whose variance exceeds 4000 adu2. This can be seen in the estimation of (η, v0) presented in Table A.1: as expected, the weighted least squares gives more similar coefficients for the two datasets than the standard least-squares estimation.

Appendix B: Spectral dispersion calibration

This appendix describes the calibration of the lenslet array, that is, how to determine the operator P :  ℝnh × nλ → ℝm1 × m2 that represents the spectral dispersion produced by the individual lenslets projected on the sensor.

To build this operator, several calibration files are used:

  • Spectral flat-field acquisitions ys are obtained when an integrating sphere, fed by a broadband lamp, produces uniform illumination upstream of the lenslet array. These images help to calibrate the 2D location of each spectrum and the lenslet transmissions, as well as to identify defective lenslets.

  • Laser acquisitions yw are obtained with calibrated lasers placed upstream of the lenslet array. Their wavelengths are λcal ∈ {987.72, 1123.71, 1309.37, 1545.10} nm. Only the first three lasers are used for the YJ band calibrations.

For each of these files, a set of several acquisitions a ∈ ⟦1, na⟧ is performed for a given exposure time τ. In the following, only their average, corrected for the background and the gain, is considered:

qk = (⟨y⟩a,k − b̂k)/ĝk.    (B.1)

B.1. Dispersion model

As previously described, after dispersion, the spectrum produced by a given lenslet is modelled as a weighted sum of elementary normalised Gaussian patterns, one for each reconstructed wavelength. For a pattern of size σ ∈ ℝ+, centred on κ ∈ ℝ2 in pixel coordinates, the value at a given pixel k is:

ξk(κ, σ) = (1/(2πσ2)) exp(−‖k − κ‖2/(2σ2)).    (B.2)

In practice, to save memory and computational time, the Gaussian patterns are truncated outside a region of a few pixels. As a consequence, P is a sparse matrix.
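Such a sparse operator can be assembled, for instance, with scipy.sparse: one column per elementary pattern, filled with a normalised Gaussian truncated at a few pixels. This is an illustrative sketch (the truncation radius, array layouts and function names are our own choices):

```python
import numpy as np
from scipy.sparse import coo_matrix

def gaussian_patch(center, sigma, shape, radius=4):
    """Truncated, normalised 2-D Gaussian pattern (cf. Eq. (B.2)):
    returns flat pixel indices and values within `radius` of `center`."""
    cy, cx = center
    y0, y1 = max(int(cy) - radius, 0), min(int(cy) + radius + 1, shape[0])
    x0, x1 = max(int(cx) - radius, 0), min(int(cx) + radius + 1, shape[1])
    yy, xx = np.mgrid[y0:y1, x0:x1]
    val = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))
    val /= 2 * np.pi * sigma**2
    return (yy * shape[1] + xx).ravel(), val.ravel()

def build_projection(centers, sigmas, shape):
    """Sparse projection operator: one column per elementary pattern.
    centers: (n, 2) pixel positions; sigmas: (n,) pattern sizes."""
    rows, cols, vals = [], [], []
    for j, (c, s) in enumerate(zip(centers, sigmas)):
        idx, v = gaussian_patch(c, s, shape)
        rows.append(idx)
        cols.append(np.full(idx.size, j))
        vals.append(v)
    n_pix = shape[0] * shape[1]
    return coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(n_pix, len(sigmas)),
    ).tocsr()
```

Applying the operator to a vector of amplitudes then simulates the detector frame, which is the forward model used throughout.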

The reduced hyperspectral cube produced by the standard reduction pipeline of SPHERE (Pavlov et al. 2008) possesses nλ = 39 wavelengths. In our calibration method, the lists of reconstructed wavelengths λ are extended beyond the edges of the spectrum:

  • for the YH-mode, nλ = 44 wavelengths are reconstructed (and the four calibration lasers are used);

  • for the YJ-mode, nλ = 47 wavelengths are reconstructed (and the first three calibration lasers are used).

Thus, if κ ∈ ℝ2 × nλ is the list of the positions of the elementary Gaussian patterns in a spectrum and σ ∈ ℝnλ the list of their sizes, the light incoming from a given lenslet h is dispersed to form a spectrum u on the sensor, whose value at pixel k is:

uk = γh Σl=1nλ Λl ξk(κl, σl),    (B.3)

where Λ is the spectrum of the lamp and γh is the transmission of lenslet h.

For each lenslet, the position κ and the width σ of each Gaussian pattern are assumed to follow a polynomial law of degree d as a function of the wavelength. At wavelength λ, the value of the polynomial p defined by the coefficients c ∈ ℝd + 1 is:

p(λ, c) = Σi=0d ci λi.    (B.4)

The spectral projection operator P is defined based on the list γ ∈ ℝnh of the lenslet transmissions, the list cκ ∈ ℝnh × 2 × (dk+1) of the coefficients of the polynomial law of degree dk describing the positions of the elementary patterns for each lenslet, and the list cλ ∈ ℝnh × (dλ+1) describing the coefficients of the polynomial law of degree dλ of the size of the elementary patterns for each lenslet. With these notations, the model of the spectrum at a given pixel k produced by the lenslet h is:

mk(h) = γh Σl=1nλ Λl ξk(p(λl, cκh), p(λl, cλh)).    (B.5)

In practice, the degree of the polynomial model is dictated by the number of calibration spots in each spectrum: i.e. dk = 3 in the YH-mode and dk = 2 in the YJ-mode. For both cases, we use dλ = 2.
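The polynomial laws of Eq. (B.4) and the resulting spectrum model of Eq. (B.5) can be sketched for a single lenslet as follows (a dense toy implementation with our own variable names; the real operator is sparse):

```python
import numpy as np

def poly_eval(c, lam):
    """p(lam) = sum_i c_i * lam**i (cf. Eq. (B.4)); c: (d+1,), lam: (nl,)."""
    return np.polyval(c[::-1], lam)

def lenslet_model(k, c_kappa, c_lambda, lam, Lam, gamma):
    """Spectrum of one lenslet at pixels k (sketch of Eq. (B.5)):
    m_k = gamma * sum_l Lam_l * xi_k(kappa(lam_l), sigma(lam_l)),
    with kappa and sigma polynomial laws of the wavelength.
    k: (npix, 2) pixel coords; c_kappa: (2, dk+1); c_lambda: (dl+1,)."""
    ky = poly_eval(c_kappa[0], lam)      # pattern centres kappa(lam), 1st coord.
    kx = poly_eval(c_kappa[1], lam)      # pattern centres kappa(lam), 2nd coord.
    sig = poly_eval(c_lambda, lam)       # pattern sizes sigma(lam)
    d2 = (k[:, None, 0] - ky) ** 2 + (k[:, None, 1] - kx) ** 2
    xi = np.exp(-d2 / (2 * sig**2)) / (2 * np.pi * sig**2)   # Eq. (B.2)
    return gamma * xi @ Lam
```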

B.2. Dispersion calibration

With the notations defined for the robust penalisation, the estimation of these dispersion coefficients is obtained by minimising the following data fidelity term for a given lenslet h:

(B.6)

applied on the spectral flat-field {ys, Λ, λ} or on the wavelength calibration {yw, Λcal, λcal}, where Λcal is the list of the amplitudes of the laser spots for each lenslet.

Estimating the dispersion parameters from the laser calibration images yw alone is not suitable because some laser spots may fall into regions with many defective pixels, or be totally hidden by a dust particle. Using the spectral flat-field images ys is better suited since the signal covers many more pixels, but it provides very limited wavelength information. It is best to combine the two calibration files via a global data fidelity term:

𝒟h = 𝒟hs + μw 𝒟hw,    (B.7)

where the hyperparameter μw balances the influence of the wavelength calibration compared to the spectral flat-field.

Minimising Eq. (B.7) to determine the dispersion parameters of a given lenslet h implies knowledge of the spectrum Λ of the lamp. This spectrum can be robustly estimated from all the lenslets by minimising the following data fidelity term:

(B.8)

regularised by the smoothing term μΛ ℛλ(Λ) previously defined in Eq. (12) and constrained to the domain:

Λ ∈ (ℝ+)nλ.    (B.9)

The spectrum of the lamp is initialised on a few neighbouring spectra at the centre of the sensor field. Each lenslet spectrum and its dispersion coefficients are then identified neighbour after neighbour by minimising Eq. (B.7).

This first estimation can be refined to account for the overlap of neighbouring spectra. While the impact of this overlap on the position coefficients cκ is minor, it influences the size coefficients cλ, in particular at the spectrum edges. That is why the first set of parameters is refined nc times by Algorithm 4, which iteratively works on the residuals.

Algorithm 4 Spectral dispersion calibration algorithm

1:  for i from 1 to nc do                                                   ⊳nc refinement loops

2:           ⊳Λ update, fixed

3:                ⊳Global residuals

4:    for h from 1 to nh do                                                       ⊳Loop on the lenslets

5:                        ⊳hth lenslet residuals

6:      update on (rh,yw), fixed

7:      update on rh, fixed

8:  return

At each iteration of the refinement loop, the parameters of the model are alternatively estimated. First, the lamp spectrum is robustly fitted on all the lenslets by minimising Eq. (B.8) with the VMLM-B algorithm.

The positions and amplitudes of each spot are then estimated, one lenslet after another, after the contributions of all other lenslets have been removed from the data. These estimations are also performed with VMLM-B. In a third step, the size of the patterns and the lenslet transmissions γh are estimated using a simplex search method (Lagarias & Reeds 1998).

Doing so, the estimation of the position parameters is split from the fit of the pattern-size law. Thus, the size law cλ is not influenced by the size of the monochromatic spots in yw, whereas the patterns are supposed to fit the continuous spectra in ys.
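The size-law step can be illustrated on a 1-D toy problem: with positions and amplitudes held fixed, the polynomial coefficients of σ(λ) are fitted by a simplex search (SciPy's Nelder-Mead method implements the simplex algorithm of Lagarias et al.; the set-up below is purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def fit_size_law(data, x, centers, amps, c0):
    """Fit the size-law coefficients c (sigma(lam) = poly(c, lam)) to 1-D
    data, positions and amplitudes held fixed (toy analogue of the third
    step of the refinement loop)."""
    lam = np.linspace(0.0, 1.0, len(centers))   # normalised wavelengths

    def model(c):
        sig = np.abs(np.polyval(c[::-1], lam)) + 1e-9   # keep sizes positive
        g = amps[:, None] * np.exp(
            -((x[None, :] - centers[:, None]) ** 2) / (2 * sig[:, None] ** 2)
        )
        return g.sum(axis=0)

    res = minimize(lambda c: np.sum((model(c) - data) ** 2), c0,
                   method="Nelder-Mead")        # simplex search
    return res.x
```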

B.3. A super-resolved model for the lenslet PSF?

Considering a super-resolved model, as proposed by Brandt et al. (2017), is straightforward in the proposed framework. It consists in replacing the Gaussian patterns ξk with a more accurate model of the lenslet PSFs. The real difficulty comes from the calibration of this model. In particular, the calibration files available for the SPHERE instrument are not sufficient to derive an accurate super-resolved PSF model for several reasons.

Firstly, the calibration lasers illuminate the IFU simultaneously. Thus, the foot of the PSF produced by one laser overlaps with the PSFs of the other wavelengths. A sequential acquisition (one wavelength at a time) would be necessary to directly recover the whole support of the PSFs without having to unmix them.

Secondly, even with a single monochromatic illumination, all lenslets are illuminated at once, which leads to two issues. The spots produced by the different lenslets lie on a regular grid and slightly overlap, which creates a degeneracy of the super-resolved model. Some interference also occurs between patterns, which is not representative of the PSFs obtained with incoherent light. The selective illumination of one lenslet at a time would solve these issues but would be time consuming and would raise technical instrumental issues.

Finally, the number of distinct wavelengths used in the calibration procedure is too low to allow a fine parameterisation of a PSF model, and in particular to address the variability from one lenslet to another. A larger set of wavelengths, covering the whole spectrum (to prevent extrapolations at the edges of the spectral domain), would be necessary. For these different reasons, such a super-resolved model was not implemented in the present paper.

Appendix C: Results of the dispersion calibration

C.1. Dispersion

Results of Algorithm 4 are illustrated in Fig. C.1 for the calibration in the YH-mode of the HR 8799 dataset.

Fig. C.1. Calibration of the spectral projection for the HR 8799 dataset (square root stretch, colour bar: adu). a: averaged wavelength calibration acquisition yw. b: averaged spectral flat-field calibration acquisition ys. c: zoom of (a). The found positions for the calibration wavelengths are highlighted by coloured dots. The black circles are proportional to the size of the Gaussian patterns. d: simulation of the wavelength calibration by the projection model. e: zoom of (b). The positions of the wavelengths simulated in the forward model are highlighted by coloured dots. f: simulation of the spectral flat by the projection model. g: residuals of the first estimation of the wavelength calibration. h: final residuals of the wavelength calibration. i: residuals of the first estimation of the spectral flat calibration. j: final residuals of the spectral flat calibration. k: histogram of the residuals of the spectral flat calibration along the iterations. For comparison purposes, a Gaussian profile of standard deviation σ = 175 adu is plotted.

The laser spots are visible in the wavelength calibration acquisition yw for each lenslet in Figs. C.1a,c. The coloured dots mark the positions of the spots fitted by Algorithm 4. The radii of the black circles indicate the size of the Gaussian patterns. The chromatic evolution of the fitted size is visible, with a minimal spot extension around the third calibration wavelength.

The residuals in Fig. C.1g of the first iteration of the wavelength calibration present a strong diversity due to inaccurate values of the size of the Gaussian patterns and incorrect amplitudes for the spots Λcal. Some patterns are not well aligned with the laser spots, as suggested by nonsymmetric residuals on some spots (which indicates a bad wavelength calibration). The final residuals in Fig. C.1h, after nc = 10 refinement iterations, are more homogeneous, with centred residuals on each spot. Nonetheless, all the spots present red ring-shaped residuals that even seem to worsen. This suggests that the pattern size given by cλ is overestimated when applied to the laser spots. This is expected as these spots are monochromatic whereas the dispersion law is fitted on the continuous spectra of the spectral flat-field file ys. This also implies that a wavelength calibration only performed on the wavelength calibration file yw underestimates the size parameters.

The spectra of the nh ≈ 19 000 lenslets of the spectral flat-field acquisition ys are visible in Figs. C.1b,d. The coloured dots represent the positions of the nλ = 44 wavelengths of the forward model. It can be seen that the spectral range goes beyond the calibration laser spots, especially in the longer wavelengths (red dots). The extrapolation of the positions beyond these calibrated spots leads to wider spatial steps in the short wavelengths (blue dots) while the positions are more packed in the longer wavelengths (red dots). This may lead to a degraded wavelength calibration in the outer regions of the spectral range (see Fig. 8 and associated discussion).

In the residuals of the first iteration of the spectral flat calibration in Fig. C.1i, it appears that the overall signal is over-estimated. This may be due to the fact that the overlap of the spectra is not taken into account at this first iteration. The dust present in Fig. A.3c is clearly identifiable. After the nc = 10 refinements, the residuals in Fig. C.1j improve, especially on the spectra edges, to fall below ∼2.5% of the lamp spectrum intensity. This is confirmed by the curves of Fig. C.1k, which show the evolution of the histogram of the residuals along the refinement iterations. Starting from a spread distribution, the residuals squeeze towards a Gaussian curve with smaller dispersion.

Blue structures in Fig. C.1j indicate that some regions are still slightly underestimated by the model, leading to positive residuals. This corresponds to the small bump around 400 adu in the residual distribution. This is due to the fact that the Gaussian pattern model is an approximation of the spots diffracted by the hexagonal apertures of the lenslets. These spots have “feet” in privileged directions that are not fitted by an axisymmetric pattern, to which optical aberrations may be added. This also explains the blue positive background around the laser spots in Figs. C.1a,c.

In terms of wavelength centring of the forward model, as mentioned by Vigan et al. (2015), the SPHERE reduction pipeline (Pavlov et al. 2008) needs an additional patch to correct its wavelength calibration. To check our calibration, the wavelength calibration file yw is reconstructed using Algorithm 1. The results are given in Fig. C.2 for the two datasets.

Fig. C.2. Wavelength calibration of the model on the HR 8799 (YH-mode, left panel) and the RY lup (YJ-mode, right panel) datasets (colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩λ of the reconstructed hyperspectral cube. c,d: slice of the reconstructed hyperspectral cube (λ = 1314 nm for the HR 8799 dataset, λ = 1122 nm for the RY lup dataset). e: median intensity ℳθ(x) computed on each slice of the reconstructed hyperspectral cube for each dataset, compared with the reduction proposed by Vigan et al. (2015). The wavelengths of the calibration lasers are highlighted by dashed grey lines.

The coloured projections of Figs. C.2a,b and the monochromatic slices given in Figs. C.2c,d show that the reconstructed laser illuminations are not uniform but are characteristic of speckle noise.

The median intensity ℳθ(x), computed on each slice of the reconstructed hyperspectral cube, is shown in Fig. C.2e for each dataset. The maxima match the wavelengths of the calibration lasers, represented by vertical dashed lines, well. The effective resolution is estimated from the full width at half maximum of each laser line. The average resolution of the YJ-mode and YH-mode is R = 57 and R = 31 respectively. These values are compared to the standard reduction method based on a direct mapping (Vigan et al. 2015) in the YH-mode, for which the estimated average spectral resolution is R = 20. The better spectral resolution using PIC is close to the SPHERE-IFS theoretical resolution R = 30 of Beuzit et al. (2019) and shows the impact of the spectral deconvolution performed by the inverse-problems approach (this deconvolution is possible because the forward model accounts for the spectral mixing).
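The effective resolution R = λ0/Δλ can be measured from the full width at half maximum of a reconstructed laser line; the sketch below locates the half-maximum crossings by linear interpolation (our own simplification of the measurement):

```python
import numpy as np

def spectral_resolution(lam, profile, lam0):
    """Effective resolution R = lam0 / FWHM of a single spectral line.
    The FWHM is measured by linear interpolation of the half-maximum
    crossings on both sides of the peak."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    # rising edge: profile[i0-1] < half <= profile[i0]
    lo = np.interp(half, [profile[i0 - 1], profile[i0]], [lam[i0 - 1], lam[i0]])
    # falling edge: profile[i1] >= half > profile[i1+1]
    hi = np.interp(half, [profile[i1 + 1], profile[i1]], [lam[i1 + 1], lam[i1]])
    return lam0 / (hi - lo)
```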

C.2. Transmission

The calibration procedure also gives the achromatic transmission map γ of the IFU, including the lenslet array, the prism, and the other associated lenses; it is shown in Fig. C.3a, with a median transmission of ℳh(γ) = 1.

It can be seen that the transmission map γ further corrects the effect of the dust particles on the optical path that cannot be suppressed by the sensor flat-field correction factor alone. Indeed, comparing Figs. A.3a and C.3a, it appears that these dust particles have a counterpart in the transmission map γ. Even if the effect is minor, Fig. C.3b shows that the lamp spectrum Λ is refined with the iterations of the calibration loop. The corrections are mainly due to the refinement of the dispersion law cλ. Indeed, the dilution of the intensity in a Gaussian pattern according to its size must be counterbalanced by its amplitude. Some features are visible in the lamp spectra that can contribute to the robust wavelength calibration.

C.3. Autocalibration

A last correction must be added to the calibration of the forward model: the on-sky data are not perfectly aligned with the calibration files. This is shown in Figs. C.4a–d, which present a region of interest of an on-sky acquisition (a), the fit of the model obtained before the autocalibration (b), and the corresponding residuals (d). The difference between the model before and after the alignment is shown in Fig. C.4c.

Fig. C.3.

a: transmission map γ of the IFU estimated for the HR 8799 dataset. b: fitted spectrum of the lamp for the two datasets HR 8799 (YH-mode) and RY lup (YJ-mode) along the iterations of Algorithm 4.

Fig. C.4.

Autocalibration for the HR 8799 dataset (square root stretch, colour bar: adu). a: zoom on a region of interest. b: model before the alignment. c: difference between the model prediction before and after the alignment. d: residuals after the alignment. e: statistics before (red) and after (black) the autocalibration step.

To perform this alignment, nr regions of interest of mr × mr pixels, evenly spread over the sensor, are extracted from the raw science data. Because the regions are small, the spectra Λa ∈ ℝnr × nλ of the lenslets are considered equal within these reduced fields.

Then an algorithm similar to Algorithm 4 is applied. A refinement loop is performed to compute the global shift δk ∈ ℝ2 that is assumed to be constant for all the coefficients cκ. Each iteration alternates two steps:

  • the spectrum of each of the nr regions is fitted via Eq. (B.8), applied on the region of interest, assuming that the shift δk is known;

  • the shift δk is fitted by combining all the regions of interest, adapting Eq. (B.6), for fixed spectra.
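
The alternating scheme above can be sketched on a simplified 1-D analogue, in which each region is modelled as a scalar amplitude (standing in for its spectrum) times a Gaussian pattern shifted by a single global offset δ; the template, the grid search, and the iteration count are illustrative assumptions, not the actual content of Algorithm 4.

```python
import numpy as np

def template(x, delta):
    # Gaussian pattern shifted by a global offset delta (1-D analogue
    # of the dispersion coefficients c_kappa shifted by delta_k)
    return np.exp(-0.5 * (x - delta) ** 2)

def autocalibrate(x, regions, n_iter=10):
    """Alternate between fitting the per-region spectra (here, scalar
    amplitudes) and the global shift delta shared by all regions."""
    delta = 0.0
    for _ in range(n_iter):
        g = template(x, delta)
        # step 1: per-region amplitudes, least squares with delta fixed
        amps = [r @ g / (g @ g) for r in regions]
        # step 2: global shift, grid search with the amplitudes fixed
        grid = np.linspace(delta - 0.5, delta + 0.5, 201)
        cost = [sum(np.sum((r - a * template(x, d)) ** 2)
                    for r, a in zip(regions, amps)) for d in grid]
        delta = grid[int(np.argmin(cost))]
    return delta

x = np.linspace(-5, 5, 101)
true_shift = 0.27
regions = [a * template(x, true_shift) for a in (1.0, 2.0, 0.5)]
delta = autocalibrate(x, regions)
```

In this noiseless toy case the grid search recovers the injected shift; on real data each step would be the corresponding least-squares fit of Eqs. (B.8) and (B.6).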

The histograms of the residuals before and after the alignment are compared in Fig. C.4e: after the alignment, they are less dispersed and more symmetrical. The estimated shifts for the two datasets are given in Table C.1. The shifts are small: of the order of a quarter of a pixel for the strongest correction (RY lup dataset), and just a few percent of a pixel for the HR 8799 dataset. Nonetheless, this shift corresponds to a measurable signal, as seen in Fig. C.4c, which compares the intensity predicted by the model before and after the autocalibration step.
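
The dispersion and symmetry of the residual histograms can be quantified, for instance, by the standard deviation and the third standardised moment; the synthetic "before" and "after" residuals below are purely illustrative, not the actual data of Fig. C.4e.

```python
import numpy as np

def residual_stats(residuals):
    """Dispersion and asymmetry diagnostics for a residual map, as
    used to compare histograms before/after the autocalibration."""
    r = np.asarray(residuals, dtype=float).ravel()
    std = r.std()
    skew = np.mean(((r - r.mean()) / std) ** 3)  # third standardised moment
    return std, skew

rng = np.random.default_rng(0)
# "before": Gaussian noise plus a skewed misalignment component
before = rng.normal(0.0, 15.0, 10000) + rng.exponential(20.0, 10000)
# "after": centred Gaussian noise only
after = rng.normal(0.0, 23.0, 10000)
s_b, k_b = residual_stats(before)
s_a, k_a = residual_stats(after)
```

A smaller standard deviation and a skewness closer to zero correspond to the "less dispersed and more symmetrical" behaviour reported above.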

Table C.1.

Estimation of the shift δk in pixels to apply on the coefficients cκ for the two datasets.

Appendix D: Numerical value of the different parameters

Table D.1 summarises the parameters and hyperparameters of the different algorithms, with the numerical values used to obtain the results presented in this paper.

The scaling factors sk applied on the pixels k of the data are chosen to be constant over the whole field: ∀k, sk = s0. As the role of s0 is to flag the data that deviate from the expected values, it depends on the overall dynamic range of the data and must be adapted to each dataset. This can be done by looking at the distribution of the residuals, such as the ones given in Figs. 7o,p: s0 is proportional to their standard deviation. Because the RY lup dataset has a low dynamic range, its s0 is smaller.
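
A possible way of setting s0 from the residual distribution is sketched below; the proportionality constant `kappa` and the use of the median absolute deviation are assumptions made for this illustration, not prescriptions from the paper.

```python
import numpy as np

def scale_factor(residuals, kappa=3.0):
    """Choose the constant scaling factor s0 proportionally to the
    dispersion of the residuals. The MAD is used because it is robust
    to the very outliers that s0 is meant to flag; `kappa` is a
    hypothetical proportionality constant."""
    r = np.asarray(residuals, dtype=float).ravel()
    mad = np.median(np.abs(r - np.median(r)))
    sigma = 1.4826 * mad            # MAD -> std for Gaussian noise
    return kappa * sigma

rng = np.random.default_rng(1)
res = rng.normal(0.0, 23.0, 100000)   # RY lup-like residuals, sigma = 23 adu
s0 = scale_factor(res)
```

With this rule, a dataset with lower residual dispersion (such as RY lup compared with HR 8799) automatically receives a smaller s0, consistent with the values of Table D.1.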

The hyperparameters μλ, μ2D and μΛ introduced in Eqs. (13) and (B.8) are normalised as follows:

(D.1)

where 𝜚 is a normalisation factor that scales the number of unknowns in the regularisation terms with the number of measurements in the data fidelity term. In other words, it accounts for the number of reconstructed spectra and wavelengths compared to the number of pixels in the forward model. The normalisation by accounts for the quadratic development of the Cauchy function close to zero and ensures that the regularisation scales coherently with the scaling coefficient s0. and are set manually. Here, is chosen to be as small as possible while rejecting the high-frequency modes that appear in the reduced spectra due to the oversampling of the forward model, as mentioned by Draper et al. (2014). Also, must be high enough to contribute to the outlier pixel detection without adding too much smoothing to the reduced cubes. It is therefore placed at the limit at which the acquisition noise starts to pollute the reduced cubes, and consequently depends on the dynamic range of the data.

To refine the parameters of each lenslet in the spectral dispersion model, the local spectra are extracted and analysed on their regions of interest. These regions are defined as the window that encompasses the positions of the different Gaussian patterns, extended on each side by 2 pixels, leading to a window of roughly 5 × 45 pixels for each spectrum. In addition, at each loop of the calibration, the parameter μw in Eq. (B.7) is updated so that the cost on the spectral flat-field acquisition, defined as

(D.2)

is balanced with the cost on the wavelength calibration file, defined as

(D.3)

to favour the wavelength calibration file as follows:

(D.4)
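
Since the bodies of Eqs. (D.2)–(D.4) are not reproduced in this version of the text, the update can only be sketched schematically. Assuming that μw weighs the flat-field cost against the wavelength calibration cost, a rebalancing of the following form keeps the two terms commensurate while favouring the wavelength calibration file; the `weight` factor and the direction of the ratio are hypothetical stand-ins for the paper's actual rule.

```python
def update_mu_w(cost_flat, cost_wave, weight=10.0):
    """Schematic rebalancing of mu_w: after the update, the weighted
    flat-field cost equals `weight` times the wavelength calibration
    cost, so the wavelength calibration file dominates. `weight` is a
    hypothetical favouring coefficient, not a value from Eq. (D.4)."""
    return weight * cost_wave / cost_flat

# Example: flat-field cost of 4e6 and wavelength cost of 1e5
mu_w = update_mu_w(cost_flat=4.0e6, cost_wave=1.0e5)
```

Any monotone rebalancing of this kind achieves the qualitative goal described in the text: the two costs evolve together across the calibration loops, with the wavelength term kept dominant.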

Table D.1.

Numerical values of the parameters and hyperparameters used in the different algorithms.

All Tables

Table 1.

Estimation of the chromatic slope ξ of the linear law linking the diagonal s of the diffracted square pattern with the wavelength λ, and the corresponding relative error on the spectral calibration σλ, for the different datasets and their acquisition dates, with the proposed method and, for the first acquisition date, with the SPHERE reduction pipeline (RP, Delorme et al. 2017).

Table 2.

Comparison between the S/N of the innermost planet at large wavelengths for the SPHERE reduction pipeline (RP, Delorme et al. 2017) and the proposed method (PIC).

Table A.1.

Estimation of the coefficients for the two datasets, when accounting for the median absolute deviation of the empirical variances (weighted least squares) or not (standard least squares).

Table C.1.

Estimation of the shift δk in pixels to apply on the coefficients cκ for the two datasets.

Table D.1.

Numerical values of the parameters and hyperparameters used in the different algorithms.

All Figures

Fig. 1.

Forward model Mx = PICx of an integral field spectrograph with the example of the SPHERE-IFS instrument (b). The telescope forms an image of the targeted object x (a,d) on the instrument that projects a spectrum of each of its spatial elements onto the sensor (c,g). The low-crosstalk BIGRE design of the IFU (b) proposed by Antichi et al. (2009) is based on two hexagonal lenslet arrays whose effect is first modelled as a convolution C by their pupil shape, averaging the light received by a single lenslet (d → e). An interpolation I then accounts for the sampling performed by the first array of lenslets of the IFU (red) at the lenslet positions (green dots) (e → f). Finally, their spectrum is dispersed by prisms on the sensor, corresponding to a sparse matrix P linking the different wavelengths seen by a given lenslet to the location of their projection on the detector (f → g). Photo Credit: Lucasfilm Ltd.

Fig. 2.

Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Ganymede (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩λ of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1018 nm and λ = 1556 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f).

Fig. 3.

Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Io (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩λ of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1068 nm and λ = 1353 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f).

Fig. 4.

Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the HR 8799 dataset from a single IFS exposure. The proposed method removes artificial background structures. a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1068 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d).

Fig. 5.

Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the RY lup dataset from a single IFS exposure. The proposed sky background estimation removes the host star remnant and the proposed reconstruction algorithm is robust to aberrant measurements (see Fig. 6). a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1165 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d).

Fig. 6.

Robust defective-pixel identification on the sensor region corresponding to the region of interest of Figs. 5 and 7b. In addition to the known defective pixels (green), the robust penalisation can flag additional unusable pixels in a given acquisition that are blinking (red) or hit by an energetic particle (blue). a,b: raw data (a) and intensity (b) predicted by the model (colour bar: adu). c: equivalent weights obtained by the robust penalisation (cubic stretch). d: new map of the unconsidered pixels. e–h: zooms on (a–d).

Fig. 7.

Residuals of the model on the HR 8799 (on the left column) and the RY lup (on the right column) datasets (square root stretch, colour bar: adu). a,b: raw data. c–f: zooms on the regions framed in (a,b). g–j: model prediction on the same regions of interest. k–n: residuals on the same regions of interest. o,p: statistics of the residuals. For comparison, Gaussian profiles of standard deviation σ = 47 adu (HR 8799) and σ = 23 adu (RY lup) are plotted. The region of interest framed in orange in (b) is the region detailed in Fig 6.

Fig. 8.

Study of the chromatic scaling of the diffraction images of the host star for the two datasets. a: coloured projection of the HR 8799 reconstructed hyperspectral cube (cubic root stretch of the intensity). b: slice of the HR 8799 reconstructed hyperspectral cube (λ = 1486 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). The five black dots represent the fitted positions of the four images of the star and of the centre of the fitted square (dashed lines). c: linear fits of the square diagonals according to the wavelength for each dataset at two different times in the night with the proposed PIC method and, for the first time in the night, with the SPHERE reduction pipeline (RP, Delorme et al. 2017). d: fitted positions of the squares at each wavelength compared to the estimation of the star position. Only the wavelengths whose position falls inside the grey circle of 0.4 voxels radius are used to fit the linear laws.

Fig. 9.

Comparison between a TLOCI post-processing of the SPHERE reduction pipeline (Delorme et al. 2017) (left column) and the proposed PIC method (right column) on the HR 8799 dataset. The contrast of the planet closest to the star is increased by the PIC method and several false positives are avoided. a–d: monochromatic images of the post-processed hyperspectral cube and planet detection S/N (square root stretch of the intensity, colour bar: arbitrary unit) at λ = 972 nm (a,b) and λ = 1636 nm (c,d). e,f: maps of planet detection S/N combining all wavelengths (red circles: true positives, blue boxes: false positives with the SPHERE pipeline, colour bar: S/N).

Fig. 10.

Coloured projection of the post-processing of the SPHERE reduction pipeline (Delorme et al. 2017) (a) and of the proposed PIC method (b) on the RY lup dataset (cubic root stretch of the intensity).

Fig. A.1.

Estimation of the temporal background flux on the RY lup dataset, with the shutter opened (cubic stretch, colour bar: adu s−1). b–e: regions of interest of (a).

Fig. A.2.

Illustration of the remnant signal suppression from the sky background acquisitions (RY lup dataset, colour bar: adu s−1). a,b: averaged sky background fluxes ⟨fbg/sa. c: previous acquisition flux fpa. d: corrected sky background flux . e: estimated remnant signal in (c).

Fig. A.3.

Estimation of the flat-field correction factor on the RY lup dataset. b–e: regions of interest of (a).

Fig. A.4.

Identified defective pixels in the RY lup dataset. White: Pixels out of the lenslet array field. Green: pixels flagged by the median filter on the dark acquisitions. Red: pixels flagged by their anomalous variance in the dark acquisitions. Blue: pixels flagged by the median filter on the flat-field acquisitions. Dark blue: pixels flagged by their anomalous variance in the flat-field acquisitions. Grey: unflagged pixels. b–e: regions of interest of (a).

Fig. A.5.

Estimation of the noise model for the HR 8799 (a) and RY lup (b) datasets. The model is fitted on the median fluxes and the median variances of the different calibration files: the dark currents with the shutter closed (red) and open (green) and the flat-fields (blue). The error bars are obtained by the median absolute deviation on all the sensor pixels for each acquisition set. The grey law is fitted without taking the errors into account. The black law is fitted by accounting for the errors. The red and blue insets are zooms on regions of low incident fluxes.

Fig. C.1.

Calibration of the spectral projection for the HR 8799 dataset (square root stretch, colour bar: adu). a: averaged wavelength calibration acquisition yw. b: averaged spectral flat-field calibration acquisition ys. c: zoom of (a). The found positions for the calibration wavelengths are highlighted by coloured dots. The black circles are proportional to the size of the Gaussian patterns. d: simulation of the wavelength calibration by the projection model. e: zoom of (b). The positions of the wavelengths simulated in the forward model are highlighted by coloured dots. f: simulation of the spectral flat by the projection model. g: residuals of the first estimation of the wavelength calibration. h: final residuals of the wavelength calibration. i: residuals of the first estimation of the spectral flat calibration. j: final residuals of the spectral flat calibration. k: histogram of the residuals of the spectral flat calibration along the iterations. For comparison purposes, a Gaussian profile of standard deviation σ = 175 adu is plotted.

Fig. C.2.

Wavelength calibration of the model on the HR 8799 (YH-mode, on the left panel) and the RY lup (YJ-mode, on the right panel) datasets (colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩λ of the reconstructed hyperspectral cube. c,d: slice of the reconstructed hyperspectral cube (λ = 1314 nm for the HR 8799 dataset, λ = 1122 nm for the RY lup dataset). e: median intensity ℳθ(x) computed on each slice of the reconstructed hyperspectral cube for each dataset, compared with the reduction proposed by Vigan et al. (2015). The wavelengths of the calibration lasers are highlighted by dashed grey lines.

Fig. C.3.

a: transmission map γ of the IFU estimated for the HR 8799 dataset. b: fitted spectrum of the lamp for the two datasets HR 8799 (YH-mode) and RY lup (YJ-mode) along the iterations of Algorithm 4.

Fig. C.4.

Autocalibration for the HR 8799 dataset (square root stretch, colour bar: adu). a: zoom on a region of interest. b: model before the alignment. c: difference between the model prediction before and after the alignment. d: residuals after the alignment. e: statistics before (red) and after (black) the autocalibration step.
