A&A, Volume 635, March 2020
Article Number: A90
Number of pages: 21
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201936890
Published online: 13 March 2020
PIC: a data reduction algorithm for integral field spectrographs
Application to the SPHERE instrument
^{1} Université de Lyon, UJM-Saint-Etienne, CNRS, Institut d’Optique Graduate School, Laboratoire Hubert Curien UMR 5516, 42023 Saint-Etienne, France
e-mail: a.berdeu@gmail.com
^{2} Université de Lyon, Université Lyon 1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon, UMR 5574, 69230 Saint-Genis-Laval, France
e-mail: ferreol.soulez@univ-lyon1.fr
Received: 10 October 2019
Accepted: 8 January 2020
Context. The advent of large-format detectors has permitted the development of integral field spectrographs (IFSs) in astronomy. Spectral information for each spatial element of a two-dimensional field of view is obtained thanks to integral field units that spread the spectra on the 2D grid of the sensor.
Aims. Here we aim to solve the inherent issues raised by standard data-reduction algorithms based on direct mapping of the 2D + λ data cube: the spectral crosstalk due to the overlap of neighbouring spectra, the spatial correlations of the noise due to the re-interpolation of the cube on a Cartesian grid, and the artefacts due to the influence of defective pixels.
Methods. The proposed method, Projection, Interpolation, and Convolution (PIC), is based on an “inverse-problems” approach. By accounting for the overlap of neighbouring spectra as well as the spatial extension in a spectrum of a given wavelength, the model inversion reduces the spectral crosstalk while deconvolving the spectral dispersion. Considered as missing data, defective pixels undetected during the calibration are discarded on-the-fly via a robust penalisation of the data fidelity term.
Results. The calibration of the proposed model is presented for the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE). This calibration was applied to extended objects as well as coronagraphic acquisitions dedicated to exoplanet detection or disc imaging. Artefacts due to badly corrected defective pixels or artificial background structures observed in the cube reduced by the SPHERE data reduction pipeline are suppressed while the reconstructed spectra are sharper. This reduces the false detections by the standard exoplanet detection algorithms.
Conclusions. These results show the pertinence of the inverse-problems approach to reduce the raw data produced by IFSs and to compensate for some of their imperfections. Our modelling forms an initial building block necessary to develop methods that can reconstruct and/or detect sources directly from the raw data.
Key words: techniques: image processing / techniques: imaging spectroscopy / methods: numerical
© A. Berdeu et al. 2020
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. General context
The pertinence of single-field spectroscopy has been demonstrated in numerous fields in astrophysics, such as stellar classification, chemical analysis, velocity measurement, magnetic field probing, and so on. In recent decades, a requirement has emerged to complement this method with increased sampling on extended objects. While the use of long-slit spectrographs is possible by scanning the field of view in one direction, this technique is time consuming and requires precise control of the position in the field being sampled.
An integral field spectrograph (IFS) combines this spatial sampling of an object with its spectral information resolved at different points of the field of view in a single exposure to obtain 2D + λ data. This new technique became available with the rise of large detectors. Indeed, while the 2D + λ information is three-dimensional, the detectors remain two-dimensional. The technique consists in using an integral field unit (IFU) coupled with a dispersing element to “unfold” the 3D spatio-spectral information onto the 2D surface of a sensor of sufficient size. Different unfolding techniques with dedicated optical components have been deployed in different kinds of IFSs: fibre bundles, lenslet arrays, and image slicers.
Firstly, Vanderriest (1980) proposed the use of an IFU composed of fibre bundles, possibly coupled with a lenslet array to maximise the flux entering the cores of the fibres, to conjugate positions in the field of view with a long slit that disperses their spectrum. Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) implements this design (Drory et al. 2015).
Secondly, to improve the spatial resolution and limit the light loss, Courtes (1982) proposed the concept of the Traitement Intégral des Galaxies par l’Étude de leurs Raies (TIGER, Bacon et al. 1995), which uses a grism behind an IFU composed of a lenslet array. Many IFSs are based on this principle, such as for example the Optically Adaptive System for Imaging Spectroscopy (OASIS), the Spectrographic Areal Unit for Research on Optical Nebulae (SAURON, Bacon et al. 2001), the SuperNovae Integral Field Spectrograph (SNIFS, Lantz et al. 2004), and the OH-Suppressing Infrared Integral Field Spectrograph (OSIRIS, Larkin et al. 2006).
Thirdly, to go beyond the limited spectral resolution of lenslet arrays, recent instruments based on image-slicing IFUs, as proposed by Bowen (1938), were developed, such as the Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI, Eisenhauer et al. 2003), the Multi Unit Spectroscopic Explorer (MUSE, Bacon et al. 2006), and the Near-Infrared Spectrograph (NIRSpec, Bagnasco et al. 2007).
These instruments provide a new window onto many astrophysical phenomena, such as planets and comets of the solar system, stellar jets, stellar population dynamics, protostars, galactic X-ray sources, the nuclei of galaxies, active nuclei, interacting galaxies (active galactic nuclei (AGNs) and starbursts), radio galaxies, quasar environments, gravitational lenses, and so on (Vanderriest et al. 1994).
With the development of extreme adaptive optics (Jovanovic et al. 2015), in the last few years we have witnessed the results of a new generation of IFSs, such as those incorporated into the Project 1640 IFS (Hinkley et al. 2011, Hale Telescope), the Gemini Planet Imager (GPI, Macintosh et al. 2008, Gemini South Telescope), the Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (SPHERE, Beuzit et al. 2019, Very Large Telescope; VLT), and the Coronagraphic High Angular Resolution Imaging Spectrograph (CHARIS, Groff et al. 2016, Subaru Telescope). These are based on lenslet arrays, which limit the non-common-path aberrations compared to slicer concepts. Coupled with a coronagraph, they are dedicated to high-contrast and high-resolution imaging of the surroundings of stars. These IFSs acquire a large number of monochromatic images simultaneously, which are then used to retrieve the spectral features of substellar companions, but also to calibrate and then reduce the impact of the speckle noise induced by the stellar leakage through the coronagraph. Such instruments consequently represent a powerful technique to detect and characterise exoplanets (Vérinaud et al. 2006) by achieving higher contrasts (Sparks & Ford 2002; Marois et al. 2014).
A common point of all these IFSs is their global optical scheme: the target object is sampled at specific points in the field of view, each point producing a spectrum that is dispersed onto the sensor. The raw data consequently consist of hundreds to tens of thousands of spectra rearranged compactly on one or several 2D detectors. Dedicated codes are therefore needed to reduce this large amount of 2D data into a spatio-spectral 2D + λ datacube exploitable for astrophysical analyses.
In this paper we describe PIC (Projection, Interpolation, and Convolution), a reduction technique of the 2D + λ object of interest based on a forward model of the instrument. The reduction performed by PIC amounts to solving the inverse problem of retrieving the 2D + λ object given the instrument model and the data. Such an “inverse-problems” approach is fairly general and can readily be adapted to any kind of IFS. In addition, PIC automatically detects damaged pixels and copes with them in a consistent way. Compared to existing methods, PIC produces 2D + λ reduced data of better quality with almost no artefacts. As shown by our work, this improved quality is also beneficial to post-processing methods.
Section 2 details the motivation behind the proposed method and presents the mathematical formalism of the forward model of an IFS instrument as well as the detector model. The inversion uses adequately chosen regularisations and constraints, as well as a robust penalisation to identify on-the-fly defective pixels missed during the calibration or cosmic rays hitting the detector during the science acquisitions.
In Sect. 3, PIC is applied to SPHERE-IFS data for two typical kinds of target: extended objects (Ganymede and Io, Sect. 3.1) and coronagraphic data (dedicated to exoplanet and disc detection and characterisation around HR 8799 and RY Lup; Sect. 3.2). Comparisons with the reductions performed with the SPHERE pipeline, as well as angular differential imaging (ADI) post-processing on the reduced datacubes (Sect. 3.5), highlight the benefits of the proposed approach.
Appendices A–D focus on the tuning of the method with the available SPHERE calibration files associated with the reduced science data. Particular attention is paid to the detector characterisation (Appendix A) and to the chromatic model of the spectral dispersion (Appendix B).
2. Proposed method
2.1. Motivation of the inverseproblems approach
Beyond calibration considerations, the main feature of data-reduction algorithms dedicated to IFSs is the mapping of the raw data measured on the 2D sensor to the targeted 2D + λ hyperspectral cube. Different approaches are implemented depending on the instrument.
A straightforward extraction consists in a direct read of the raw data. Indeed, whatever the optical design of the IFS (fibre bundle, slicer, or lenslet array), the spectra are generally dispersed along an axis of the 2D sensor. After a wavelength calibration step, it is possible to link, via the so-called “pixel association map” (SPHERE, Pavlov et al. 2008; Mesa et al. 2015; GPI, Perrin et al. 2016), or “pixel table” (SINFONI, Modigliani et al. 2007; MUSE, Weilbacher et al. 2012), each spectrum on the 2D sensor to a given position in the 2D + λ hyperspectral cube. Each spectrum is then independently extracted by integrating pixels in small segments perpendicular to the dispersion direction.
While this direct mapping is conceptually easy to grasp and computationally fast, it suffers from several limitations. In particular, pixels flagged as defective in the calibration step are replaced by interpolating the values of their neighbours on the sensor, which introduces spurious correlations in the reconstructed hyperspectral cube. Furthermore, many pixels not detected as defective during calibration are corrupted and lead to aberrant values in the reconstructed cube. In addition, the method does not account for the spectral crosstalk induced by the overlap of neighbouring spectra. Finally, in order to produce a hyperspectral cube, it is necessary to re-interpolate the spatial locations on a regular rectangular grid and according to a common spectral sampling. These interpolation operations introduce spatial and spectral correlations due to the linear combination of the spatially and spectrally closest measurements. These correlations are generally ignored by the downstream processing methods.
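The noise correlation introduced by re-interpolation can be illustrated numerically. The following toy sketch (hypothetical white noise, not SPHERE data) shows that linearly combining neighbouring measurements leaves adjacent samples correlated even though the raw noise was independent:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 2000

corr_raw, corr_interp = [], []
for _ in range(trials):
    noise = rng.normal(size=n)               # uncorrelated pixel noise
    # Linear interpolation onto a grid shifted by half a pixel,
    # as happens when resampling spectra onto a common grid.
    interp = 0.5 * (noise[:-1] + noise[1:])
    corr_raw.append(np.corrcoef(noise[:-1], noise[1:])[0, 1])
    corr_interp.append(np.corrcoef(interp[:-1], interp[1:])[0, 1])

# Raw neighbours are uncorrelated (mean ~0); interpolated neighbours
# inherit a lag-1 correlation of ~0.5 from the shared noise sample.
```

This is exactly the kind of correlation that downstream detection algorithms typically ignore when they assume independent noise.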
For spectrographs with high spectral resolution, such as Elodie (Baranne et al. 1996) and SpeX (Cushing et al. 2004), the general approach for the extraction of the spectra is the optimal method proposed by Horne (1986). Most of the IFS reduction pipelines (TIGER and OASIS, Bacon et al. 1995, and their successors, SAURON and SNIFS, Bacon et al. 2001; MaNGA, Law et al. 2016; and OSIRIS, Lockhart et al. 2019) also implement this technique. It is based on a chromatic model of the transverse profile of each spectrum. Combining this line-spread function with the measurement errors to weight the pixels, it solves the first of the previously mentioned problems by avoiding the interpolation of the defective pixels. It is also possible to use additional weighting coefficients to partially correct the effect of the crosstalk among neighbouring spectra (SPHERE, Pavlov et al. 2008). Nonetheless, this approach assumes that the pixel variance is known and that the defective pixels are correctly identified. Furthermore, this model of adjacent 1D line-spread functions does not account for the spectral response of the instrument, which acts along the direction of the spectral dispersion.
For the GPI instrument, Draper et al. (2014) proposed an improvement of this method, which was further refined by Brandt et al. (2017) for the CHARIS instrument. The general method comprises a χ²-based extraction of the spectra, previously proposed by Zimmerman et al. (2011) for the calibration of the Project 1640 IFS. Each spectrum is described by a super-resolved model combining wavelength-dependent point spread functions (PSFs) found by weighted least-squares fitting, with weights given by the inverse variance of each pixel. By setting their weights to zero, this approach can correctly deal with defective pixels. As the individual spectrum model can be computed with a wavelength sampling common to all the spectra, this method also solves the spectral interpolation issue and performs a deconvolution by the instrument line-spread function. Finally, the spectral crosstalk is reduced by running the extraction a second time, on data cleaned using the model obtained at the first estimation. Nonetheless, by extracting the spectra separately, this method does not address the previously raised problem of the spatial interpolations that are still needed to obtain a datacube on a regular grid.
In this work, we implement a method that goes a step further by jointly reconstructing the whole 2D + λ datacube using a general framework based on an inverse-problems approach to reduce IFS 2D raw data. In this approach, the reconstructed datacube x is the minimum of a global cost function:

x⁺ = arg min_{x ∈ 𝔻} [𝒟(y, M x, W) + ℛ(x)],   (1)
where 𝒟(y,M x,W) is the data-fidelity term that penalises how “far” the modelled intensity M x, which is predicted by the linear model of the instrument M for a given object x, is from the data y according to the confidence on the data, which is itself characterised by the precision matrix W. Further, ℛ(x) denotes the regularisation term implemented to enforce spatial and spectral priors on the hyperspectral cube x, and 𝔻 is the admissible domain for the spatio-spectral cube x to reconstruct (e.g. x ≥ 0). In terms of notation, the object x ∈ ℝ^{n1 × n2 × nλ} is a datacube in arbitrary units defined by the value x_{θ, ℓ} at each discrete element of this volume; each element is referred to as a voxel in the following. Each voxel has a 2D angular position of θ = (θ_{1},θ_{2}) ∈ ⟦1, n_{1}⟧×⟦1, n_{2}⟧ and a wavelength index of ℓ ∈ ⟦1, n_{λ}⟧. The raw data, denoted by y ∈ ℝ^{m1 × m2}, are the intensities in analog-to-digital units (adu), the same units as given by the detector, sampled at the 2D positions of the sensor pixels y_{k} indexed by k = (k_{1},k_{2}) ∈ ⟦1, m_{1}⟧×⟦1, m_{2}⟧.
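As a minimal numerical sketch of this structure (hypothetical toy shapes; the actual data-fidelity term is defined in Sect. 2.4), the weighted comparison between data and model, with discarded pixels carrying zero weight, can be written as:

```python
import numpy as np

def data_fidelity(y, model, w):
    """Weighted least-squares data-fidelity term: sum_k w_k (y_k - [Mx]_k)^2."""
    r = y - model
    return float(np.sum(w * r**2))

# Hypothetical 4x4 toy sensor.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 0.1, (4, 4))   # calibrated data (adu)
mx = np.ones((4, 4))               # forward-model prediction M x
w = np.ones((4, 4))                # precision = inverse variance
w[0, 0] = 0.0                      # a flagged defective pixel: zero weight

cost = data_fidelity(y, mx, w)
```

Note that, because its weight is zero, the flagged pixel contributes nothing to the cost whatever its value.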
2.2. Forward model
The critical point of any inverse-problems approach is the forward model M that predicts the data y for a given object x; in the present case, it links the 2D + λ hyperspectral cube of the object to the pixelated intensities measured by the sensor. Figure 1 shows the general optical scheme of an IFS, exemplified using the SPHERE instrument (SPHERE-IFS, Claudi et al. 2008), presented in Fig. 1b. It emphasises the different steps to convert the coloured image of the object presented in Figs. 1a,d to the spectral projection on the 2D sensor of each of its spatial elements, presented in Figs. 1c,g:

– C : ℝ^{n1 × n2 × nλ} → ℝ^{n1 × n2 × nλ}, a convolution operator: the entrance of the instrument is composed of an IFU array that performs a convolution C of the input signal by the entrance pupil of its sub-elements, as illustrated in Fig. 1e;
– I : ℝ^{n1 × n2 × nλ} → ℝ^{nh × nλ}, a sampling operator: the integration of the signal by the IFU induces a loss of its spatial information, performing its interpolation I on the positions of the n_{h} hexagonal lenslets, as illustrated in Fig. 1f;
– P : ℝ^{nh × nλ} → ℝ^{m1 × m2}, a sparse projection operator: the sampled coloured signal is dispersed over the sensor onto neighbouring spectra following the IFU array geometry, modelled by a projection matrix P. As the elementary monochromatic PSF of a single lenslet impacts only a small portion of the detector, kept as small as possible to reduce inter-spectra crosstalk, this projection matrix is sparse.
Fig. 1. Forward model M x = P I C x of an integral field spectrograph with the example of the SPHERE-IFS instrument (b). The telescope forms an image of the targeted object x (a,d) on the instrument that projects a spectrum of each of its spatial elements onto the sensor (c,g). The low-crosstalk BIGRE design of the IFU (b) proposed by Antichi et al. (2009) is based on two hexagonal lenslet arrays whose effect is first modelled as a convolution C by their pupil shape, averaging the light received by a single lenslet (d → e). An interpolation I then accounts for the sampling performed by the first array of lenslets of the IFU (red) at the lenslet positions (green dots) (e → f). Finally, their spectrum is dispersed by prisms on the sensor, corresponding to a sparse matrix P linking the different wavelengths seen by a given lenslet to the location of their projection on the detector (f → g). Photo Credit: Lucasfilm Ltd.
In the present work, C is a convolution by the hexagonal pupil shape of the lenslets and is performed by a simple component-wise multiplication in Fourier space. Here, I is a linear interpolation of the Cartesian grid of the hyperspectral cube on the lenslet positions, assumed to be identical to the spectra positions on the sensor given by the dispersion model. Finally, for a given interpolated spectrum sampled at the wavelengths of the reconstructed hyperspectral cube, the projection matrix P consists of a linear combination of axisymmetric Gaussian patterns whose position and size depend on the considered lenslet and wavelength. Considering a super-resolved model such as the one proposed by Brandt et al. (2017) would be straightforward in our framework but would require further calibration steps that are not available in the European Southern Observatory (ESO) calibration framework of SPHERE, as discussed in Appendix B. Thus, the global optical forward model of an IFS is linear and is synthesised by the following equation, using a compact matrix notation:

M x = P I C x.   (2)
This gives its name to the proposed method, referred to as PIC in the following. These operators are tuned from calibration exposures in different calibration steps described in Appendix B for the SPHERE instrument.
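A toy sketch of the linear chain M x = P I C x may help fix ideas (all dimensions and operators below are hypothetical placeholders; the calibrated SPHERE versions are described in Appendix B). The pupil convolution is applied in Fourier space, the sampling picks assumed lenslet positions, and a random sparse matrix stands in for the dispersion model:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy dimensions (hypothetical, far smaller than SPHERE's).
n1 = n2 = 8          # angular grid of the hyperspectral cube
n_lam = 3            # wavelength channels
nh = 4               # number of lenslets
m1 = m2 = 16         # sensor pixels

def C(x, kernel_ft):
    """Convolution by the lenslet pupil, as a product in Fourier space."""
    xf = np.fft.fft2(x, axes=(0, 1)) * kernel_ft[..., None]
    return np.real(np.fft.ifft2(xf, axes=(0, 1)))

def I(x, pos):
    """Sampling at the lenslet positions (nearest grid point here;
    the paper uses a linear interpolation)."""
    return x[pos[:, 0], pos[:, 1], :]            # shape (nh, n_lam)

def P(s, proj):
    """Sparse projection of the sampled spectra onto the sensor."""
    return (proj @ s.ravel()).reshape(m1, m2)

# Hypothetical operators: a flat 3x3 pupil kernel, 4 lenslet positions,
# and a random sparse matrix standing in for the calibrated dispersion.
kernel = np.zeros((n1, n2))
kernel[:3, :3] = 1.0 / 9.0
kernel_ft = np.fft.fft2(kernel)
pos = np.array([[1, 1], [1, 5], [5, 1], [5, 5]])
rng = np.random.default_rng(1)
dense = rng.random((m1 * m2, nh * n_lam))
proj = csr_matrix(dense * (rng.random(dense.shape) < 0.05))

x = rng.random((n1, n2, n_lam))                  # object cube
y_model = P(I(C(x, kernel_ft), pos), proj)       # M x = P I C x
```

Since each operator is linear, the composed model is linear as well, which is what makes the inversion of Eq. (1) tractable.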
We would like to mention here that one could use this model to inject artificial planet signals directly in the raw data rather than in the reduced hyperspectral cubes in order to provide an end-to-end characterisation of the IFS performance that includes the reduction and post-processing algorithms.
2.3. Detector model
Beyond the effect due to the optical elements of the IFS, the recorded raw signal y^{raw} presents additional contributions. Indeed, in practice, each pixel of the detector records only an affine transform of the modelled signal [M x]_{k} corrupted by additive noise:

y_{k}^{raw} = g_{k} [M x]_{k} + b_{k} + ϵ_{k},   (3)
where g_{k} ≃ 1 is a correction factor accounting for the variability of the quantum efficiency and of the electronic gain, as well as for the optical throughput (e.g. vignetting or masking by dust particles); b_{k} is a bias term accounting for a constant offset set by the detector and all the background signals that are proportional to the integration time (the dark current of the sensor, the thermal background of the instrument, and the contribution of the sky); and ϵ_{k} is a centred additive noise of variance v_{k}, which accounts for the detector readout noise of variance σ_{k}² at null flux and for the photon noise of the incident flux. Assuming that the forward model M x correctly predicts the intensity on the sensor in adu, the variance of the photon noise is proportional to the incident flux and the total pixel variance is

v_{k} = σ_{k}² + η_{k} (g_{k} [M x]_{k} + b_{k} − b_{k}^{det}),   (4)

where η_{k}, in adu, accounts for the unit conversion from the number of photons to adu, and b_{k}^{det} is the bias of the detector (see Eq. (A.1)) at pixel k.
Like any other inverse approach, the proposed method relies on precise modelling of the recorded raw signal y^{raw} to reliably retrieve all the information contained in the data. It is therefore mandatory to have good knowledge of the gain g, the bias b, and the noise parameters η_{k} and σ_{k}² for each pixel k. These parameters are estimated from calibration exposures in different calibration steps described in Appendix A for the SPHERE instrument.
After the calibration step, to perform the data reduction, the raw data are preprocessed, leading to the calibrated data y considered in Eq. (1):

y_{k} = (y_{k}^{raw} − b_{k}) / g_{k}.   (5)
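The detector model and pre-processing steps can be sketched numerically. All calibration maps below are hypothetical placeholders for the Appendix A products, and the photon-noise term is one plausible reading of the variance model:

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (8, 8)

# Hypothetical per-pixel calibration maps (placeholders for Appendix A).
g = rng.normal(1.0, 0.02, shape)      # gain correction g_k ~ 1
b = rng.normal(100.0, 1.0, shape)     # bias + backgrounds (adu)
sigma2 = np.full(shape, 4.0)          # readout-noise variance at null flux (adu^2)
eta = np.full(shape, 1.5)             # photon-to-adu conversion factor (adu)

flux = np.full(shape, 50.0)                        # modelled intensity [M x]_k (adu)
y_raw = g * flux + b + rng.normal(0.0, 2.0, shape) # affine detector model

y = (y_raw - b) / g                   # calibrated data
v = sigma2 + eta * flux               # total variance: readout + photon term (assumed form)
delta = np.ones(shape)                # mask of valid pixels
delta[0, 0] = 0.0                     # a pixel flagged as defective during calibration
w = delta / v                         # precision weights
```

The calibrated data y recover the modelled flux up to the noise, and defective pixels end up with zero weight (i.e. infinite variance) in the inversion.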
2.4. Robust penalisation for the data fidelity
The role of the data fidelity term 𝒟(y,M x,W) in Eq. (1) is to ensure that the retrieved object x produces via the forward model M an intensity that is as “close” as possible to the calibrated data y. In this section, we describe what we mean by “close”.
The precision matrix W is the inverse of the covariance of the errors due to noise and approximations of the model (see e.g., Mugnier et al. 2004). Assuming uncorrelated noise, this matrix is diagonal and is comprised of coefficients (or weights) w that are the inverse of the pixel variance: w_{k} = v_{k}^{−1}. These weights represent the confidence on each measured pixel k. Nonetheless, during the instrument calibration, some defective pixels, such as those that are hot or dead, can be identified and discarded. Their precision is set to w_{k} = 0, that is, their variance is considered infinite. We cope with these discarded pixels by introducing a mask δ defined by:

δ_{k} = 0 if pixel k is flagged as defective, and δ_{k} = 1 otherwise,   (6)
and rewrite the weighting coefficients w as:

w_{k} = δ_{k} / v_{k},   (7)
where the variance v_{k} of the kth pixel is given by Eq. (4).
Owing to the high number of photons per pixel in a frame, independent but non-homogeneous Gaussian noise can be assumed. The data fidelity is then defined as the sum of the weighted square residuals between the data and the predicted intensity:

𝒟(y, M x, W) = Σ_{k} w_{k} (y_{k} − [M x]_{k})².   (8)
Nonetheless, this method is not robust to outliers. Indeed, some defective pixels that have an erratic behaviour are missed during the sensor calibration. In addition, energetic particles may hit the detector during an exposure and impact several pixels.
To reduce the incidence of these erroneous measurements, robust penalisation approaches have been developed (Hogg 1979; Huber 1996). The general idea is to replace the quadratic penalisation of the residuals of Eq. (8) by a function ρ that is approximately quadratic around zero and that grows subquadratically for large values. Adapting Eq. (8) to account for a robust penalisation leads to:

𝒟(y, M x, W) = Σ_{k} s_{k}² ρ(√(w_{k}) (y_{k} − [M x]_{k}) / s_{k}),   (9)
where s_{k} is a scaling factor on the residuals that characterises how “far” from a normal Gaussian distribution the whitened residual statistics can be.
With s_{k} = 1 and ρ(r) = r^{2}, Eq. (9) is equivalent to the standard leastsquares cost of Eq. (8).
Several robust penalisation functions ρ exist in the literature for which the equivalent weight w_{ρ} of a least-squares minimisation can be computed (Holland & Welsch 1977). In the following, the Cauchy function is used:

ρ(r) = γ² log(1 + (r/γ)²),   (10)

with γ = 2.385.
Such a robust penalisation is pertinent in the case of IFS imaging since according to our forward model, a given voxel x_{θ, ℓ} has an impact on several regions of the detector. It is unlikely that these regions are all polluted by many defective pixels. This redundancy coupled with the robust penalisation avoids most artefacts due to outliers.
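A sketch of the Cauchy penalisation and its equivalent weight follows, with the normalisation chosen so that ρ(r) ≈ r² near zero, matching the least-squares limit of Eq. (9); the paper's exact normalisation may differ:

```python
import numpy as np

GAMMA = 2.385  # Cauchy tuning constant (Holland & Welsch 1977)

def rho(r):
    """Cauchy penalty: ~r^2 near zero, growing only logarithmically for outliers."""
    return GAMMA**2 * np.log1p((r / GAMMA)**2)

def w_rho(r):
    """Equivalent least-squares weight, normalised to 1 at r = 0."""
    return 1.0 / (1.0 + (r / GAMMA)**2)
```

Small residuals are thus penalised essentially quadratically, while a gross outlier sees its effective weight collapse towards zero instead of dominating the fit.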
2.5. Regularisations and constraints
There is no guarantee that the model M defined in Sect. 2.2 is a well-conditioned and full-rank matrix. This implies that minimising Eq. (9) may be an ill-posed and ill-conditioned problem leading to noise amplification. Introducing the regularisation function ℛ(x) in Eq. (1) is a means of coping with these issues by accounting for some prior knowledge about the observed object x (Titterington 1985). Two kinds of regularisation are implemented in the proposed inversion method.
A first regularisation favours sharp-edged objects to control the spatial continuity and smoothness of the reconstruction. Enforcing such an edge-preserving smoothness is done by encouraging the sparsity of spatial gradients (Rudin et al. 1992; Charbonnier et al. 1997). In addition, it can be expected that the edges of the reconstructed objects are co-localised across wavelengths. Similarly, a full spectrum at a given position is more likely than fragmented spectra spread over neighbouring positions. This prior about the spectral behaviour is implemented by means of a structured norm (Blomgren & Chan 1998):

ℛ_{2D}(x) = Σ_{θ} √( Σ_{ℓ} ( [D_{1} x]_{θ,ℓ}² + [D_{2} x]_{θ,ℓ}² ) + ϵ² ),   (11)
where D_{1} and D_{2} correspond to finite difference operators along the first and second spatial dimensions, respectively, and ϵ > 0 is a threshold corresponding to the smallest gradient at a sharp edge, which ensures that this hyperbolic approximation of the Euclidean norm is differentiable at zero. Because of the summation over ℓ under the square root, hyperspectral images with edges co-localised at all wavelengths are favoured.
A second regularisation ensures that the reconstructed spectra are smooth. Indeed, with the limited spectral resolution of the instruments, no extremely thin emission or absorption lines are expected to be reconstructed. This also prevents the appearance of the high-frequency noise mentioned by Draper et al. (2014) and Brandt et al. (2017) in the super-resolved reconstructed spectra. We chose to implement this spectral regularisation by the following quadratic penalty:

ℛ_{λ}(x) = Σ_{θ,ℓ} [D_{λ} x]_{θ,ℓ}²,   (12)
where D_{λ} is the finite difference operator along the spectral dimension.
The two chosen regularisations are combined in the term ℛ(x) in Eq. (1) with respective hyperparameters μ_{2D} and μ_{λ} that weight the priors relative to the data fidelity term:

ℛ(x) = μ_{2D} ℛ_{2D}(x) + μ_{λ} ℛ_{λ}(x).   (13)
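The two regularisers can be sketched with finite differences (the boundary handling and normalisation below are assumptions, and the hyperparameters are free):

```python
import numpy as np

def r_2d(x, eps=1e-3):
    """Edge-preserving spatial prior: spatial gradients grouped over wavelength
    under the square root, so edges co-localised at all wavelengths are favoured."""
    d1 = np.diff(x, axis=0, append=x[-1:, :, :])   # finite differences along theta_1
    d2 = np.diff(x, axis=1, append=x[:, -1:, :])   # finite differences along theta_2
    return float(np.sum(np.sqrt(np.sum(d1**2 + d2**2, axis=2) + eps**2)))

def r_lambda(x):
    """Quadratic spectral-smoothness prior on finite differences along wavelength."""
    return float(np.sum(np.diff(x, axis=2)**2))

def regularisation(x, mu_2d, mu_lam):
    """Combined prior R(x) = mu_2D * R_2D(x) + mu_lambda * R_lambda(x)."""
    return mu_2d * r_2d(x) + mu_lam * r_lambda(x)
```

A spatially flat cube costs only the ϵ floor of the hyperbolic norm, a spectrally flat cube costs nothing in the spectral term, and any isolated spike pays in both.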
Combined with the robust penalisation, the chosen regularisation helps to identify corrupted pixels that were not yet detected as such. Indeed, to obtain a good fit with an outlier, the estimated hyperspectral cube would necessarily involve a value at a given spatial location and wavelength that significantly differs from its spatial and spectral neighbours. Such a situation is penalised by the regularisation in order to reject outliers and obtain a more meaningful estimation.
In addition to the “loose” constraints imposed by the regularisation, two “tight” constraints are applied to x. Firstly, the spectra can only have positive intensities. Secondly, to ensure that the whole spectral extension is accounted for by the forward model, the reconstructed wavelengths go slightly beyond the instrument spectral sensitivity range. Thus, the extreme wavelengths are forced to be null. Combined with the previously defined regularisations, this forces the spectra to smoothly drop to zero at their edges.
In summary, the feasible domain 𝔻 is:

𝔻 = { x ∈ ℝ^{n1 × n2 × nλ} | x_{θ,ℓ} ≥ 0, ∀(θ,ℓ), and x_{θ,ℓ} = 0 for ℓ ∈ {1, n_{λ}} }.   (14)
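The two tight constraints amount to a simple projection onto this set; a sketch, assuming the extreme wavelengths are the first and last channels of the cube:

```python
import numpy as np

def project_onto_domain(x):
    """Projection onto the feasible set: clip negative intensities and zero
    the extreme wavelength channels (assumed first and last planes)."""
    x = np.maximum(x, 0.0)    # positivity (np.maximum returns a new array)
    x[:, :, 0] = 0.0          # first extreme wavelength forced to null
    x[:, :, -1] = 0.0         # last extreme wavelength forced to null
    return x
```

Bound-constrained solvers such as the one used in Sect. 2.6 enforce these constraints internally rather than by explicit projection, but the feasible set is the same.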
2.6. Reconstruction algorithm
After the calibration of the detector (to get a preliminary list of defective pixels and to determine η_{k}, σ_{k}², g_{k}, and b_{k} for each pixel k) and of the instrument (to build M = P I C), the reduction algorithm consists in recovering x by minimising the objective function defined in Eq. (1) while updating the factors δ_{k}. This is implemented by the iterative Algorithm 1.
Algorithm 1: PIC reduction loop
1: x ← 0 ⊳ Initialisation of the object
2: y ← (y^{raw} − b)/g ⊳ Pre-processing
3: y^{m} ← medfilter(y) ⊳ Initialisation of the model
4: for i from 1 to n_{it} do ⊳ n_{it} optimisations
5:   δ ← δ^{cal} ⊳ Initialisation of δ in w
6:   if w_{ρ}(r_{k}) < w_{th} then ⊳ Equivalent weight
7:     δ_{k} ← 0 ⊳ δ update in w
8:   w_{k} ← δ_{k}/v_{k} ⊳ w update
9:   x ← arg min_{x ∈ 𝔻} 𝒟(y, M x, W) + ℛ(x) ⊳ x update, w fixed
10:  y^{m} ← M x ⊳ Update of the model
11:  r_{k} ← √(w_{k}) (y_{k} − y_{k}^{m})/s_{k} ⊳ Weighted residuals
12: return x
Strictly speaking, as Eq. (7) shows, the weights w in Eq. (9) depend on the current value of x during the optimisation of the cost function of Eq. (1), which increases its complexity and raises potential convergence issues. To overcome this limitation, the reconstruction of x is subdivided into a loop of n_{it} optimisations. At a given iteration of the loop, the weighting coefficients are assumed to be constant, computed via Eq. (7) based on the forward model given by the solution of the previous iteration, y^{m} = M x.
At the initialisation, for the first estimation of the weighting coefficients w, the predicted model is approximated by a median filter of the preprocessed data y to diminish the influence of the defective pixels. If some of them have been detected during the calibration, the map δ is initialised with the flagged pixels δ^{cal}. Otherwise, it is set to δ_{k} = 1, for all k.
From the third loop onwards (i ≥ 3), the map of discarded pixels δ is updated using a threshold w_{th} on the equivalent weight w_{ρ} given by the robust penalisation applied to the residuals of the previous loop.
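The alternation between x updates with frozen weights and mask updates from the residuals can be sketched on a toy linear problem. A plain weighted least-squares solve stands in for the regularised, bound-constrained minimisation, the residuals are whitened by the pixel noise, and all numbers are hypothetical:

```python
import numpy as np

GAMMA = 2.385                        # Cauchy constant of the robust penalisation
rng = np.random.default_rng(3)

# Toy linear problem y = M x + noise with one gross outlier (a "hot pixel").
m, n = 100, 5
M = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = M @ x_true + rng.normal(0.0, 0.05, m)
y[7] += 10.0                         # defective pixel missed by the calibration

delta_cal = np.ones(m)               # pixels flagged as valid by the calibration
v = np.full(m, 0.01)                 # pixel variances (hypothetical)
w_th = 0.01                          # threshold on the equivalent weight
n_it = 4

delta = delta_cal.copy()
for i in range(n_it):
    if i >= 2:                       # from the third loop on, update the mask
        w_rho = 1.0 / (1.0 + (r / GAMMA)**2)
        delta = delta_cal * (w_rho >= w_th)
    w = delta / v                    # weights held fixed during the x update
    Mw = M * w[:, None]
    x = np.linalg.solve(M.T @ Mw, Mw.T @ y)   # stand-in for the regularised x update
    y_m = M @ x                      # update of the model
    r = (y - y_m) / np.sqrt(v)       # residuals whitened by the pixel noise (s_k = 1)
```

After a few loops the outlier is flagged (its equivalent weight collapses below the threshold) and the final estimate is computed from the clean pixels only.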
3. Results
SPHERE (Beuzit et al. 2019) is a second-generation VLT instrument optimised for high-contrast imaging. Its IFS is based on a new optical concept, BIGRE, proposed by Antichi et al. (2009) to overcome the limitations affecting a 3D spectrograph working in the diffraction-limited regime (Claudi et al. 2011). BIGRE is an evolution of the classical TIGER design, which was optimised for seeing-limited conditions in the SAURON instrument (Bacon et al. 1995, 2001). The microlenses composing the lenslet array of the IFU are smaller than the full width at half maximum (FWHM) of the projected PSF of the telescope, and therefore provide resolved images of the PSF and of the speckle patterns produced by the stellar leakage in the coronagraphic mode of the instrument. To limit the crosstalk between the lenslets, the BIGRE design improves the TIGER concept by adding a second lenslet array, as sketched in Fig. 1b.
The BIGRE SPHERE-IFS provides low-resolution spectroscopy (R ∼ 50) in the near-infrared (0.95−1.7 μm), an ideal spectral range for the detection of exoplanets and the characterisation of their features. Its IFU is composed of 145 × 145 lenslets and covers a field of view of about 1.8 × 1.8 square arcseconds (Claudi et al. 2008; Dohlen et al. 2016). The spatial dimension of the raw data taken on the Hawaii-2RG sensor is m_{1} × m_{2} = 2048 × 2048 pixels.
This section presents the results of the proposed PIC method on representative datasets of the SPHERE-IFS instrument, compared to the reduction obtained with the SPHERE reduction pipeline (Delorme et al. 2017) used to date. Two situations are considered: the direct imaging of extended objects, and the coronagraphic mode of SPHERE dedicated to exoplanet detection and characterisation. The output cubes of the SPHERE reduction pipeline are composed of n_{1} × n_{2} × n_{λ} = 291 × 291 × 39 voxels. The proposed reduction pipeline provides cubes of n_{1} × n_{2} × n_{λ} = 300 × 300 × n_{λ} voxels, with n_{λ} = 44 for the YH mode and n_{λ} = 47 for the YJ mode, which represents a ratio of approximately eight between the pitch of the angular positions θ of the lenslets in the reduced cube and the pitch of their positions k on the sensor in terms of pixels. An important component of our method is the calibration of the parameters of the forward model. We describe the methodology to estimate these parameters in Appendices A and B.
The code is implemented in Matlab^{TM} and uses the open-source GlobalBioIm framework (Soubies et al. 2019). The optimisation problem of Eq. (1) is solved by the VMLM-B algorithm (Thiébaut 2002), a limited-memory quasi-Newton method with BFGS updates (Nocedal 1980) that handles bound constraints. The version of the code used in this paper can be found at https://doi.org/10.5281/zenodo.3585632. All the data and reductions presented in this paper can be downloaded at https://doi.org/10.5281/zenodo.3585656.
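For readers without access to VMLM-B, the same kind of bound-constrained quasi-Newton minimisation can be sketched with SciPy's L-BFGS-B on a toy version of the problem (the objective below is a hypothetical stand-in, not the PIC cost):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the PIC objective: weighted least squares plus a
# quadratic smoothness penalty, minimised under the positivity bound x >= 0.
rng = np.random.default_rng(4)
m, n = 30, 10
M = rng.normal(size=(m, n))
x_true = np.abs(rng.normal(size=n))
y = M @ x_true + rng.normal(0.0, 0.01, m)
mu = 1e-3                            # smoothness hyperparameter (arbitrary)

def cost_and_grad(x):
    """Return the objective and its gradient, as quasi-Newton solvers require."""
    res = M @ x - y
    d = np.diff(x)
    f = np.sum(res**2) + mu * np.sum(d**2)
    g = 2.0 * M.T @ res
    g[:-1] -= 2.0 * mu * d           # gradient of the finite-difference penalty
    g[1:] += 2.0 * mu * d
    return f, g

opt = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * n)
```

Like VMLM-B, L-BFGS-B only needs the objective value and its gradient, and enforces the positivity bound directly rather than by projection after the fact.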
3.1. Extendedobject imaging
Although SPHERE-IFS is mainly used in its coronagraphic mode for exoplanet searches, it is also suitable for the observation of extended objects (see e.g., Hanus et al. 2017; King et al. 2019). To highlight the pertinence of the proposed method for the imaging of such targets, we reduced observations of two Jovian moons with the SPHERE reduction pipeline (Delorme et al. 2017) and with the proposed PIC method: Ganymede (60.A-9372 – 1 Feb. 2015, 298th out of 300 acquisitions) and Io (60.A-9357 – 6 Dec. 2014, 2nd out of 8 acquisitions), both acquired in YH mode. The results are given in Figs. 2 and 3.
Fig. 2. Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Ganymede (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩_{λ} of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1018 nm and λ = 1556 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f). 
Fig. 3. Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Io (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩_{λ} of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1068 nm and λ = 1353 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f). 
For each reduced cube, spectra are extracted along three positions θ of interest. For comparison, the transmission t_{atm} of the Earth’s atmosphere, computed via the software “HITRAN on the Web” proposed by Rothman et al. (2013), is given in green along with the extracted spectra.
The colour coding in Figs. 2a,b and 3a,b is as follows. The colour intensity reflects the spectral average ⟨x⟩_{λ}, the hue indicates the wavelengths that deviate from this average, and the saturation represents the magnitude of this deviation.
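This colour-coding recipe can be sketched as follows (an illustrative reimplementation, not the authors' exact mapping; the `coloured_projection` helper and its HSV convention are assumptions):

```python
import numpy as np

def coloured_projection(cube, wavelengths):
    """Map a (n_lambda, ny, nx) cube to an HSV image: value = spectral mean,
    hue = wavelength of the largest deviation from that mean (blue for short
    wavelengths, red for long ones), saturation = relative magnitude of this
    deviation.  Illustrative recipe only."""
    mean = cube.mean(axis=0)                         # spectral average <x>_lambda
    dev = cube - mean                                # deviation per spectral channel
    k = np.abs(dev).argmax(axis=0)                   # channel deviating the most
    lam = wavelengths[k]
    frac = (lam - wavelengths[0]) / (wavelengths[-1] - wavelengths[0])
    hue = (1.0 - frac) * 2.0 / 3.0                   # long lambda -> 0 (red), short -> 2/3 (blue)
    sat = np.clip(np.abs(dev).max(axis=0) / (mean + 1e-12), 0.0, 1.0)
    val = mean / (mean.max() + 1e-12)
    return np.stack([hue, sat, val], axis=-1)

# tiny example: one pixel enhanced at the longest wavelength
wl = np.linspace(0.95, 1.7, 5)
cube = np.ones((5, 2, 2))
cube[-1, 0, 0] = 2.0
hsv = coloured_projection(cube, wl)
```

The returned HSV planes can be converted to RGB for display, for example with `matplotlib.colors.hsv_to_rgb`.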
The colour variations on Ganymede's surface shown in Figs. 2a,b suggest that the reconstructed cubes contain information on the spatial variations of the surface spectrum. While smooth regions are favoured, the proposed regularisation does not prevent spatial structures with small contrasts and spectral variations from being restored.
This spectral diversity can be seen in the extracted spectra in Fig. 2g. The black spectrum is close to the average spectrum of the reduced cube. The red (resp. blue) spectrum is extracted in an area where the spectrum is attenuated (resp. enhanced) at the shorter wavelengths and enhanced (resp. attenuated) at longer wavelengths. This diversity is also visible in the two extracted slices, at λ = 1018 nm in Figs. 2c,d and at λ = 1556 nm in Figs. 2e,f, which show very different spatial structures. These spectral contrasts are less pronounced on Io's surface.
In terms of reconstruction quality, the cubes reduced by the SPHERE reduction pipeline exhibit vertical artefacts that are visible in the spectral projections in Figs. 2a and 3a and on the slices in Figs. 2c and 3c. At low fluxes, horizontal artefacts are also visible in the Io cube in Fig. 3e.
In addition to these large-scale structures, the reduced cubes produced by the SPHERE pipeline are corrupted by many artefacts due to defective pixels, most of them leading to dark voxels, as visible in the slices in Figs. 2c,e and 3c,e. These artefacts leave a strong distortion in the extracted spectra, as visible in the black spectra in Figs. 2g and 3g. A defective pixel on the sensor plane impacts several wavelengths in the cube reduction. This can also lead to misinterpretations, as in Fig. 3g where the absorption bands of the Earth's atmosphere appear shifted from their actual positions, for instance at ∼1100 nm for the blue spectrum and at ∼1350 nm for the red spectrum. With the proposed calibration and regularised reconstruction, these artefacts are corrected in the reduced cubes, as visible in Figs. 2b,d,f and 3b,d,f. The striped pattern has disappeared and no strong artefacts are visible. Moreover, qualitatively, the spatial regularisation reduces the noise in the reconstruction and improves the contrast of the geological formations, even at very low flux, as in Figs. 3e,f.
3.2. Coronagraphic imaging
SPHERE-IFS is mainly used for high-contrast observations with a coronagraph. To directly image an extrasolar planet, the light from its host star must be attenuated by several orders of magnitude. Associated with dedicated detection algorithms, SPHERE, like other instruments of its generation such as GPI and CHARIS, allows the spectral characterisation of the imaged exoplanets at low resolution (R ∼ 30−50) and typically achieves planet-to-star contrast ratios down to 10^{−6} at 0.2″ angular separation (see e.g., Currie et al. 2015; Macintosh et al. 2015; Ruffio et al. 2018; Mesa et al. 2019).
To illustrate the present method when SPHERE-IFS operates in its coronagraphic mode, we considered two datasets: one on the star HR 8799, which hosts a cortege of exoplanets (Marois et al. 2008) and was observed in YH mode (095.C-0298 – 4 Jul. 2015, 38th out of 56 acquisitions), and one on the star RY lup, which hosts a bright transition disc (Langlois et al. 2018) and was observed in YJ mode (097.C-0865 – 15 Apr. 2016, 50th out of 80 acquisitions). A comparison with the cubes reduced by the SPHERE reduction pipeline is given in Figs. 4 and 5.
Fig. 4. Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the HR 8799 dataset from a single IFS exposure. The proposed method removes artificial background structures. a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1068 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d). 
Fig. 5. Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the RY lup dataset from a single IFS exposure. The proposed sky background estimation removes the host star remnant and the proposed reconstruction algorithm is robust to aberrant measurements (see Fig. 6). a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1165 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d). 
The reduced HR 8799 cube is dominated by the speckles produced by the stellar leakage. The coloured spectral projection ⟨x⟩_{λ} emphasises that this is a chromatic effect, with the speckle structure scaling with the wavelength, from the shorter wavelengths represented in blue to the longer wavelengths represented in red. Without further processing, no exoplanet is visible. On the contrary, the reduced RY lup cube is dominated by the signal of the accretion disc, which is visible at all wavelengths as a well-contrasted structure. Contrary to the speckles, the location of an object imaged by the IFS does not scale with the wavelength, and the disc thus appears white.
Concerning the reduction quality, conclusions similar to those drawn for extended objects apply. The HR 8799 cube presents the same striped artefacts, which are corrected by the proposed method. The strong noise in the RY lup cube, due to the short exposure time, is smoothed by the regularisation of the proposed method.
The zooms of Figs. 5a,c given in Figs. 5e,f highlight the necessity of correcting the “remnant signal” presented in Fig. A.2. The subtraction of this unwanted signal, a ghost of a previous high-dynamic-range acquisition on the sensor, during the sky background correction step produces the artificial dark spot visible in the cube reduced by the standard method. With the proposed method, this region is correctly reconstructed.
These same zooms show the effectiveness of the robust penalisation in discarding defective pixels or pixels hit by energetic particles, based on the spatial and spectral regularisations of the reconstructed cube. Indeed, as visible in Figs. 6a,e, such a particle leaves a long trace on the sensor. Looking at the equivalent weights w_{ρ} in Figs. 6c,g, these aberrant measurements are clearly identified as detector pixels with very low statistical weights, below the threshold w_{th}, and can be discarded by the reduction procedure. They are identified in blue in Figs. 6d,h. In addition, defective pixels missed during the calibration are flagged in red among the already discarded pixels shown in green.
Fig. 6. Robust defectivepixel identification on the sensor region corresponding to the region of interest of Figs. 5 and 7b. In addition to the known defective pixels (green), the robust penalisation can flag additional unusable pixels in a given acquisition that are blinking (red) or hit by an energetic particle (blue). a,b: raw data (a) and intensity (b) predicted by the model (colour bar: adu). c: equivalent weights obtained by the robust penalisation (cubic stretch). d: new map of the unconsidered pixels. e–h: zooms on (a–d). 
The comparison between the data in Figs. 6a,e and the intensity predicted by the model in Figs. 6b,f shows that coupling the redundancy of the hyperspectral information on an IFS field with an adequate regularisation allows the estimation of the signal even under high levels of noise.
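The flagging logic described above can be sketched as follows, assuming a Cauchy-type robust penalty whose equivalent IRLS weight is w(r) = 1/(1 + (r/s)²); the scale `s`, the threshold `w_th`, and the synthetic residuals are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

# Equivalent weights of a robust penalty: with rho(r) = (s^2/2) log(1 + (r/s)^2),
# the IRLS weight of a residual r is w(r) = rho'(r)/r = 1 / (1 + (r/s)^2).
# Pixels whose weight falls below a threshold w_th are flagged as unusable.
rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 20.0, size=(32, 32))   # synthetic residuals in adu
residuals[5, 5:20] = 2000.0                        # trace left by an energetic particle

s = 3.0 * 20.0                                     # soft scale ~ 3 sigma (illustrative)
w = 1.0 / (1.0 + (residuals / s) ** 2)             # equivalent weights in (0, 1]
w_th = 0.1
flagged = w < w_th                                 # newly discarded pixels
```

With these values a pixel is flagged only when its residual exceeds 3s, i.e. nine standard deviations, so nominal pixels are essentially never discarded.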
3.3. Statistics of the residuals
The model prediction is further compared with the raw data in Fig. 7 for the two datasets. The zooms in Figs. 7d,h,l and 7f,j,n correspond to regions of interest previously identified during the calibration because of their high density of defective pixels (Fig. A.4d) or because of the presence of a dust particle (Fig. A.4c).
Fig. 7. Residuals of the model on the HR 8799 (left column) and the RY lup (right column) datasets (square root stretch, colour bar: adu). a,b: raw data. c–f: zooms on the regions framed in (a,b). g–j: model prediction on the same regions of interest. k–n: residuals on the same regions of interest. o,p: statistics of the residuals. For comparison, Gaussian profiles of standard deviation σ = 47 adu (HR 8799) and σ = 23 adu (RY lup) are plotted. The region of interest framed in orange in (b) is the region detailed in Fig. 6. 
Despite the numerous discarded pixels in the raw data flagged in Figs. 7d,f, the algorithm predicts a model, as shown in Figs. 7h,j, where no structure is visible in the residuals displayed in Figs. 7l,n, except for the defective pixels and the pixels masked by a dust particle.
The zooms in Figs. 7c,g,k and 7e,i,m focus on regions closer to the coronagraph mask, where the flux is the highest. No structure is visible in the residuals of Fig. 7m. On the contrary, at the limit of the coronagraph mask in Fig. 7k, vertical blue and red structures can still be discerned. This is caused by a slight lateral shift of the spectra in the raw data compared to the positions predicted by the calibrated forward model. This phenomenon is also visible in the reduction of the star PSF acquisitions (not presented here). In these two cases, the object inducing these shifts is a star, that is, an unresolved and coherent object that produces stronger coherent crosstalk effects, which are not totally suppressed by the BIGRE design, as described by Antichi et al. (2009). These effects cannot be estimated from the calibration acquisitions or accounted for by the forward model, which considers only incoherent crosstalk.
Figures 7o,p compare the histogram of the raw data (red), the histogram of the residuals between the data and the model prediction (blue), and a Gaussian profile (dashed black). The statistics of the raw data are correctly matched by the statistics of the model prediction. The residuals have approximately Gaussian distributions.
3.4. Image registration and pixel scale calibration
In coronagraphic mode, the centre of the field of view is given by the position of the masked star. In order to determine the star position behind the coronagraph’s mask, specific acquisitions are obtained by applying a periodic phase pattern on the deformable mirror of the instrument. This induces four firstorder diffraction patterns of the masked star as presented in Fig. 8b. The star is at the centre of the square formed by these four satellite images, as presented by the black dots and dashed lines on the figure.
Fig. 8. Study of the chromatic scaling of the diffraction images of the host star for the two datasets. a: coloured projection of the HR 8799 reconstructed hyperspectral cube (cubic root stretch of the intensity). b: slice of the HR 8799 reconstructed hyperspectral cube (λ = 1486 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). The five black dots represent the fitted positions of the four images of the star and of the centre of the fitted square (dashed lines). c: linear fits of the square diagonals according to the wavelength for each dataset at two different times in the night with the proposed PIC method and, for the first time in the night, with the SPHERE reduction pipeline (RP, Delorme et al. 2017). d: fitted positions of the squares at each wavelength compared to the estimation of the star position. Only the wavelengths whose position falls inside the grey circle of 0.4 voxels radius are used to fit the linear laws. 
With the SPHERE reduction pipeline, the centre of the field of view is determined after the reduction of all the acquisitions. Once the centre is known, all the reduced cubes are reinterpolated to centre the star on a given pixel. With the proposed method, this shift between the centre of the field and the centre of the IFS can be taken into account by the forward model. This is done by adapting the sampling positions of the interpolation operator I in the cube x. By construction, the reduced cube is then centred on the host star. This avoids any further interpolation of the reduced data, which would imply a slight loss in spatial resolution and stronger noise correlations.
In the proposed method, the centre of the field of view is determined from the centres of the square diffraction patterns fitted at each wavelength of the cube. To be robust to outliers, we first take the centroid of all the square centres and then iteratively take the centroid of the subset of square centres that lie within 0.4 voxels (≃2.6 mas) of the previous centroid. The results are shown in Fig. 8d for the two datasets and for two different acquisitions in the same night. Most of the centres are within 0.25 voxels (≃1.7 mas) of the estimated centre. No noticeable difference can be seen in the estimated parameters according to the dataset or the acquisition date.
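This iterative centring step can be sketched in a few lines (the 0.4 voxel radius is the value quoted above; the `robust_centroid` helper and the synthetic square centres are illustrative):

```python
import numpy as np

def robust_centroid(points, radius=0.4, n_iter=10):
    """Iteratively re-centred centroid: start from the centroid of all points,
    then repeatedly average only the points within `radius` (in voxels) of the
    current estimate.  Sketch of the outlier-resistant centring step."""
    centre = points.mean(axis=0)
    for _ in range(n_iter):
        keep = np.linalg.norm(points - centre, axis=1) < radius
        if not np.any(keep):
            break
        centre = points[keep].mean(axis=0)
    return centre

# synthetic square centres at each wavelength: a tight cluster plus one outlier,
# e.g. a channel degraded by a telluric absorption band
rng = np.random.default_rng(2)
centres = np.full((44, 2), 150.0) + rng.normal(0.0, 0.1, (44, 2))
centres[0] = (153.0, 148.0)
c = robust_centroid(centres)
```

After a couple of iterations the outlying centre no longer influences the estimate, which settles on the cluster mean.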
Being a diffraction-driven phenomenon, these square patterns should scale with the wavelength. This can be seen in the four radial rainbow-coloured features in the coloured projection ⟨x⟩_{λ} in Fig. 8a. To study this scaling law, the square best fitting the four spots in the reconstructed cube is estimated at each wavelength. The slope ξ of the linear law s ≈ ξ λ that best fits the diagonal s as a function of the wavelength is given in Fig. 8c for the two datasets (blue and red) and for each acquisition date in the observation night (squares and circles). Table 1 summarises the fitted parameters for the two datasets and their two acquisition dates. The standard deviation of the chromatic slope is 0.02 voxel μm^{−1} for all datasets. This linear law is used to estimate a relative error on the spectral calibration, given by the sample standard deviation σ_{λ} of the relative residuals (s_{ℓ} − ξ λ_{ℓ})/(ξ λ_{ℓ}) computed on all the spectral channels ℓ.
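The chromatic-scaling fit itself amounts to a one-parameter least-squares fit through the origin; a short sketch on synthetic diagonals (the slope value, noise level, and wavelength grid are arbitrary assumptions):

```python
import numpy as np

# Fit of the chromatic scaling s = xi * lambda of the diffracted square pattern,
# and the relative residuals used as a spectral-calibration error proxy.
lam = np.linspace(0.95, 1.35, 47)                 # wavelengths in microns (illustrative)
rng = np.random.default_rng(3)
s = 100.0 * lam + rng.normal(0.0, 0.2, lam.size)  # diagonals in voxels, true slope 100

xi = np.sum(s * lam) / np.sum(lam**2)             # least-squares slope through the origin
rel_res = (s - xi * lam) / (xi * lam)             # relative residual per spectral channel
sigma_lambda = np.std(rel_res, ddof=1)            # relative spectral-calibration error
```

With a residual scatter of 0.2 voxels on diagonals of ∼100–135 voxels, the resulting relative error is well below 1%, consistent with the magnitude quoted in Table 1.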
Estimation of the chromatic slope ξ of the linear law linking the diagonal s of the diffracted square pattern with the wavelength λ and the corresponding relative error on the spectral calibration σ_{λ} for the different datasets and their acquisition dates with the proposed method and, for the first acquisition date, with the SPHERE reduction pipeline (RP, Delorme et al. 2017).
As for the centre determination, no noticeable difference can be seen in the estimated parameters according to the dataset or the acquisition dates.
The differences in the chromatic slope between the two pipelines are due to the choice of smaller voxels in the reconstructions of the proposed method compared to the SPHERE reduction pipeline. We note that this is not a measure of the spatial resolution of the reduced datacubes, which would otherwise differ between the SPHERE reduction pipeline and PIC. Indeed, the SPHERE-IFS instrument resolution is fixed and is given by the pitch of the lenslets in the array. With the chosen reduction parameters, there are roughly two voxels per lenslet.
The small standard deviations on the estimated chromatic slope (≈0.02 voxel μm^{−1} for all datasets) reflect the good spectral calibration of both PIC and the SPHERE pipeline. The residuals in Fig. 8c were used to estimate the relative spectral calibration errors given in Table 1 that are well under 1%. This supports the quality of the wavelength calibration procedure.
Nevertheless, the residuals in Fig. 8c worsen around λ = 1.4 μm. This corresponds to the main absorption band of the Earth's atmosphere seen in Figs. 2g and 3g. As a consequence, the signal-to-noise ratio available to estimate the positions of the four diffraction patterns is lower, which degrades the square fit.
On the edges of the spectral range, in addition to the limited reconstructed flux, the wavelengths are extrapolated beyond the calibration laser wavelengths, as mentioned in the description of the model calibration in Appendix B. This may explain why the residuals are also worse at the shortest and longest wavelengths.
3.5. Postprocessing examples
As seen above, residual starlight is present in the form of speckles, which limits the detection of faint point-source objects. Such residuals can be suppressed by adopting different strategies based, for example, on the polarisation of the residual speckle pattern (differential polarimetric imaging; DPI), on the assumption that it scales with the wavelength (spectral differential imaging; SDI), or on the assumption that it is stable in time (angular differential imaging; ADI). These strategies are implemented in dedicated postprocessing algorithms such as cADI (Marois et al. 2006), LOCI (Lafreniere et al. 2007), KLIP (Soummer et al. 2012), ANDROMEDA (Cantalloube et al. 2015), or PACO (Flasseur et al. 2018), which are used to detect the faint signal of the exoplanets hidden in this speckle noise.
We chose to run a Template Locally Optimized Combination of Images (TLOCI) detection algorithm (Marois et al. 2013, 2014) on the hyperspectral cubes of the 46 best dates (among the 56 acquisitions) reduced with the SPHERE reduction pipeline and the proposed method.
TLOCI is an evolved version of the LOCI algorithm based on ADI, where the intrinsic field-of-view rotation during the acquisitions is used to separate, at each wavelength of the reduced hyperspectral cube, the residual light from the star masked by the coronagraph from the light coming from off-axis sources, that is, the objects of interest. In addition, TLOCI uses a template for the spectrum of the sought-after point sources in the hyperspectral cube to combine the images along the spectral dimension.
Figure 9 gives the comparison between the results obtained with the SPHERE reduction pipeline and PIC. The global detection maps presented in Figs. 9e,f combine the S/N maps obtained at all wavelengths, as does the S/N derived for the detection of each planet given in Table 2. If σ^{+} is the list of the n_{λ} positive S/N values of a planet or of a position in the detection map, the global S/N is computed as follows: S/N_{glob} = √(n_{λ}) ⟨σ^{+}⟩_{λ},
Fig. 9. Comparison between a TLOCI postprocessing of the SPHERE reduction pipeline (Delorme et al. 2017) (left column) and the proposed PIC method (right column) on the HR 8799 dataset. The contrast of the planet closest to the star is increased by the PIC method and several false positives are avoided. a–d: monochromatic images of the postprocessed hyperspectral cube and planet detection S/N (square root stretch of the intensity, colour bar: arbitrary unit) at λ = 972 nm (a,b) and λ = 1636 nm (c,d). e,f: maps of planet detection S/N combining all wavelengths (red circles: true positives, blue boxes: false positives with the SPHERE pipeline, colour bar: S/N). 
Comparison between the S/N of the innermost planet at large wavelengths for the SPHERE reduction pipeline (RP, Delorme et al. 2017) and the proposed method (PIC).
where ⟨ ⋅ ⟩_{λ} denotes spectral averaging.
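Assuming the positive per-channel S/N values are combined as independent, equal-variance measurements, the global S/N reduces to √n_λ times their spectral average; a minimal sketch (the `global_snr` helper is an illustrative assumption):

```python
import numpy as np

def global_snr(snr_per_channel):
    """Combine per-wavelength S/N values, assuming independent channels of
    equal variance are summed: S/N_glob = sum(sigma_plus) / sqrt(n_lambda)
                                        = sqrt(n_lambda) * <sigma_plus>_lambda.
    Only the positive S/N values are kept."""
    sigma_plus = np.asarray(snr_per_channel, dtype=float)
    sigma_plus = sigma_plus[sigma_plus > 0]
    if sigma_plus.size == 0:
        return 0.0
    return np.sqrt(sigma_plus.size) * sigma_plus.mean()

snr = global_snr([2.0, 3.0, -1.0, 4.0])   # three positive channels
```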
At short wavelengths, as in Figs. 9a,b, the planet fluxes are too low for the planets to be seen. The suppression of the vertically striped artefacts with the proposed method leads to a better speckle removal by the TLOCI algorithm. This is especially striking close to the host star at longer wavelengths, as in Figs. 9c,d, where the S/N of the innermost planet is strongly increased. This S/N gain is observed at all wavelengths except the shortest (see Table 2). In addition, the improved reduction of the datacubes limits the artefacts of postprocessing algorithms such as TLOCI, removing potential false detections (blue squares) and enlarging the usable area of the field of view.
The reductions on the RY lup dataset are compared by averaging the 80 temporal acquisitions after rotating them to compensate for the time-varying field rotation. These averaged reduced data are presented in Fig. 10. As a consequence of the imposed angular correction, the speckle pattern rotates and is averaged out while the accretion disc remains fixed and is enhanced. The coloured projections show that the disc is globally white without notable spectral variations. As previously noted, the edge of the field of view is better reconstructed with the proposed method (absence of outliers).
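This derotate-and-average step can be sketched as follows (synthetic frames and rotation angles; `scipy.ndimage.rotate` stands in for the actual registration):

```python
import numpy as np
from scipy.ndimage import rotate

# Each temporal frame is rotated back by its field-rotation angle so that the
# sky is fixed, then the stack is averaged: a fixed off-axis source adds up
# coherently while pupil-fixed residuals would smear out.
rng = np.random.default_rng(4)
angles = np.linspace(0.0, 40.0, 8)                  # field rotation per frame (deg)
sky = np.zeros((64, 64))
sky[31:34, 49:52] = 1.0                             # small fixed off-axis source
frames = [rotate(sky, -a, reshape=False, order=1)   # instrument sees a rotated sky
          + rng.normal(0.0, 0.05, sky.shape)        # noise proxy
          for a in angles]

# compensate for the time-varying field rotation, then average
derot = [rotate(f, a, reshape=False, order=1) for f, a in zip(frames, angles)]
stack = np.mean(derot, axis=0)
```

In the averaged stack, the source survives at full contrast while the noise level drops by roughly the square root of the number of frames.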
Fig. 10. Coloured projection of the postprocessing of the SPHERE reduction pipeline (Delorme et al. 2017) (a) and of the proposed PIC method (b) on the RY lup dataset (cubic root stretch of the intensity). 
4. Discussion and conclusions
Here, we propose a new algorithm named PIC designed to reduce the raw data of an IFS in order to produce a 2D + λ cube. Based on an inverse problem approach implementing a robust penalisation, the proposed method directly reconstructs the full hyperspectral cube.
Our method amounts to solving an inverse problem, which provides a number of advantages over previous methods. The direct object extraction avoids any a posteriori interpolation of the extracted spectra on a Cartesian spatial grid or a common wavelength sampling. Modelled by the forward model, the spectral crosstalk between neighbouring spectra is reduced and the spectral response of the instrument is deconvolved in the reduction. This framework also provides a natural and consistent way to deal with corrupted measurements (e.g. defective pixels or cosmic particles) so as to avoid artefacts in the result.
Contrary to other methods, the proposed algorithm explicitly introduces a spatio-spectral regularisation. Our results demonstrate that this regularisation improves the quality of the reduced data by limiting the impact of the noise and of outlier pixels, which are automatically detected during the reduction.
In addition to this new reduction algorithm, we also describe an improved calibration procedure for the SPHERE-IFS instrument. A model of the statistics of the noise in the raw data provided by the sensor is obtained from the calibration files. Defective pixels are also flagged to be discarded in the reduction procedure. Polynomial laws for the positions and the dispersion are robustly fitted for each lenslet, providing a model that is as close as possible to the instrument physics. An auto-calibration step is added to correct for possible shifts between the calibration and the science acquisitions.
The reconstructed spectra, in particular from the calibration observations presented in Appendix B, show a slight improvement of the spectral resolution due to the spectral deconvolution performed by the proposed data-reduction method. The results on extended objects show that such a regularised method and careful calibration enhance the spatial contrast while reducing the impact of defective pixels along the spectral dimension. Some artefacts visible in some SPHERE reductions in the form of large-scale structures (vertical bands) are also suppressed.
Applied to coronagraphic acquisitions, the method provides reconstructed hyperspectral cubes of improved quality and with fewer artefacts, which in turn lead to detection maps with better separation between sources (like exoplanets) and background speckle. In the quest to detect very faint exoplanets by direct imaging, the proposed reconstruction may contribute to improving the achievable contrast.
The forward model could be further refined while keeping the same general structure in order to account for non-Gaussian microlens PSFs. Another refinement would be to account for the small spatial shifts of the spectra in the area that surrounds the coronagraph mask. However, this would require a more complex calibration procedure, with a step where a refinement is computed directly on the science data. Finally, the method should be tested and certified routinely on a large amount of data.
This work can be considered as a preliminary building block from which more accurate and more complete reconstruction algorithms can be built. Currently, only the IFS instrument is accounted for in the forward model. The next step is to include the telescope response via its chromatic PSF. This would allow an end-to-end deconvolution of the reduced cube directly from the raw data. Our model could also be included in detection or imaging algorithms to separate the stellar leakage from the off-axis objects in high-contrast imaging.
Acknowledgments
This work was performed within the framework of the LABEX PRIMES (ANR-11-LABX-0063) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). It has been supported by the (project DIAGHOLO). This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CeSAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon/CRAL, and supported by a grant from Labex OSUG@2020, within the program “Investissements d’Avenir” (ANR-10-LABX-56). All the reductions are based on public data provided by the ESO Science Archive Facility and acquired for different programs: 60.A-9372 (Fletcher L., 1 Feb. 2015), 60.A-9357 (Carter J., 6 Dec. 2014), 095.C-0298 (Beuzit J.-L., et al., 4 Jul. 2015) and 097.C-0865 (Beuzit J.-L., et al., 15 Apr. 2016). The authors would like to thank Gratton R., Dohlen K., and Delorme P. for fruitful discussions on the IFS principle and on the SPHERE reduction pipeline. We are grateful to Carter J. and Guiot P. for their end-user's view on the reduction of the Io and Ganymede observations, as well as to Vigan A. for sharing the data of Fig. C.2e. We are also thankful to the referee for their pertinent feedback.
References
 Antichi, J., Dohlen, K., Gratton, R. G., et al. 2009, ApJ, 695, 1042 [NASA ADS] [CrossRef] [Google Scholar]
 Bacon, R., Adam, G., Baranne, A., et al. 1995, A&AS, 113, 347 [NASA ADS] [Google Scholar]
 Bacon, R., Copin, Y., Monnet, G., et al. 2001, MNRAS, 326, 23 [NASA ADS] [CrossRef] [Google Scholar]
 Bacon, R., Bauer, S., Boehm, P., et al. 2006, Groundbased and Airborne Instrumentation for Astronomy, 6269, 62690J [NASA ADS] [CrossRef] [Google Scholar]
 Bagnasco, G., Kolm, M., Ferruit, P., et al. 2007, Cryogenic Optical Systems and Instruments XII, 6692, 66920M [CrossRef] [Google Scholar]
 Baranne, A., Queloz, D., Mayor, M., et al. 1996, A&AS, 119, 373 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Beuzit, J. L., Vigan, A., Mouillet, D., et al. 2019, A&A, 631, A155 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Blomgren, P., & Chan, T. 1998, IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, 7, 304 [Google Scholar]
 Bowen, I. S. 1938, ApJ, 88, 113 [NASA ADS] [CrossRef] [Google Scholar]
 Brandt, T. D., Rizzo, M. J., Groff, T. D., et al. 2017, J. Astron. Telesc. Instrum. Syst., 3, 048002 [Google Scholar]
 Cantalloube, F., Mouillet, D., Mugnier, L. M., et al. 2015, A&A, 582, A89 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Charbonnier, P., BlancFeraud, L., Aubert, G., & Barlaud, M. 1997, Trans. Image Process., 6, 298 [NASA ADS] [CrossRef] [MathSciNet] [PubMed] [Google Scholar]
 Claudi, R. U., Turatto, M., Gratton, R. G., et al. 2008, in Groundbased and Airborne Instrumentation for Astronomy II, Proc. SPIE, 7014, 70143E [Google Scholar]
 Claudi, R. U., Giro, E., Anselmi, U., et al. 2011, in Optical Design and Engineering IV, Proc. SPIE, 8167, 81671S [Google Scholar]
 Courtes, G. 1982, Int. Astron. Union Colloq., 67, 123 [Google Scholar]
 Currie, T., Cloutier, R., Brittain, S., et al. 2015, ApJ, 814, L27 [NASA ADS] [CrossRef] [Google Scholar]
 Cushing, M. C., Vacca, W. D., & Rayner, J. T. 2004, PASP, 116, 362 [NASA ADS] [CrossRef] [Google Scholar]
 Delorme, P., Meunier, N., Albert, D., et al. 2017, in SF2A2017: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, eds. C. Reylé, P. Di Matteo, F. Herpin, et al., 347 [Google Scholar]
 Dohlen, K., Vigan, A., Mouillet, D., et al. 2016, Groundbased and Airborne Instrumentation for Astronomy VI, 9908, 99083D [Google Scholar]
 Draper, Z. H., Marois, C., Wolff, S., et al. 2014, Groundbased and Airborne Instrumentation for Astronomy V, 9147, 91474Z [Google Scholar]
 Drory, N., MacDonald, N., Bershady, M. A., et al. 2015, AJ, 149, 77 [NASA ADS] [CrossRef] [Google Scholar]
 Eisenhauer, F., Abuter, R., Bickert, K., et al. 2003, Instrument Design and Performance for Optical/Infrared Groundbased Telescopes, 4841 [Google Scholar]
 Flasseur, O., Denis, L., Thiébaut, É., & Langlois, M. 2018, A&A, 618, A138 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Groff, T. D., Chilcote, J., Kasdin, N. J., et al. 2016, Groundbased and Airborne Instrumentation for Astronomy VI, 9908, 99080O [Google Scholar]
 Hanus, J., Marchis, F., Viikinkoski, M., Yang, B., & Kaasalainen, M. 2017, A&A, 599, A36 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Hinkley, S., Oppenheimer, B. R., Zimmerman, N., et al. 2011, PASP, 123, 74 [NASA ADS] [CrossRef] [Google Scholar]
 Hogg, R. V. 1979, Am. Stat., 33, 108 [Google Scholar]
 Holland, P. W., & Welsch, R. E. 1977, Commun. Stat. Theory Methods, 6, 813 [CrossRef] [Google Scholar]
 Horne, K. 1986, PASP, 98, 609 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Huber, P. 1996, Robust Statistical Procedures: Second Edition, CBMSNSF Regional Conference Series in Applied Mathematics (Society for Industrial and Applied Mathematics) [Google Scholar]
 Huber, P. J. 2011, Robust Statistics (Springer) [Google Scholar]
 Jovanovic, N., Martinache, F., Guyon, O., et al. 2015, PASP, 127, 890 [NASA ADS] [CrossRef] [Google Scholar]
 King, O. R., Fletcher, L. N., & Ligier, N. 2019, Compositional Mapping of Europa with VLT/SPHERE, EPSCDPS20195961, 13 [Google Scholar]
 Lafreniere, D., Marois, C., Doyon, R., Nadeau, D., & Artigau, E. 2007, ApJ, 660, 770 [NASA ADS] [CrossRef] [Google Scholar]
 Lagarias, J. C., Reeds, J. A., Wright, M. H., & Wright, P.E, 1998, J. Optim., 9, 112 [Google Scholar]
 Langlois, M., Pohl, A., Lagrange, A.M., et al. 2018, A&A, 614, A88 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Lantz, B., Aldering, G., Antilogus, P., et al. 2004, Optical Design and Engineering, 5249 [Google Scholar]
 Larkin, J., Barczys, M., Krabbe, A., et al. 2006, Ground-based and Airborne Instrumentation for Astronomy, 6269, 62691A [Google Scholar]
 Law, D. R., Cherinka, B., Yan, R., et al. 2016, AJ, 152, 83 [NASA ADS] [CrossRef] [Google Scholar]
 Lockhart, K. E., Do, T., Larkin, J. E., et al. 2019, AJ, 157, 75 [NASA ADS] [CrossRef] [Google Scholar]
 Macintosh, B. A., Graham, J. R., Palmer, D. W., et al. 2008, Adaptive Optics Systems, 7015, 701518 [NASA ADS] [CrossRef] [Google Scholar]
 Macintosh, B., Graham, J. R., Barman, T., et al. 2015, Science, 350, 64 [NASA ADS] [CrossRef] [Google Scholar]
 Marois, C., Lafreniere, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, ApJ, 641, 556 [Google Scholar]
 Marois, C., Macintosh, B., Barman, T., et al. 2008, Science, 322, 1348 [NASA ADS] [CrossRef] [PubMed] [Google Scholar]
 Marois, C., Correia, C., Véran, J.P., & Currie, T. 2013, Proc. Int. Astron. Union, 8, 48 [CrossRef] [Google Scholar]
 Marois, C., Correia, C., Galicher, R., et al. 2014, in Adaptive Optics Systems IV, Int. Soc. Opt. Photon., 9148, 91480U [NASA ADS] [CrossRef] [Google Scholar]
 Mesa, D., Gratton, R., Zurlo, A., et al. 2015, A&A, 576, A121 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Mesa, D., Langlois, M., Garufi, A., et al. 2019, MNRAS, 488, 37 [NASA ADS] [CrossRef] [Google Scholar]
 Modigliani, A., Hummel, W., Abuter, R., et al. 2007, ArXiv eprints [arXiv:astroph/0701297] [Google Scholar]
 Mugnier, L. M., Fusco, T., & Conan, J.M. 2004, J. Opt. Soc. Am. A, 21, 1841 [NASA ADS] [CrossRef] [Google Scholar]
 Nocedal, J. 1980, Math. Comput., 35, 773 [CrossRef] [Google Scholar]
 Pavlov, A., MöllerNilsson, O., Feldt, M., et al. 2008, in Advanced Software and Control for Astronomy II, Proc. SPIE, 7019, 701939 [NASA ADS] [CrossRef] [Google Scholar]
 Perrin, M. D., Ingraham, P., Follette, K. B., et al. 2016, in Ground-based and Airborne Instrumentation for Astronomy VI, Proc. SPIE, 9908, 990837 [Google Scholar]
 Rothman, L., Gordon, I., Babikov, Y., et al. 2013, J. Quant. Spectr. Rad. Transf., 130, 4, HITRAN2012 special issue [Google Scholar]
 Rudin, L. I., Osher, S., & Fatemi, E. 1992, Phys. D, 60, 259 [NASA ADS] [CrossRef] [Google Scholar]
 Ruffio, J.B., Mawet, D., Czekala, I., et al. 2018, AJ, 156, 196 [Google Scholar]
 Soubies, E., Soulez, F., McCann, M. T., et al. 2019, Inverse Probl., 35, 104006 [NASA ADS] [CrossRef] [Google Scholar]
 Soummer, R., Pueyo, L., & Larkin, J. 2012, ApJ, 755, L28 [NASA ADS] [CrossRef] [Google Scholar]
 Sparks, W. B., & Ford, H. C. 2002, ApJ, 578, 543 [NASA ADS] [CrossRef] [Google Scholar]
 Thiébaut, E. 2002, Proc. SPIE, 4847, 174 [NASA ADS] [CrossRef] [Google Scholar]
 Titterington, D. M. 1985, A&A, 144, 381 [NASA ADS] [Google Scholar]
 Vanderriest, C. 1980, PASP, 92, 858 [Google Scholar]
 Vanderriest, C., Bacon, R., Georgelin, Y., LeCoarer, E., & Monnet, G. J. 1994, Instrumentation in Astronomy VIII, 2198 [Google Scholar]
 Vérinaud, C., Hubin, N., Kasper, M., et al. 2006, Proc. SPIE, 6272, 62720M [CrossRef] [Google Scholar]
 Vigan, A., Gry, C., Salter, G., et al. 2015, MNRAS, 454, 129 [NASA ADS] [CrossRef] [Google Scholar]
 Weilbacher, P. M., Streicher, O., Urrutia, T., et al. 2012, Software and Cyberinfrastructure for Astronomy II, 8451, 84510B [CrossRef] [Google Scholar]
 Zimmerman, N., Brenner, D., Oppenheimer, B. R., et al. 2011, PASP, 123, 746 [NASA ADS] [CrossRef] [Google Scholar]
Appendix A: Detector calibration
Each SPHERE-IFS scientific acquisition comes with a set of calibration files described in Pavlov et al. (2008). These files are split into two subsets: one dedicated to the calibration of the sensor and the other to the calibration of the lenslet array, that is, of the spectral dispersion. In each configuration, a set of n_{a} acquisitions is performed for various exposure times τ ∈ ℝ^{nt}. The calibration files that we use in our calibration of the sensor are the following:

detector flatfield acquisitions y^{f} performed with an integrating sphere producing a uniform light in the optical system after the lenslet array for different light sources (lasers or broadband sources);

background acquisitions y^{bg} performed without any flux of interest reaching the sensor. The recorded signal is then mainly due to the thermal background. These measurements are performed in three different configurations: (a) y^{bg/c} ∈ ℝ^{m1 × m2 × nt × na} with part of the instrument optical path closed (i.e. same optical path as for the flatfield acquisitions, without the lenslet array); (b) y^{bg/o} ∈ ℝ^{m1 × m2 × nt × na} with opened optical path (i.e. complete optical path of the instrument from the entrance focus, including the lenslet array); (c) y^{bg/s} ∈ ℝ^{m1 × m2 × nt × na} with the shutter of the instrument opened, the telescope pointing at a dark region of the sky to record the thermal emission of the atmosphere.
In the following, we describe how these different images are combined in order to estimate the parameters that characterise the detector. We first introduce notations for generic operations on images:

⟨a⟩_{i} is the empirical mean of a along the dimension indexed by the letter i,

𝒱_{i}(a) is the empirical variance of a along the dimension indexed by the letter i,

ℳ_{i}(a) is the median of a along the dimension indexed by the letter i,

𝒜_{i}(a) = ℳ_{i}(|a − ℳ_{i}(a)|) is the median absolute deviation of a along the dimension indexed by the letter i. It is often used as a robust estimator of the standard deviation (Huber 2011).
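These operators map directly onto standard array reductions. The following Python/NumPy sketch illustrates them (the function names `mean`, `var`, `med`, and `mad` are ours, not part of the PIC pipeline); note the absolute value inside the median absolute deviation:

```python
import numpy as np

def mean(a, axis):
    """<a>_i : empirical mean along one dimension."""
    return np.mean(a, axis=axis)

def var(a, axis):
    """V_i(a) : empirical variance along one dimension."""
    return np.var(a, axis=axis)

def med(a, axis):
    """M_i(a) : median along one dimension."""
    return np.median(a, axis=axis)

def mad(a, axis):
    """A_i(a) = M_i(|a - M_i(a)|) : median absolute deviation."""
    m = np.median(a, axis=axis, keepdims=True)
    return np.median(np.abs(a - m), axis=axis)

# For Gaussian data, 1.4826 * mad(a) approximates the standard deviation
# while remaining insensitive to outliers such as defective pixels.
```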
The calibration procedures are illustrated using the two datasets HR 8799 and RY lup introduced in the main text.
A.1. Bias calibration
The bias term b_{k} in Eq. (3) is the sum of different contributions:
b_{k} = y^{b}_{k} + y^{dc}_{k} + y^{th}_{k} + y^{sky}_{k},
with y^{b} the bias of the detector, y^{dc} the dark current, y^{th} the thermal emission of the instrument and y^{sky} the thermal emission of the sky. All these contributions but the detector bias y^{b} are proportional to the exposure time. Hence, the bias in the raw data follows an affine law of the exposure time. The coefficients of this law can be estimated from the background calibration data:
y^{bg}_{k, a, t} = y^{b}_{k} + f^{bg}_{k} τ_{t} + ϵ_{k, a, t},
where f^{bg} is the background flux in adu s^{−1}, accounting for the dark current and the thermal emission, τ_{t} is the exposure time, and ϵ_{k, a, t} is the noise, assumed to be independent.
Due to the high level of instrumental thermal emission, the term f^{bg}_{k} τ_{t} is predominant even at short exposure times and the constant bias is negligible. In the following, we therefore assume that y^{b} = 0. The estimated flux f̂^{bg}_{k} at a given pixel k is then obtained by a simple linear fit of the averaged background calibration data q^{bg} = ⟨y^{bg}⟩_{a} ∈ ℝ^{m1 × m2 × nt}, weighted by their empirical precision w_{k, t} = 1/𝒱_{a}(y^{bg})_{k, t} (∀k, t):
f̂^{bg}_{k} = (∑_{t} w_{k, t} τ_{t} q^{bg}_{k, t}) / (∑_{t} w_{k, t} τ_{t}²).
Finally, in Eq. (5), the term b_{k} is estimated by scaling the fitted background flux by the exposure time τ^{exp}, using either the sky background estimate or the internal background estimate depending on whether the sky contribution must be removed or not.
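As a concrete illustration of this per-pixel fit (a weighted linear fit through the origin once y^{b} = 0 is assumed), the following NumPy sketch vectorises the estimation over all pixels. The function name and array layout are illustrative assumptions, not the actual pipeline code:

```python
import numpy as np

def fit_background_flux(q_bg, var_bg, tau):
    """Per-pixel background flux f_k [adu/s] from averaged backgrounds.

    q_bg, var_bg : (m1, m2, nt) mean and variance over the n_a acquisitions
    tau          : (nt,) exposure times
    Weighted least squares through the origin (detector bias assumed zero):
        f_k = sum_t w_kt tau_t q_kt / sum_t w_kt tau_t^2,  with w = 1/var.
    """
    w = 1.0 / var_bg                      # empirical precision weights
    num = np.sum(w * tau * q_bg, axis=-1)
    den = np.sum(w * tau**2, axis=-1)
    return num / den
```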
Figure A.1 shows the estimated flux for the RY lup dataset with the shutter opened. It is very similar to the estimated flux with the shutter closed (not presented here), meaning that the thermal background is mainly due to the internal components of the instrument. The figure also highlights some regions of the sensor where dust masks the thermal emission, such as in Figs. A.1b,c, and other regions with a high density of anomalous pixels, such as in Fig. A.1d.
Fig. A.1. Estimation of the temporal background flux on the RY lup dataset, with the shutter opened (cubic stretch, colour bar: adu s^{−1}). b–e: regions of interest of (a). 
A.2. Remnant signal
During a night-time calibration procedure, it happens that the sky background y^{bg/s} is polluted by a remnant signal from a previous acquisition f^{pa}, as seen in Figs. A.2a,b. This is typically the case in high-contrast imaging mode when the target star is decentred in order to measure the off-axis PSFs and the spectral energy distribution of the star just before a sky background is acquired, as shown in Fig. A.2c for RY lup.
Fig. A.2. Illustration of the remnant signal suppression from the sky background acquisitions (RY lup dataset, colour bar: adu s^{−1}). a,b: averaged sky background fluxes ⟨f^{bg/s}⟩_{a}. c: previous acquisition flux f^{pa}. d: corrected sky background flux . e: estimated remnant signal in (c). 
As several sky background acquisitions are performed, the remnant signal flux decreases while the background flux remains constant. These two contributions are iteratively split under the following assumptions:



∀a ∈ ⟦1, n_{a} − 1⟧, β_{a} ≥ β_{a + 1} ≥ 0,

where the ≈ sign indicates that some discrepancy is expected because of noise.
Algorithm 2 implements a simple iterative method to estimate the background with a reduced contribution of the remnant signal. Figures A.2d,e show that this algorithm is effective and that the two contributions can be correctly identified: the remnant artefact is strongly attenuated in the estimated background flux, while the remnant signal contribution is not polluted by an offset due to the background.
1: ⊳Initial background estimate
2: for i from 1 to n_{rs} do
3: ⊳Update foreground estimate
4: for a from 1 to n_{a} do ⊳Loop over calibration backgrounds
5: ⊳Leastsquares estimate of β_{a}
6: ⊳Apply constraints
7: ∀(k, a), ⊳Subtract remnant signal
8: ⊳Update background estimate
9: ∀k, ⊳Apply constraints
10: return
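The statements of Algorithm 2 appear above only as comments; a minimal NumPy sketch of the same iteration could read as follows. The initialisation (pixel-wise minimum over the acquisitions) and the least-squares estimate of each amplitude β_{a} are our own illustrative choices, valid when the remnant has decayed in at least one acquisition:

```python
import numpy as np

def split_remnant(f_sky, f_pa, n_rs=5):
    """Iteratively separate a constant background from a decaying remnant.

    f_sky : (na, m) sky background fluxes (flattened pixels)
    f_pa  : (m,)   flux of the previous acquisition (remnant template)
    Model: f_sky[a] ~ f_bg + beta[a] * f_pa, with beta non-increasing, >= 0.
    Returns the background estimate and the remnant amplitudes beta.
    """
    f_bg = f_sky.min(axis=0)                   # initial background estimate
    denom = np.sum(f_pa**2)
    for _ in range(n_rs):
        r = f_sky - f_bg                       # foreground residuals
        beta = r @ f_pa / denom                # least-squares amplitudes
        # enforce beta_a >= beta_{a+1} (running max from the last acquisition)
        beta = np.maximum.accumulate(beta[::-1])[::-1]
        beta = np.maximum(beta, 0.0)           # and beta_a >= 0
        # background update: mean of the remnant-subtracted acquisitions
        f_bg = np.mean(f_sky - beta[:, None] * f_pa, axis=0)
        f_bg = np.maximum(f_bg, 0.0)           # non-negativity constraint
    return f_bg, beta
```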
A.3. Gain calibration
The factor g_{k} in Eq. (3) is a gain converting the signal of interest [M x]_{k} at pixel k into digital values. This gain accounts for the quantum efficiency, the optical throughput, and the electronic gain. The pixelwise gain is estimated from flatfield acquisitions y^{f} obtained under a flux that is made as uniform as possible over the whole sensor. These acquisitions must first be corrected for the background corresponding to the situation where the shutter is closed:
To account for the fact that the flatfield calibrations were acquired under different illuminations, the estimated gain is then given by fitting the averaged (background compensated) flatfield calibrations as a linear function of their median intensity weighted by their empirical precisions (∀k, t):
Fig. A.3. Estimation of the flatfield correction factor on the RY lup dataset. b–e: regions of interest of (a). 
Figure A.3 shows the gain calibrated on the RY lup dataset. Contributions from the optics and from the electronics can be identified. The vertical stripes are due to the gain differences between the 32 readout channels of the sensor. The curved horizontal lines are probably artefacts due to parallel plates on the optical path. Large darker areas are attributable to dust particles on the optics. The dust particles lying directly on the sensor and already visible in the calibrated background flux, such as in Figs. A.1b,c, are now clearly visible, as shown in the zooms of Figs. A.3b,c.
A.4. Defective pixel identification
The different calibration files can be used to form a first estimate of the map of defective pixels, noted δ^{cal} in Algorithm 1. Figure A.4 represents the discarded pixels for the RY lup dataset as well as the reason for their flagging.
Fig. A.4. Identified defective pixels in the RY lup dataset. White: Pixels out of the lenslet array field. Green: pixels flagged by the median filter on the dark acquisitions. Red: pixels flagged by their anomalous variance in the dark acquisitions. Blue: pixels flagged by the median filter on the flatfield acquisitions. Dark blue: pixels flagged by their anomalous variance in the flatfield acquisitions. Grey: unflagged pixels. b–e: regions of interest of (a). 
The spectral flatfield, discussed in more detail in Appendix B and in Fig. C.1a, is first used to identify the active area of the sensor, which corresponds to the grey square region of Fig. A.4; the pixels outside of this area are discarded. In addition, we also discard the first and last four columns and rows of the sensor.
The defective pixels are then identified by analysing the statistics of the different calibration datasets y^{c} ∈ ℝ^{m1 × m2 × na × nt} and their model (∀a), described by a pixelwise map p^{c} and exposurewise factors u^{c}. For each of these three sets, a map of defective pixels is built as presented in Algorithm 3.
1: ⊳Difference with a 5 × 5 median filter
2: ⊳Threshold on the tolerance
3: ⊳Count of the discarded pixels
4: ⊳Residuals of the linear fit
5: ⊳Weighted cost of the residuals
6: ⊳Discard already flagged pixels
7: ⊳Init. of the map on the statistics of y^{c}
8: for n_{w} worst c_{k} ⊳Discarding n_{w} worst statistics
9: ⊳δ^{v} update
10: ⊳Global map on v
11: return δ^{cal}
A first set δ^{a} of n_{w} defective pixels is built by forming a residual map, obtained by subtracting from p^{c} its local median in a 5 × 5 pixel neighbourhood, and by flagging as defective the pixels whose absolute residual is greater than a tolerance α^{a} multiplied by the median absolute deviation of the residuals. In Fig. A.4, such defective pixels are marked in green if they are found via the background calibration files (shutter opened or closed) and in blue if they are found via the detector flatfield acquisitions.
In a second step, the statistics of the residuals of the linear fit of p^{c}, according to u^{c}, to the averaged acquisitions ⟨y^{c}⟩_{a}, weighted by their empirical variance, are analysed. Among the pixels not yet flagged as defective, the n_{w} pixels with the worst costs (n_{w} being the number of defective pixels previously detected), that is to say, with an anomalous variance, are discarded. This gives a second map δ^{v} of defective pixels. In Fig. A.4, such defective pixels are marked in red if they are found via the background calibration files (shutter opened or closed) and in dark blue if they are found via the detector flatfield acquisitions.
The maps δ^{a} and δ^{v} built for the three sets are then combined to obtain the final map of discarded pixels δ^{cal}. This method clearly identifies the pixels impacted by the presence of dust particles lying on the sensor, as shown in blue in the zooms of Figs. A.4b,c.
As one would expect when looking at the red-flagged pixels, some of the pixels with an anomalous variance cluster around defective pixels. This is because, at high fluxes, defective pixels overflow and corrupt the surrounding pixels. This justifies the choice of discarding the n_{w} worst variances. Some areas, such as that of Fig. A.4d, have a high density of discarded pixels.
The zooms of Figs. A.4b–e should be compared with those of Figs. A.1b–e and A.3b–e: some of the discarded pixels would be hard to identify in the estimated background fluxes or in the gains.
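The first test of Algorithm 3 (residuals with respect to a 5 × 5 local median, thresholded at a tolerance times their median absolute deviation) can be sketched as follows. The function `flag_defective`, its defaults, and the edge-replication padding are illustrative assumptions, not the pipeline's actual implementation:

```python
import numpy as np

def flag_defective(p, alpha=5.0, size=5):
    """Flag pixels deviating from their local median (first test of Alg. 3).

    p     : (m1, m2) pixel-wise map (e.g. fitted background flux or gain)
    alpha : tolerance in units of the median absolute deviation
    Returns a boolean map, True where the pixel is flagged as defective.
    """
    h = size // 2
    # local medians via a sliding window; edges are handled by replicating
    # the border pixels so the output keeps the shape of p
    pp = np.pad(p, h, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pp, (size, size))
    local_med = np.median(win, axis=(-2, -1))
    r = p - local_med                          # residual map
    mad = np.median(np.abs(r - np.median(r)))  # robust spread of the residuals
    return np.abs(r) > alpha * mad
```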
A.5. Noise variance estimation
In this section, we describe how to estimate, for each pixel k, the variance v_{k} of the additive noise used in Eq. (4), namely the variance at null flux (the so-called readout noise) and η_{k}, which contains the conversion from the number of photons to adu in the Poisson noise term of this equation.
As the different calibration images are not sufficient to estimate these parameters independently for each pixel k, they are assumed to be constant on the whole sensor:
This approximation holds if all the pixels behave similarly and if g_{k} ≃ 1 for all k.
The coefficients are estimated using all the calibration files y^{c} ∈ {y^{bg/o}, y^{bg/c}, y^{f}}. An affine law is fitted to their median empirical variances: q^{c} ≈ η p^{c} + v^{0} 1, where p^{c} is the median flux in the calibration images y^{c}. Here, η p^{c} is consequently the variance associated with the Poisson noise induced by the median flux. Weights corresponding to the inverse of the square of the median absolute deviation are used to weight each sample of the fit:
Results obtained on HR 8799 and RY lup calibration files are given in Fig. A.5 and in Table A.1.
Fig. A.5. Estimation of the noise model for the HR 8799 (a) and RY lup (b) datasets. The model is fitted on the median fluxes and the median variances of the different calibration files: the dark currents with the shutter closed (red) and opened (green) and the flatfields (blue). The error bars are obtained by the median absolute deviation on all the sensor pixels for each acquisition set. The grey law is fitted without taking the errors into account. The black law is fitted by accounting for the errors. The red and blue insets are zooms on regions of low incident fluxes. 
Estimation of the coefficients for the two datasets, when accounting for the median absolute deviation of the empirical variances (weighted least squares) or not (standard least squares).
Accounting for the median absolute deviation of the empirical variance improves the estimation: the estimated line (in black) is less affected by values with large uncertainty than an unweighted least-squares fit (grey line).
It appears that the smaller errors on the low values help to counterbalance the lever-arm effect of the large fluxes, which are noisier. In the case of the RY lup dataset, the weighted fit reduces the influence of an outlier point around a flux of 2750 adu, whose variance exceeds 4000 adu^{2}. This can be seen in the estimation presented in Table A.1: as expected, the weighted least squares gives more similar coefficients for the two datasets than the standard least-squares estimation.
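The weighted affine fit of the noise model takes a few lines of NumPy. In this sketch (function and argument names are ours), the weighting convention of `np.polyfit`, which multiplies the residuals by w before squaring, reproduces the inverse-MAD² weighting of the text when w = 1/MAD:

```python
import numpy as np

def fit_noise_model(p, q, mad_q):
    """Weighted affine fit of the noise model q ~ eta * p + v0.

    p     : median fluxes of the calibration images [adu]
    q     : their median empirical variances [adu^2]
    mad_q : median absolute deviations of the variances (error bars)
    np.polyfit minimises sum((w * (q - fit))^2), so w = 1/mad_q gives
    each sample a weight of 1/mad_q^2 in the quadratic cost.
    """
    eta, v0 = np.polyfit(p, q, 1, w=1.0 / mad_q)
    return eta, v0
```

A noisy, strongly down-weighted sample then barely deflects the fitted line, which is the behaviour discussed for the outlier in the RY lup dataset.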
Appendix B: Spectral dispersion calibration
This appendix describes the calibration of the lenslet array, that is, how to determine the operator P : ℝ^{nh × nλ} → ℝ^{m1 × m2} that represents the spectral dispersion produced by the individual lenslets projected on the sensor.
To build this operator, several calibration files are used:

Spectral flatfield acquisitions y^{s} are obtained with an integrating sphere fed by a broadband lamp, producing a uniform illumination upstream of the lenslet array. These images are used to calibrate the 2D location of each spectrum and the lenslet transmissions, as well as to identify defective lenslets.

Laser acquisitions y^{w} are obtained with calibrated lasers placed upstream of the lenslet array. Their wavelengths are λ^{cal} ∈ {987.72, 1123.71, 1309.37, 1545.10} nm. Only the first three lasers are used for the YJ band calibrations.
For each of these files, a set of several acquisitions a ∈ ⟦1, n_{a}⟧ is performed for a given exposure time τ. In the following, only their average, corrected for the background and the gain, is considered:
B.1. Dispersion model
As previously described, after dispersion, the spectrum produced by a given lenslet is modelled as a weighted sum of elementary normalised Gaussian patterns, one for each reconstructed wavelength. For a pattern of size σ ∈ ℝ^{+}, centred on κ ∈ ℝ^{2} in pixel coordinates, the value at a given pixel k is:
In practice, to save memory and computational time, the Gaussian patterns are truncated outside a region of a few pixels. As a consequence, P is a sparse matrix.
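The truncated Gaussian pattern, that is, the non-zero entries of one column of the sparse matrix P, can be sketched as follows. The window half-size and the unit-sum normalisation over the truncated window are implementation choices of this sketch, not specifications of the PIC pipeline:

```python
import numpy as np

def gaussian_stamp(kappa, sigma, radius=3):
    """Normalised 2D Gaussian pattern truncated to a small pixel window.

    kappa  : (2,) centre of the pattern in pixel coordinates
    sigma  : standard deviation of the pattern [pixels]
    radius : half-size of the window (the truncation that makes P sparse)
    Returns the pixel indices and values of the non-zero entries,
    i.e. one column of the sparse projection matrix P.
    """
    cy, cx = np.rint(kappa).astype(int)
    y, x = np.mgrid[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    g = np.exp(-((y - kappa[0])**2 + (x - kappa[1])**2) / (2.0 * sigma**2))
    g /= g.sum()   # unit sum: the pattern redistributes the lenslet flux
    return y.ravel(), x.ravel(), g.ravel()
```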
The reduced hyperspectral cube produced by the standard reduction pipeline of SPHERE (Pavlov et al. 2008) possesses n_{λ} = 39 wavelengths. In our calibration method, the list of reconstructed wavelengths λ is extended beyond the edges of the spectrum:

for the YHmode, n_{λ} = 44 wavelengths are reconstructed (and all four calibration lasers are used);

for the YJmode, n_{λ} = 47 wavelengths are reconstructed (and only the first three calibration lasers are used).
Thus, if κ ∈ ℝ^{2 × nλ} is the list of the positions of the elementary Gaussian patterns in a spectrum and σ ∈ ℝ^{nλ} the list of their sizes, the light incoming from a given lenslet h is dispersed to form a spectrum u on the sensor, whose value at pixel k is:
where Λ is the spectrum of the lamp and γ_{h} is the transmission of lenslet h.
For each lenslet, the position κ and the width σ of each Gaussian pattern are assumed to follow a polynomial law of degree d as a function of the wavelength. At wavelength λ_{ℓ}, the value of the polynomial p_{ℓ} defined by the coefficients c ∈ ℝ^{d + 1} is p_{ℓ}(c) = ∑_{i = 0}^{d} c_{i} λ_{ℓ}^{i}.
The spectral projection operator P is defined based on the list γ ∈ ℝ^{nh} of the lenslet transmissions, the list c^{κ} ∈ ℝ^{nh × 2 × (dk+1)} of the coefficients of the polynomial law of degree d_{k} describing the positions of the elementary patterns for each lenslet, and the list c^{λ} ∈ ℝ^{nh × (dλ+1)} describing the coefficients of the polynomial law of degree d_{λ} of the size of the elementary patterns for each lenslet. With these notations, the model of the spectrum at a given pixel k produced by the lenslet h is:
In practice, the degree of the polynomial model is dictated by the number of calibration spots in each spectrum: d_{k} = 3 in the YHmode and d_{k} = 2 in the YJmode. In both cases, we use d_{λ} = 2.
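The polynomial laws for the spot positions and sizes along one lenslet spectrum can be evaluated as in the following sketch; the function and argument names are ours:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def lenslet_spectrum_geometry(c_kappa, c_lambda, lam):
    """Spot centres and sizes along one lenslet spectrum.

    c_kappa  : (2, d_k+1) position coefficients (one row per pixel axis,
               lowest degree first)
    c_lambda : (d_l+1,)   size coefficients (lowest degree first)
    lam      : (n_lambda,) reconstructed wavelengths
    Returns kappa (2, n_lambda) and sigma (n_lambda,).
    """
    kappa = np.stack([P.polyval(lam, c) for c in c_kappa])
    sigma = P.polyval(lam, c_lambda)
    return kappa, sigma
```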
B.2. Dispersion calibration
With the notations defined for the robust penalisation, the estimation of these dispersion coefficients is obtained by minimising the following data fidelity term for a given lenslet h:
applied on the spectral flatfield {y^{s}, Λ, λ} or on the wavelength calibration {y^{w}, Λ^{cal}, λ^{cal}}, where Λ^{cal} is the list of the amplitudes of the spots for each lenslet.
Estimating the dispersion parameters from the laser calibration images y^{w} alone is not suitable because some laser spots may fall into regions with many defective pixels or be totally hidden by a dust particle. Using the spectral flatfield images y^{s} is better suited, since the signal covers many more pixels, but is very limited in terms of wavelength characterisation. It is best to combine the two calibration files via a global data fidelity term:
where the hyperparameter μ_{w} balances the influence of the wavelength calibration compared to the spectral flatfield.
Minimising Eq. (B.7) to determine the dispersion parameters of a given lenslet h requires knowledge of the spectrum Λ of the lamp. This spectrum can be robustly estimated from all the lenslets by minimising the following data fidelity term:
regularised by the smoothing term μ_{Λ} ℛ_{λ}(Λ) previously defined in Eq. (12) and constrained to the domain:
The spectrum of the lamp is initialised on a few neighbouring spectra at the centre of the sensor field. Each lenslet spectrum and its dispersion coefficients are then identified neighbour after neighbour by minimising Eq. (B.7).
This first estimation can be refined to account for the overlap of neighbouring spectra. While the impact on the position coefficients c^{κ} is minor, the overlap influences the size coefficients c^{λ}, in particular on the spectra edges. That is why the first set of parameters is refined n_{c} times by Algorithm 4, iteratively working on the residuals.
1: for i from 1 to n_{c} do ⊳n_{c} refinement loops
2: ⊳Λ update, fixed
3: ⊳Global residuals
4: for h from 1 to n_{h} do ⊳Loop on the lenslets
5: ⊳h^{th} lenslet residuals
6: ⊳ update on (r^{h},y^{w}), fixed
7: ⊳ update on r^{h}, fixed
8: return
At each iteration of the refinement loop, the parameters of the model are alternately estimated. First, the lamp spectrum is robustly fitted on all the lenslets by minimising Eq. (B.8) with the VMLM-B algorithm.
The positions and amplitudes of each spot are then estimated, one lenslet after another, after the contributions of all the other lenslets have been removed from the data. These estimations are also performed with VMLM-B. In a third step, the size of the patterns and the lenslet transmissions γ_{h} are estimated using a simplex search method (Lagarias et al. 1998).
In this way, the estimation of the position parameters is decoupled from the fit of the pattern-size law. The positions are thus not influenced by the size of the monochromatic spots in y^{w}, whereas the sizes are fitted on the continuous spectra of y^{s}.
B.3. A superresolved model for the lenslet PSF?
Considering a superresolved model, as proposed by Brandt et al. (2017), is straightforward in the proposed framework: it consists of replacing the Gaussian patterns ξ_{k} with a more accurate model of the lenslet PSFs. The real difficulty comes from the calibration of this model. In particular, the calibration files available for the SPHERE instrument are not sufficient to derive an accurate superresolved PSF model, for several reasons.
Firstly, the calibration lasers illuminate the IFU simultaneously. Thus, the foot of the PSF produced by one laser overlaps with the PSFs of the other wavelengths. A sequential acquisition (one wavelength at a time) would be necessary to directly recover the whole support of the PSFs without having to unmix them. Secondly, even with a single monochromatic illumination, all lenslets are illuminated at once, which leads to two issues. The spots produced by the different lenslets lie on a regular grid and slightly overlap, which creates a degeneracy of the superresolved model. Some interference also occurs between patterns, which is not representative of the PSFs obtained with incoherent light. The selective illumination of one lenslet at a time would solve these issues but would be time-consuming and would raise technical instrumental issues. Finally, the number of distinct wavelengths used in the calibration procedure is too low to allow a fine parameterisation of a PSF model, and in particular to address the variability from one lenslet to another. A larger set of wavelengths, covering the whole spectrum (to prevent extrapolations at the edges of the spectral domain), would be necessary. For these reasons, such a superresolved model was not implemented in the present paper.
Appendix C: Results of the dispersion calibration
C.1. Dispersion
Results of Algorithm 4 are illustrated in Fig. C.1 for the calibration in the YHmode of the HR 8799 dataset.
Fig. C.1. Calibration of the spectral projection for the HR 8799 dataset (square root stretch, colour bar: adu). a: averaged wavelength calibration acquisition y^{w}. b: averaged spectral flatfield calibration acquisition y^{s}. c: zoom of (a). The found positions for the calibration wavelengths are highlighted by coloured dots. The black circles are proportional to the size of the Gaussian patterns. d: simulation of the wavelength calibration by the projection model. e: zoom of (b). The positions of the wavelengths simulated in the forward model are highlighted by coloured dots. f: simulation of the spectral flat by the projection model. g: residuals of the first estimation of the wavelength calibration. h: final residuals of the wavelength calibration. i: residuals of the first estimation of the spectral flat calibration. j: final residuals of the spectral flat calibration. k: histogram of the residuals of the spectral flat calibration along the iterations. For comparison purposes, a Gaussian profile of standard deviation σ = 175 adu is plotted. 
The laser spots are visible in the wavelength calibration acquisition y^{w} for each lenslet in Figs. C.1a,c. The coloured dots mark the positions of the spots fitted by Algorithm 4. The radii of the black circles indicate the size of the Gaussian patterns. The chromatic evolution of the fitted size is visible, with a minimal spot extension around the third calibration wavelength.
The residuals in Fig. C.1g of the first iteration of the wavelength calibration present a strong diversity due to inaccurate values of the size of the Gaussian patterns and incorrect amplitudes for the spots Λ^{cal}. Some patterns are not well aligned with the laser spots, as suggested by the non-symmetric residuals on some spots (which indicates a bad wavelength calibration). The final residuals in Fig. C.1h, after n_{c} = 10 refinement iterations, are more homogeneous, with centred residuals on each spot. Nonetheless, all the spots present red ring-shaped residuals, which seem to worsen. This suggests that the pattern size given by c^{λ} is overestimated when applied to the laser spots. This is expected, as these spots are monochromatic whereas the dispersion law is fitted on the continuous spectra of the spectral flatfield file y^{s}. This also implies that a wavelength calibration performed only on the wavelength calibration file y^{w} would underestimate the size parameters.
The spectra of the n_{h} ≈ 19 000 lenslets of the spectral flatfield acquisition y^{s} are visible in Figs. C.1b,d. The coloured dots represent the positions of the n_{λ} = 44 wavelengths of the forward model. It can be seen that the spectral range goes beyond the calibration laser spots, especially in the longer wavelengths (red dots). The extrapolation of the positions beyond these calibrated spots leads to wider spatial steps in the short wavelengths (blue dots) while the positions are more packed in the longer wavelengths (red dots). This may lead to a degraded wavelength calibration in the outer regions of the spectral range (see Fig. 8 and associated discussion).
In the residuals of the first iteration of the spectral flat calibration in Fig. C.1i, it appears that the overall signal is overestimated. This may be because the overlap of the spectra is not taken into account at this first iteration. The dust present in Fig. A.3c is clearly identifiable. After the n_{c} = 10 refinements, the residuals in Fig. C.1j improve, especially on the spectra edges, falling below ∼2.5% of the lamp spectrum intensity. This is confirmed by the curves of Fig. C.1k, which show the evolution of the histogram of the residuals along the refinement iterations. Starting from a spread distribution, the residuals tighten towards a Gaussian curve with a smaller dispersion.
Blue structures in Fig. C.1j indicate that some regions are still slightly underestimated by the model, leading to positive residuals. This corresponds to the small bump around 400 adu in the residue distribution. This is due to the fact that the Gaussian pattern model is an approximation of the spots diffracted by the hexagonal apertures of the lenslets. They have “feet” in privileged directions that are not fitted by an axisymmetric pattern, to which optical aberrations may be added. This also explains the blue positive background around the laser spots in Figs. C.1a,c.
In terms of wavelength centring of the forward model, as mentioned by Vigan et al. (2015), the SPHERE reduction pipeline (Pavlov et al. 2008) needs an extra patch to correct its wavelength calibration. To check our calibration, the wavelength calibration file y^{w} is reconstructed using Algorithm 1. The results are given in Fig. C.2 for the two datasets.
Fig. C.2. Wavelength calibration of the model on the HR 8799 (YHmode, left panel) and the RY lup (YJmode, right panel) datasets (colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩_{λ} of the reconstructed hyperspectral cube. c,d: slice of the reconstructed hyperspectral cube (λ = 1314 nm for the HR 8799 dataset, λ = 1122 nm for the RY lup dataset). e: median intensity ℳ_{θ}(x) computed on each slice of the reconstructed hyperspectral cube for each dataset, compared with the reduction proposed by Vigan et al. (2015). The wavelengths of the calibration lasers are highlighted by dashed grey lines. 
The coloured projections of Figs. C.2a,b and the monochromatic slices given in Figs. C.2c,d show that the reconstructed laser illuminations are not uniform but are characteristic of speckle noise.
The median intensity ℳ_{θ}(x), computed on each slice of the reconstructed hyperspectral cube, is shown in Fig. C.2e for each dataset. The maxima match well with the wavelengths of the calibration lasers, represented by vertical dashed lines. The effective resolution is estimated from the full width at half maximum of each laser line. The average resolutions of the YJmode and the YHmode are R = 57 and R = 31 respectively. These values can be compared to the standard reduction method based on a direct mapping (Vigan et al. 2015) in the YHmode, for which the estimated average spectral resolution is R = 20. The better spectral resolution of PIC is close to the SPHERE-IFS theoretical resolution R = 30 of Beuzit et al. (2019) and shows the impact of the spectral deconvolution performed by the inverse-problems approach (this deconvolution is possible because the forward model accounts for the spectral mixing).
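The effective resolution quoted above follows from R = λ/FWHM measured on each reconstructed laser line. A simple sketch of this measurement (FWHM by linear interpolation at half maximum, which is an assumption of ours) is:

```python
import numpy as np

def spectral_resolution(lam, profile):
    """Effective resolution R = lambda / FWHM of one reconstructed laser line.

    lam     : (n,) wavelengths of the cube slices [nm], increasing
    profile : (n,) median intensity of each slice (one clean, unimodal peak)
    The FWHM is measured by linear interpolation at half the peak height.
    """
    i = np.argmax(profile)
    half = 0.5 * profile[i]
    # interpolate the half-maximum crossings on each side of the peak
    left = np.interp(half, profile[:i + 1], lam[:i + 1])
    right = np.interp(half, profile[i:][::-1], lam[i:][::-1])
    return lam[i] / (right - left)
```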
C.2. Transmission
The calibration procedure also gives the achromatic transmission map γ of the IFU, including the lenslet array, the prism, and the other associated lenses; it is shown in Fig. C.3a, with a median transmission of ℳ_{h}(γ) = 1.
It can be seen that the transmission map γ further corrects for the dust particles on the optical path whose effect cannot be suppressed by the sensor flatfield correction factor alone. Indeed, comparing Figs. A.3a and C.3a shows that these dust particles have a counterpart in the transmission map γ. Even if the effect is minor, Fig. C.3b shows that the lamp spectrum Λ is refined along the iterations of the calibration loop. The corrections are mainly due to the refinement of the dispersion law c^{λ}: the dilution of the intensity in a Gaussian pattern as its size varies must be counterbalanced by its amplitude. Some features visible in the lamp spectra can contribute to the robust wavelength calibration.
C.3. Autocalibration
A last correction must be added to the calibration of the forward model: the on-sky data are not perfectly aligned with the calibration files. This is emphasised in Figs. C.4a–d, which present a region of interest of an on-sky acquisition (a), the fit of the model obtained before the autocalibration (b), and the corresponding residuals (d). The difference between the model before and after the alignment is shown in Fig. C.4c.
Fig. C.3. a: transmission map γ of the IFU estimated for the HR 8799 dataset. b: fitted spectrum of the lamp for the two datasets HR 8799 (YHmode) and RY lup (YJmode) along the iterations of Algorithm 4. 
Fig. C.4. Autocalibration for the HR 8799 dataset (square root stretch, colour bar: adu). a: zoom on a region of interest. b: model before the alignment. c: difference between the model prediction before and after the alignment. d: residuals after the alignment. e: statistics before (red) and after (black) the autocalibration step. 
To perform this alignment, n_{r} regions of interest of m_{r} × m_{r} pixels each, evenly spread over the sensor, are extracted from the raw science data. The regions are small enough that the spectra Λ^{a} ∈ ℝ^{nr × nλ} of the lenslets can be considered equal on these reduced fields.
Then an algorithm similar to Algorithm 4 is applied. A refinement loop computes the global shift δ^{k} ∈ ℝ^{2}, assumed to be constant for all the coefficients c^{κ}. Each iteration alternates two steps:

1. the spectrum of each of the n_{r} regions is fitted via Eq. (B.8), applied on the region of interest, assuming that the shift δ^{k} is known;

2. the shift δ^{k} is fitted by combining all the regions of interest, adapting Eq. (B.6), for fixed spectra Λ^{a}.
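The alternating refinement above can be sketched on a toy one-dimensional problem. In this illustration (all names are ours), a Gaussian pattern stands in for the projection model of each region, each region's "spectrum" is reduced to a single amplitude, and the two steps alternate exactly as in the loop described above: fit the spectra for a fixed shift, then fit the global shift for fixed spectra.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = np.arange(64, dtype=float)

def pattern(pos, delta):
    """Unit Gaussian pattern centred at pos + delta (toy stand-in for the
    projection model of one lenslet spectrum in a region of interest)."""
    return np.exp(-0.5 * ((x - pos - delta) / 2.0) ** 2)

# Toy "raw data": n_r regions sharing one global shift delta_true
positions = np.array([20.0, 32.0, 44.0])
amps_true, delta_true = np.array([3.0, 5.0, 2.0]), 0.27
data = [a * pattern(p, delta_true) + 0.01 * rng.standard_normal(x.size)
        for a, p in zip(amps_true, positions)]

delta = 0.0
for _ in range(10):
    # Step 1: fit each region's spectrum (here one amplitude) for fixed delta
    amps = [d @ pattern(p, delta) / (pattern(p, delta) @ pattern(p, delta))
            for d, p in zip(data, positions)]

    # Step 2: fit the global shift delta for fixed spectra, over all regions
    def cost(s):
        return sum(np.sum((d - a * pattern(p, s)) ** 2)
                   for d, a, p in zip(data, amps, positions))
    delta = minimize_scalar(cost, bounds=(-1.0, 1.0), method="bounded").x

print(f"estimated shift: {delta:.3f}")
```

The real problem is two-dimensional (δ^{k} ∈ ℝ^{2}) and uses the full spectral model of Eqs. (B.6) and (B.8), but the alternating structure is the same.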
The histograms of the residuals before and after the alignment are compared in Fig. C.4e: after the alignment, they are less dispersed and more symmetrical. The estimated shifts for the two datasets are given in Table C.1. The shifts are small: about a quarter of a pixel for the strongest correction, in the case of the RY lup dataset, and only a few percent of a pixel for the HR 8799 dataset. Nonetheless, this shift corresponds to a measurable signal, as seen in Fig. C.4c, which compares the intensity predicted by the model before and after the autocalibration step.
Estimation of the shift δ^{k} in pixels to apply on the coefficients c^{κ} for the two datasets.
Appendix D: Numerical value of the different parameters
Table D.1 summarises the parameters and hyperparameters of the different algorithms. The numerical values are the ones used to obtain the results presented in this paper.
The scaling factors s_{k} applied to the pixels k of the data are chosen to be constant over the whole field: ∀k, s_{k} = s_{0}. As the role of s_{0} is to discriminate the data that deviate from the expected values, it depends on the overall data dynamics and must be adapted to each dataset. This can be done by looking at the distribution of the residuals, such as the ones given in Figs. 7o,p: s_{0} is proportional to their standard deviation. The RY lup dataset has a lower dynamic range, which explains why its s_{0} is smaller.
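Since the residuals that set s_{0} are themselves contaminated by defective pixels, a robust spread estimate such as the median absolute deviation (MAD) is a natural way to implement the proportionality above. The sketch below is our illustration, not the exact recipe of the paper: the 1.4826 factor makes the MAD consistent with σ for Gaussian noise, and the proportionality factor is dataset-dependent.

```python
import numpy as np

def robust_scale(residuals, factor=1.0):
    """Estimate the scaling factor s_0 from the residuals' spread.
    The MAD is used instead of the raw standard deviation so that the
    defective pixels themselves do not inflate the estimate; 1.4826
    converts the MAD to a Gaussian-equivalent sigma. The proportionality
    factor is dataset-dependent (our assumption, set to 1 here)."""
    r = np.asarray(residuals, dtype=float)
    mad = np.median(np.abs(r - np.median(r)))
    return factor * 1.4826 * mad

# Gaussian residuals (sigma = 23 adu, as for RY lup) with 1% strong outliers
rng = np.random.default_rng(1)
res = 23.0 * rng.standard_normal(100_000)
res[:1000] = 5000.0  # mimics blinking pixels or cosmic-ray hits
print(robust_scale(res))  # recovers a spread close to 23 despite the outliers
```

A plain standard deviation on the same residuals would be dominated by the outliers, which is precisely what the robust penalisation is meant to avoid.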
The hyperparameters μ_{λ}, μ_{2D} and μ_{Λ} introduced in Eqs. (13) and (B.8) are normalised as follows:
where 𝜚 is a normalisation factor that scales the number of unknowns in the regularisation terms with the number of measurements in the data fidelity term. In other words, it accounts for the number of reconstructed spectra and wavelengths compared to the number of pixels in the forward model. The additional normalisation accounts for the quadratic development of the Cauchy function close to zero; it ensures that the regularisation scales coherently with the scaling coefficient s_{0}. The two remaining factors are set manually. The first is chosen to be as small as possible while rejecting the high-frequency modes that appear in the reduced spectra due to the oversampling of the forward model, as mentioned by Draper et al. (2014). The second must be high enough to participate in the outlier pixel detection while avoiding adding too much smoothing in the reduced cubes. To do so, it is placed at the limit at which the acquisition noise starts to pollute the reduced cubes, and consequently it depends on the data dynamic range.
For the refinement of the spectral dispersion model, the local spectra of each lenslet are extracted and analysed on their region of interest. These regions are defined as the window that encompasses the positions of the different Gaussian patterns, extended on each side by 2 pixels, leading to a window of roughly 5 × 45 pixels for each spectrum. In addition, at each loop of the calibration, the parameter μ_{w} in Eq. (B.7) is updated so that the cost on the spectral flatfield acquisition, defined as
is balanced with the cost on the wavelength calibration file, defined as
to favor the wavelength calibration file as follows:
Numerical values of the parameters and hyperparameters used in the different algorithms.
All Tables
Estimation of the chromatic slope ξ of the linear law linking the diagonal s of the diffracted square pattern with the wavelength λ and the corresponding relative error on the spectral calibration σ_{λ} for the different datasets and their acquisition dates with the proposed method and, for the first acquisition date, with the SPHERE reduction pipeline (RP, Delorme et al. 2017).
Comparison between the S/N of the innermost planet at large wavelengths for the SPHERE reduction pipeline (RP, Delorme et al. 2017) and the proposed method (PIC).
Estimation of the coefficients for the two datasets, when accounting for the median absolute deviation of the empirical variances (weighted least squares) or not (standard least squares).
All Figures
Fig. 1. Forward model M x = P I C x of an integral field spectrograph with the example of the SPHEREIFS instrument (b). The telescope forms an image of the targeted object x (a,d) on the instrument that projects a spectrum of each of its spatial elements onto the sensor (c,g). The lowcrosstalk BIGRE design of the IFU (b) proposed by Antichi et al. (2009) is based on two hexagonal lenslet arrays whose effect is first modelled as a convolution C by their pupil shape, averaging the light received by a single lenslet (d → e). An interpolation I then accounts for the sampling performed by the first array of lenslets of the IFU (red) at the lenslet positions (green dots) (e → f). Finally, their spectrum is dispersed by prisms on the sensor, corresponding to a sparse matrix P linking the different wavelengths seen by a given lenslet to the location of their projection on the detector (f → g). Photo Credit: Lucasfilm Ltd. 

Fig. 2. Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Ganymede (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩_{λ} of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1018 nm and λ = 1556 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f). 

Fig. 3. Comparison between the SPHERE reduction pipeline (RP, Delorme et al. 2017) (a,c,e) and the proposed method (PIC) (b,d,f) on the Jovian moon Io (square stretch of the intensity, colour bar: arbitrary unit). a,b: coloured projection ⟨x⟩_{λ} of the reconstructed hyperspectral cube (square stretch of the intensity). c–f: slices of the reconstructed hyperspectral cube (at λ = 1068 nm and λ = 1353 nm). g: spectra extracted from the reduced hyperspectral cubes at the circled positions. The two vertical dashed lines indicate the wavelengths of the monochromatic images shown in (c–f). 

Fig. 4. Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the HR 8799 dataset from a single IFS exposure. The proposed method removes artificial background structures. a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1068 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d). 

Fig. 5. Comparison between the SPHERE reduction pipeline (Delorme et al. 2017) (a,c,e,g) and the proposed PIC method (b,d,f,h) on the RY lup dataset from a single IFS exposure. The proposed sky background estimation removes the host star remnant and the proposed reconstruction algorithm is robust to aberrant measurements (see Fig. 6). a,b: coloured projection of the reconstructed hyperspectral cube (cubic root stretch of the intensity). c,d: slice of the reconstructed hyperspectral cube (at λ = 1165 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). e–h: zooms on (a–d). 

Fig. 6. Robust defectivepixel identification on the sensor region corresponding to the region of interest of Figs. 5 and 7b. In addition to the known defective pixels (green), the robust penalisation can flag additional unusable pixels in a given acquisition that are blinking (red) or hit by an energetic particle (blue). a,b: raw data (a) and intensity (b) predicted by the model (colour bar: adu). c: equivalent weights obtained by the robust penalisation (cubic stretch). d: new map of the unconsidered pixels. e–h: zooms on (a–d). 

Fig. 7. Residuals of the model on the HR 8799 (on the left column) and the RY lup (on the right column) datasets (square root stretch, colour bar: adu). a,b: raw data. c–f: zooms on the regions framed in (a,b). g–j: model prediction on the same regions of interest. k–n: residuals on the same regions of interest. o,p: statistics of the residuals. For comparison, Gaussian profiles of standard deviation σ = 47 adu (HR 8799) and σ = 23 adu (RY lup) are plotted. The region of interest framed in orange in (b) is the region detailed in Fig. 6.

Fig. 8. Study of the chromatic scaling of the diffraction images of the host star for the two datasets. a: coloured projection of the HR 8799 reconstructed hyperspectral cube (cubic root stretch of the intensity). b: slice of the HR 8799 reconstructed hyperspectral cube (λ = 1486 nm, cubic root stretch of the intensity, colour bar: arbitrary unit). The five black dots represent the fitted positions of the four images of the star and of the centre of the fitted square (dashed lines). c: linear fits of the square diagonals according to the wavelength for each dataset at two different times in the night with the proposed PIC method and, for the first time in the night, with the SPHERE reduction pipeline (RP, Delorme et al. 2017). d: fitted positions of the squares at each wavelength compared to the estimation of the star position. Only the wavelengths whose position falls inside the grey circle of 0.4 voxels radius are used to fit the linear laws. 

Fig. 9. Comparison between a TLOCI postprocessing of the SPHERE reduction pipeline (Delorme et al. 2017) (left column) and the proposed PIC method (right column) on the HR 8799 dataset. The contrast of the planet closest to the star is increased by the PIC method and several false positives are avoided. a–d: monochromatic images of the postprocessed hyperspectral cube and planet detection S/N (square root stretch of the intensity, colour bar: arbitrary unit) at λ = 972 nm (a,b) and λ = 1636 nm (c,d). e,f: maps of planet detection S/N combining all wavelengths (red circles: true positives, blue boxes: false positives with the SPHERE pipeline, colour bar: S/N). 

Fig. 10. Coloured projection of the postprocessing of the SPHERE reduction pipeline (Delorme et al. 2017) (a) and of the proposed PIC method (b) on the RY lup dataset (cubic root stretch of the intensity). 

Fig. A.1. Estimation of the temporal background flux on the RY lup dataset, with the shutter open (cubic stretch, colour bar: adu s^{−1}). b–e: regions of interest of (a).

Fig. A.2. Illustration of the remnant signal suppression from the sky background acquisitions (RY lup dataset, colour bar: adu s^{−1}). a,b: averaged sky background fluxes ⟨f^{bg/s}⟩_{a}. c: previous acquisition flux f^{pa}. d: corrected sky background flux. e: estimated remnant signal in (c).

Fig. A.3. Estimation of the flatfield correction factor on the RY lup dataset. b–e: regions of interest of (a). 

Fig. A.4. Identified defective pixels in the RY lup dataset. White: Pixels out of the lenslet array field. Green: pixels flagged by the median filter on the dark acquisitions. Red: pixels flagged by their anomalous variance in the dark acquisitions. Blue: pixels flagged by the median filter on the flatfield acquisitions. Dark blue: pixels flagged by their anomalous variance in the flatfield acquisitions. Grey: unflagged pixels. b–e: regions of interest of (a). 

Fig. A.5. Estimation of the noise model for the HR 8799 (a) and RY lup (b) datasets. The model is fitted on the median fluxes and the median variances of the different calibration files: the dark currents with the shutter closed (red) and open (green) and the flatfields (blue). The error bars are obtained by the median absolute deviation on all the sensor pixels for each acquisition set. The grey law is fitted without taking into account the errors. The black law is fitted by accounting for the errors. The red and blue insets are zooms on regions of low incident fluxes.

Fig. C.1. Calibration of the spectral projection for the HR 8799 dataset (square root stretch, colour bar: adu). a: averaged wavelength calibration acquisition y^{w}. b: averaged spectral flatfield calibration acquisition y^{s}. c: zoom of (a). The found positions for the calibration wavelengths are highlighted by coloured dots. The black circles are proportional to the size of the Gaussian patterns. d: simulation of the wavelength calibration by the projection model. e: zoom of (b). The positions of the wavelengths simulated in the forward model are highlighted by coloured dots. f: simulation of the spectral flat by the projection model. g: residuals of the first estimation of the wavelength calibration. h: final residuals of the wavelength calibration. i: residuals of the first estimation of the spectral flat calibration. j: final residuals of the spectral flat calibration. k: histogram of the residuals of the spectral flat calibration along the iterations. For comparison purposes, a Gaussian profile of standard deviation σ = 175 adu is plotted. 
