Issue 
A&A
Volume 649, May 2021



Article Number  A70  
Number of page(s)  7  
Section  Astronomical instrumentation  
DOI  https://doi.org/10.1051/0004-6361/202140354  
Published online  12 May 2021 
Focal-plane-assisted pyramid wavefront sensor: Enabling frame-by-frame optical gain tracking
^{1} Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France
e-mail: vincent.chambouleyron@lam.fr
^{2} DOTA, ONERA, Université Paris Saclay, 91123 Palaiseau, France
^{3} IFREMER, Laboratoire Détection, Capteurs et Mesures (LDCM), Centre Bretagne, ZI de la Pointe du Diable, CS 10070, 29280 Plouzané, France
Received: 15 January 2021
Accepted: 1 March 2021
Aims. With its high sensitivity, the pyramid wavefront sensor (PyWFS) is becoming an advantageous sensor for astronomical adaptive optics (AO) systems. However, this sensor exhibits significant nonlinear behaviours leading to challenging AO control issues.
Methods. In order to mitigate these effects, we propose to use, in addition to the classical pyramid sensor, a focal plane image combined with a convolutive description of the sensor to rapidly track the PyWFS nonlinearities, the so-called optical gains (OG).
Results. We show that this additional focal plane imaging path only requires a small fraction of the total flux while providing a robust estimate of the PyWFS OG. Finally, we demonstrate the gain that our method brings with specific examples of bootstrapping and handling non-common path aberrations.
Key words: instrumentation: adaptive optics / telescopes
© V. Chambouleyron et al. 2021
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The pyramid wavefront sensor (PyWFS), first proposed by Ragazzoni (1996), is an optical device used to perform wavefront sensing. Inspired by the Foucault knife test, the PyWFS is a pupil-plane wavefront sensor performing optical Fourier filtering with a four-sided glass pyramid located at the focal plane. The purpose of this glass pyramid is to split the electromagnetic (EM) field into four beams, producing four differently filtered images of the entrance pupil. This filtering operation converts phase information at the entrance pupil into amplitude at the pupil plane, where a quadratic sensor is used to record the signal (Vérinaud 2004; Guyon 2005). Recently, the PyWFS has gained the interest of the astronomical community because it offers a higher sensitivity than the classical Shack-Hartmann wavefront sensor (WFS) commonly used in adaptive optics (AO) systems (Esposito & Riccardi 2001). However, the PyWFS exhibits nonlinearities that prevent a simple relation between the incoming phase and the measurements, leading to control issues in the AO loop. Previous studies (Korkiakoski et al. 2008; Deo et al. 2019a) have demonstrated that one of the most striking effects of this undesirable behaviour is a time-averaged, frequency-dependent loss of sensitivity when the PyWFS works in the presence of non-zero phase. This detrimental effect can be mitigated by providing an estimation of the so-called optical gains (OG), which are a set of scalar values encoding the loss of sensitivity with respect to each component of the modal basis. The goal of this paper is to present a novel way of measuring the OG. In the first section we introduce the concept of the linear parameter-varying system (LPVS) to describe the PyWFS, which opens the possibility of estimating the OG frame by frame instead of considering a time-averaged quantity.
In the second section, we present a practical implementation of the method, enabling frame-by-frame OG tracking. Finally, we illustrate this OG tracking strategy in the context of closed-loop bootstrapping and the handling of non-common path aberrations (NCPAs).
2. PyWFS seen as a linear parametervarying system
2.1. PyWFS nonlinear behaviour and optical gains
In the following, we call s the output of the PyWFS. This output can be defined in different ways. The two main definitions are called ‘full frame’ or ‘slope maps’. In the first case, s is obtained by recording the full image of the WFS camera, for which a reference image corresponding to a reference phase has been removed (Fauvarque et al. 2016). In the second case, the WFS image is processed to reduce the useful information to two pupil maps usually called ‘slope maps’ (Ragazzoni 1996). The work presented here remains valid for full-frame or slope-map computation, and we decided to use the full-frame definition throughout.
When described with a linear model, the PyWFS outputs are linked with the incoming phase ϕ through an interaction matrix called M. This interaction matrix can be built with a calibration process that consists of sending a set of phase maps (usually a modal or a zonal basis) to the WFS with the deformable mirror (DM) and then recording the derivative δs(ϕ_{i}) of the PyWFS response for each component of the basis (Fig. 1). This operation is most commonly performed with the so-called push-pull method, consisting of successively sending each mode with a positive and then a negative amplitude a to compute the slope of the linear response,

δs(ϕ_{i}) = (s(aϕ_{i}) − s(−aϕ_{i}))/2a. (1)
Fig. 1. Sketch of the PyWFS response curve for a given mode ϕ_{i}. The push-pull method around a null phase consists of computing the slope of this curve at ϕ_{i} = 0. 
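As a purely illustrative numerical sketch of this push-pull calibration, the following Python snippet builds an interaction matrix column by column. The sensor here is hypothetical: a small fixed linear map followed by a tanh saturation stands in for the PyWFS, and a trivial two-mode basis stands in for the modal basis.

```python
import numpy as np

def pywfs_response(phi):
    """Toy stand-in for the PyWFS response (hypothetical): a fixed linear
    map followed by tanh, which mimics loss of linearity at large phase."""
    A = np.array([[1.0, 0.3], [-0.2, 0.8], [0.5, -0.4]])  # fixed "optics"
    return np.tanh(A @ phi)

def push_pull_slope(mode, a=1e-3, phi0=None):
    """Push-pull slope of the response for one mode, measured around phi0."""
    if phi0 is None:
        phi0 = np.zeros_like(mode)
    return (pywfs_response(a * mode + phi0)
            - pywfs_response(-a * mode + phi0)) / (2 * a)

# Interaction matrix: one push-pull slope per mode of the basis
modes = np.eye(2)  # trivial 2-mode basis for the sketch
M = np.column_stack([push_pull_slope(m) for m in modes])
```

For a small amplitude a and a null working phase, the recovered columns coincide with the linear part of the toy sensor, as expected from the push-pull definition.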
The interaction matrix (also called the Jacobian matrix) is then the collection of the slopes recorded for all modes,

M = (δs(ϕ_{1}), δs(ϕ_{2}), …, δs(ϕ_{N})). (2)
In this linear framework, we can then link the measured phase with the output of the PyWFS by the relation

s(ϕ) = Mϕ. (3)
This matrix-computation formalism has interesting properties that are required in the AO control loop. However, the PyWFS exhibits substantial nonlinearities that make the equation above only partially true. Mathematically, the deviation from linearity is expressed with the following inequality: s(aϕ_{i} + ϕ) ≠ s(aϕ_{i}) + s(ϕ), where ϕ is a given non-null phase. When working around ϕ, the slope of the linear response of the sensor is therefore modified,

δs_{ϕ}(ϕ_{i}) = (s(aϕ_{i} + ϕ) − s(−aϕ_{i} + ϕ))/2a. (4)
During AO observation, the sensor works around a non-null phase ϕ corresponding to the residual phase of the system. As a consequence of Eq. (4), the response of the system is modified. Previous studies suggested updating the response slopes to mitigate this effect, relying on two main concepts. The first concept is the stationarity of the residual phases (Rigaut et al. 1998). For a given system and fixed parameters (seeing, noise, etc.), we can compute an averaged response slope for each mode. It has been proven (Fauvarque et al. 2019) that under this stationarity hypothesis, the averaged response slope depends on the statistics of the residual phases through their structure function D_{ϕ}: ⟨δs_{ϕ}(ϕ_{i})⟩ = δs_{Dϕ}(ϕ_{i}). The second concept is the diagonal approximation (Korkiakoski et al. 2008). This approximation assumes no cross-talk between the modes, which means that the response slopes are only modified by a scalar value for each mode. This value is known as the OG. We then have δs_{Dϕ}(ϕ_{i}) = t_{i}^{Dϕ} δs(ϕ_{i}), where t_{i}^{Dϕ} is the OG associated with the mode i for a given residual phase perturbation statistics characterised by the structure function D_{ϕ}. In this approximation, the shape of the response is left unchanged.
Finally, the interaction matrix is updated by multiplying it by a diagonal matrix T_{Dϕ}, called the OG matrix, whose diagonal components are the t_{i}^{Dϕ},

M_{Dϕ} = M T_{Dϕ}. (5)
We used the scalar product presented in Chambouleyron et al. (2020) to calculate the diagonal components of this matrix,

t_{i}^{Dϕ} = ⟨δs_{Dϕ}(ϕ_{i}), δs(ϕ_{i})⟩ / ⟨δs(ϕ_{i}), δs(ϕ_{i})⟩. (6)
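This scalar-product definition of a per-mode optical gain can be illustrated with the same kind of toy model as before. Everything here is hypothetical (a tanh-saturating sensor, a two-mode basis, one random residual phase); the gain of each mode is the projection of the slope measured around the residual phase onto the calibration slope, normalised by the calibration slope's norm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor(phi, A=np.array([[1.0, 0.3], [-0.2, 0.8], [0.5, -0.4]])):
    return np.tanh(A @ phi)   # toy saturating sensor (hypothetical)

def slope(mode, phi0, a=1e-3):
    """Push-pull slope of the toy sensor around the working phase phi0."""
    return (sensor(a * mode + phi0) - sensor(-a * mode + phi0)) / (2 * a)

zero = np.zeros(2)
modes = np.eye(2)
phi_res = rng.normal(0.0, 0.8, size=2)   # one residual-phase realisation

# Optical gain of each mode: projection of the on-sky slope onto the
# calibration slope, normalised by the calibration slope's squared norm.
og = [slope(m, phi_res) @ slope(m, zero) / (slope(m, zero) @ slope(m, zero))
      for m in modes]
```

Because the toy sensor saturates, the gains come out strictly below unity, reproducing the sensitivity loss the text describes.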
Several approaches to computing this matrix in practice can be found in the literature. They can be split into two categories: those that are invasive for the science path, consisting of sending probe modes to the DM to retrieve the OG (Esposito et al. 2015; Deo et al. 2019a), and those that rely on the knowledge of the statistics of the residual phases, through the telemetry data, to estimate the OG (Chambouleyron et al. 2020). In all the proposed methods, the OG can be seen as an evaluation of a time-averaged loss of sensitivity of the sensor. Being able to accurately retrieve the OG allows compensating for this sensitivity loss.
2.2. LPVS approach
As described by Eq. (4), the PyWFS outputs are affected by the incoming phase. The time-averaged definition of the interaction matrix M_{Dϕ}, despite its good properties, only captures a statistical behaviour of the PyWFS. We propose a framework that addresses the nonlinearities in real time, with an interaction matrix that is updated at every frame. To do so, we first assumed that the diagonal hypothesis holds. Then, inspired by the field of automatic control, the PyWFS is now considered as an LPVS (Rugh & Shamma 2000): its linear behaviour, encoded by the interaction matrix, is modified at each frame according to the incoming phase. Under this framework, the new expression of the PyWFS output can be written as

s(ϕ) = M T_{ϕ} ϕ, (7)
where T_{ϕ} is the OG matrix for the given measured phase ϕ. Assuming the diagonal approximation holds, we can extract T_{ϕ} from the interaction matrix computed around ϕ,

t_{i}^{ϕ} = ⟨δs_{ϕ}(ϕ_{i}), δs(ϕ_{i})⟩ / ⟨δs(ϕ_{i}), δs(ϕ_{i})⟩. (8)
For a given system, repeating this operation on a set of different phases will eventually lead to the time-averaged definition of the OG matrix,

⟨T_{ϕ}⟩ = T_{Dϕ}. (9)
To illustrate the difference between the time-averaged response and a single realisation, we performed the simulation presented in Fig. 2. These simulations were made with parameters consistent with an 8-m telescope and for two seeing conditions. All results shown in this paper rely on end-to-end simulations performed with the OOMAO MATLAB toolbox (Conan & Correia 2014). The exact conditions and parameters are summarised in Table 1. In the simulation, we can compute the exact PyWFS response by freezing the entrance phase and performing a calibration process around this working point. We therefore computed T_{ϕ} for 1000 residual phase realisations, and show the OG variability for two seeing conditions in Fig. 2. This represents an optimistic context in which the Fried parameter r0 is fixed throughout the simulation. When T_{ϕ} is estimated with a time-averaging strategy, the errors on the OG corresponding to a given residual phase can exceed ten percent (the OG curves exhibiting the maximum deviation from the averaged value are highlighted in Fig. 2). This result illustrates the potential gain of performing a frame-by-frame estimation of the OG instead of a time-averaged one. In the next section, a practical means of performing this frame-by-frame gain-scheduling operation is presented.
Fig. 2. Variability of closed-loop OG. For given system parameters we compute T_{ϕ} for 1000 phase realisations in two seeing configurations: r0 = 18 cm and r0 = 12 cm. The variability of the frame-by-frame OG is shown in the histogram in the right panel and by the highlighted extreme OG curves for each r0 case. 
Simulation parameters.
3. Gainscheduling camera
3.1. Principle
Obtaining an estimate of the OG values (the diagonal of T_{ϕ}) requires additional information describing the working point of the PyWFS at each moment, independently of the PyWFS measurements themselves. To this end, a specific sensor, called the gain-scheduling camera (GSC), is implemented.
Empirically, it is well known that the PyWFS sensitivity depends on the structure of the EM field when it reaches the pyramid mask: the more this field is spread over the mask, the less sensitive the PyWFS. In addition, because sensitivity and dynamic range are opposing properties, a well-known technique used to increase the PyWFS dynamic range consists of modulating the EM field around the pyramid apex. In order to keep track of the sensing regime, we therefore suggest probing this EM field by acquiring a focal plane image synchronously with the PyWFS data. This can be achieved by placing a beam splitter before the pyramid mask and recording the signal with a focal plane camera that has the same field of view as the pyramid (Fig. 3).
Fig. 3. Gain-scheduling camera: a focal plane camera that records the intensities of the modulated EM field with the same field of view as the pyramid. This operation requires using part of the flux from the pyramid path. 
In this configuration, the focal plane camera, hereafter called the GSC, records the intensity of the modulated EM field seen by the pyramid. By using the same exposure time and frame rate as the WFS camera, the signal observed is an instantaneous AO-corrected point-spread function (PSF) convolved with the modulation circle. This is illustrated in Fig. 4, where the modulation circle is shown on the left, and the replicas of this modulation circle by the focal plane speckles are shown on the right. Denoting by Ω_{ϕ} the GSC signal, we can therefore write

Ω_{ϕ} = ω ⋆ PSF_{ϕ}, (10)
Fig. 4. Left: gain-scheduling camera image for a flat wavefront. The white circle is produced by the tip-tilt modulation of the pyramid signal. Right: gain-scheduling camera image for a given closed-loop residual phase. 
where ω is the modulation weighting function. This function can be thought of as a map of the incoherent positions reached by the EM field on the pyramid during one integration time of the WFS camera. This function is thus a circle for the circularly modulated PyWFS (Fig. 5 right). Ω_{ϕ} has to be understood as the effective modulation weighting function: The phase to be measured produces its own modulation, leading to PyWFS loss of sensitivity, and the GSC is therefore a way to monitor this additional modulation.
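The GSC signal described here, namely the instantaneous PSF convolved with the modulation weighting function, can be emulated in a few lines. The sketch below is purely illustrative (array sizes, ring radius, and the random phase screen are arbitrary choices, not the paper's OOMAO simulation): it builds ω as a thin ring and convolves it with a speckled PSF via FFTs.

```python
import numpy as np

N = 64
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

# Modulation weighting function omega: a thin circle of radius r_mod (px)
r_mod = 6
ring = (np.abs(np.hypot(xx, yy) - r_mod) < 0.5).astype(float)
ring /= ring.sum()

# Instantaneous PSF from a small random phase over a circular pupil
rng = np.random.default_rng(1)
pupil = (np.hypot(xx, yy) < N // 4).astype(float)
phase = 0.3 * rng.standard_normal((N, N))
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase)))) ** 2
psf /= psf.sum()

# GSC signal: PSF convolved with the modulation weighting function
omega_gsc = np.real(np.fft.ifft2(
    np.fft.fft2(psf) * np.fft.fft2(np.fft.ifftshift(ring))))
```

The result reproduces the behaviour of Fig. 4: every focal plane speckle is replicated as a copy of the modulation circle, and the total energy of the frame is conserved by the convolution.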
Fig. 5. Left: arg(m), the shape of the pyramid phase mask in the focal plane. Right: ω, the modulation weighting function: Different positions reached by the EM field during one integration time. 
The next step is to link this focal plane information with the PyWFS optical gains and merge the GSC and PyWFS signals into one final set of WFS outputs. In a previous work (Chambouleyron et al. 2020), we demonstrated that the convolutive model of the PyWFS developed by Fauvarque et al. (2019) can be used to predict the averaged OG if the statistical behaviour of the residual phases (through the knowledge of their structure function) is known. In Eq. (11) we recall the expression of the PyWFS output in this convolutive framework,

s(ϕ) = IR ⋆ ϕ, (11)
where IR is the impulse response of the sensor and the star denotes the convolutive product. In the framework of the infinite-pupil approximation, the impulse response around a flat wavefront can be expressed through two quantities, the mask complex function m and the modulation function ω (Fig. 5),

IR = 2 Im(conj(m̂)(m̂ ⋆ ω)), (12)

where m̂ denotes the Fourier transform of m and Im the imaginary part.
We propose here to combine this model with the signal delivered by the GSC in order to compute the impulse response IR_{ϕ} of the PyWFS around each individual phase realisation. To do this, we replaced ω by the GSC data as described in Eq. (13),

IR_{ϕ} = 2 Im(conj(m̂)(m̂ ⋆ Ω_{ϕ})). (13)
This new way to compute the impulse response can be considered as using the impulse response given for an infinite pupil system (Eq. 12) for which we replaced the modulation weighting function by the energy distribution at the focal plane, including both the modulation and the residual phase.
Now that we are able to compute IR_{ϕ} at each frame, we can estimate the OG matrix through the following computation of its diagonal components, as described in Chambouleyron et al. (2020),

t_{i}^{ϕ} = ⟨IR_{ϕ} ⋆ ϕ_{i}, IR_{calib} ⋆ ϕ_{i}⟩ / ⟨IR_{calib} ⋆ ϕ_{i}, IR_{calib} ⋆ ϕ_{i}⟩, (14)
where IR_{calib} is the impulse response computed for the calibration state, most commonly for ϕ = 0 (Fig. 4, left).
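This per-mode gain, a ratio of scalar products between modes convolved with the two impulse responses, can be sketched numerically. Both kernels below are hypothetical stand-ins (the on-sky impulse response is simply a blurred copy of the calibration one, mimicking the sensitivity loss); in the actual method they would come from Eqs. (12) and (13).

```python
import numpy as np

def conv2(a, b):
    """Circular 2-D convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

N = 32
yy, xx = np.mgrid[0:N, 0:N]
mode = np.sin(2 * np.pi * 3 * xx / N)          # one spatial mode phi_i

# Hypothetical impulse responses: the on-sky one (ir_phi) is a blurred
# version of the calibration one (ir_calib), mimicking sensitivity loss.
ir_calib = np.zeros((N, N))
ir_calib[0, 1] = 1.0
ir_calib[0, -1] = -1.0
blur = np.zeros((N, N))
blur[[0, 0, 1, -1], [1, -1, 0, 0]] = 0.25
ir_phi = conv2(ir_calib, blur)

# Eq.-style gain: projection of the on-sky response of the mode onto its
# calibration response, normalised by the calibration response's energy.
num = np.vdot(conv2(ir_phi, mode), conv2(ir_calib, mode))
den = np.vdot(conv2(ir_calib, mode), conv2(ir_calib, mode))
og_i = float(np.real(num / den))
```

For this single-frequency mode the ratio reduces to the blur's transfer function at that frequency, a value strictly between 0 and 1, which is the expected signature of an optical gain.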
3.2. Accuracy of the estimation
It is now possible to test the accuracy of our estimator by comparing the GSC-based estimate T̂_{ϕ} with the true T_{ϕ}. To do this, we computed the true T_{ϕ} through end-to-end simulations following the ideal procedure described in the section above: an interaction matrix was computed around each given residual phase, from which the OG matrix was derived (Eq. (8)). This provides the ground truth to which the gains estimated with the GSC are compared.
First results are shown in Fig. 6 for different seeing and modulation conditions. As illustrated in Fig. 6, the real and estimated OG agree well, demonstrating the accuracy of the proposed method.
Fig. 6. OG estimates for given residual phases obtained with the GSC, compared with end-to-end simulations for different parameters (same framework as in Fig. 2). OL: open-loop and CL: closed-loop residual phases. Left: r0 = 12 cm and r_mod = 3λ/D. Middle: r0 = 12 cm and r_mod = 5λ/D. Right: r0 = 18 cm and r_mod = 3λ/D. 
For the parameters used in our simulations, the estimation remains accurate regardless of whether we are in open loop or closed loop. The ripples seen in the ground-truth OG curves are smoothed in the convolutive framework: the convolutive product given in Eq. (11) tends to smooth the output of the PyWFS even when the impulse response is computed around a non-zero phase. Figure 6 also shows a slight deviation for low-order modes in the low-modulation regime with a strong entrance phase (open loop here).
3.3. Robustness to noise
The GSC has proven to be a reliable way to perform fast OG tracking, but it requires using a fraction of the photons available in the sensing path. This inevitably competes with the gain in sensitivity provided by the PyWFS. The goal of this section is therefore to demonstrate that our GSC approach is only weakly affected by photon noise and thus requires only a small number of photons while performing an accurate frame-by-frame OG estimation. To this end, we propose to inject noise into the data delivered by the GSC and to probe the effect on the OG estimation.
We ran simulations with the same parameters as described above. The sensing path works around the central wavelength λ_{c} = 550 nm with a bandwidth Δλ = 90 nm and an ideal transmission of 100%. The exposure time of the GSC is 2 ms (the frame rate of the loop), and 10% of the photons are used by the GSC camera. The GSC pixel size corresponds to Shannon sampling of the diffraction-limited PSF. In this configuration, the data recorded by the GSC for a given closed-loop residual phase (r0 = 14 cm, r_mod = 3λ/D) are presented in Fig. 7 (top) for (a) a noise-free system, (b) a guide-star magnitude of 8, (c) a guide-star magnitude of 10, and (d) a guide-star magnitude of 12. For the three noisy configurations (mag = 8, 10, and 12), we estimated the OG for 500 realisations of the noise. The results are given in Fig. 7 (bottom). The introduction of noise leads to an increased OG estimation error, which logically scales with the signal-to-noise ratio (S/N). However, the GSC approach still performs a satisfactory OG estimation even for faint guide stars. For even fainter guide stars, the noise effect might be mitigated by integrating the GSC data over several frames; a trade-off between noise propagation and OG error would then be required.
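The dependence of the GSC frame quality on photon flux can be checked with a toy experiment: draw Poisson realisations of a normalised GSC frame at different guide-star fluxes and compare the resulting frame errors. The ring-only frame and the two flux levels below are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

# Normalised noise-free GSC frame: here simply the modulation ring
ring = (np.abs(np.hypot(xx, yy) - 6) < 1.0).astype(float)
gsc = ring / ring.sum()

def noisy_gsc(n_photons):
    """Photon-noise (Poisson) realisation of the GSC frame for a flux."""
    counts = rng.poisson(gsc * n_photons)
    return counts / max(counts.sum(), 1)

# Total absolute error on the normalised frame shrinks with the flux,
# roughly as 1/sqrt(n_photons) for Poisson statistics.
err = {n: np.abs(noisy_gsc(n) - gsc).sum() for n in (1_000, 100_000)}
```

Consistent with the trend reported above, the error on the frame, and hence on any OG estimate built from it, decreases as the number of collected photons grows.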
Fig. 7. Top: closed-loop GSC images for different entrance fluxes. In the chosen configuration, the exposure time is 2 ms, and we collect 10% of the photons in the sensing path. a: infinite number of photons. b: guide-star magnitude = 8 (n_{ph} = 55 000 on the GSC). c: guide-star magnitude = 10 (n_{ph} = 9000 on the GSC). d: guide-star magnitude = 12 (n_{ph} = 1400 on the GSC). Bottom: OG estimates for the noise-free system compared with the three noisy configurations. 
These results are crucial because they demonstrate that the GSC can be used with only a small fraction of the WFS photons, leading to a limited impact on the PyWFS S/N. We therefore have a way to estimate the OG, and to some extent increase the linearity of the sensor, while having a reduced effect on its sensitivity.
3.4. GSC spatial sampling
Another aspect is the sampling of the GSC detector with respect to the modulated PSF. Undersampling would reduce the number of pixels required by the GSC, and consequently the practical implementation complexity. To test this, we ran our algorithm for various samplings of the GSC in order to assess the effect on the OG estimation. The results for a given closed-loop residual phase (r0 = 14 cm, r_mod = 3λ/D) are given in Fig. 8. The sampling of the PSF can go below the Shannon sampling (2 pixels per λ/D) without a significant effect on the estimate. This result depends on the modulation radius r_mod used, and we note that the OG estimate is not affected as long as the pixel size d_px satisfies the Shannon criterion with respect to the modulation radius.
Fig. 8. Effect of the GSC sampling on the OG estimate for a given closed-loop residual phase (r0 = 14 cm, r_mod = 3λ/D). Top: images delivered by the GSC with different samplings. Bottom: effect on the OG estimate. 
When this criterion is not respected, the undersampled modulation circle is seen as a disc (Fig. 8), which affects the OG estimate for loworder modes.
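The effect described above can be illustrated by binning a finely sampled modulation ring: as long as the binned pixel stays small compared to the modulation radius the ring remains resolved, whereas aggressive binning collapses it toward a disc. The pixel counts and binning factors below are illustrative, not the paper's configuration.

```python
import numpy as np

N = 128
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r_mod_px = 16                                  # modulation radius, fine pixels
ring = (np.abs(np.hypot(xx, yy) - r_mod_px) < 1.0).astype(float)

def rebin(img, k):
    """Bin a square 2-D image by a factor k (k must divide the size)."""
    n = img.shape[0] // k
    return img.reshape(n, k, n, k).sum(axis=(1, 3))

# Binning by 4 keeps several pixels across the modulation radius:
# the ring stays resolved, with a dark centre.
coarse_ok = rebin(ring, 4)     # 32x32, radius ~4 px
# Binning by 16 leaves ~1 px of radius: the circle fills the central
# pixel and is seen as a disc.
coarse_bad = rebin(ring, 16)   # 8x8
```

In the well-sampled case the central pixel receives no ring flux, while in the undersampled case it does, which is exactly the circle-to-disc degradation that biases the low-order OG estimate.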
As a concrete example, a PyWFS for the Extremely Large Telescope (ELT) working at λ = 800 nm with a field of view of 2 arcsec and a sampling of Shannon/4 on the GSC would require a GSC camera with no more than 250 × 250 px. This limited size allows the use of low-readout-noise cameras for the GSC, remaining in a photon-noise-limited regime.
To conclude this section, we have shown that it is possible to perform fast OG tracking by using an image of the modulated EM field at the focal plane. Our method uses the so-called GSC, which provides unbiased information on the working point of the PyWFS, and a subsequent OG estimate based on a convolutive model. We demonstrated that the GSC can work with a limited number of photons and pixels, which makes the practical implementation fully feasible. The next section is dedicated to quantifying the performance benefits of OG fast tracking with the GSC.
4. Application to specific AO control issues: Bootstrapping and NCPA handling
As shown in the previous sections, the GSC allows tracking the PyWFS OG frame by frame and compensating for these nonlinearities. We illustrate here two possible situations in which the GSC can significantly improve the performance: bootstrapping and NCPA handling.
4.1. Bootstrapping
During the AO loop bootstrap, the PyWFS faces large-amplitude wavefronts (due to uncorrected turbulence), leading to significant nonlinearities that may prevent the loop from closing. This step is therefore critical because it corresponds to the moment at which the OG effect is strongest. Monitoring the OG frame by frame in order to update the reconstructor helps to close the AO loop. Because of the timescales involved in the AO loop bootstrap, this problem cannot be tackled by the other OG handling techniques previously studied in the literature: the best solutions proposed so far necessarily endure delays of a few frames (Deo et al. 2019b). Here, we can estimate the OG corresponding to the current measurement frame, which is an unprecedented feature. We show different images delivered by the GSC during the bootstrap operation in Fig. 9, together with the corresponding estimated OG, compared with the end-to-end computation giving the true OG values. While the loop is closing, the OG evolve from low values to higher values, indicating that the residual phases reaching the PyWFS decrease: the loop is closing, and the DM is starting to correct the atmospheric aberrations. Our technique performs a precise OG follow-up during all the steps of the process, at the frame rate of the loop.
Fig. 9. Bootstrapping with the help of the GSC. Top: images delivered by the GSC at different times; t = 0 represents the beginning of the servo loop. The frame rate of the AO loop is fixed at 2 ms, with r0 = 12 cm and r_mod = 3λ/D. Bottom: OG estimates during bootstrapping for the corresponding images above. Lower OG correspond to higher residuals on the pyramid, hence to the first frames of loop closure. 
We can use our frame-by-frame OG estimation to update the reconstructor while the loop is closing. The reconstructor is the pseudo-inverse of the interaction matrix. We can therefore relate it to the OG matrix and the calibration interaction matrix through the following formula:

M_{ϕ}^{†} = (M_{calib} T̂_{ϕ})^{†} = T̂_{ϕ}^{−1} M_{calib}^{†}.
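A minimal numerical check of this reconstructor update (with a random matrix standing in for the calibration interaction matrix and an arbitrary diagonal OG matrix, both hypothetical) shows that rescaling the calibration matrix by the estimated OG before pseudo-inversion removes the OG bias from the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_modes = 12, 5
M_calib = rng.standard_normal((n_meas, n_modes))   # calibration matrix
t_hat = np.array([0.9, 0.7, 0.6, 0.5, 0.4])        # frame's estimated OG
T_hat = np.diag(t_hat)

# OG-compensated reconstructor: pseudo-inverse of the rescaled matrix.
# Since T_hat is diagonal and invertible, this equals T^-1 @ pinv(M).
rec = np.linalg.pinv(M_calib @ T_hat)

# A measurement produced with the "on-sky" sensitivity M_calib @ T_hat
# is then reconstructed without the OG bias:
phi_true = rng.standard_normal(n_modes)
s = M_calib @ T_hat @ phi_true
phi_hat = rec @ s
```

Without the OG rescaling, the reconstructed modal coefficients would each be attenuated by the corresponding gain, which is precisely what prevents the loop from closing efficiently during bootstrap.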
By doing so, we show that it is possible to close the loop faster. A simulation example comparing a loop bootstrap with and without OG compensation by the GSC is presented in Fig. 10. Although the benefit in this specific example is limited in practice, it shows how fast OG tracking, combined with the corresponding update of the reconstructor, can be applied to mitigate all types of short-timescale residual variations, such as seeing bursts.
Fig. 10. OG-compensated bootstrap vs. OG-uncompensated bootstrap. 
4.2. NCPA handling
Handling NCPA is emerging as one of the main issues caused by the PyWFS OG, as was demonstrated for instance on the Large Binocular Telescope (Esposito et al. 2015). How to handle this issue when an accurate OG estimation is available was discussed in a previous paper (Chambouleyron et al. 2020). We briefly recall the main problem: the NCPA reference measurements are recorded around a diffraction-limited PSF and need to be rescaled by the OG while working on sky, s(ϕ_{NCPA}) → s_{ϕ}(ϕ_{NCPA}). To compute s_{ϕ}(ϕ_{NCPA}), we need an estimate of T_{ϕ},

s_{ϕ}(ϕ_{NCPA}) = M_{calib} T̂_{ϕ} ϕ_{NCPA}.
We show here the results of a simulation in which we used the GSC to handle NCPA in the AO loop. We retained the same simulation parameters as before (caption of Fig. 2). The PyWFS modulation radius was r_mod = 3λ/D and r0 = 14 cm. The interaction matrix was computed around a flat wavefront. We injected 200 nm rms of NCPA into our system, distributed with a f^{−2} power law over the first 25 KL modes (excluding tip-tilt and focus). In this configuration and for a flat wavefront in the science path (H band), the PSF in the wavefront sensing path (V band) is given in Fig. 11a, and the signal Ω_{ϕNCPA} seen by the GSC is shown in Fig. 11b.
Fig. 11. a: PSF on the pyramid apex when a flat wavefront is set in the science path. b: GSC signal when there are no residual phases and for a flat wavefront in the science path. c: GSC signal during closed-loop operation around NCPA. 
We then proceeded in the following way: We closed the loop on the turbulence, and after 5 s of closedloop operation, the NCPA was added to the system. These NCPAs were then handled with different configurations, and the results were compared with the NCPAfree case. Figure 12 illustrates the results. The main conclusions from Fig. 12 are listed below.

When the NCPA is not compensated for (orange plot), the loop converges toward a flat wavefront in the sensing path. This induces a high loss of Strehl ratio (SR) in the science path, corresponding to the NCPA.

When a reference map s(ϕ_{NCPA}) is used in the PyWFS measurement without updating it by the OG, the loop diverges (the so-called NCPA catastrophe, yellow plot). This can be explained as follows: because of the OG, the PyWFS introduces too much NCPA, creating an even more strongly aberrated wavefront. This aberrated wavefront increases the OG effect at the next frame, which further increases the aberration, and so on, quickly causing the loop to diverge.

When the reference map is compensated for by the time-averaged OG computed over the first 5 s of the loop from a long-exposure image of the GSC (purple plot), no NCPA catastrophe appears, and the final performance reaches an averaged SR of 82%.

When the reference map is compensated for by the OG computed at each frame, using the GSC camera (green plot), the final performance reaches an averaged SR of 86%. This solution is better than the previous one because we monitor the OG at each frame, and we also take the effect of the NCPA themselves on the OG into account. To illustrate this, we show the GSC image for a given closedloop residual when the NCPA is compensated for in Fig. 11c.
Fig. 12. Strehl ratio for different cases of NCPA handling. In this simulation context, the case for which we compensate for NCPA without scaling by the OG leads to a diverging loop. 
This study is a clear demonstration that our strategy can solve the AO control issues due to the PyWFS OG. It also shows that even if the OG are compensated for on a frame-by-frame basis, the ultimate performance (without NCPA) cannot be reached. This limitation is mainly due to the LPVS approach, which is characterised by a linear description of the whole sensing problem. Improving the performance further would probably require considering nonlinear solutions (second- or third-order descriptions), which goes beyond the simple matrix-computation framework.
5. Conclusion
The PyWFS is a complex optical device exhibiting strong nonlinearities. One way to deal with this behaviour while keeping a matrix-computation formalism is to consider the PyWFS as an LPVS. To probe the sensing regime of this system at each measurement, a gain-scheduling loop needs to be implemented that gives information on the sensor regime at every moment. From this perspective, the OG compensation can be deployed on a frame-by-frame basis. We provided here an innovative solution to this end: the GSC combined with a convolutive model. As such, the PyWFS data, synchronously merged frame by frame with the GSC data, can be thought of as a single WFS combining images from different light-propagation planes. It therefore provides an efficient way to compensate for nonlinearities at each AO loop frame without any delay, and it significantly improves the final performance of the AO loop in terms of sensitivity, dynamic range, and robustness. It also allows unambiguously disentangling the effect of the OG from the full AO loop gain, which is a fundamental advantage for NCPA compensation. The GSC solution will now be implemented on the LOOPS AO facility bench at LAM for an experimental demonstration (Janin-Potiron et al. 2019).
Acknowledgments
This work benefited from the support of the WOLF project ANR-18-CE31-0018 of the French National Research Agency (ANR). It has also been prepared as part of the activities of OPTICON H2020 (2017-2020) Work Package 1 (Calibration and test tools for AO assisted E-ELT instruments). OPTICON is supported by the Horizon 2020 Framework Programme of the European Commission (Grant number 730890). The authors acknowledge the support of the Action Spécifique Haute Résolution Angulaire (ASHRA) of CNRS/INSU co-funded by CNES. Vincent Chambouleyron's PhD is co-funded by the "Région Sud" and ONERA, in collaboration with First Light Imaging. Finally, part of this work is supported by the LabEx FOCUS ANR-11-LABX-0013.
References
Chambouleyron, V., Fauvarque, O., Janin-Potiron, P., et al. 2020, A&A, 644, A6
Conan, R., & Correia, C. 2014, Proc. SPIE, 9148, 91486C
Deo, V., Gendron, É., Rousset, G., et al. 2019a, A&A, 629, A107
Deo, V., Rozel, M., Bertrou-Cantou, A., et al. 2019b, in Adaptive Optics for Extremely Large Telescopes 6, Québec, Canada
Esposito, S., & Riccardi, A. 2001, A&A, 369, L9
Esposito, S., Pinna, E., Puglisi, A., et al. 2015, in Adaptive Optics for Extremely Large Telescopes 4, Conference Proceedings, 1
Fauvarque, O., Janin-Potiron, P., Correia, C., et al. 2019, J. Opt. Soc. Am. A, 36, 1241
Fauvarque, O., Neichel, B., Fusco, T., Sauvage, J.-F., & Girault, O. 2016, Optica, 3, 1440
Guyon, O. 2005, ApJ, 629, 592
Janin-Potiron, P., Chambouleyron, V., Schatz, L., et al. 2019, Adaptive Optics with Programmable Fourier-based Wavefront Sensors: A Spatial Light Modulator Approach to the LOOPS Testbed
Korkiakoski, V., Vérinaud, C., & Le Louarn, M. 2008, in Adaptive Optics Systems, eds. N. Hubin, C. E. Max, & P. L. Wizinowich, Proc. SPIE, 7015, 1422
Ragazzoni, R. 1996, J. Mod. Opt., 43, 289
Rigaut, F. J., Veran, J. P., & Lai, O. 1998, in Adaptive Optical System Technologies, eds. D. Bonaccini, & R. K. Tyson, Proc. SPIE, 3353, 1038
Vérinaud, C. 2004, Opt. Commun., 233, 27
All Figures
Fig. 1. Sketch of the PyWFS response curve for a given mode ϕ_{i}. The push-pull method around a null phase consists of computing the slope of this curve at ϕ_{i} = 0.

Fig. 2. Variability of closed-loop OG. For given system parameters, we compute T_{ϕ} for 1000 phase realisations in two seeing configurations: r0 = 18 cm and r0 = 12 cm. The variability of the frame-by-frame OG is shown in the histogram in the right panel and by the highlighted extreme OG curves for each r0 case.

Fig. 3. Gain scheduling camera: a focal plane camera that records the intensities of the modulated EM field with the same field of view as the pyramid. This operation requires diverting part of the flux from the pyramid path.

Fig. 4. Left: gain scheduling camera image for a flat wavefront. The white circle is produced by the tip-tilt modulation of the pyramid signal. Right: gain scheduling camera image for a given closed-loop residual phase.

Fig. 5. Left: arg(m), the shape of the pyramid phase mask in the focal plane. Right: ω, the modulation weighting function, i.e. the different positions reached by the EM field during one integration time.

Fig. 6. OG estimates for given residual phases obtained with the GSC, compared with end-to-end simulations for different parameters (same framework as in Fig. 2). OL: open-loop and CL: closed-loop residual phases. Left: r0 = 12 cm and r_{mod} = 3λ/D. Middle: r0 = 12 cm and r_{mod} = 5λ/D. Right: r0 = 18 cm and r_{mod} = 3λ/D.

Fig. 7. Top: closed-loop GSC images for different entrance fluxes. In the chosen configuration, the exposure time is 2 ms and we collect 10% of the photons in the sensing path. a: Infinite number of photons. b: Guide star magnitude = 8 (n_{ph} = 55 000 on the GSC). c: Guide star magnitude = 10 (n_{ph} = 9000 on the GSC). d: Guide star magnitude = 12 (n_{ph} = 1400 on the GSC). Bottom: OG estimates for the noise-free system compared with the three noisy configurations.

Fig. 8. Effect of the GSC sampling on the OG estimate for a given closed-loop residual phase (r0 = 14 cm, r_{mod} = 3λ/D). Top: images delivered by the GSC with different samplings. Bottom: effect on the OG estimate.

Fig. 9. Bootstrapping with the help of the GSC. Top: images delivered by the GSC at different times; t = 0 marks the beginning of the servo loop. The frame rate of the AO loop remains fixed at 2 ms, with r0 = 12 cm and r_{mod} = 3λ/D. Bottom: OG estimates during bootstrapping for the corresponding images above. Lower OG correspond to higher residuals on the pyramid, hence to the first frames of loop closure.

Fig. 10. OG-compensated bootstrap vs. OG-uncompensated bootstrap.

Fig. 11. a: PSF on the pyramid apex when a flat wavefront is set in the science path. b: GSC signal when there are no residual phases and a flat wavefront is set in the science path. c: GSC signal during closed loop around NCPA.

Fig. 12. Strehl ratio for different cases of NCPA handling. In this simulation context, compensating for NCPA without scaling by the OG leads to a diverging loop.
