A&A, Volume 649, May 2021
Article Number: A158
Number of pages: 19
Section: Astronomical instrumentation
DOI: https://doi.org/10.1051/0004-6361/202038330
Published online: 01 June 2021
ELT-scale elongated LGS wavefront sensing: on-sky results^{⋆}
^{1}
Department of Physics, Durham University, South Road, Durham DH1 3LE, UK
email: lisa.f.bardou@durham.ac.uk
^{2}
LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université de Paris, 5 place Jules Janssen, 92195 Meudon, France
^{3}
European Southern Observatory, 85748 Garching, Germany
^{4}
INAF-OAR National Institute for Astrophysics, Via Frascati 33, 00078 Monte Porzio Catone, RM, Italy
^{5}
GEPI, Observatoire de Paris, Université PSL, CNRS, 5 Place Jules Janssen, 92195 Meudon, France
^{6}
First Light Imaging S.A.S., Europarc Sainte Victoire Bâtiment 6, Route de Valbrillant, Le Canet, 13590 Meyreuil, France
^{7}
Laboratoire d’Astrophysique de Marseille, 38 rue F. Joliot-Curie, 13388 Marseille Cedex 13, France
^{8}
Institut de Planétologie et d’Astrophysique de Grenoble, Université Grenoble Alpes, CS 40700, 38058 Grenoble Cedex 9, France
^{9}
German Aerospace Center (DLR), Institute of Communications and Navigation, 82234 Weßling, Germany
Received: 3 May 2020 / Accepted: 16 September 2020
Context. Laser guide stars (LGS) allow adaptive optics (AO) systems to reach greater sky coverage, especially for AO systems correcting the atmospheric turbulence over large fields of view. However, LGS suffer from limitations, among which is their apparent elongation, which can reach 20 arcsec when observed with large-aperture telescopes such as the European Southern Observatory 39 m telescope. The consequences of this extreme elongation have been studied in simulations and laboratory experiments, although never on-sky, yet understanding and mitigating those effects is key to taking full advantage of the Extremely Large Telescope’s (ELT) six LGS.
Aims. In this paper we study the impact of wavefront sensing with an ELT-scale elongated LGS using on-sky data obtained with the AO demonstrator CANARY on the William Herschel Telescope (WHT) and the ESO Wendelstein LGS unit. CANARY simultaneously observed a natural guide star and a superimposed LGS launched from a telescope placed 40 m away from the WHT pupil.
Methods. Comparison of the wavefronts measured with each guide star allows the determination of an error breakdown of the elongated LGS wavefront sensing. With this error breakdown, we isolate the contribution of the LGS elongation and study its impact. We also investigate the effects of truncation or undersampling of the LGS spots.
Results. We successfully used the elongated LGS wavefront sensor (WFS) to drive the AO loop during on-sky operations, but this necessitated regular calibrations of the non-common path aberrations on the LGS WFS arm. In the offline processing of the data collected on-sky, we separate the error term encapsulating the impact of LGS elongation into a dynamic and a quasi-static component. We measure errors varying from 0 nm to 160 nm rms for the dynamic error, and we are able to link it to turbulence strength and spot elongation. The quasi-static errors are significant and vary between 20 nm and 200 nm rms depending on the conditions. They also increase by as much as 70 nm over the course of 10 min. We do not observe any impact when undersampling the spots with pixel scales as large as 1.95″, while the LGS spot full width at half maximum varies from 1.7″ to 2.2″; however, significant errors appear when truncating the spots. These errors appear for fields of view smaller than 10.4″ to 15.6″, depending on the spots’ elongations. Translated to the ELT observing at zenith, elongations as long as 23.5″ must be accommodated, corresponding to a field of view of 16.3″ if the most elongated spots are placed across the diagonal of the subaperture.
Key words: instrumentation: adaptive optics / methods: observational / telescopes / atmospheric effects
Data are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/649/A158
© L. Bardou et al. 2021
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Laser guide stars (LGS) are used to provide adaptive optics (AO) systems with guide stars which can be placed anywhere in the sky so as to extend sky coverage (Foy & Labeyrie 1985). The Extremely Large Telescope (ELT) currently under construction by the European Southern Observatory (ESO) will benefit from up to six LGS (Tamai et al. 2018), which will be used by most of the instruments to provide wide-field AO correction, for example with HARMONI (Laser Tomography AO; Neichel et al. 2016) and MAORY (Multi-Conjugate AO; Diolaiti et al. 2016) in the first generation of instruments, and MOSAIC (Multi-Object AO; Morris et al. 2016) in the second generation. However, wavefront sensing with LGS suffers from limitations (Wizinowich et al. 2006) that are amplified on a telescope the size of the ELT, so it is crucial to study their implementation beforehand.
Sodium LGS, such as will be used on the ELT, are created using a laser tuned to the resonant excitation of sodium atoms at a wavelength of 589 nm. The sodium atoms are situated in a mesospheric layer approximately 90 km high and 10 km thick (Pfrommer & Hickson 2010). The light beacon created is therefore a cylinder, and as it is imaged further away from the laser launch position, it becomes an elongated object. When imaged through a Shack–Hartmann (SH) wavefront sensor (WFS), the resulting pattern is that of elongated spots varying in length across the pupil and expanding radially with respect to the laser launch position (Véran & Herriot 2000; van Dam et al. 2006).
Additionally, this sodium layer varies in thickness, altitude, and atom density profile (Pfrommer & Hickson 2014). Therefore, the shape of the elongated spots varies over time, and turbulence-induced focus cannot be sensed using a LGS (Herriot et al. 2006). Furthermore, tip and tilt cannot be sensed with a LGS either, because the laser beam undergoes a first deflection on its uplink propagation (Rigaut & Gendron 1992). Finally, since LGS are at a finite distance from the telescope, the wavefront sensed using a LGS suffers from focus anisoplanatism (or cone effect) with respect to the wavefront sensed using a natural guide star (NGS; Fried & Belsher 1994).
On the ELT the LGS will be launched from the edge of the telescope pupil and the elongation will reach values of up to 20″. The varying size, profile, and orientation of the LGS spots, as well as the extreme elongations reached, raise new concerns regarding the accuracy of wavefront sensing on such an object. Several studies have already been carried out on the subject. In particular, Gilles & Ellerbroek (2006) introduced the matched filter, an algorithm to determine the position of the LGS spots in the subaperture’s field of view, and therefore the local wavefront slope. Thomas et al. (2008) and Gratadour et al. (2010) investigated the accuracy of different algorithms for measuring elongated spot positions. Anugu et al. (2018) also looked into various implementations of correlation-based algorithms to determine the positions of the spots. Tallon et al. (2008) explored the impact of spot elongation on wavefront reconstruction and how to improve its robustness through the use of priors on noise. Muller et al. (2011) investigated the possible effects of differential anisoplanatism, the error induced by anisoplanatism within the LGS spots because of their extreme elongation, and found it to be small.
Further questions arise when considering how to design the WFS. In all probability, the cameras available for the LGS WFS will not have enough pixels to provide good sampling of the most elongated spots without truncation. In particular, studies using simulations (Schreiber et al. 2014) and laboratory prototypes (Patti et al. 2018) have shown that truncation of the LGS spots introduces errors in the AO correction, including non-common path aberrations that depend on the sodium density profile and are hard to calibrate. One solution that has been envisioned is to ignore the measurements from the most elongated spots (Neichel et al. 2016). Gendron (2016) has explored modifying the SH design to make the SH pattern more compact. In any case, early designs of LGS AO systems on the different ELTs include a truth sensor to directly measure the spurious residual non-common path aberrations generated by the LGS WFS, using an NGS WFS running at a low rate (Herriot et al. 2010; Diolaiti et al. 2012).
In order to complement these studies, an on-sky experiment was proposed (Rousset et al. 2014). On-sky testing seems to be mandatory before the first light of the ELT, so as to be fully confronted with all the atmospheric aspects of measuring the wavefront using an ELT-scale elongated LGS. The goal is to quantify and validate the coupling of all the effects, which can hardly be taken into account in numerical simulations or laboratory experiments. To that end, we use the multi-object AO demonstrator CANARY (Gendron et al. 2011) in conjunction with ESO’s Wendelstein LGS Unit (WLGSU; Bonaccini Calia et al. 2010). CANARY operates on the William Herschel Telescope (WHT) on La Palma. For this experiment, the WLGSU is placed approximately 40 m away from the WHT so as to replicate the ELT maximum elongation on CANARY’s LGS WFS. The WHT then acts as a portion of the pupil of the ELT. Figure 1 illustrates the configuration of the experiment with respect to the ELT pupil plane.
Fig. 1. Illustration of the WHT pupil (dark green) compared to the ELT pupil (light green), with the outer circle of 38.542 m in diameter and the orange star representing the position of the laser launch telescope. The SH pattern corresponds to an average of a thousand images from CANARY LGS WFS taken during sequence 3 (see Sect. 2.3). 
The main goal of the experiment is to derive an error breakdown of wavefront sensing with an elongated LGS. To achieve this the LGS is superimposed on an NGS from the point of view of the WHT. The wavefront measured with the NGS is used as a reference to which the wavefront measured with the LGS is compared. To build the error breakdown, the difference between the two measurements is decomposed into different known errors attached to wavefront sensing. The residual error (i.e., the part of the difference between the two wavefronts that remains unaccounted for) then gives clues to the impact of the LGS elongation.
The aim of this paper is to present this error breakdown, how it is derived, and the results obtained with the data gathered during the most recent observing run to date, comprising five nights between September and October 2017. Additionally, this work is used to compare the performance of two different slope-measurement algorithms, based on applying the centre of gravity and the correlation to the WFS images, and to study the effects of spot truncation and spot undersampling. It should be noted that previous papers have been published using the data collected with this experiment, with Basden et al. (2017) focusing on the on-sky use of the matched filter. Preliminary versions of this work have also been presented in conference proceedings (Bardou et al. 2017, 2018).
The outline of the paper is as follows. In Sect. 2 we present the experimental setup, the observational strategy implemented, and an overview of the observing conditions encountered. In Sect. 3 we present the wavefront sensing model adopted to derive the error breakdown. In Sect. 4 we explain how the data were processed to compute the different terms of the error breakdown. In Sect. 5 we focus on the behaviour of the residual error before concluding.
2. Experimental setup
Figure 2 presents the general layout of the experiment. As stated previously, the experiment relies on simultaneously measuring the on-axis wavefront from two different guide stars: the elongated LGS and a superimposed NGS. The elongation of the LGS is produced by placing the WLGSU generating the LGS approximately 40 m from the WHT (4.2 m diameter), on which the CANARY system used for this experiment operates. In addition, simultaneous high-resolution images of the laser plume are taken using an imager installed on the Isaac Newton Telescope (INT, 2.54 m in diameter). Finally, photometric data are acquired using a small 356 mm telescope situated next to the Laser Launch Telescope (LLT). In this paper we focus on the AO results, so we do not discuss the other systems further. In this section we detail the setup of CANARY before presenting the observing procedure and the data selected from these observations.
Fig. 2. General setup of the experiment. 
2.1. AO setup
CANARY (Gendron et al. 2011, 2016; Vidal et al. 2014; Morris & Gendron 2014) is installed on one of the Nasmyth platforms of the WHT. The bench begins after a derotator and the telescope focus. A first optical relay images the pupil on a 52-actuator deformable mirror (DM) and a tip-tilt mirror. All WFS, including the LGS WFS (LS^{1}), are placed after this first relay. In the second focal plane, at the output of the relay, three off-axis NGS WFS can patrol the field of view within a 2.5′ diameter disc. Another optical relay is placed after the second focal plane, after which a dichroic separates the NGS light between an on-axis WFS, called the truth sensor (TS), and an infrared camera imaging in the H band.
The three off-axis NGS WFS are used to derive the turbulence distribution with respect to altitude, using the Learn and Apply algorithm (Vidal et al. 2010), which in turn allows the cone effect contribution to be estimated (see Sect. 4.6 for further explanation). This constrains the targets that CANARY can observe, which must be asterisms of four NGS contained in a field of view of 2.5′ and whose visible magnitudes must be around 10, with three stars arranged around a central star and no further than 1.25′ from it.
The infrared images provide Strehl ratio estimates when the AO loop is closed. This gives us an additional tool to compare the loop performance when AO correction is driven by the TS or by the LS measurements.
Before the second focal plane, the LGS flux is diverted towards the LS bench using a 40 nm bandpass filter. On the LS bench, the pupil is imaged on another tip-tilt mirror, called the steering mirror, with a range of ±3.25″. Its role is to compensate for the laser launch jitter. Large offsets on the steering mirror are offloaded to the LLT every 20 s to avoid saturation. Further along the LGS path, a narrow bandpass filter (4 nm) rejects the light from the NGS. Finally, the LS focus can be adjusted between 70 km and infinity to track the sodium layer as its projected altitude varies with zenith angle.
The system runs at 150 Hz and the exposures of the WFS are synchronised. In particular, this allows us to compare the measurements of the two on-axis WFS without having to take a temporal error into account: the measured latency between the two WFS was of the order of 0.1 frame, low enough for any temporal error to be negligible.
The configurations and performance characteristics of each WFS are given in Table 1. All WFS sample the wavefront with 7 × 7 subapertures, resulting in subapertures of 0.6 m. This size was chosen so that the LS subapertures would be similar to the ones under consideration for the ELT LGS WFS. The wavefront sampling of the TS was then chosen to be the same as that of the LS, so as to use the same wavefront reconstruction on both WFS and facilitate the comparison between the two measurements. However, the field of view of the LS (19.5″) is much larger than that of the TS (3.84″) to avoid truncation of the LGS images. Additionally, the field of view of the LS is sampled by more pixels (30 × 30 compared to 16 × 16 for the TS) in order to have the best sampling possible of the seeing-limited LGS spots. This sampling is limited by the total number of pixels and the configuration of the OCAM camera, whose central rows are not light-sensitive and therefore must be positioned between two subapertures.
WFS characteristics.
The LS is equipped with a field stop whose size is twice the subaperture field of view limit (19.5″) given by the 30 × 30 pixels. This large field stop allows the LGS spots to extend over more than 30 pixels, or to move beyond this limit due to strong tip-tilt vibrations, without being truncated. It also eases the procedure of acquiring the LGS spots at the beginning of a new observation.
2.2. Observational strategy
In this study we use the data gathered during an observing run of five nights which took place between 27 September and 2 October 2017 (with a break on the night of 29 September).
We defined an observing ‘sequence’, which was repeated throughout the observations. This sequence was divided into ‘acquisitions’: blocks of 5000 frames in which raw images from all WFS, WFS slopes, and actuator commands were recorded. The sequence lasts approximately 20 min and uses different configurations of the AO loop, as summarised in Table 2. Each AO configuration is repeated twice before moving on to the next one.
AO loop configurations across an observing sequence.
The sequence begins with a longer acquisition (10k frames) to provide a large enough statistical sample of the turbulence. The vertical distribution of the refractive index structure constant is derived from the slopes of all four NGS WFS using the Learn and Apply algorithm (Vidal et al. 2010). This knowledge is then used to estimate the contribution of the cone effect in the difference between the measurements of the two on-axis WFS (see Sect. 4.6).
The dithering acquisitions allow the centroid gain of the TS and LS to be measured (see Sect. 4.4). During these acquisitions, the tip-tilt mirror is used to circularly modulate the image position at 15 Hz with a fixed modulation diameter of 0.6″. To closely monitor the centroid gain, these acquisitions are regularly repeated throughout the sequence.
Open loop acquisitions were made to provide data in which only the turbulence is recorded on the WFS. However, the steering mirror was still active in order to stabilise the LGS spots on the LS. The LLT was not protected by a dome and the laser jitter was amplified by wind-induced vibrations. It was therefore vital that the steering mirror stayed active all the time, except during the dithering acquisitions, when it would have disturbed the centroid gain measurement.
Acquisitions in closed loop were also performed during each sequence. The loop was first driven using the TS and later using the LS. In the latter case, the tip-tilt mirror, observed by all WFS in the system, was still driven by the TS measurements. During the closed loop acquisitions, IR images were taken in the H band with an exposure time of 1 s. Strehl ratios (SRs) were then measured on the average image over the 27 exposures taken during one acquisition. These SRs provide an initial estimation of the AO loop performance. As can be seen in Fig. 3, the SRs measured with the loop closed on either WFS are comparable. However, this performance could only be reached after on-sky calibration of the LS reference slopes, measured after the two acquisitions in closed loop on the TS and immediately before closing the loop on the LS. This calibration was made over 500 frames while the loop was still driven by the TS, since the LGS spots were then in the position corresponding to the flattest wavefront possible on the IR camera. Without this calibration, strong static aberrations would prevent a quality image from forming on the IR camera. Slopes were computed using a brightest-pixel centre of gravity algorithm (Basden et al. 2012). The number of brightest pixels selected varied between 12 and 20 for the TS, and between 90 and 120 for the LS.
Fig. 3. Strehl ratio measured during the closed loop acquisitions on 15 selected sequences when the loop was driven by the TS (blue circles) or the LS (orange triangles). The Strehl ratio was not measured during sequence 3, and the missing points correspond to rejected acquisitions (see Sect. 2.3).
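The brightest-pixel centre of gravity used for the slope computation can be sketched as follows: the value of the (n+1)-th brightest pixel is subtracted from the subaperture image, negative values are clipped, and a standard centre of gravity is computed on what remains (after Basden et al. 2012). This is a minimal illustration, not the CANARY pipeline; details such as tie handling and the slope zero-point are assumptions here.

```python
import numpy as np

def brightest_pixel_cog(subap, n_brightest):
    """Brightest-pixel centre of gravity on a subaperture image.

    Sketch of the scheme of Basden et al. (2012): subtract the value of
    the (n+1)-th brightest pixel as a threshold, clip negatives, then
    compute a standard centre of gravity on the remaining signal.
    """
    flat = np.sort(subap, axis=None)
    thresh = flat[-(n_brightest + 1)]            # (n+1)-th brightest value
    img = np.clip(subap - thresh, 0.0, None)     # keep the n brightest pixels
    total = img.sum()
    if total <= 0:
        return 0.0, 0.0                          # no usable signal
    y, x = np.indices(subap.shape)
    cx = (x * img).sum() / total - (subap.shape[1] - 1) / 2.0
    cy = (y * img).sum() / total - (subap.shape[0] - 1) / 2.0
    return cx, cy           # offsets from the subaperture centre, in pixels
```

On a 16 × 16 TS-like subaperture one would call `brightest_pixel_cog(img, 16)`; for the 30 × 30 LS subapertures the larger pixel counts quoted in the text (90–120) spread the threshold over the elongated spot.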
Finally, in the same spirit as the open loop acquisitions, ‘slow’ loop acquisitions were also performed, during which the loop was closed on the TS but with a very low gain (0.01). The goal of these acquisitions is to compensate for the bench’s quasi-static aberrations and thus obtain better-positioned spots in the SH, especially for the TS, which suffers from strong astigmatism due to the optical configuration of CANARY.
2.3. Data selection
The data is analysed through an error breakdown of the difference between the measurements of the two on-axis WFS. The residual part of the difference is attributed to the LGS elongation. For this to be valid, the data must not be contaminated by errors other than those accounted for in the error breakdown. There are two main cases in which data has to be rejected. The first is when wind shake of the LLT causes high-amplitude LGS jitter that cannot be compensated by the steering mirror in the LS arm. The second is when the turbulence becomes too strong and the TS is no longer working within its linear range, typically just before the spots start merging between subapertures. Data was selected using a mix of automatic rejection criteria and manual examination. As a result, 15 sequences were selected, each containing at least 14 valid acquisitions, presented in Table 3 along with the corresponding observation conditions. Table 4 gives the coordinates and V-band magnitude of the guide star used for the TS, as well as the name of the corresponding asterism^{2}.
Observation conditions for each sequence of selected data.
Coordinates and V-band magnitudes of the central star of the asterism observed by CANARY, i.e., the star used as natural guide star on the TS.
Table 3 focuses in particular on the distance b (Col. 4) between the LLT and WHT projected perpendicular to the pointing direction, and on the zenith angle za (Col. 5), as they are linked to the LGS elongation η according to the equation from van Dam et al. (2006):

η = b t_{Na} cos(za) / h_{Na}^{2}, (1)

where t_{Na} is the thickness of the sodium layer and h_{Na} its altitude. In this case the baseline between the LLT and the WHT changes with the pointing direction because the two telescopes are not on the same pointing mechanism.
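As a numerical sanity check of Eq. (1), the sketch below evaluates the geometric elongation for the nominal sodium-layer parameters (h_{Na} ≈ 90 km, t_{Na} ≈ 10 km are assumptions of this illustration); the instantaneous sodium profile is often effectively thicker, which is why measured elongations can exceed this nominal value.

```python
import numpy as np

RAD2ARCSEC = np.degrees(1.0) * 3600.0   # ~206265 arcsec per radian

def lgs_elongation(b, za_deg, t_na=10e3, h_na=90e3):
    """Geometric LGS elongation of Eq. (1), in arcsec.

    b      : baseline between LLT and subaperture, projected
             perpendicular to the pointing direction [m]
    za_deg : zenith angle [deg]
    t_na   : sodium-layer thickness [m] (nominal 10 km, an assumption)
    h_na   : sodium-layer altitude [m] (nominal 90 km, an assumption)
    """
    za = np.radians(za_deg)
    return b * t_na * np.cos(za) / h_na**2 * RAD2ARCSEC

# CANARY-like geometry: 40 m baseline observed at zenith
print(round(lgs_elongation(40.0, 0.0), 1))   # -> 10.2
```

The cos(za) × b scaling of this expression is exactly the abscissa used in Fig. 4.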
To give a more detailed illustration of the elongation variations, Fig. 4 shows the elongation averaged across the subapertures for each acquisition as a function of cos(za) × b. The elongations are determined by measuring the length of the spots after they have been thresholded at 20% of the maximum intensity. The measured elongations vary significantly, between 11.2″ and 16.9″, over all the acquisitions. When computed for a baseline of 39 m and at zenith (Table 3), the average elongation may reach up to 22.4″ for the observing conditions of sequence 3. Along the thin axis, the full width at half maximum (FWHM) of the spots varies from 1.7″ to 2.2″. These widths are quite large considering the seeing, but it should be noted that no effort was made to focus the laser during the on-sky observations. We note that several sequences (2, 4, 5, 6, 14) do not follow the general trend with cos(za) × b expected from Eq. (1). This is due to the evolution of the sodium profile during the sequence.
Fig. 4. Mean elongation above 20% of maximum intensity with respect to cosine of the zenith angle multiplied by the distance between WHT and LLT for all acquisitions selected. Each symbol and colour corresponds to a sequence. 
In addition, Fig. 5 presents examples of spot images and the corresponding profiles along the spot axes. To derive these profiles, we first determined the angle of rotation of the spots within their subapertures by fitting a line along the elongated axis. The fit is an orthogonal distance regression performed on subaperture images averaged over 150 frames (1 s) and thresholded at 5% of the maximum; only non-zero pixels, weighted by their normalised intensity, were used in the fit. The average image is then oversampled by a factor of ten before the intensities are summed perpendicularly to the spot axes in bins of 1″ to derive the profile. Oversampling the images allows intensities to be allocated to their proper bin along the spot axes; otherwise, artefacts arise from undersampling along these axes, which are rotated with respect to the native image axes. The elongations and FWHM presented in Table 3 are obtained from these profiles. Figure 5 also helps to show why we chose to threshold the spot profile at 20% of the maximum. Too low a threshold (e.g., ≤10%) biases the elongation towards higher values because of the feet observed, for example, during sequence 6. Too high a threshold (e.g., ≥40%) overlooks large parts of the spots for very asymmetric profiles, as observed during sequence 5.
Fig. 5. Spot examples (the corresponding sequence number is given in the titles) and the corresponding profiles along the elongated (blue) and thin (orange) axes. Images are averaged over 500 frames. Their intensities are in photoelectrons, whereas the 1D profiles have been normalised. One example is taken per set of temporally continuous sequences.
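The profile-extraction procedure above can be sketched as follows, with an intensity-weighted principal-axis (second-moment) fit standing in for the orthogonal distance regression of the actual analysis; the 5% fit threshold, ×10 oversampling, and 20% length threshold follow the text, while the rotation sign convention would need checking against the real detector orientation.

```python
import numpy as np
from scipy import ndimage

def spot_elongation(img, pixscale, thresh_fit=0.05, thresh_len=0.20, zoom=10):
    """Spot elongation (arcsec) above a fractional intensity threshold.

    Sketch: find the spot axis from intensity-weighted second moments
    (in place of the orthogonal distance regression), oversample the
    image, rotate the spot onto the x axis, collapse to a 1D profile,
    and measure the extent above `thresh_len` of the profile maximum.
    """
    w = np.where(img >= thresh_fit * img.max(), img, 0.0)
    y, x = np.indices(img.shape)
    tot = w.sum()
    cx, cy = (x * w).sum() / tot, (y * w).sum() / tot
    # intensity-weighted second moments give the principal-axis angle
    mxx = ((x - cx) ** 2 * w).sum() / tot
    myy = ((y - cy) ** 2 * w).sum() / tot
    mxy = ((x - cx) * (y - cy) * w).sum() / tot
    angle = 0.5 * np.degrees(np.arctan2(2.0 * mxy, mxx - myy))
    big = ndimage.zoom(img, zoom, order=1)        # x10 oversampling
    # rotation sign convention to be checked against detector orientation
    rot = ndimage.rotate(big, angle, reshape=True, order=1)
    profile = rot.sum(axis=0)                     # collapse across the spot
    above = np.where(profile >= thresh_len * profile.max())[0]
    return (above[-1] - above[0] + 1) * pixscale / zoom
```

Applied to a 30 × 30 LS subaperture with the 0.65″ pixel scale implied by the 19.5″ field of view, this reproduces the thresholded-length measurement described in the text.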
For the sake of brevity, some of the results are presented using data from only three of the sequences, chosen to represent the diversity of conditions we encountered. Two sequences have relatively symmetric sodium profiles, one shorter (sequence 10) and one longer (sequence 15), while the third has a strongly asymmetric profile (sequence 5). The seeing conditions are good for sequence 5, poor for sequence 10, and excellent for sequence 15.
3. Wavefront error breakdown
In this section we present the wavefront sensing model we adopted, which leads to the definition of the error breakdown terms.
3.1. Wavefront decomposition
To build our wavefront sensing model, we begin by describing the composition of the incident wavefront on each WFS. The different components are the atmospheric turbulence, the DM, the common path aberrations (CPA), and the non-common path aberrations (NCPA). The atmospheric turbulence seen by each of the wavefront sensors is not the same due to focus anisoplanatism between the LGS and the NGS. In addition, the NCPA are obviously different for each wavefront sensor. Therefore, the total incoming phase on each wavefront sensor at any given time can be written for the TS,

φ^{TS} = φ_{turb}^{NGS} + φ_{DM} + φ_{CPA} + φ_{NCPA}^{TS}, (2)

and for the LS,

φ^{LS} = φ_{turb}^{LGS} + φ_{DM} + φ_{CPA} + φ_{NCPA}^{LS}. (3)
3.2. Slope measurement
An ideal SH WFS limited only by its spatial sampling can be described as a linear operator 𝒲 that converts the incoming phase into a vector of slopes s. In our case, since the two on-axis WFS share the same wavefront sampling, this operator is theoretically the same for both WFS, and the slope vector comprises 72 slopes corresponding to measurements along each axis of the 36 valid subapertures.
Due to the limited spatial sampling, the higher spatial frequencies of the incoming wavefronts are interpreted as lower spatial frequencies through the phenomenon of aliasing. However, since the two WFS share the same wavefront sampling they also share the same aliasing error for a given wavefront.
A more realistic wavefront sensor produces noisy measurements due to the combination of photon noise and detector readout noise. The propagation of the noise onto the slopes, s_{noise}, depends on how the positions of the spots are measured. In addition, this measurement can induce a loss of sensitivity in the form of a centroid gain γ (Véran & Herriot 2000). In our case we consider that each slope, two per subaperture, has its own centroid gain, so that γ is a vector with the same length as the slope vector.
Finally, we can write the slope measurements for either of the wavefront sensors:

s^{WFS} = γ ∘ (𝒲(φ^{WFS}) + s_{noise}^{WFS}), (4)

where WFS is either the LS or the TS, φ^{WFS} is the corresponding incoming phase of Eqs. (2) or (3), and ∘ denotes element-wise multiplication. Here γ affects both the noise and the ideal WFS measurement because of how our processing pipeline is constructed: the centroid gain is compensated before the noise is estimated, so that effectively the centroid gain is considered to affect the noise measurements. Since γ is compensated before the wavefront reconstruction, it is not considered in the rest of this section; we come back to its estimation and compensation in Sect. 4.4.
3.3. Wavefront reconstruction
We reconstruct the wavefront on the first seven radial orders of the Zernike polynomials to match the resolution the SH can attain with its seven subapertures across a diameter. This corresponds to polynomials 2–36 in the Noll numbering (Noll 1976). The reconstruction is achieved using a matrix M_{ZR}, the generalised inverse of the theoretically derived matrix M_{ZI} that maps the Zernike modes to the SH response. The vector of Zernike polynomial coefficients z is therefore obtained by the following operation:

z = M_{ZR} s. (5)
In this paper the z vector is expressed in nanometres.
Using the terms introduced in Eqs. (2)–(4), and remembering that the centroid gain has been compensated before the wavefront reconstruction, the resulting measurement in phase space can be decomposed in this way for each of the two on-axis WFS:

z^{TS} = M_{ZR} 𝒲(φ_{turb}^{NGS} + φ_{DM} + φ_{CPA} + φ_{NCPA}^{TS}) + z_{noise}^{TS}, (6)

and

z^{LS} = M_{ZR} 𝒲(φ_{turb}^{LGS} + φ_{DM} + φ_{CPA} + φ_{NCPA}^{LS}) + z_{noise}^{LS}, (7)

where z_{noise}^{WFS} = M_{ZR} s_{noise}^{WFS}.
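The reconstruction of Eq. (5) amounts to applying a generalised (pseudo-)inverse to the slope vector. The sketch below uses a random full-rank stand-in for M_{ZI} purely to illustrate the algebra; the real matrix is derived theoretically from the SH subaperture geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interaction matrix M_ZI mapping the 35 Zernike modes
# (Noll 2-36) to the 72 SH slopes. A random full-rank stand-in is used
# here; it is NOT the theoretically derived CANARY matrix.
n_slopes, n_modes = 72, 35
M_ZI = rng.standard_normal((n_slopes, n_modes))

# Generalised inverse used for the reconstruction z = M_ZR s (Eq. 5)
M_ZR = np.linalg.pinv(M_ZI)

z_true = rng.standard_normal(n_modes)   # Zernike coefficients [nm]
s = M_ZI @ z_true                       # noiseless slope vector
z_rec = M_ZR @ s                        # least-squares reconstruction

print(np.allclose(z_rec, z_true))       # -> True (exact without noise)
```

With noisy slopes the same operation gives the least-squares estimate, and the noise term propagates as z_{noise} = M_{ZR} s_{noise}, as in Eqs. (6) and (7).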
3.4. Wavefront difference
We now focus on the analysis of Δz = z^{TS} − z^{LS}. The LS cannot properly sense tip, tilt, and focus; therefore, these modes are excluded from the comparison, such that the z vectors contain modes 5–36 only. Before writing the full decomposition of Δz, we make a few remarks to simplify its expression.
First of all, the terms concerning the DM phase and the common path aberrations are the same for both WFS as they share the same wavefront sampling and the same wavefront reconstruction.
We group the difference between the two turbulent terms (M_{ZR} 𝒲(φ_{turb}^{NGS}) and M_{ZR} 𝒲(φ_{turb}^{LGS})) into a single term, z_{cone}, describing the impact of the cone effect. It is computed so as to also take into account the difference in aliasing between the two WFS measurements (see Sect. 4.6).
The information about the non-common path aberrations is contained in the reference slopes of each WFS. When the spots are positioned on the reference slopes, the wavefront is optimised on the science path (in CANARY, the infrared camera), which means that the aberrations of the science path have been removed. However, the path of the WFS under consideration has its own aberrations, which must not be corrected as they do not impact the science path. Therefore, the reference slopes of a given WFS correspond to

s_{ref}^{WFS} = 𝒲(φ_{NCPA}^{WFS}). (8)

Therefore, we can write

z_{ref}^{WFS} = M_{ZR} s_{ref}^{WFS}, (9)

where z_{ref}^{WFS} are the reference slopes projected on the Zernike polynomials using Eq. (5).
Finally, the difference between the two measured wavefronts at any given time can be written as

Δz = z_{cone} + z_{noise}^{TS} − z_{noise}^{LS} + (z_{NCPA}^{TS} − z_{ref}^{TS}) − (z_{NCPA}^{LS} − z_{ref}^{LS}) + z_{res}, (10)

where z_{NCPA}^{WFS} = M_{ZR} 𝒲(φ_{NCPA}^{WFS}). In this last equation we introduced a term z_{res} to cover any residual difference between the two wavefronts. The study of this residual error tells us whether the LGS spot elongation introduces errors in addition to those traditionally taken into account, such as noise.
In practice we study the wavefront difference for each acquisition, so we actually analyse the temporal average of the spatial variance of the difference between the two measured wavefronts. This can be expressed using the Zernike decomposition, with ⟨·⟩ symbolising the average over the acquisition duration T:

⟨σ_{Δz}^{2}⟩ = ⟨Σ_{i} Δz_{i}^{2}⟩, (11)

where the sum runs over modes i = 5 to 36. This expression can be rewritten as

⟨Σ_{i} Δz_{i}^{2}⟩ = ⟨Σ_{i} (Δz_{i} − ⟨Δz_{i}⟩)^{2}⟩ + Σ_{i} ⟨Δz_{i}⟩^{2}. (12)

The first expression on the right-hand side of Eq. (12) corresponds to the dynamic component of the error breakdown, which is the temporal average of the spatial variance of the centred difference between the two wavefronts, denoted σ_{dyn}^{2}. The second expression is the static component, which is the spatial variance of the temporal mean of the difference between the two wavefronts, denoted σ_{stat}^{2}.
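The decomposition of Eq. (12) is the usual split of a second moment into a centred (dynamic) part and a squared-mean (static) part. A minimal numerical illustration, using a hypothetical (T, n_modes) array of difference coefficients:

```python
import numpy as np

def variance_breakdown(dz):
    """Split the wavefront-difference variance as in Eq. (12).

    dz : (T, n_modes) array of Zernike coefficients of the difference
         between the two measured wavefronts over an acquisition [nm].
    Returns (total, dynamic, static) variances [nm^2]: the temporal
    average of the spatial variance, its centred (dynamic) part, and
    the spatial variance of the temporal mean (static part).
    """
    total = np.mean(np.sum(dz ** 2, axis=1))             # <sum_i dz_i^2>
    mean = dz.mean(axis=0)                               # <dz_i>
    dynamic = np.mean(np.sum((dz - mean) ** 2, axis=1))  # centred term
    static = np.sum(mean ** 2)                           # squared-mean term
    return total, dynamic, static

# synthetic acquisition: random dynamic part plus a static offset
dz = np.random.default_rng(1).standard_normal((5000, 32)) + 3.0
tot, dyn, stat = variance_breakdown(dz)
print(np.isclose(tot, dyn + stat))   # -> True
```

The identity holds exactly because the cross term vanishes when averaging the centred coefficients over time.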
3.4.1. Dynamic error breakdown
Looking again at the terms of Eq. (10), both noise terms and the cone effect term are dynamic terms: their temporal mean value is zero. The residual error may have both a dynamic and a static component. In addition, these terms are statistically independent, so that the dynamic component of the error breakdown is the sum of both noise terms, the cone effect term, and the dynamic residual error term.
The two noise terms and the total dynamic difference are derived from the WFS measurements, and the cone effect term is estimated from the measured turbulence profile, allowing us to compute the dynamic residual error:
3.4.2. Static error breakdown
In the static part of the error breakdown we deal with reference slopes. Assuming that the noise and cone effect contributions have a zero temporal mean, we can write Eq. (10) averaged over time:
The static difference between the two wavefronts carries little physical meaning, as it mainly comprises the static aberrations of the bench. The term that actually interests us is the static residual error:
To be consistent with the dynamic error breakdown, we can express this residual error as a wavefront spatial variance:
4. Wavefront sensor data processing
In this section we detail how the different terms of the error breakdown are derived from the data saved during the observations. The data was processed offline after the observations.
4.1. Raw image processing
For the LS, alterations were made to the raw images to test SH designs more representative of the cameras available for the ELT LGS WFS. The SH pattern was modified either by removing pixels on the edge of the field of view or by binning the pixels together. We thereby simulate a SH WFS with a smaller field of view or a coarser pixel scale, respectively. When removing the pixels on the edges of the subapertures, we also simulate a field stop of the corresponding size: the spots do not overlap on the neighbouring subapertures.
We refer to the case where the SH pattern was not modified as the ‘full subaperture case’.
4.2. Slope estimation
In the following paragraphs we first detail the implementation of the algorithms used to measure slopes and explain why they were chosen. In this study we use two main approaches to measure the slopes. The centre of gravity is chosen for its simplicity of implementation and because it is the standard method used with SH wavefront sensing. Correlation-based algorithms have long been used for extended-scene wavefront sensing, whether in solar AO (Michau et al. 1993; Rimmele & Radick 1998; Löfdahl 2010), night-time AO (Poyneer 2003), or Earth observation (Rais et al. 2016), and as such should be well adapted to elongated LGS (Thomas et al. 2008; Anugu et al. 2018).
4.2.1. Centre of gravity
We recall here the formula for the slope s_{x} along one axis (here x) of a given subaperture image I(x, y):

s_{x} = Σ_{x,y} x I(x, y) / Σ_{x,y} I(x, y).
For the TS, the slopes are computed using the centre of gravity, thresholded with a fixed number (20) of brightest pixels (Basden et al. 2012). The value of the 21st brightest pixel is subtracted from the image and all the pixels with negative values are set to zero before the centre of gravity is applied.
For the LS, a threshold equal to six times the electronic noise of the detector is applied on the image before the centre of gravity is computed. The threshold is subtracted from the images and the negative values are set to zero.
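The two thresholding strategies can be sketched as follows (a minimal implementation; the function names are ours, and pixel coordinates are taken as pixel indices):

```python
import numpy as np

def cog(img):
    """Plain centre of gravity: s_x = sum(x I) / sum(I), likewise for y."""
    y, x = np.indices(img.shape)
    return (x * img).sum() / img.sum(), (y * img).sum() / img.sum()

def cog_brightest(img, n=20):
    """TS-style centre of gravity: the value of the (n+1)-th brightest pixel
    is subtracted and negative values are set to zero (Basden et al. 2012)."""
    threshold = np.sort(img.ravel())[::-1][n]
    return cog(np.clip(img - threshold, 0.0, None))

def cog_fixed_threshold(img, sigma_ron, k=6.0):
    """LS-style centre of gravity: a fixed threshold of k times the detector
    noise is subtracted and negative values are set to zero."""
    return cog(np.clip(img - k * sigma_ron, 0.0, None))

# Gaussian test spot centred at (15.3, 14.7) on a 30x30 subaperture
y, x = np.indices((30, 30))
spot = np.exp(-((x - 15.3)**2 + (y - 14.7)**2) / (2 * 2.0**2))
sx, sy = cog_brightest(spot)
```

On this noiseless spot both variants recover the centre to a small fraction of a pixel; their different behaviour only shows up once detector noise and flux variations are added.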
4.2.2. Correlation
We use the correlation only for the LS images. In this section we give a quick description of the correlation algorithm. A detailed description is given in Appendix A.
The principle of correlation is to use a model of the spot whose position is to be determined. This model, or kernel K, is then correlated with the noisy and turbulent image I. The position of the spot corresponds to the position of the maximum of the correlation map C, where

C(x, y) = Σ_{x′, y′} I(x′, y′) K(x′ − x, y′ − y).
In our case the kernels are derived from the SH images averaged over 500 images. The kernels are computed for each acquisition (i.e., the reference is made within 30 s of the image it is applied to). Each subaperture has a different kernel to account for differences in elongation and rotation.
In our implementation, the crosscorrelation is performed in Fourier space, using a method similar to the one described in Thomas et al. (2006). Before applying the Fourier transforms, the kernel is quadrupled so that its total size is doubled along each axis. The WFS images are zeropadded until they reach the same size as the quadrupled kernel.
In Fourier space, an apodisation function is applied on the product of the Fourier transforms of the turbulent image and the kernel. The result is zeropadded to reach a size that is a power of two, before performing the inverse Fourier transform to obtain the correlation map.
Once the correlation map has been computed, a first estimate of the maximum position is derived from the available points. A 2D Gaussian fit is then performed on the points around that maximum, to refine its position.
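A stripped-down version of such a correlation centroider can be written as follows. This sketch omits the kernel quadrupling, zero-padding, and apodisation steps, and replaces the 2D Gaussian fit with a 1D parabolic refinement per axis, so it illustrates the principle rather than the paper's exact implementation:

```python
import numpy as np

def correlation_centroid(img, kernel):
    """Cross-correlate img with kernel in Fourier space and return the
    subpixel position of the correlation maximum, i.e. the shift of the
    spot relative to the kernel."""
    n = img.shape[0]
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(kernel))).real
    c = np.fft.fftshift(c)
    iy, ix = np.unravel_index(np.argmax(c), c.shape)

    def parabolic(cm, c0, cp):
        # Vertex of the parabola through three neighbouring samples
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d

    dx = parabolic(c[iy, ix - 1], c[iy, ix], c[iy, (ix + 1) % n])
    dy = parabolic(c[iy - 1, ix], c[iy, ix], c[(iy + 1) % n, ix])
    return ix - n // 2 + dx, iy - n // 2 + dy

# Elongated Gaussian kernel and a shifted copy of it as the 'turbulent' image
n = 32
y, x = np.indices((n, n))
kernel = np.exp(-((x - 16)**2 / 18.0 + (y - 16)**2 / 4.0))
img = np.roll(np.roll(kernel, 3, axis=1), -2, axis=0)
sx, sy = correlation_centroid(img, kernel)
```

Because the kernel here is an elongated Gaussian, the correlation peak is broader along the elongation axis, which is precisely why the noise on the recovered position is larger along that axis.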
4.2.3. Parameter choice
For this study we have chosen to use algorithms that could be used on all selected data without having to adjust the underlying parameters to the observing conditions, which would be, for example, the magnitude of the NGS, the strength of the turbulence, or the elongation and profile of the LGS spot (Bardou 2018).
To this end, a fixed number of brightest pixels was chosen to perform the centre of gravity on the TS images. Some of the NGS are very faint, so a fixed threshold cannot efficiently suppress detector noise without removing almost all the signal from the star. A fixed number of brightest pixels provides the flexibility to observe both bright and faint guide stars. Conversely, for the LS, the LGS flux does not vary by more than a factor of two across all observations and the S/N is consistently high. A fixed intensity threshold before application of the centre of gravity is therefore more relevant, whereas a fixed number of brightest pixels would not be able to follow the LGS spot size variations. In the case of the correlation, the kernel is derived from the LS images and is therefore naturally adapted to the observing conditions.
The reasoning behind these choices is that the goal of this study is not to find the best possible algorithm to compute the LS slopes, but rather to focus on the behaviour of the residual error within the error breakdown. Additionally, the choice of how the slopes are measured mainly affects noise propagation on the slopes and the centroid gain. As both these effects are accounted for within the error breakdown, not tailoring the threshold exactly to the observing conditions should have a minimal impact on the residual error, provided the algorithms are not suboptimal. We verified that we were not operating in a suboptimal regime by optimising the aforementioned parameters of the centre of gravity (threshold values, number of brightest pixels) using simulations based on on-sky data (Bardou 2018).
4.3. Slope projection
In the case of the LS we conduct some of the analyses along the axes of the elongated spot, so that the slopes need to be projected on different axes for each subaperture. Defining θ_{j} as the angle between the long axis of the spot and the x-axis of the detector for each subaperture j, such that the angle is contained within the range 0 to π, the projection matrix for each subaperture is a simple rotation matrix

P_{j} = [ cos θ_{j}   sin θ_{j} ; −sin θ_{j}   cos θ_{j} ],

such that

(s_{u}, s_{v})^{T} = P_{j} (s_{x}, s_{y})^{T},

where u and v are the elongated and non-elongated axes of the spot, respectively. We denote P the projection matrix for all subapertures, with s_{xy} the slope vector in detector space and s_{uv} the slope vector in spot space: s_{uv} = P s_{xy}.
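For each subaperture this projection is just a 2D rotation; a minimal sketch (the helper below is our own, operating on one subaperture's slope pair):

```python
import numpy as np

def to_spot_axes(s_xy, theta):
    """Project detector-space slopes (s_x, s_y) of one subaperture onto the
    elongated (u) and non-elongated (v) axes of its spot; theta is the
    angle between the long axis of the spot and the detector x-axis."""
    P = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    return P @ np.asarray(s_xy)

# A slope purely along a spot elongated at 90 deg projects entirely onto u
su, sv = to_spot_axes([0.0, 1.0], np.pi / 2)
```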
4.4. Centroid gain
Centroid gains are retrieved from dithering acquisitions, where the tip-tilt mirror is used to circularly modulate the images at 15 Hz. The interaction matrix M_{I} is used to convert tip-tilt commands v into slopes s^{TT} = M_{I}v; the measured slopes s^{WFS} are then demodulated to find the centroid gains γ for each axis of each subaperture:
In this equation the latency between the command sent to the mirror and the corresponding exposure on the WFS is taken into account so that s^{TT} and s^{WFS} are contemporaneous.
The compensation of the centroid gain is then performed by dividing the slopes by their corresponding centroid gain. The values used are the average centroid gains computed over the different dithering acquisitions of the same sequence.
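The demodulation can be illustrated on a single slope axis with synthetic data. The frequencies, amplitudes, and the simple least-squares demodulation below are illustrative; the actual pipeline converts mirror commands into slopes through the interaction matrix M_{I}:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_dither = 150.0, 15.0              # loop rate and dither frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)         # 2 s of data
s_tt = 0.3 * np.sin(2 * np.pi * f_dither * t)   # commanded slope s^TT
gamma_true = 0.8
s_wfs = gamma_true * s_tt + 0.05 * rng.normal(size=t.size)  # measured slope

# Demodulation: least-squares projection of the measurement on the command
gamma = (s_wfs @ s_tt) / (s_tt @ s_tt)

# Compensation: divide the measured slopes by the centroid gain
s_corrected = s_wfs / gamma
```

Turbulence or vibration power at exactly f_dither would bias gamma, which is the measurement error discussed in Sect. 4.4.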
In the case of the LS, it is interesting to examine the values of the centroid gain found along the axes of the LGS spots. To achieve this the slopes reconstructed from the mirror commands and the measured values are projected along the spot axes before the centroid gain is computed.
Figure 6 shows the behaviour of the centroid gain when the subaperture field of view is reduced. In this figure, we see that the reduction in the subaperture field of view leads to an important drop in centroid gain along the elongation axis. This indicates that the slope measurements become inaccurate as the spots become truncated. The drops occur for fields of view of less than 15.6″. Looking at the extension of the LGS spots in Table 3, this number is consistent with the minimum field of view that can be sustained without spot truncation.
Fig. 6. Centroid gain per slope for centre of gravity (top row) and correlation (bottom row) for sequence 5 (first column), 10 (second column), and 15 (third column) as the subaperture field of view is reduced. The first 36 points (left of the vertical solid line) in abscissae show the centroid gains for the slopes along the elongation axis; the remaining points (right of the vertical solid line) are the centroid gain along the minor axes of the spots. The solid coloured lines represent the mean value found across the different dithering acquisitions of a sequence, while the shaded areas show the full dispersions of the values found across the same sequence. 
Along the minor axis there is also a drop in sensitivity, even though it is much less pronounced. The short-axis centroid measurement results from averaging the measurements along the whole spot elongation, which significantly reduces the impact of losing the least bright pixels at the spot extremities through truncation. The drop is more pronounced for sequence 5, during which the spots were more asymmetric, and thus where the brightest end of the spot is lost more quickly to truncation as the field of view is reduced.
Depending on the spot profile, either correlation or centre of gravity may be more robust to spot truncation. For example, during sequence 5 the centre of gravity gains are higher, while for sequences 10 and 15 the opposite holds. On average, however, correlation and centre of gravity behave similarly.
In the same way as in Fig. 6, Fig. 7 shows the behaviour of the centroid gain when the pixel scale is increased. Increasing the pixel scale has very little impact because the largest pixel scale simulated corresponds to the FWHM of the spots. Centre of gravity gains are more sensitive to this subsampling, while correlation gains are almost unaffected.
Figure 7 also shows that the correlation gains are not unity, contrary to what is expected (Gratadour et al. 2010). This highlights the errors made when the centroid gain is measured. Simulations have shown that measuring the gain through dithering biases the results towards higher values (Bardou 2018), in agreement with the gains higher than one in the same figure. The likely cause is turbulence and vibrations of the LGS spots having a non-negligible component at the frequency isolated by the dithering. The lower values for the correlation gains are found during sequence 15, which coincides with the best seeing observed.
Since we know that in reality using correlation does not require centroid gain calibration, we do not compensate for it when considering the slopes obtained with correlation in the full subaperture case (Sect. 5.1). In all other cases we apply this correction; for the sake of continuity, we also apply it when the field of view is reduced or the pixel scale is increased (Sects. 5.2 and 5.3). On the TS, slopes are always computed with a centre of gravity, and therefore the centroid gain is always corrected. The slopes, once corrected for the centroid gain, are projected on the Zernike polynomials as described by Eq. (5).
4.5. Noise
The noise contribution is derived from the slope autocorrelation by assuming that noise is temporally uncorrelated, unlike the signal from turbulence (Gendron & Léna 1995). In the slope autocorrelation, noise therefore appears as a Dirac at τ = 0, while the turbulence contribution can be approximated by a parabola centred on τ = 0, where τ is the time offset in the autocorrelation. This parabola is estimated from the slope autocorrelation at τ = 1 and τ = 2. This computation is done on open-loop reconstructed slopes whenever the loop was closed, to ensure that there is a turbulent component to fit the parabola to.
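This estimator can be sketched as follows for a single slope time series (our own minimal implementation of the Gendron & Léna approach; the parabola is constrained to be centred on τ = 0, i.e. of the form aτ² + c):

```python
import numpy as np

def noise_variance(slopes):
    """Noise variance from the slope autocorrelation: a parabola a*tau^2 + c
    through the autocorrelation at lags 1 and 2 approximates the turbulence
    contribution; its extrapolation to lag 0 is subtracted from the total
    variance, the excess being the white-noise Dirac."""
    s = slopes - slopes.mean()
    n = s.size
    acf = np.array([s[:n - k] @ s[k:] / (n - k) for k in range(3)])
    a = (acf[2] - acf[1]) / 3.0      # from acf[1] = a + c and acf[2] = 4a + c
    turb_at_zero = acf[1] - a        # c: parabola extrapolated to lag 0
    return acf[0] - turb_at_zero

# Slowly varying 'turbulence' plus white noise of known variance 0.04
rng = np.random.default_rng(2)
t = np.arange(6000) / 150.0
slopes = np.sin(2 * np.pi * 0.5 * t) + 0.2 * rng.normal(size=t.size)
```

On this synthetic series the estimator recovers the injected noise variance to within its statistical uncertainty, while ignoring the much larger turbulent signal.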
4.5.1. TS noise
Noise is estimated along each axis of each subaperture; this measurement corresponds to the diagonal of the noise covariance matrix in slope space. Since noise is not correlated spatially between the subapertures or between the axes of a subaperture, this covariance matrix is diagonal. For our error breakdown, we seek the diagonal of the noise covariance matrix expressed on the Zernike modes:
Each term on the diagonal of this matrix corresponds to the noise on the corresponding Zernike mode; the total noise term is obtained by taking its trace.
4.5.2. LS noise
On the LS the elongation of the LGS spots causes noise to be correlated between the axes of the detector inside a subaperture, so that the noise covariance matrix is no longer diagonal (Tallon et al. 2008). We have verified this property by examining the crosscorrelation of slopes along the x and yaxes of each subaperture and finding that the Dirac peak is indeed present, as shown in Fig. 8. Furthermore, as visible in the same figure, the crosscorrelation of the slopes projected along the spot axes does not feature a Dirac peak.
Fig. 8. Demonstration of the fitting of the Dirac (in red) on the cross-correlation of slopes along the x- and y-axes of a subaperture (blue line) and the absence of Dirac on the cross-correlation of slopes projected on the elongated and non-elongated axes of the spots (orange line). 
Noise can then be measured on the autocorrelation of the slopes expressed along the spot axes (u, v), leading to a diagonal noise covariance matrix, which is in turn projected on the Zernike terms using the projection matrix P introduced in Sect. 4.3:
The total noise term is obtained by computing the trace of this matrix.
Alternatively, noise can also be measured directly along the axes of the detector by estimating the amplitude of the Dirac peak on the slope auto- and cross-correlations. For the cross-correlation, we estimate the contribution of the turbulence at τ = 0 by taking the average of the values at τ = −1 and τ = 1. This contribution is then subtracted from the total slope cross-correlation at τ = 0 to retrieve the amplitude of the Dirac, as illustrated in Fig. 8. This allows us to build a noise covariance matrix for each subaperture j of the form

C_{j} = [ n_{xx}   n_{xy} ; n_{xy}   n_{yy} ],
where n_{xx} and n_{yy} represent the noise measured on the slope autocorrelation along each axis of the subaperture.
To verify that our method to measure n_{xy} is valid, we have diagonalised this covariance matrix. The eigenvalues found with this operation should correspond to the noise measured directly along the axes of the spot. This match is verified in Fig. 9, where the blue line and the orange line are superimposed, the former representing noise measured on the slopes projected along the axes of the spots and the latter noise measured when diagonalising the covariance matrix. In the same figure the measured noise is higher along the elongation axis. On both axes, four subapertures display considerably more noise; they correspond to the central subapertures, which receive a little more than half the light a fully illuminated subaperture would because they are partially hidden behind the central obscuration.
Fig. 9. Noise measured on the LS for each slope along the elongated axis (first 36 slopes) and the thin axis (last 36 slopes) for one acquisition in sequences 5, 10, and 15 (left to right). The blue curves show the noise values obtained when projecting the slopes on the elongated and thin axes of the spot, the orange dashed curves show the values obtained when computing the eigenvalues of the covariance matrix. 
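This consistency check can be reproduced synthetically: build the detector-space noise covariance of one subaperture from assumed spot-axis noise levels, then verify that diagonalising it returns them (the angle and variances below are arbitrary):

```python
import numpy as np

theta = np.deg2rad(30.0)        # spot orientation in the subaperture
n_u, n_v = 9.0, 1.0             # noise variances along the elongated and
                                # thin axes (arbitrary units)
P = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Detector-space covariance: rotate the diagonal spot-space covariance back.
# Its off-diagonal term is the n_xy measured on the slope cross-correlation.
C_xy = P.T @ np.diag([n_u, n_v]) @ P
n_xy = C_xy[0, 1]

# Diagonalising C_xy recovers the noise along the spot axes
eigvals = np.sort(np.linalg.eigvalsh(C_xy))[::-1]
```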
4.6. Cone effect
Using simulations we derive a transfer function H to convert the measured turbulence profile into the cone effect contribution to the wavefront difference between the TS and LS measurements. H is obtained with a method similar to the one previously used on CANARY to compute the tomographic error (Gendron et al. 2014b).
In the simulations, we define both WFS as ideal SH limited only by their wavefront sampling, with one of them observing a guide star at a finite altitude. The difference between the measurements of the two WFS is therefore only impacted by the cone effect, and the impact on the aliased phase is also accounted for. In this configuration, using the method described by Gendron et al. (2014a), we compute Cs_{cone}(h_{l}), the slope covariance matrix of the two WFS for different turbulent layer altitudes h_{l} and a unitary strength of turbulence defined as D/r_{0}(h_{l}) = 1.
The slope covariance matrices are then projected on the Zernike modes using the same principle as in Eq. (23). Using the linearity of the wavefront variance with (D/r_{0}(h_{l}))^{5/3} and the independence of each turbulent layer, the total Zernike covariance matrix is
The trace of Cz_{cone} then gives the cone effect variance.
By defining the transfer function H(h_{l}) as the trace of the Zernike covariance matrix for a unitary turbulence strength, the cone effect variance becomes

σ²_{cone} = Σ_{l} H(h_{l}) (D/r_{0}(h_{l}))^{5/3},

where (D/r_{0}(h_{l}))^{5/3} is obtained from the turbulence profile measured on-sky. Figure 10 shows the transfer function.
Fig. 10. Transfer function H with respect to h_{l}, the altitude of the turbulent layer, normalised by h_{Na}, the altitude of the sodium layer. 
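The resulting error term is thus a profile-weighted sum over layers. A sketch with placeholder numbers (the altitudes, H values, and per-layer r_0 values below are illustrative, not the measured ones):

```python
import numpy as np

D = 4.2                                          # WHT diameter (m)
h_l = np.array([0.0, 2.5e3, 5.0e3, 10.0e3])      # layer altitudes (m)
H = np.array([0.0, 20.0, 60.0, 150.0])           # transfer function (nm^2)
r0_l = np.array([0.15, 0.60, 0.90, 1.20])        # per-layer r0 (m)

# sigma_cone^2 = sum_l H(h_l) * (D / r0(h_l))^(5/3)
sigma2_cone = np.sum(H * (D / r0_l) ** (5.0 / 3.0))
sigma_cone = np.sqrt(sigma2_cone)                # nm rms
```

Note that H vanishes at the ground (a ground layer produces no cone effect), which is why a ground-layer-dominated profile such as La Palma's keeps this term small.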
Table 5 gives the wavefront error due to the cone effect for sequences 5, 10, and 15. The corresponding measured profiles are plotted in Fig. 11. Since the vertical turbulence profile was not measured very often, this term of the error breakdown has the largest uncertainties attached to it. However, its contribution remains relatively small as the telescope diameter is only 4.2 m. In addition, the turbulence profile in La Palma is usually largely dominated by the ground layer (García-Lorenzo & Fuensalida 2011), which also keeps the cone effect small.
Fig. 11. Turbulent profiles measured for sequences 5, 10, and 15 (left to right) at the beginning of the sequence (blue) and at the end (orange, profile measured at the beginning of the next sequence). 
Spatial variance of the cone effect expressed in nm rms, corresponding to the turbulent profiles shown in Fig. 11, where the beginning of the sequence corresponds to the time given in Table 3 and the profile for the end of the sequence corresponds to the measurements made for the beginning of the next sequence roughly 20 min later.
4.7. Reference slopes
The TS reference slopes are obtained by taking measurements while the DM is shaped to provide the phase-diversity-optimised point spread function (PSF) on the IR camera (Gratadour et al. 2013). Following the reference slopes calibration, we measured a Strehl ratio of 0.73 in H band on an internal reference source, a value consistent with that obtained in the previous phases of CANARY, given the high-frequency defects on the DM surface.
Whether on the bench or on sky, the LS reference slopes are measured while the loop is closed on the TS.
For the purpose of offline data processing, LS reference slopes are recomputed using acquisitions during which the loop was driven by the TS. During operations, the thresholding of the LS images kept the 90–120 brightest pixels, different from the fixed threshold used offline. As the spots are asymmetric, different thresholds on the centre of gravity result in different reference slopes; therefore, we cannot use the LS reference slopes measured on-sky. Centroid gain is also compensated for in the on-sky reference slopes.
For the correlation, we followed a different approach, similar to the one described in Basden et al. (2014). For each subaperture, the kernel K of the acquisition under consideration is computed, as described earlier, by taking the average image of the first 500 frames. The correlation kernel K^{CL} of the acquisition in closed loop on the TS during the same observation sequence is also computed, also by averaging the first 500 images of this acquisition. The result of the correlation between K^{CL} and K is then used to displace K, so that the kernel for the acquisition under consideration is placed in the same position as the one obtained in closed loop. In that case the reference slopes are implicitly present in the correlation kernel. Since correlation is not affected by the centroid gain, there is also no need to take it into account.
5. Study of the residual errors
Thanks to the computations made in the previous section, we know all the terms necessary to compute the dynamic residual error of Eq. (13) and the static residual error of Eq. (16). In the following sections we use the square root of the different terms of the error breakdown.
5.1. Full subaperture
We first look at the behaviour of the residual error in the case of a full subaperture, the nominal CANARY LS subaperture of 30 pixels across, corresponding to a 19.5″ field of view. The LGS spot is then reasonably well sampled and not truncated.
5.1.1. Dynamic residual error
The left-hand side of Fig. 12 shows the residual dynamic errors found for the 15 sequences. They are put in perspective with the values found for the total dynamic wavefront difference standard deviation σ_{Δφ} and the noise on the LS. Since the residual error is found by quadratically subtracting terms from the total dynamic variance, negative values can be found that correspond to an incorrect evaluation of the terms. For the plots in this section we give the square root of the absolute residual error with the negative sign of the error, even though such values are physically incorrect, rather than artificially clip the results at zero. The right-hand side of Fig. 12 shows the values for the noise on the TS and the cone effect σ_{cone}.
Fig. 12. Terms of the dynamic error breakdown for the different observation sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. 
In Fig. 12 the residual error varies roughly between 0 and 100 nm rms. We expect the residual error to be linked to the characteristics of the LGS spots. This is partly verified, as the smallest values are reached for sequences 6 and 8, during which the spots were relatively small and symmetric. In the last three sequences the residual error decreases, while the spot profiles and elongations stayed fairly similar and the seeing improved. This suggests that turbulence strength has an impact on this term and that weaker turbulence produces a smaller error.
The LS noise is the dominant part of the total dynamic error, except for the last three sequences, during which the NGS was very faint. Using the correlation algorithm results in a lower LS noise than using centre of gravity, but the difference in the residual error is very small and well within the dispersion of the values found. This shows that the assumptions made in Eq. (4) are verified: the choice of the algorithm to measure slopes principally affects the centroid gain values and the noise propagation on the slopes, and these two terms are properly removed from the residual error. The difference in noise level between the results obtained with correlation and centre of gravity is more pronounced when the spots are asymmetric, illustrating that correlation makes more use of the features of the spots.
In Fig. 12, we assume that the loop configuration (e.g., dithering, open loop, closed loop) does not affect the dynamic terms of the error breakdown. Figure 13, which shows the variations in the different terms over time for three sequences, confirms that this assumption is justified: the patterns in the different terms cannot be linked to the loop configuration. However, the variations in the residual error σ_{res} appear to closely follow the variations in the TS noise (dotted violet line with square points), especially visible for sequence 10 and for other sequences not plotted here. Within a given observation sequence, the noise variations on the TS should be linked mainly to the seeing conditions, which make the NGS spots shrink or expand and thus modify the S/N. The link between the residual error and the TS noise could therefore indicate a dependence between the residual error and the seeing, which was already hinted at by the decrease in residual error over the last three sequences. Moreover, it is consistent with the low and even negative residual errors measured in sequence 15. The low residual errors, presumably due to the very good seeing, easily become negative when another term is overestimated, which also hides the correlation between the variations in the TS noise and the residual error. In particular, both TS and LS noise are significantly higher during this sequence, more easily producing an overestimation of these terms.
Fig. 13. Dynamic terms of the error breakdown across time for three sequences. The labels along the xaxis correspond to the loop configuration: TT represents a dithering acquisition, TS means that the loop was driven by the TS, LS means that the loop was driven by the LS, LG means that the loop was driven by the TS with low gains, and OL means that the loop was open. Only the results found using correlation are plotted as the curves using centre of gravity follow the same pattern. 
On the other hand, the variations in the noise on the LS are very small within one sequence, as was already visible in the small dispersion of values in Fig. 12. This confirms that the LGS noise is relatively independent of the seeing conditions and is mainly driven by the sodium profile, which, for our observations and the resolution of our LS, varies slowly enough not to produce large variations in noise.
The values used for the cone effect correspond to those of the nearest measured turbulence profile in time, hence the step in the values shown in Fig. 13 when the nearest known profile becomes the one measured after the sequence rather than the one measured at the beginning.
The variation of the residual error with respect to turbulence strength (D/r_{0})^{5/3} is shown in Fig. 14. In this plot we use the square of the residual error shown in the previous plots, as we expect a relation of proportionality between the variance of the phase and (D/r_{0})^{5/3} (i.e., the integral of the turbulence profile). The large dispersion of the points plotted in Fig. 14 results from the errors in the computation of the residual error, linked for instance to the variability of the true cone effect error from one acquisition to another. The dispersion can also be explained by a dependence on observing parameters other than turbulence strength, such as the sodium layer density profile.
Fig. 14. Residual dynamic error with respect to seeing conditions. Each point represents the measurement obtained from one acquisition; each symbol represents one sequence. 
In Fig. 14, the existence of a dependence between (D/r_{0})^{5/3} and the residual error is clear, but further work is necessary to establish the exact relation between the two terms. This dependence is at least in part due to possible residual misalignment between the two on-axis wavefront sensors, although differential anisoplanatism would also be contained in this term.
For the sake of completeness, Fig. 15 shows the Zernike decomposition of the dynamic terms. In this figure, the cone effect is not removed from the residual term (for which we do not have a Zernike decomposition), which thus becomes merely a 'noiseless' term, meaning that only the noise from both WFS is removed from the total dynamic error. The phase difference variance expanded on the Zernike modes is dominated by the LS noise variance, as observed for the total variance in Fig. 12, especially for sequences 5 and 10. We recognise the expected decrease in all terms as the mode order increases, up to mode 25. The bump culminating at mode 29 is the signature of spatial aliasing.
Fig. 15. Decomposition of the dynamic terms along the first 36 Zernike polynomials for the three example sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. Only the results found using correlation are plotted; the curves using centre of gravity follow the same pattern. 
5.1.2. Static residual error
Figure 16 shows the residual static error through time for the three example sequences. The lowest values for each sequence correspond to the point where the reference slopes are computed, indicated by the black circles. We obtain an estimate of the rate at which the static error grows by fitting a line through the measurements of m_{res} after the LS reference slopes are measured. Analysing data from all 15 sequences, we find increase rates ranging from 0.2 nm min^{−1} to 7 nm min^{−1}.
Fig. 16. Residual static error variations across time for each of the three example sequences. The duration of a full sequence is 20 min. The black circle indicates the closed loop acquisition during which the LS reference slopes were measured. The labels along the x-axis correspond to the loop configuration: TT represents a dithering acquisition, TS means that the loop was driven by the TS, LS means that the loop was driven by the LS, LG means that the loop was driven by the TS with low gains, and OL means that the loop was open. 
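The growth-rate estimate amounts to a first-order least-squares fit; for example (the time stamps and error values below are made up for illustration):

```python
import numpy as np

# Hypothetical static residual error measurements (nm rms) taken at a few
# times (minutes) after the LS reference slopes were computed
t_min = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 15.0])
m_res = np.array([22.0, 30.0, 41.0, 48.0, 60.0, 68.0])

# Growth rate in nm/min from a linear least-squares fit
rate, offset = np.polyfit(t_min, m_res, 1)
```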
Figure 17 compares the (quasi-)static error values for centre of gravity and correlation, and shows no significant difference in the performance of either, whether in terms of average value or dispersion. We observe in Fig. 17 that the overall residual static error may vary between 20 and 200 nm rms, which is a significant contribution to consider in an error budget.
Fig. 17. Static residual error for each sequence. The results shown are obtained for slopes computed either with centre of gravity or correlation. Points represent the mean value obtained across each sequence while the shaded areas represent the peak to valley dispersion of these values. 
There appears to be no obvious correlation between the residual error and the seeing, the elongation of the spots, or their asymmetry, nor between the variation in the residual error and the rate of rotation of the spots in the images or the rate of change of the baseline between the LLT and the WHT. Further work will be necessary to study the dependences of this quasi-static error.
5.2. Reduced fields of view
Figures 18 and 19 respectively present the dynamic and static residual errors obtained when the field of view of the subaperture is reduced. Here the centroid gain is always compensated for. The behaviour is the same for both residual errors: they rise below a field-of-view threshold that varies with the profile of the spot. This result echoes the changes observed in the centroid gain values when the field of view diminishes: when the spot is truncated, the slope measurements are no longer accurate. Logically, the asymmetric spot (sequence 5) shows higher errors when it is truncated: the profile of the spot seen in the subaperture changes more when the spot moves. We also see that, as expected, the error for the shorter spot (sequence 15) only begins to increase at smaller fields of view.
Fig. 18. Dynamic terms with respect to subaperture field of view for the three example sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. 
Fig. 19. Static residual error with respect to subaperture field of view for the three example sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. 
From these results we endeavoured to establish a lower limit on the field of view below which truncation impacts the measurement. For each acquisition we determine the minimum field of view such that the average residual dynamic error is less than 10 nm above the full subaperture error. The median minimum fields of view thus obtained for these sequences are shown in Fig. 20 as blue circles. These values are then projected along the elongation axis of the spot to make them correspond to the minimum length of the spot, regardless of the spot orientation, and are shown as orange squares. Finally, we also converted this length to match the equivalent length of a spot observed from a distance of 39 m and at zenith, using Eq. (1). The converted elongations are also shown in Fig. 20 as green triangles and represent the worst-case elongation for the ELT. From these values we can determine that the minimum ELT field of view should be able to accommodate spots as long as 23″, corresponding to a 16.3″ field of view, assuming that the most elongated spots lie along the diagonal of the subaperture.
Fig. 20. Spot elongations and fields of view without truncation effects for each sequence. The blue circles represent for each sequence the minimum subaperture field of view before the residual error rises to 10 nm more than the full subaperture residual error. The orange triangles correspond to the same field of view projected on the elongation axis of the spots. The red triangles represent the same length renormalised to correspond to a spot seen with a 39 m baseline and observed at zenith. The green circles represent the corresponding field of view for spots rotated by 45° in the subaperture. 
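The threshold procedure described above can be sketched in Python. This is a minimal illustration with NumPy: the function name, the error curve, and the numerical values are ours, not data from the paper.

```python
import numpy as np

def min_fov_without_truncation(fovs, dyn_err, full_err, margin=10.0):
    """Smallest field of view whose average residual dynamic error stays
    within `margin` nm rms of the full-subaperture error (10 nm in the
    text)."""
    ok = np.asarray(dyn_err) <= full_err + margin
    return min(f for f, good in zip(fovs, ok) if good)

# hypothetical residual dynamic error curve (nm rms) versus subaperture
# field of view (arcsec) for one acquisition
fovs = [6.5, 9.1, 11.7, 14.3, 16.9, 19.5]
dyn_err = [210.0, 95.0, 48.0, 41.0, 40.0, 40.0]
full_err = 40.0   # error with the full subaperture field of view

fov_min = min_fov_without_truncation(fovs, dyn_err, full_err)

# a spot lying along the subaperture diagonal can be sqrt(2) times
# longer than the field-of-view side
max_spot_length = fov_min * np.sqrt(2)
```

With the paper's numbers, a 23″ spot along the diagonal indeed corresponds to 23/√2 ≈ 16.3″ of field of view.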
In this study the spots are recentred in the subapertures during the offline processing by applying the results of the correlation between the whole detector image, averaged over a few seconds, and a binary array reproducing the SH pattern with subapertures of 22 pixels (instead of 30). This allows the spots to be well centred, whereas on-sky the spots were positioned using a brightest-pixel centre of gravity, so that spots with a bright ‘head’ (e.g., in sequence 5) would have the head in the middle of the field of view and the faint tail could easily be truncated. Having recentred the spots in a way that ensures that the whole spot is well inside the subaperture makes us confident that the minimum fields of view free of truncation effects are not misestimated.
5.3. Increased pixel scale
Figures 21 and 22 show the effects of increasing the pixel scale on the dynamic and static residual errors, respectively. The impact is very small: the noise and static error increase slightly, while the dynamic residual error diminishes. The large pixel scales seem to lead to an overestimation of the noise, as the dynamic error decreases while the static error increases.
Fig. 21. Dynamic terms with respect to pixel scale for the three example sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. 
Fig. 22. Static residual error with respect to pixel scale for the three example sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. 
In our measurements, even when the pixel scale is very large, the short axis is not truly undersampled: the pixels sample the short axis differently as we move along the elongation axis of the spot. This explains the small impact of changing the pixel scale on our results.
On a non-elongated spot the impact would be greater, so that 1.95″ pixels cannot be considered for an ELT sensor. The choice will be dictated by an overall trade-off on all the slopes to be measured in the whole pupil of the telescope, including the number of photons per subaperture, the phase reconstruction algorithm, the number of LGS, and the number of pixels available in the WFS detector. For example, Gratadour et al. (2010) found a pixel scale of 1.5″ for an LGS spot FWHM of 1.5″ in the short axis, but for pessimistic photon return conditions. On the other hand, Thomas et al. (2008) determined that, with a sampling no coarser than 1.5 pixels per FWHM, the undersampling error is negligible with respect to noise for photon levels as high as 10^{4} photons per subaperture per frame.
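As a minimal numerical illustration of this sampling criterion (the helper function is ours, assuming the Thomas et al. (2008) threshold of at least 1.5 pixels across the FWHM):

```python
def max_pixel_scale(fwhm_arcsec, samples_per_fwhm=1.5):
    """Coarsest pixel scale (arcsec/pixel) that still samples the spot
    FWHM with `samples_per_fwhm` pixels."""
    return fwhm_arcsec / samples_per_fwhm

# an ELT-like 1.5" short-axis FWHM tolerates pixels up to 1.0";
# the 1.95" pixels simulated here would leave ~0.77 pixels per FWHM
print(max_pixel_scale(1.5))   # 1.0
print(1.5 / 1.95)             # ~0.77
```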
6. Conclusions
With this experiment we have been able, for the first time, to test the impact of the LGS elongation in real atmospheric conditions. We have demonstrated that it is possible to use an ELT-scale elongated LGS to successfully drive the AO loop, with performance similar to that obtained with an NGS, on the condition that the non-common path aberrations on the LS are regularly calibrated.
In this paper we have studied the impact of LGS elongation on wavefront sensing through an error breakdown. This error breakdown was obtained by comparing the wavefront measurements obtained with an elongated LGS and with an NGS observed in the same direction. With this comparison we have removed the non-elongation errors from the LGS measurements to reach a residual error that can be attributed to the elongation of the LGS. We separated this error into dynamic and static components. The study of the dynamic component showed values varying between −50 and 160 nm rms with an average variation of 80 nm during a 20 min observation sequence. We have demonstrated that this error is linked to the seeing conditions and the spot profile: the error increases with turbulence strength and with sodium profile asymmetry within the elongated LGS. The static term varies between 20 and 200 nm rms and can grow by as much as 70 nm over the course of 10 min. The presence of these slowly varying aberrations indicates that reference NGS WFS, such as those foreseen for MAORY (Bonaglia et al. 2018) and HARMONI (Dohlen et al. 2018), are indeed essential for the LGS AO systems on the ELT.
We used two standard algorithms to measure the LS slopes (centre of gravity and correlation) and saw that both could be used to accurately measure the wavefront. We also demonstrated that, as expected, correlation is better adapted to the elongated spots and to the features produced by the sodium density profile. Using correlation reduces noise propagation, in particular for spots whose features are more pronounced. This study also allowed us to confirm the noise behaviour predicted by Tallon et al. (2008), and we presented robust methods for noise measurement.
We also studied the impact of changing the WFS design by simulating detectors with a smaller, more realistic number of pixels. We simulated both coarser pixel scales and subapertures with a smaller field of view. We found that truncating the spots strongly affects the measurements. The smallest field of view without truncation errors ranged from 10.4″ to 15.6″ depending on the observing conditions. Translating these results to the ELT observing at zenith, elongations as large as 23.5″ must be accommodated. Supposing that the ELT LGS WFS are oriented so that the most elongated LGS spots lie along the diagonal of the subaperture, the minimum field of view is then 16.3″. A large pixel scale (up to 1.95″) had little impact on the error; however, the impact would be stronger at the scale of the full ELT pupil. In that case, the spots close to the launch telescope have a small elongation, so they would be more sensitive to undersampling. We measured short-axis FWHMs of the spots of around 2.0″ in our experiment. At the ELT the FWHM should be closer to 1.5″, and the pixel scale will have to be optimised accordingly. This optimisation of the pixel scale should be combined with the choice of the subaperture field of view, while taking into account the variations of spot elongation across the ELT pupil and the number of available pixels in the WFS detector. Both parameters will result from a complex trade-off that is outside the scope of this paper.
With the data used in this study it is possible to investigate other slope measurement algorithms, algorithms that would produce lower noise transmission or better accuracy by using knowledge of the sodium profile. More relevant to the current ELT design will be the development of slope measurement methods that are more robust to truncation, since even the largest detectors currently available do not allow both a good sampling of the spots and a large enough field of view. However, our future work will first focus on better understanding the source of the slowly changing aberrations that we witnessed within the static term of the error breakdown, which is key to optimising the use of reference NGS WFS on the ELT.
There is a very limited number of asterisms (groups of stars) that allow CANARY to use all four of its NGS WFS. The list of these asterisms and their names was derived by Brangier (2012).
Acknowledgments
We thank the referee for their useful comments, which have improved this paper. This work has been supported by the OPTICON project (EC FP7 grant agreement 312430 and H2020 grant agreement 730890), by the Action Spécifique Haute Résolution Angulaire (CNRS/INSU and CNES, France), by the European Southern Observatory, by the Science and Technology Facilities Council (UK) (ST/P000541/1), and by a UKRI Future Leaders Fellowship (UK) (MR/S035338/1). L. Bardou’s PhD has been funded by the Fondation CFM pour la recherche. Data for this paper were obtained under the International Time Programme of the CCI (International Scientific Committee of the Observatorios de Canarias of the IAC) with the WHT on the island of La Palma in the Observatorio del Roque de los Muchachos.
References
 Anugu, N., Garcia, P. J. V., & Correia, C. M. 2018, MNRAS, 476, 300 [Google Scholar]
 Bardou, L. 2018, PhD Thesis, Université Paris Diderot, France [Google Scholar]
 Bardou, L., Gendron, É, Rousset, G., et al. 2017, AO4ELT 2017 Conf. Proc., http://research.iac.es/congreso/AO4ELT5/media/proceedings/proceeding126.pdf [Google Scholar]
 Bardou, L., Gendron, É., Rousset, G., et al. 2018, SPIE Conf. Ser., 10703, 107031X [Google Scholar]
 Basden, A. G., Myers, R. M., & Gendron, E. 2012, MNRAS, 419, 1628 [NASA ADS] [CrossRef] [Google Scholar]
 Basden, A. G., Chemla, F., Dipper, N., et al. 2014, MNRAS, 439, 968 [NASA ADS] [CrossRef] [Google Scholar]
 Basden, A. G., Bardou, L., Buey, T., et al. 2017, MNRAS, 466, 5003 [Google Scholar]
 Bonaccini Calia, D., Friedenauer, A., Protopopov, V., et al. 2010, in Adaptive Optics Systems II, Proc. SPIE, 7736, 77361U [Google Scholar]
 Bonaglia, M., Busoni, L., Plantet, C., et al. 2018, SPIE Conf. Ser., 10703, 107034D [Google Scholar]
 Brangier, M. 2012, PhD Thesis, Université Paris Diderot, France [Google Scholar]
 Diolaiti, E., Schreiber, L., Foppiani, I., & Lombini, M. 2012, SPIE Conf. Ser., 8447, 84471K [Google Scholar]
 Diolaiti, E., Ciliegi, P., Abicca, R., et al. 2016, in Adaptive Optics Systems V, Proc. SPIE, 9909, 99092D [Google Scholar]
 Dohlen, K., Morris, T., Piqueras Lopez, J., et al. 2018, SPIE Conf. Ser., 10703, 107033X [Google Scholar]
 Foy, R., & Labeyrie, A. 1985, A&A, 152, L29 [NASA ADS] [Google Scholar]
 Fried, D. L., & Belsher, J. F. 1994, J. Opt. Soc. Am. A, 11, 277 [NASA ADS] [CrossRef] [Google Scholar]
 Gach, J. L., Feautrier, P., Balard, P., Guillaume, C., & Stadler, E. 2014, in Adaptive Optics Systems IV, Proc. SPIE, 9148, 914819 [Google Scholar]
 GarcíaLorenzo, B., & Fuensalida, J. J. 2011, MNRAS, 416, 2123 [Google Scholar]
 Gendron, E. 2016, in Adaptive Optics Systems V, SPIE Conf. Ser., 9909, 99095Z [Google Scholar]
 Gendron, É., & Léna, P. 1995, A&AS, 111, 153 [Google Scholar]
 Gendron, É., Vidal, F., Brangier, M., et al. 2011, A&A, 529, L2 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Gendron, É., Charara, A., Abdelfattah, A., et al. 2014a, in Adaptive Optics Systems IV, SPIE Conf. Ser., 9148, 91486L [Google Scholar]
 Gendron, E., Morel, C., Osborn, J., et al. 2014b, in Adaptive Optics Systems IV, SPIE Conf. Ser., 9148, 91484N [Google Scholar]
 Gendron, É., Morris, T., Basden, A., et al. 2016, Proc. SPIE, 9909, 99090C [Google Scholar]
 Gilles, L., & Ellerbroek, B. 2006, Appl. Opt., 45, 6568 [Google Scholar]
 Gratadour, D., Gendron, E., & Rousset, G. 2010, J. Opt. Soc. Am. A, 27, A171 [Google Scholar]
 Gratadour, D., Gendron, E., & Rousset, G. 2013, in Proceedings of the Third AO4ELT Conference, eds. S. Esposito, & L. Fini, 67 [Google Scholar]
 Herriot, G., Hickson, P., Ellerbroek, B., et al. 2006, SPIE Conf. Ser., 6272, 62721I [Google Scholar]
 Herriot, G., Andersen, D., Atwood, J., et al. 2010, in Adaptive Optics Systems II, SPIE Conf. Ser., 7736, 77360B [Google Scholar]
 Löfdahl, M. G. 2010, A&A, 524, A90 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Michau, V., Rousset, G., & Fontanella, J. 1993, Real Time and Post Facto Solar Image Correction, 124 [Google Scholar]
 Morris, T., & Gendron, E. 2014, in Adaptive Optics Systems IV, Proc. SPIE, 9148, 91481I [Google Scholar]
 Morris, T., Basden, A., Buey, T., et al. 2016, in Adaptive Optics Systems V, Proc. SPIE, 9909, 99091I [Google Scholar]
 Muller, N., Michau, V., Robert, C., & Rousset, G. 2011, Opt. Lett., 36, 4071 [NASA ADS] [CrossRef] [Google Scholar]
 Neichel, B., Fusco, T., Sauvage, J. F., et al. 2016, in Adaptive Optics Systems V, Proc. SPIE, 9909, 990909 [Google Scholar]
 Noll, R. J. 1976, J. Opt. Soc. Am., 66, 207 [NASA ADS] [CrossRef] [Google Scholar]
 Nuttall, A. H. 1981, IEEE Trans. Acoust. Speech Signal Proces., 29, 84 [Google Scholar]
 Patti, M., Lombini, M., Schreiber, L., et al. 2018, MNRAS, 477, 539 [Google Scholar]
 Pfrommer, T., & Hickson, P. 2010, SPIE Conf. Ser., 7736, 773620 [Google Scholar]
 Pfrommer, T., & Hickson, P. 2014, A&A, 565, A102 [EDP Sciences] [Google Scholar]
 Poyneer, L. A. 2003, Appl. Opt., 42, 5807 [Google Scholar]
 Rais, M., Morel, J.M., Thiebaut, C., Delvit, J.M., & Facciolo, G. 2016, Appl. Opt., 55, 7836 [Google Scholar]
 Rigaut, F., & Gendron, E. 1992, A&A, 261, 677 [Google Scholar]
 Rimmele, T. R., & Radick, R. R. 1998, in Adaptive Optical System Technologies, Proc. SPIE, 3353, 1014 [Google Scholar]
 Rousset, G., Gratadour, D., Gendron, E., et al. 2014, in Adaptive Optics Systems IV, Proc. SPIE, 9148, 91483M [Google Scholar]
 Schreiber, L., Diolaiti, E., Arcidiacono, C., et al. 2014, SPIE Conf. Ser., 9148, 91486Q [Google Scholar]
 Tallon, M., TallonBosc, I., Béchet, C., & Thiébaut, E. 2008, in Adaptive Optics Systems, Proc. SPIE, 7015, 70151N [Google Scholar]
 Tamai, R., Koehler, B., Cirasuolo, M., et al. 2018, SPIE Conf. Ser., 10700, 1070014 [Google Scholar]
 Thomas, S., Fusco, T., Tokovinin, A., et al. 2006, MNRAS, 371, 323 [Google Scholar]
 Thomas, S. J., Adkins, S., Gavel, D., Fusco, T., & Michau, V. 2008, MNRAS, 387, 173 [Google Scholar]
 van Dam, M. A., Bouchez, A. H., Mignant, D. L., & Wizinowich, P. L. 2006, Opt. Express, 14, 7535 [Google Scholar]
 Véran, J.P., & Herriot, G. 2000, J. Opt. Soc. Am. A, 17, 1430 [NASA ADS] [CrossRef] [Google Scholar]
 Vidal, F., Gendron, É., & Rousset, G. 2010, J. Opt. Soc. Am. A, 27, A253 [Google Scholar]
 Vidal, F., Gendron, É., Rousset, G., et al. 2014, A&A, 569, A16 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar]
 Wizinowich, P. L., Le Mignant, D., Bouchez, A. H., et al. 2006, PASP, 118, 297 [NASA ADS] [CrossRef] [Google Scholar]
Appendix A: Correlation algorithm
In this appendix we provide additional details on the implementation of the correlation-based algorithm used to measure the slopes.
Let us recall the general principle of the method: it relies on a kernel K which represents the turbulent spot at its reference position. This kernel is correlated with the turbulent image I, and the position of the maximum of the correlation map C gives the displacement of the spot with respect to its reference position. We recall Eq. (18), which defines the correlation map as the cross-correlation of the turbulent image with the kernel, C = I ⋆ K.
To speed up the computation, the correlation is performed using Fourier transforms (Poyneer 2003), C = ℱ^{−1}[ℱ(I) ℱ(K)^{*}], with ℱ symbolising the Fourier transform and ^{*} the complex conjugate.
Before the correlation is computed, the arrays containing the kernel and the turbulent image must be doubled in size to avoid aliasing effects (Thomas et al. 2006). The turbulent image is zero-padded, while the kernel is quadrupled, i.e., replicated in a 2 × 2 tile so that it fills the whole doubled array.
Quadrupling the kernel is essential to avoid errors induced by the background: if a non-zero background is present in the kernel or in the turbulent image, part of the correlation peak is made of the product of the background of the image with the background of the kernel, which biases the measurement towards zero. Quadrupling the kernel removes this error. It is equivalent to nulling every other point in Fourier space, and this operation need not be applied to the turbulent image.
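The padding and quadrupling scheme can be sketched with NumPy; this is a minimal illustration under our own naming, not the instrument code:

```python
import numpy as np

def correlation_map(image, kernel):
    """Correlate a subaperture image with a reference kernel in Fourier
    space, on arrays doubled in size: the turbulent image is zero-padded
    while the kernel is quadrupled (replicated in a 2 x 2 tile)."""
    n = image.shape[0]                       # assume square n x n arrays
    img_pad = np.zeros((2 * n, 2 * n))
    img_pad[:n, :n] = image                  # zero-padded turbulent image
    ker_quad = np.tile(kernel, (2, 2))       # quadrupled kernel
    # C = F^-1[ F(I) F(K)* ], shifted so displacements read off the centre
    c = np.fft.ifft2(np.fft.fft2(img_pad) * np.conj(np.fft.fft2(ker_quad)))
    return np.fft.fftshift(c.real)
```

Because of the quadrupling, the correlation map is periodic with period n, and the peak position modulo n gives the spot displacement.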
In Fourier space, an apodisation function w is applied to the product of the Fourier transform of the turbulent image and the complex conjugate of the Fourier transform of the kernel. It is a four-term cosine window of the form w(n) = ∑_{k=0}^{3} a_{k} cos(2πkn/N).
Here N is the number of points on which the function is applied (Nuttall 1981). The function is extended to two dimensions by revolution. The result is zero-padded until the array dimensions are a power of two. The zero-padding does little to increase the precision of the correlation, whereas the use of an apodisation function is essential; the zero-padding therefore only serves to speed up the inverse Fourier transform that yields the correlation map in direct space. Choosing an apodisation function is a trade-off between maximising the difference between the central peak of its Fourier transform and its first sidelobe, and minimising the width of the central peak. We chose to optimise the former aspect, which amounts to smoothing the correlation map, which in turn helps the fit and reduces the noise. We tested a few different apodisation functions, and while the one selected yielded better performance in terms of noise transmission, there was very little difference between them.
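Such a window can be built as follows; this sketch assumes the ‘minimum 4-term’ coefficients tabulated by Nuttall (1981), since the paper does not state which member of the family was used:

```python
import numpy as np

def nuttall_window(N):
    """Four-term cosine window, symmetric over N points. The coefficients
    are the 'minimum 4-term' set from Nuttall (1981); the variant actually
    used by the authors is an assumption."""
    a = (0.355768, -0.487396, 0.144232, -0.012604)
    n = np.arange(N)
    return sum(ak * np.cos(2 * np.pi * k * n / (N - 1))
               for k, ak in enumerate(a))

def apodisation_2d(N):
    """Extend the 1D window to 2D by revolution about its centre."""
    w = nuttall_window(N)
    c = (N - 1) / 2
    y, x = np.indices((N, N))
    r = np.hypot(y - c, x - c)               # radial distance to the centre
    return np.interp(c + r, np.arange(N), w, right=0.0)
```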
Once the correlation map has been computed, the position (x_{m}, y_{m}) of its maximum is determined among the available points. A 2D Gaussian fit is then performed on the points around that maximum to refine its position. The polynomial function fitted is P(x, y) = ax^{2} + by^{2} + cxy + dx + ey + f.
This corresponds to a paraboloid centred on the point (x_{m0}, y_{m0}): x_{m0} = (ce − 2bd)/(4ab − c^{2}), y_{m0} = (cd − 2ae)/(4ab − c^{2}).
The slope on the x-axis is then given by s_{x} = x_{m} + x_{m0},
and similarly on the y-axis.
The polynomial coefficients are derived from the least-squares solution (a, b, c, d, e, f)^{T} = M^{†} ln c, where ^{†} denotes the generalised inverse. The inverted matrix M contains the x and y coordinates of the points used for the fit, expressed in local coordinates centred on (x_{m}, y_{m}) and raised to the appropriate powers. The rightmost term, ln c, is the vector containing the logarithm of the intensities of the correlation at those points.
Since the correlation map has a larger frame due to the zero-padding, the number of points n around the maximum used to perform the fit is chosen as the smallest odd number of points spanning a size close to three original pixels. If we choose more pixels, the accuracy diminishes, as the fitted zone then extends beyond the Gaussian core of the peak.
This fit is the same as the 2D least-squares fit described in Löfdahl (2010), with slight variations: in our implementation it can operate on a flexible number of pixels around the maximum, and it uses the logarithm of the intensities of the correlation map, hence its designation as a Gaussian fit. We found consistently worse results when using a parabolic fit (i.e., using the correlation intensities without first applying a logarithm).
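The fit can be sketched as follows; a minimal illustration (the function name and the 3 × 3 patch choice are ours), assuming the correlation values around the maximum are strictly positive:

```python
import numpy as np

def refine_peak(corr, n=3):
    """Refine the correlation maximum with a 2D Gaussian fit: a
    second-degree polynomial P(x, y) = ax^2 + by^2 + cxy + dx + ey + f
    is least-squares fitted to the logarithm of the n x n correlation
    values around the maximum (values must be strictly positive)."""
    ym, xm = np.unravel_index(np.argmax(corr), corr.shape)
    h = n // 2
    patch = corr[ym - h:ym + h + 1, xm - h:xm + h + 1]
    y, x = np.indices(patch.shape) - h       # local coordinates
    M = np.column_stack([x.ravel() ** 2, y.ravel() ** 2,
                         (x * y).ravel(), x.ravel(), y.ravel(),
                         np.ones(patch.size)])
    a, b, c, d, e, f = np.linalg.lstsq(M, np.log(patch).ravel(),
                                       rcond=None)[0]
    det = 4 * a * b - c ** 2                 # paraboloid centre offsets
    x0 = (c * e - 2 * b * d) / det
    y0 = (c * d - 2 * a * e) / det
    return ym + y0, xm + x0
```

On a noiseless Gaussian spot the fit recovers the centre exactly, since the logarithm of a Gaussian is exactly quadratic.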
We also tested thresholding the image before applying the correlation, and found that it does not improve performance.
All Tables
Coordinates and V-band magnitudes of the central star of the asterism observed by CANARY, i.e., the star used as natural guide star on the TS.
Spatial variance of the cone effect, expressed in nm rms, corresponding to the turbulent profiles shown in Fig. 11. The beginning of each sequence corresponds to the time given in Table 3; the profile for the end of a sequence corresponds to the measurements made at the beginning of the next sequence, roughly 20 min later.
All Figures
Fig. 1. Illustration of the WHT pupil (dark green) compared to the ELT pupil (light green), with the outer circle of 38.542 m in diameter and the orange star representing the position of the laser launch telescope. The SH pattern corresponds to an average of a thousand images from CANARY LGS WFS taken during sequence 3 (see Sect. 2.3). 
Fig. 2. General setup of the experiment. 
Fig. 3. Strehl ratio measured during the closed loop acquisitions on 15 selected sequences when the loop was driven by the TS (blue circles) or the LS (orange triangles). The Strehl ratio was not measured during sequence 3, and the missing points correspond to rejected acquisition (see Sect. 2.3). 
Fig. 4. Mean elongation above 20% of maximum intensity with respect to cosine of the zenith angle multiplied by the distance between WHT and LLT for all acquisitions selected. Each symbol and colour corresponds to a sequence. 
Fig. 5. Spot examples (the corresponding sequence number is given in the titles) and the corresponding profiles along the elongated (blue) and thin (orange) axes. Images are averaged over 500 frames. Their intensities are in photoelectrons, whereas the 1D profiles have been normalised. One example was taken per set of temporally continuous sequences. 
Fig. 6. Centroid gain per slope for centre of gravity (top row) and correlation (bottom row) for sequence 5 (first column), 10 (second column), and 15 (third column) as the subaperture field of view is reduced. The first 36 points (left of the vertical solid line) in abscissae show the centroid gains for the slopes along the elongation axis; the remaining points (right of the vertical solid line) are the centroid gain along the minor axes of the spots. The solid coloured lines represent the mean value found across the different dithering acquisitions of a sequence, while the shaded areas show the full dispersions of the values found across the same sequence. 
Fig. 7. Same as Fig. 6, but with increasing pixel scale instead of decreasing fields of view. 
Fig. 8. Demonstration of the fitting of the Dirac (in red) on the cross-correlation of slopes along the x- and y-axes of a subaperture (blue line), and of the absence of a Dirac on the cross-correlation of slopes projected on the elongated and non-elongated axes of the spots (orange line). 
Fig. 9. Noise measured on the LS for each slope along the elongated axis (first 36 slopes) and the thin axis (last 36 slopes) for one acquisition in sequences 5, 10, and 15 (left to right). The blue curves show the noise values obtained when projecting the slopes on the elongated and thin axes of the spot; the orange dashed curves show the values obtained when computing the eigenvalues of the covariance matrix. 
Fig. 10. Transfer function H with respect to h_{l}, the altitude of the turbulent layer, normalised by h_{Na}, the altitude of the sodium layer. 
Fig. 11. Turbulent profiles measured for sequences 5, 10, and 15 (left to right) at the beginning of the sequence (blue) and at the end (orange, profile measured at the beginning of the next sequence). 
Fig. 12. Terms of the dynamic error breakdown for the different observation sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. 
Fig. 13. Dynamic terms of the error breakdown across time for three sequences. The labels along the x-axis correspond to the loop configuration: TT represents a dithering acquisition, TS means that the loop was driven by the TS, LS means that the loop was driven by the LS, LG means that the loop was driven by the TS with low gains, and OL means that the loop was open. Only the results found using correlation are plotted, as the curves using centre of gravity follow the same pattern. 
Fig. 14. Residual dynamic error with respect to seeing conditions. Each point represents the measurement obtained from one acquisition; each symbol represents one sequence. 
Fig. 15. Decomposition of the dynamic terms along the first 36 Zernike polynomials for the three example sequences. The points represent the average value across one sequence; the shaded areas represent the peak to valley dispersion of the values measured in the course of a sequence. Only the results found using correlation are plotted; the curves using centre of gravity follow the same pattern. 