A&A, Volume 673, May 2023
Article Number A29, 13 pages
Section: Astronomical instrumentation
DOI: https://doi.org/10.1051/0004-6361/202244961
Published online: 28 April 2023

© The Authors 2023

Licence: Creative Commons Attribution. Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This article is published in open access under the Subscribe to Open model.

1 Introduction

Over the past 30 yr, more than 5000 exoplanets have been discovered around nearby stars. Most of these discoveries draw on indirect techniques (i.e., transits and radial velocity). However, indirect detection techniques are limited in the physical and chemical information they provide about exoplanets. Direct imaging, on the other hand, enables a detailed characterization of exoplanets (Currie et al. 2022a; Traub & Oppenheimer 2010, and references therein). Direct imaging can provide much more detailed information on the atmospheres of exoplanets through spectroscopic analyses (e.g., Barman et al. 2011, 2015; Currie et al. 2011; Lacy et al. 2019). From spectra, we can assess whether an atmosphere exists, determine its composition, and search for possible biosignatures. This exceeds what can be achieved with indirect techniques (Lopez-Morales et al. 2019).

For this reason, direct imaging is a key science focus of current ground-based telescopes as well as of future ground-based extremely large telescopes (ELTs) and space-based telescopes (National Academies of Sciences, Engineering, and Medicine 2021). To deliver high-contrast imaging (HCI) performance for direct imaging, HCI systems employ coronagraphs to suppress starlight. A direct detection of exoplanets also requires removing the noisy, structured stellar halo with extreme precision: Ground-based telescopes use extreme adaptive optics (AO) correction to compensate for wavefront errors induced by atmospheric turbulence, and space-based observatories use high-precision wavefront control to remove wavefront variations due to thermal and mechanical disturbances. Leading HCI systems have been implemented on current large ground-based telescopes, including the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) at the Very Large Telescope (Beuzit et al. 2019), the Gemini Planet Imager (GPI) at the Gemini South Observatory (Macintosh et al. 2014), the Keck Planet Imager and Characterizer (KPIC) at the Keck Observatory (Delorme et al. 2021), the Magellan extreme AO instrument (MagAO-X) on the Magellan Clay 6.5 m telescope at Las Campanas Observatory (Males et al. 2022), and the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) at the Subaru telescope (Jovanovic et al. 2015). However, the ~20 planets that have been directly imaged thus far are typically in young systems (e.g., ~1–100 Myr) and represent the extremes of planet formation: superjovian planets at projected separations typically Neptune-like or greater, at angular separations of 0.5″–1″, with planet-to-star contrasts of ~10−4–10−6 (e.g., Marois et al. 2008; Lagrange et al. 2010; Rameau et al. 2013; Currie et al. 2014, 2022b, 2023; Chauvin et al. 2017; Haffert et al. 2019). Detecting jovian planets in reflected light at 1–5 au requires contrasts of ~10−7–10−9.
Habitable-zone Earth-sized planets around low-mass stars have comparable contrasts, but are located at very small angular separations within several λ/D of the star in the near-infrared, even for 30 m class telescopes. Detecting these planets from the ground requires new advances in wavefront control to achieve contrasts that are factors of 100–1000 deeper at small angular separations.

The contrast of HCI systems is limited by both static and dynamic wavefront errors (WFEs). Static WFEs generally include optical defects and non-common path aberrations (NCPAs). Dynamic WFEs on ground-based telescopes mainly consist of AO residual errors from atmospheric turbulence. Several post-processing methods are available to remove the static speckles, such as angular differential imaging (ADI; Marois et al. 2006), reference star differential imaging (RDI; Ruane et al. 2019), spectral differential imaging (SDI; Sparks & Ford 2002), and polarimetric differential imaging (PDI; Kuhn et al. 2001). Even though these post-processing methods effectively improve the contrast in the science image, they are still limited by the quasi-static and dynamic speckles (Guyon 2004; Martinez et al. 2012; Martinache 2013; Skaf et al. 2021, 2022). The ideal solution for removing quasi-static and dynamic speckles is focal plane wavefront sensing and control (FPWFS&C).

The FPWFS&C techniques use a focal-plane camera as a wavefront sensor (WFS) to measure and actively compensate for the NCPAs and dynamic WFE. Several FPWFS&C techniques have been developed, and the specific requirements and performance limitations are presented in Jovanovic et al. (2018). Some FPWFS&C techniques such as COFFEE (Paul et al. 2013), speckle nulling (Bordé & Traub 2006), and electric field conjugation (EFC; Give’on et al. 2007) require field modulation using the deformable mirror (DM) to estimate the wavefront aberration due to the quadratic relation between the focal-plane image and aberration.

Consequently, these methods cannot be used simultaneously with science observations because the field modulation interrupts the science acquisition by creating additional speckles in the science image. Other techniques such as the asymmetric pupil Fourier wavefront sensor (Martinache 2013), the Zernike phase-mask sensor (N’Diaye et al. 2013), and Fast & Furious (Bos et al. 2020) are only compatible with the non-coronagraphic mode. On the other hand, tip-tilt sensing (QACITS; Huby et al. 2015) is only compatible with the coronagraphic mode. Other methods such as the self-coherent camera (SCC; Galicher et al. 2010) and the holographic modal wavefront sensor (Wilby et al. 2017) provide a 100% science duty cycle. However, they require additional optics that complicate the optical system. Moreover, most of the FPWFS&C techniques mentioned above cannot run fast and continuously during science observations and are therefore only suitable for slow NCPA corrections. In contrast, linear dark field control (LDFC; Miller et al. 2017) needs neither field modulation nor additional components, which means that it can be operated in real time during science observations with a 100% duty cycle without complicating the system. LDFC is a wavefront stabilization technique, however, and cannot by itself generate the high-contrast zone, also known as a dark hole (DH), in the science image. As mentioned above, every method has advantages and disadvantages, and we must therefore select the proper method to achieve both high contrast and a high duty cycle for the direct imaging of exoplanets. One strategy for the Wide Field Infrared Survey Telescope (WFIRST) Coronagraph Instrument (CGI) is to generate the high-contrast zone using EFC on a far brighter reference star near a science target and then to slew to the target (Bailey et al. 2018). However, it is hard to sustain the high-contrast zone because of dynamic aberrations.
We can restore the contrast by slewing back to the reference star, but it is not ideal for the observation’s duty cycle (Currie et al. 2020).

We propose a method providing both deep contrast and a 100% science duty cycle by combining two FPWFS&C methods, EFC and spatial LDFC. EFC has been proven to generate a DH around a star at the 10−8 level in air and at 10−9 or lower in vacuum when combined with advanced coronagraphy (Belikov et al. 2011, 2012; Trauger et al. 2011; Cady et al. 2015). For this reason, EFC is widely used in high-contrast imaging testbeds (Potier et al. 2020). Furthermore, Potier et al. (2022) achieved an on-sky DH using the EFC method. Spatial LDFC is a wavefront stabilization technique that is able to maintain a high-contrast state, as demonstrated in the laboratory (Currie et al. 2020; Miller et al. 2021) and on-sky (Bos et al. 2021). Our strategy for directly imaging planets is first to generate the high-contrast zone using EFC with the internal source or a bright reference star near the science target, and then to apply DM corrections while observing the science target by closing the spatial LDFC loop. In other words, EFC takes care of the static WFE, including NCPAs, and spatial LDFC provides high-speed wavefront stabilization without DM probing. Ultimately, this approach maintains deep contrast with a 100% duty cycle. Miller et al. (2017) have shown promising results in numerical simulations. Furthermore, we present in this paper the first laboratory demonstration of combining EFC with spatial LDFC on the Subaru/SCExAO testbed. Spatial LDFC requires only a one-sided DH with an unsuppressed bright field in the focal plane image. To generate the one-sided DH, we implement the implicit EFC (iEFC) algorithm newly developed by Haffert et al. (2022). The most significant advantage of the iEFC algorithm is that, in contrast to classical EFC, it does not require a numerical model of the optical system, which means that the error between the numerical model and the actual system can be minimized. It also allows us to reach a deeper contrast in the DH than previous spatial LDFC implementations, which generated the DH with speckle nulling. We also provide a detailed analysis and discussion of the proposed method for practical implementation. Above all, the most significant advance of this paper is that it provides, for the first time, a fully practical solution for deploying a high-speed FPWFS&C.

In Sect. 2, we describe the details of the iEFC algorithm and the calibration process for the actual implementation. In Sect. 3, we briefly mention the principle of spatial LDFC and show the calibration process of spatial LDFC. In Sect. 4, we present a detailed analysis and discussion of combining EFC with spatial LDFC for practical implementation. In Sect. 5, we present a short introduction to the SCExAO testbed and the laboratory demonstration results of the EFC and spatial LDFC on the SCExAO bench. Last, we conclude with a discussion of the results and future work towards on-sky implementation in Sect. 6.

2 Implicit EFC

2.1 Principle of iEFC

In this section, we review the key equations of the iEFC algorithm, first following the approach of Give’on et al. (2007) and then the “implicit” extension thereof introduced by Haffert et al. (2022). When wavefront aberrations ϕWF are small (<1 rad), the star electric field Es on the science camera can be approximated using a Taylor expansion as (Give’on et al. 2007; Potier et al. 2020)

$$E_s = C\!\left[A e^{i\phi_{\rm WF}}\right] \approx C[A] + iC\!\left[A\phi_{\rm WF}\right] = E_0 + E_{\rm WF}, \tag{1}$$

where E0 = C[A] is the electric field of the target image in the focal plane (science camera), A is the pupil illumination function, ϕWF is the aberration in the pupil plane, and C[·] is the linear coronagraph operator. EWF = iC[AϕWF] thus denotes the first-order disturbance to the electric field of the focal plane induced by wavefront aberrations. However, we can only measure the intensity of the stellar speckles: |Es|2. To retrieve the electric field of the focal plane, pair-wise probing has been most commonly used (Give’on et al. 2007; Potier et al. 2020; Haffert et al. 2022). This method temporally modulates the speckle intensity by introducing known aberrations of opposite signs, and it is referred to as “pair-wise” probing. The acquired intensity in the science camera with a small probe phase ±ψj on the DM can be written as

$$I_j^{\pm} = \left|C\!\left[A e^{i(\phi_{\rm WF} \pm \psi_j)}\right]\right|^2 \approx \left|E_0 + E_{\rm WF} \pm E_{p,j}\right|^2, \tag{2}$$

where Ep,j = iC[Aψj] is the electric field introduced by probe j.

N pairs of DM probes are used, resulting in the acquisition of 2N focal plane images Ij±, j = 1, …, N. Give’on et al. (2007) showed that the difference between the pairs of measured intensities with multiple (N) probes can be written as

$$\begin{bmatrix} I_1^+ - I_1^- \\ \vdots \\ I_N^+ - I_N^- \end{bmatrix} = 4 \begin{bmatrix} \Re(E_{p,1}) & \Im(E_{p,1}) \\ \vdots & \vdots \\ \Re(E_{p,N}) & \Im(E_{p,N}) \end{bmatrix} \begin{bmatrix} \Re(E_s) \\ \Im(E_s) \end{bmatrix}, \tag{3}$$

where ℜ(·) and ℑ(·) represent the real and imaginary parts of the complex electric field, respectively, and where the Ep,j are the electric field corrections induced by the probes. We can rewrite Eq. (3) by substituting each matrix by δ, X, and ℰ from the left to the right:

$$\delta = X\mathcal{E}. \tag{4}$$

To solve this linear matrix problem and reconstruct the electric field ℰ, we need at least two pairs (N = 2) of probes corresponding to four images. We used three pairs (N = 3) of probes in the actual implementation for better performance, and the optimization of the probe is described in Sect. 2.2.
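The pixel-by-pixel linear system of Eq. (4) can be solved by least squares once at least two probe pairs are available. Below is a minimal numpy sketch of this reconstruction, with synthetic probe and stellar fields standing in for real measurements (all array names and sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

n_probes = 3   # N pairs of probes (N >= 2 is required to solve for Re and Im)
n_pix = 100    # number of pixels in the dark-hole region

# Hypothetical probe fields E_p,j and true stellar field E_s at each pixel.
E_p = rng.normal(size=(n_probes, n_pix)) + 1j * rng.normal(size=(n_probes, n_pix))
E_s = rng.normal(size=n_pix) + 1j * rng.normal(size=n_pix)

# Pair-wise difference images: I_j^+ - I_j^- = 4 Re(E_p,j^* E_s) at each pixel.
delta = 4.0 * np.real(np.conj(E_p) * E_s)

# Solve delta = X [Re(E_s), Im(E_s)]^T (Eq. (4)) independently for each pixel.
E_rec = np.empty(n_pix, dtype=complex)
for k in range(n_pix):
    X = 4.0 * np.column_stack([E_p[:, k].real, E_p[:, k].imag])
    re_im, *_ = np.linalg.lstsq(X, delta[:, k], rcond=None)
    E_rec[k] = re_im[0] + 1j * re_im[1]
```

With noiseless synthetic data and N = 3 probes, the least-squares solution recovers the stellar field exactly; in practice, extra probes mainly improve robustness to noise.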

In the framework of classical EFC (Give’on et al. 2007), the DM command α that minimizes the star electric field Es can be determined from the differential intensities obtained with pair-wise probing, which can be written as follows:

$$\alpha = -G^{\dagger}\mathcal{E} = -G^{\dagger}X^{\dagger}\delta, \tag{5}$$

where X is the electric field induced by the DM probes, and G is the linear transformation matrix representing the electric field induced by each DM mode or actuator (according to the representation used for the DM command vector). However, classical EFC (Eq. (5)) requires a numerical model of the optical system to build the G matrix and reconstruct the focal plane electric field from measured intensities. Because of this reconstruction process, classical EFC is inherently a model-dependent algorithm, which means that the final performance and convergence speed depend on the accuracy of the system model. In contrast, iEFC does not require knowledge of the numerical model. The description of iEFC starts with the forward-propagation equation from the DM command to the pair-wise difference images,

$$\delta = X\left(G\alpha + \mathcal{E}\right) = XG\alpha + X\mathcal{E}, \tag{6}$$

which shows that there is a single response matrix, Z = XG, that relates the effect of the DM modal coefficients with the modulated difference images. Since there is no intermediate electric field reconstruction matrix in iEFC, the command to remove speckles and create the DH in the focal plane can be computed without knowledge of the system model, that is, Z can be directly measured. The solution of Eq. (6) can be written as

$$\alpha = -Z^{\dagger}\delta. \tag{7}$$

The detailed calibration processes of the response matrix Z and the control matrix Z† are presented in Sect. 2.3.
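In this model-free picture, one closed-loop iEFC step is simply Eq. (7) applied to the latest difference images. A toy numpy sketch (Z and δ are synthetic; in a real system Z comes from the calibration described in Sect. 2.3, and the static term Xℰ is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

n_meas, n_modes = 60, 20   # stacked difference-image pixels x DM modes

# Hypothetical measured response matrix Z = XG (directly calibrated, no model).
Z = rng.normal(size=(n_meas, n_modes))

# Current difference images, assumed here to be fully explained by correctable
# modal aberrations a_true.
a_true = rng.normal(size=n_modes)
delta = Z @ a_true

# Eq. (7): the DM command that nulls the measured difference images.
alpha = -np.linalg.pinv(Z) @ delta
residual = delta + Z @ alpha   # predicted difference images after correction
```

In the linear regime, the command drives the modulated difference images to zero in a single step; real loops iterate with a gain below unity to stay stable against noise and model nonlinearity.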

2.2 Probe optimization

We chose single actuator pokes as the probes in the pupil plane to modulate the focal plane intensity. A single actuator poke modulates a large area of the focal plane (about the whole control region of the DM). We then need to find the actuators that are best suited as probes for the actual implementation. To find the best probe actuators, we integrated the intensity response over the DH region of the focal plane image for each actuator poke and selected the actuators with the most significant response. Figure 1 shows the layout of the DM actuators for the AO loop with the SCExAO pupil mask. The DM has 45 actuators across the beam diameter, and the pupil mask includes the central obscuration, the spiders, and the block for the dead actuators (actuators that cannot be modulated). We poked each actuator in the pupil (~1400 actuators) and measured the corresponding intensity response inside the DH. To measure the intensity response Ires, we subtracted the initial intensity from the intensity measured with the actuator poke. The measured intensity responses corresponding to each actuator are shown in Fig. 2.

The value at each actuator position indicates the normalized total flux when the corresponding actuator is poked. In our demonstration, we used a classical Lyot coronagraph, so that the shape of the Lyot stop is clearly visible. The total number of actuators hidden by the Lyot stop is 569 (~40% of the actuators in the pupil). Most actuators blocked by the Lyot stop have nearly zero response, while most actuators transmitted by the Lyot stop have a good response in the focal plane. We also visualize the highest response (best case) and the lowest response (worst case) in the DH in Fig. 3. We realized that we needed to select probe actuators that are not blocked by the Lyot stop to obtain a sufficient focal plane response.

According to this result, we selected actuator 723 as probe 1 and its neighboring actuators 691 and 755 as probes 2 and 3, respectively, following the strategy of Potier et al. (2020). We used three probes because they yield a better performance than two probes. The actuators used for the probes are plotted in cyan in Fig. 1.
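The probe-selection logic above reduces to ranking actuators by their integrated response inside the DH. A schematic numpy version on synthetic response images (the strong actuator index, mask shape, and array sizes are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

n_act, ny, nx = 50, 32, 32

# Hypothetical intensity responses I_res (one focal-plane image per actuator poke).
responses = rng.uniform(size=(n_act, ny, nx))
responses[10] += 5.0   # pretend actuator 10 is well transmitted by the Lyot stop

# Dark-hole mask (a simple one-sided half here, standing in for the half doughnut).
dh_mask = np.zeros((ny, nx), dtype=bool)
dh_mask[:, nx // 2 :] = True

# Integrated DH response per actuator, then pick the strongest candidates.
dh_flux = responses[:, dh_mask].sum(axis=1)
best_probes = np.argsort(dh_flux)[::-1][:3]
```

Actuators hidden by the Lyot stop would show up here with near-zero `dh_flux` and are automatically avoided by the ranking.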

Fig. 1

Numbering and layout of the DM actuators with the SCExAO pupil mask, which includes the central obscuration, the spiders, and the mask for the dead actuators. The cyan actuators are used as the probes in the actual implementation.

Fig. 2

Normalized total flux inside the DH, shown in DM coordinates. The total flux is the sum of the flux of all pixels inside the DH. The numbering of the actuators is the same as in Fig. 1.

2.3 iEFC calibration

In Sect. 2.1 we described the principle of the iEFC algorithm. This section describes the calibration steps required to build the response and control matrices. The response matrix can be directly measured by poking the DM modes. The measured difference when adding mode j with amplitude α to the DM is

$$\delta_j^{+} = \alpha Z_j + X\mathcal{E}, \tag{8}$$

where Zj is column j of matrix Z. To remove the static electric field Xℰ, both positive and negative pokes need to be applied, and they can be written as

$$\delta_j^{\pm} = \pm\alpha Z_j + X\mathcal{E}. \tag{9}$$

The difference between the δs then results in twice the mode response,

$$\delta_j^{+} - \delta_j^{-} = 2\alpha Z_j. \tag{10}$$

This measurement is called the double-difference image. The double difference is taken to remove all static aberrations from the response matrix. The final response matrix is constructed by repeating the double-difference measurement for all k DM modes to be used for control. The result, Ẑ, is the response matrix measured from the actual optical system, including the noise:

$$\hat{Z} = \frac{1}{2\alpha}\left[\,\delta_1^{+} - \delta_1^{-},\; \ldots,\; \delta_k^{+} - \delta_k^{-}\,\right]. \tag{11}$$

The modal basis we adopted for building Ẑ consists of the orthogonal AO control modes derived from the measured response of the pyramid wavefront sensor. As shown in Fig. 4, these AO control modes are arranged by spatial frequency from the lowest to the highest mode, so that the number of modes can be set according to the spatial frequency range to be controlled. This is especially valuable for controlling a specific region of the focal plane: by selecting the number of modes, we can minimize the impact beyond the DH area in the focal-plane image. Figure 4 shows an example of the AO control modes in the iEFC demonstration and clearly shows the ordering by spatial frequency.
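The double-difference calibration of Eqs. (8)–(11) can be sketched as follows, with a random matrix standing in for the true (unknown) system response and a static term that the double difference removes exactly (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

n_meas, n_modes, amp = 40, 8, 1e-2

Z_true = rng.normal(size=(n_meas, n_modes))   # unknown system response (Eq. (6))
static = rng.normal(size=n_meas)              # static term X*E in Eqs. (8)-(9)

def measure_delta(cmd):
    """Hypothetical measurement of the pair-wise difference images for a DM command."""
    return Z_true @ cmd + static

# Eqs. (9)-(11): poke each mode with +/- amplitude; the double difference removes
# the static electric field and leaves twice the mode response.
Z_hat = np.empty((n_meas, n_modes))
for j in range(n_modes):
    cmd = np.zeros(n_modes)
    cmd[j] = amp
    d_plus, d_minus = measure_delta(cmd), measure_delta(-cmd)
    Z_hat[:, j] = (d_plus - d_minus) / (2.0 * amp)
```

In this noiseless toy model the recovered Ẑ equals the true response exactly; with real measurements, photon noise on each difference image propagates into Ẑ, which motivates the regularized inversion discussed next.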

The control matrix is then calculated by inverting Ẑ using the singular-value decomposition (SVD) method. When inverting the response matrix with the SVD, most EFC algorithms use Tikhonov regularization or a modal truncation to ensure a more stable solution; we used Tikhonov regularization. Figure 5 shows the normalized singular values of the SVD. From these values, we selected a modal cutoff point; modes beyond this cutoff were regularized in the inversion (Miller et al. 2021; Bos et al. 2021). In the actual implementation, we set γ = 5 × 10−2, which regularizes the modes above mode number 576. We set the γ value empirically by monitoring the convergence quality and speed of the EFC loop.
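One common way to write the Tikhonov-regularized inversion directly from the SVD is to damp the singular values as s/(s² + (γ s_max)²) instead of inverting them, which keeps the noisy high-order modes bounded. A sketch under that assumption (the matrix and γ are illustrative, not the bench values):

```python
import numpy as np

rng = np.random.default_rng(4)

Z_hat = rng.normal(size=(30, 12))   # stand-in for the measured response matrix
gamma = 5e-2                        # regularization level, relative to s_max

# SVD-based Tikhonov pseudo-inverse: damp instead of truncate small singular values.
U, s, Vt = np.linalg.svd(Z_hat, full_matrices=False)
s_inv = s / (s**2 + (gamma * s[0]) ** 2)
Z_ctrl = Vt.T @ np.diag(s_inv) @ U.T
```

For singular values well above γ·s_max this reduces to the plain pseudo-inverse, while modes near or below the threshold receive a strongly reduced gain, which is the stabilizing behavior described in the text.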

Fig. 3

Intensity responses of the best case (left) and worst case (right). The white half-doughnut shape indicates the DH region, and the values outside of the DH were masked out.

Fig. 4

Examples of the AO control modes used to build the response matrix, clearly showing that the modes are ordered by spatial frequency. The same color bar applies to all panels.

Fig. 5

Normalized singular values for all 1190 modes of Ẑ (solid blue line). The dashed red line represents the modes whose singular values fall below the Tikhonov regularization threshold of 5 × 10−2 (dotted gray line).

3 Spatial LDFC

3.1 Principle of spatial LDFC

The main principle of spatial LDFC is to measure and compensate for the phase aberration in the pupil plane using the linear relation between intensity variations of the bright field (i.e., the area of the focal plane outside of the DH) and phase aberrations. This section briefly describes spatial LDFC; a more detailed mathematical explanation is presented in Miller et al. (2021). We can express the intensity variation in the bright field ∆I due to the aberration as follows:

$$\Delta I = I_{\rm BF} - I_{\rm BF}^{0} = \left|E_0 + E_{\rm WF}\right|^{2} - \left|E_0\right|^{2} \approx 2\,\Re\!\left(E_0^{*} E_{\rm WF}\right), \tag{12}$$

where IBF is the intensity of the bright field, and I0BF = |E0|2 is the unaberrated (reference) image after the EFC process, as shown in Fig. 6 (center). This equation shows that the bright field intensity variation is a linear function of the pupil plane aberration because |E0| ≫ |EWF| in the bright field. In contrast, the dark field response is quadratic because there the electric field EWF generated by the pupil plane aberration is much greater than the nominal focal plane electric field E0. Figure 6 clearly shows the linear response of a pixel in the bright field (left) and the quadratic response of a pixel in the dark field (right) with the induced aberration in the pupil plane.

Using this linear relation between the bright field and the pupil plane aberration, we directly retrieved EWF and used the bright field as the input of a closed servo loop stabilizing the depth of the DH initially generated by EFC. The critical advantage of spatial LDFC is that this algorithm only needs a single intensity variation (∆I) to compensate for the aberration, in contrast to other methods that need field modulation. This means that spatial LDFC provides a duty cycle of nearly 100% and does not interfere with science observations. However, spatial LDFC has a sign ambiguity for even modes when it runs on the in-focus science image (Miller et al. 2021). To overcome this sign ambiguity, an asymmetric pupil shape or the addition of a defocus term to the system is required. As shown in Fig. 1, our system has an asymmetric pupil shape, which allows us to overcome the sign ambiguity of spatial LDFC.
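The servo loop described above is, at its core, a plain integrator driven only by ΔI. A toy closed-loop sketch with a synthetic linear bright-field response matrix (the gain, dimensions, and matrices are arbitrary stand-ins for a calibrated system):

```python
import numpy as np

rng = np.random.default_rng(5)

n_pix, n_modes, gain = 80, 10, 0.5

# Hypothetical linear response of bright-field pixels to DM modes, and the
# corresponding control matrix (in practice both come from the calibration).
M = rng.normal(size=(n_pix, n_modes))
M_ctrl = np.linalg.pinv(M)

a_aberr = rng.normal(size=n_modes)   # injected pupil-plane aberration (modal)
a_dm = np.zeros(n_modes)             # accumulated DM correction

# Closed-loop integrator driven only by the bright-field variation Delta I:
# no probing, so the science acquisition is never interrupted.
for _ in range(30):
    delta_I = M @ (a_aberr + a_dm)      # linear bright-field response, Eq. (12)
    a_dm -= gain * (M_ctrl @ delta_I)   # integrator update
```

With a loop gain g, the modal error contracts by (1 − g) per iteration, so the DM command converges to the negative of the injected aberration and the DH depth is restored.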

Fig. 6

Measured pixel response of both the bright (left) and the dark field (right) corresponding to the same low-amplitude aberration introduced on the DM. The center image represents the reference image for spatial LDFC.

3.2 Bright pixel selection

As mentioned in the previous section and shown in Fig. 6, bright pixels respond linearly to wavefront perturbations. In spatial LDFC, the bright pixel selection is therefore crucial for the performance of the closed-loop correction. The selection relies on multiple parameters, including background flux, flux per speckle, detector efficiency, and signal-to-noise ratio (S/N). Based on these requirements, we first applied a brightness threshold set at 1 × 10−5 relative to the point spread function (PSF) core. While deploying spatial LDFC and EFC, we increased the exposure time to improve the S/N at higher spatial frequencies; however, this caused detector saturation near the center of the coronagraphic image (<7 λ/D). To prevent a nonlinear response of saturated pixels, we excluded them from the selection. This caused a loss of sensitivity to low-order modes. However, it did not impact the overall performance of spatial LDFC because the ExAO system corrected most low-order modes. We also excluded pixels outside the DM control region (>22.5 λ/D). The bright pixel mask used for the spatial LDFC WFS is shown in Fig. 7.
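The three selection criteria combine into a single boolean mask. A schematic numpy version on a synthetic normalized image (the pixel radii stand in for the 7 and 22.5 λ/D limits, and the saturation level is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

ny, nx = 64, 64
y, x = np.indices((ny, nx))
r = np.hypot(y - ny / 2, x - nx / 2)   # radius in pixels (proxy for lambda/D)

image = rng.uniform(1e-7, 1e-3, size=(ny, nx))   # normalized coronagraphic image
saturation = 5e-4                                # hypothetical saturation level

# Three criteria from the text: above the brightness threshold (relative to the
# PSF core), unsaturated, and inside the DM control region.
bright = image > 1e-5
unsaturated = image < saturation
in_control_region = (r > 7) & (r < 22.5)

bright_pixel_map = bright & unsaturated & in_control_region
```

Excluding the saturated inner region (< 7 λ/D) removes sensitivity to some low-order modes, which is acceptable here because the upstream ExAO loop already corrects them.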

3.3 Spatial LDFC calibration

The whole calibration process of spatial LDFC is shown in Fig. 8. We applied the same calibration process as previous spatial LDFC implementations (Miller et al. 2021; Bos et al. 2021). We used a modal basis set derived from the focal plane response to a series of Hadamard modes (Kasper et al. 2004). The benefits of using Hadamard modes, which have a high variance-to-peak ratio, are as follows: (1) the peak value that can be applied is limited because LDFC has a limited linear range, and (2) the S/N of the focal-plane response depends on the variance of the applied mode, not on its peak amplitude. Because of these advantages, we used Hadamard modes instead of actuator influence functions to poke the DM and acquire the focal plane response for building the Hadamard response matrix. We chose a poke amplitude of 10 nm; above this amplitude, saturated pixels are likely to occur. To build the response matrix, we measured the focal plane intensities with positive and negative amplitudes of all modes and selected only the pixels inside the bright pixel map (Fig. 7). After the response matrix of the Hadamard modes was measured, the modal basis set (eigenmodes) for spatial LDFC was determined by an SVD of the Hadamard response matrix. An example of the eigenmodes derived from the Hadamard response matrix is illustrated in Fig. 9. Similar to the AO control modes used for iEFC, these eigenmodes are ordered by spatial frequency from the lowest to the highest mode. After making the eigenmodes, we finally recorded the response matrix for the LDFC control loop. We first poked the DM with the eigenmodes and then recorded the resulting focal-plane images, following the same process as for building the Hadamard response matrix. When we recorded the response matrix, we averaged ten images to improve the S/N of the focal plane images. We also applied the same bright pixel map to keep only the pixels that have a good linearity against the input DM modes. We recorded the response matrix twice in a row because we empirically found that the regularized eigenmodes provide a better linearity.
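The Hadamard part of this calibration is easy to sketch: Sylvester's construction gives ±1 modes whose variance equals their squared peak, and an SVD of the (here synthetic) focal-plane response yields the ordered eigenmodes. All matrices below are stand-ins for measured quantities:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(7)

n_act, n_pix, amp = 64, 200, 10e-9   # 10 nm poke amplitude, as in the text

H = hadamard(n_act)                  # +/-1 modes: high variance-to-peak ratio
R_true = rng.normal(size=(n_pix, n_act))   # hypothetical per-actuator pixel response

# Focal-plane response to each Hadamard mode (double difference of +/- pokes).
R_had = R_true @ (amp * H)

# Eigenmodes for the LDFC control basis from the SVD of the Hadamard response.
U, s, Vt = np.linalg.svd(R_had, full_matrices=False)
dm_eigenmodes = H @ Vt.T             # DM-space eigenmodes, ordered by singular value
```

Because every Hadamard mode drives all actuators at the same small amplitude, the S/N per calibration frame is maximized without leaving the LDFC linear range.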

As for iEFC, we used the SVD method with Tikhonov regularization to build the control matrix. As mentioned before, this method mitigates the error caused by noisy higher modes. We set γ = 0.1, and the modes above mode number 559 were regularized in the inversion. We set the γ value empirically by monitoring the convergence and stability of the spatial LDFC loop. When setting γ, we also chose it such that the number of retained modes is similar to the number used for iEFC, because a sufficient number of modes is required for spatial LDFC to effectively remove the speckles in the DH. We also applied modal gain binning for loop stability. Modal gain binning gives a higher gain to low-order modes and a lower gain to high-order modes. This prevents additional aberrations induced by noisy high-order modes and provides a better loop stability (Miller et al. 2021; Bos et al. 2021). In our experiment, we set the full weight (γmodal = 1) for the first 150 eigenmodes and a ten times smaller weight (γmodal = 0.1) for the remaining modes. Figure 10 shows the normalized singular values for all 2500 eigenmodes, together with the modal cutoff and binning.
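Modal gain binning is simply an elementwise weight applied to the modal command before it is sent to the DM. A minimal sketch (the control matrix and dimensions are synthetic; the 150-mode split and the 1 / 0.1 weights follow the text):

```python
import numpy as np

rng = np.random.default_rng(8)

n_pix, n_modes = 300, 250

M_ctrl = rng.normal(size=(n_modes, n_pix))   # hypothetical LDFC control matrix
delta_I = rng.normal(size=n_pix)             # measured bright-field variation

# Modal gain binning: full weight (gamma_modal = 1) for the first 150 eigenmodes,
# ten times smaller weight (gamma_modal = 0.1) for the noisier high-order modes.
modal_gain = np.where(np.arange(n_modes) < 150, 1.0, 0.1)

command = -(modal_gain * (M_ctrl @ delta_I))
```

Because the eigenmodes are ordered by spatial frequency, this two-bin weighting effectively low-pass filters the correction without discarding the high-order modes entirely.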

Fig. 7

Bright pixel map used for the LDFC WFS. These pixels were chosen based on three criteria: they are unsaturated, lie within the DM control region, and lie above the brightness threshold. This ensures a linear response to small wavefront aberrations.

4 Spatial LDFC: Stability limit and dynamical performance

This section presents a more detailed analysis and discussion of combining iEFC with spatial LDFC for a practical (e.g., on-sky) implementation through numerical simulations. We performed numerical simulations because they are a more general approach that can be transferred to various optical systems. In this paper, we only simulated the numerical model of the SCExAO bench with the same hardware as described in Sect. 5.1, but a numerical model of another optical system can easily be substituted. For a practical situation, we assumed that the target star's H-band brightness is mH = 5, that the angular separation of the planet is 10 λ/D, and that the total throughput of the optical system is 20%. An overview of the parameters used in the numerical simulations is given in Table 1. The DH was generated using iEFC with the same bench model before running the spatial LDFC simulation, and we only considered photon noise, assuming a camera free of readout noise. In the following subsections, we first measure the linearity range of the spatial LDFC modes to identify the aberration range over which spatial LDFC can operate after the DH has been generated with iEFC. Second, we analyze the impact of noise on the spatial LDFC loop: we verify the closed-loop performance while changing the noise level (through the exposure time t), and we show contrast curves as a function of exposure time to determine the optimal contrast and loop frequency through a noise propagation and dynamic sensitivity analysis.

4.1 Linearity range

To measure the linearity range of each eigenmode derived from the Hadamard response matrix in the numerical simulation, we applied a single eigenmode with aberration levels ranging from −30 to +30 nm RMS WFE. We then measured the reconstructed WFE after 20 iterations of the spatial LDFC loop, which is long enough to allow the loop to converge. We assumed that the incident number of photons Nph was high enough (10^8), as the calibration process is usually done with the internal source or a bright target. Figure 11 shows the measured linearity curves. The thick solid blue line represents an ideal linearity curve, and the dash-dotted lines in different colors indicate the measured results for ten eigenmodes. We measured the linearity of all 559 modes, but only plot ten results for better visibility. As shown in Fig. 11, every mode shows a good linearity between ±20 nm RMS WFE, which means that the spatial LDFC loop can compensate for an incident wavefront within a 20 nm RMS WFE range per mode. According to this result, we conclude that the linearity range of spatial LDFC is ≈20 nm RMS WFE per mode. In general, the ExAO residual has ~100 nm RMS WFE (all modes), which corresponds to a few nanometers per mode, assuming ~1000 modes. The linearity result therefore implies that spatial LDFC can sufficiently compensate for the residual WFE after the ExAO correction. In other words, spatial LDFC can be used in combination with ground- or space-based ExAO systems to restore and maintain the contrast of the DH in a high-contrast regime (<10−6).
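A linearity curve of this kind can be emulated with a toy sensor that responds linearly inside ±20 nm RMS and saturates outside, reproducing the qualitative shape of Fig. 11 (the clipping model below is our own illustration, not the paper's simulation):

```python
import numpy as np

linear_range = 20e-9   # [m RMS], the linearity range found in the text

def reconstructed_wfe(applied):
    """Hypothetical closed-loop reconstruction with a limited linear range."""
    return np.clip(applied, -linear_range, linear_range)

applied = np.linspace(-30e-9, 30e-9, 61)   # injected WFE per mode [m RMS]
measured = reconstructed_wfe(applied)
```

Inside the linear range the reconstruction tracks the input one-to-one (the ideal line of Fig. 11); beyond it, the response flattens and the loop can no longer fully compensate the mode.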

Fig. 8

Full calibration process for the actual implementation of spatial LDFC, including acquiring the response matrix, applying the pixel map, and building the control matrix. In the first step (upper row), eigenmodes are derived from the focal-plane response of Hadamard modes, which is filtered by the pixel map. In the second step (lower row), the final response matrix is measured through a series of selected eigenmodes. This response matrix is also filtered by the pixel map. From this response matrix, the control matrix is then derived.

Table 1

Parameters used in numerical simulations.

Fig. 9

DM and WFS eigenmodes derived from the Hadamard response matrix that was recorded with the internal source. For the WFS eigenmodes, the bright pixel map shown in Fig. 7 is applied.

4.2 Noise analysis

In this work, we performed two types of noise analysis for a more practical implementation. In the first analysis, we studied the noise propagation. We injected a single wavefront map composed of a linear combination of control modes with 20 nm RMS WFE, which is within the linearity range defined in the previous subsection. Then we closed the spatial LDFC loop for 20 iterations, which the previous subsection confirmed is long enough for the loop to converge. After these iterations, we measured the amplitude h of the residual WFE between the injected WFE and the WFE reconstructed by the spatial LDFC loop. Last, we converted the amplitude of the residual WFE into a contrast C using the following equation (Guyon 2005):

$$C = \left(\frac{\pi h}{\lambda}\right)^{2}. \tag{13}$$

We repeated this test 100 times to obtain good variance statistics for each exposure time, and we varied the exposure time from 10−5 to 10−2 s. We plot the average value of the estimated contrast for each exposure time with the dash-dotted green line in Fig. 12. We also plot the contrast curve of pure shot noise with the dotted black line in Fig. 12 to show the theoretical limit. The measured data show an almost perfect fit with the pure shot noise case, which means that the spatial LDFC loop is photon-noise limited. This result implies that a longer exposure time is required to reach a deeper contrast, making the loop slower. However, high-speed wavefront stabilization is required to compensate for dynamic WFEs. Therefore, finding a good balance between photon noise and loop speed is crucial.

We conducted another analysis to find the optimal loop frequency. In a closed-loop AO system, the corrected amplitude at the spatial frequency f varies with the time lag ∆t. For this analysis, we measured the change of the corrected amplitude of the sine wave component at a specific spatial frequency (f = 10 λ/D) as a function of the exposure time (≈ ∆t), which can be computed as (Guyon 2005) h(f, ∆t) ≈ √0.0229 (λ0 υwind ∆t / D)(r0 f/D)^(−5/6), (14)

where λ0 is the wavelength at which the Fried parameter r0 is measured, D is the telescope diameter, and υwind is the wind speed. In this equation, we assumed for convenience that the exposure time equals the time lag, but the time lag in an actual system can be longer due to hardware latency. After the amplitude change due to the time lag is calculated, we can convert it into a contrast using Eq. (13). This calculated contrast is plotted as the solid blue line in Fig. 12 and shows that a high-speed closed loop is required to reach a deeper contrast. This is at odds with the result of the shot noise analysis. The optimal loop frequency (= 1/∆t) is therefore the intersection of the time-lag and shot-noise curves shown in Fig. 12. This optimal point holds only for a planet located at a 10 λ/D angular separation. According to Eq. (14), when we observe a planet at a smaller angular separation around a star of the same brightness, the time-lag curve in Fig. 12 moves upward. The same contrast therefore cannot be achieved because the shot-noise curve limits it in that case. Pursuing a deeper contrast requires a brighter star, a larger telescope aperture (e.g., future ground-based ELTs), or a more stable wavefront (space-based telescopes). Through these noise analyses, we quantified the performance limitation of spatial LDFC for a practical implementation. We considered only a specific condition in this work, but these analyses can easily be extended by changing the parameters. We conclude that a reasonably bright reference star (mH < 5) and a loop speed of a few tens of kilohertz are required to restore and maintain a deep contrast (<10−6) at small angular separations (<10 λ/D) in a practical implementation of spatial LDFC. We anticipate that spatial LDFC should be able to stabilize dynamic WFEs on bright targets because it is a linear algorithm that can be operated at up to a few tens of kilohertz.
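Finding the optimal loop frequency as the crossing of the two curves can be sketched numerically. The two power-law models below are illustrative stand-ins for the measured curves in Fig. 12 (their prefactors are assumptions): the shot-noise contrast improves with exposure time, while the time-lag contrast degrades as ∆t² because Eq. (14) gives an amplitude proportional to ∆t and Eq. (13) squares it.

```python
import numpy as np

# Exposure time ~ time lag, swept from 10 us to 10 ms.
dt = np.logspace(-5, -2, 200)
c_shot = 1e-9 / dt          # shot noise: deeper contrast for longer exposures
c_lag = 1e-2 * dt ** 2      # time lag: h ~ dt (Eq. 14), so C ~ dt^2 (Eq. 13)

# The achievable contrast at each dt is set by whichever term dominates;
# the best operating point is where the two curves cross.
total = np.maximum(c_shot, c_lag)
i_opt = np.argmin(total)
optimal_frequency = 1.0 / dt[i_opt]   # optimal loop frequency = 1/dt
```

With these toy prefactors, the crossing lands at a few milliseconds, i.e., a loop frequency of a few hundred hertz; the real prefactors from Fig. 12 would shift this point.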

Fig. 10

Normalized singular value for all 2500 modes generated by the SVD of the eigenmode response matrix. The dash-dotted red line indicates the regularized modes, and the solid green line represents the modes attenuated by modal gain binning (γmodal = 0.1).

Fig. 11

Measured linearity curves for 10 eigenmodes. The solid thick blue line represents an ideal linearity curve, and dash-dotted lines with different colors indicate the measured linearity curves of the 10 eigenmodes.

Fig. 12

Noise analysis results for practical implementations. The parameters used in this analysis are listed in Table 1. The solid blue line indicates the measured time lag. The dash-dotted green line represents the measured shot noise, and the dotted black line shows the theoretical shot noise limit.

5 Laboratory demonstrations and results on SCExAO

In the previous section, we analyzed the practical implementation of the proposed method through numerical simulations. In this section, we present laboratory demonstrations and results on the SCExAO bench, and we confirm the experimental results by comparing them with the simulations.

5.1 SCExAO

The SCExAO instrument (Jovanovic et al. 2015) is an active HCI system installed at the Nasmyth platform of the Subaru telescope. It has an extreme AO (ExAO) system, allowing a high Strehl ratio (>80% in median seeing in H band) with a partial correction of low-order modes provided by the facility AO188 (Minowa et al. 2010). The ExAO system of SCExAO mainly consists of a 2000-actuator MEMS DM (Boston Micromachines (BMC) 2K) with 45 actuators across the active pupil and a pyramid wavefront sensor (Lozi et al. 2019) operating in the 600–900 nm wavelength range. For real-time wavefront control, it uses the compute and control for adaptive optics (CACAO; Guyon et al. 2018) package, and the maximum AO loop frequency is 3.5 kHz, using a binned OCAM2K camera. It has 12 software channels for writing DM commands, which can be used independently. In our demonstrations, we used 4 of these channels to apply the pair-wise probes, the DM map that generates the DH, the wavefront aberrations, and the correction DM map. Moreover, SCExAO has various types of coronagraphs (Lozi et al. 2018) to suppress the starlight, such as a classical Lyot coronagraph (CLC), a vector vortex, a phase-induced amplitude apodization complex mask coronagraph (PIAACMC), and an 8-octant phase mask (8OPM). In both the iEFC and spatial LDFC demonstrations, we used a CLC with a diameter of 114 mas at a 1550 nm wavelength, the internal NIR camera with a cooled InGaAs detector (C-RED 2; Feautrier et al. 2017), and the FPWFS with a narrowband filter (λ = 1550 ± 25 nm). The frame rate of the camera was 1.5 kHz, and all acquired images were 128×128 pixels. Both iEFC and spatial LDFC were deployed on SCExAO in Python, using functions from the HCIPy package (Por et al. 2018).

Fig. 13

Coronagraphic images with the CLC before iEFC (left), after iEFC (center), and after iEFC with the field stop (right). The white half-doughnut shape corresponds to the region from 7 λ/D to 17 λ/D. Each image is the average of 1000 coadded frames to reach a sufficient S/N.

5.2 iEFC results

This section presents the results of the laboratory demonstration of iEFC on the SCExAO bench. The initial coronagraphic image with the CLC and the narrow-band filter has a contrast level of ~1×10−5 (Fig. 13, left). Starting from this state, we acquired the response matrix by applying the DM modes and built a control matrix. After we determined the control matrix, we calculated the command to generate the DH. The DH has a half-doughnut shape extending from an inner working angle (IWA) of 7 λ/D to an outer working angle (OWA) of 17 λ/D. After 30 iterations, we reached the minimum contrast, as shown in the center of Fig. 13. We calculated the raw contrast and averaged it over the DH. The average contrast inside the DH was ~2×10−7. We noted diffracted light near the IWA of the DH and placed a field stop to block this light, which yielded a slightly better contrast (Fig. 13, right). This is the deepest contrast ever reached on SCExAO. However, the contrast improvement was not significant enough to require the field stop at all times, especially when running spatial LDFC. We also noted that camera noise mainly limits the contrast of the DH, since the numerical simulation in Sect. 4 confirmed that iEFC can reach a contrast below 10−7. To overcome this limit, we are currently testing an avalanche photodiode MCT detector (C-RED One camera) that was newly installed on SCExAO and has a significantly reduced read-out noise and dark current compared to the InGaAs camera. In Fig. 14, we also plot the raw contrast curves before and after iEFC for comparison; these contrast curves were averaged along the radial direction. From this encouraging result, we expect that a smaller IWA can be reached with advanced coronagraphs such as the vector vortex and PIAACMC. These tests are in progress and left for future work.
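The radially averaged raw-contrast curves mentioned above can be computed with a simple annulus average. This is a hedged sketch, not the authors' code: it assumes a contrast-normalized image, a known star center, and a known λ/D plate scale; the function name and annulus width (1 λ/D) are illustrative choices.

```python
import numpy as np

def radial_contrast_curve(image, center, pixels_per_lod, max_lod=20):
    """Average image values in 1-lambda/D-wide annuli around `center`.

    image          : 2D contrast-normalized coronagraphic image
    center         : (x, y) position of the star in pixels
    pixels_per_lod : plate scale, pixels per lambda/D
    """
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1]) / pixels_per_lod  # in lambda/D
    radii = np.arange(1, max_lod)
    curve = [image[(r >= s - 0.5) & (r < s + 0.5)].mean() for s in radii]
    return radii, np.array(curve)

# Toy usage: a flat 1e-5 "halo" gives a flat contrast curve.
img = np.full((128, 128), 1e-5)
radii, curve = radial_contrast_curve(img, center=(64, 64), pixels_per_lod=3.0)
```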

Fig. 14

Raw contrast curves averaged along the radial direction. The solid blue line indicates the contrast curve before iEFC, and the dash-dotted green line represents the contrast curve after iEFC. IWA and OWA are illustrated by dashed gray lines.

Fig. 15

Spatial LDFC experimental results with simple speckles. Left panel: Initial images after iEFC. Central panel: Aberrated images before spatial LDFC. Right panel: Restored images after spatial LDFC.

5.3 Spatial LDFC results

We implemented spatial LDFC on the same day, after the iEFC process had generated the bright field and the DH (also called the dark field). Following the LDFC calibration process shown in Fig. 8, we calibrated the response matrix and control matrix for the spatial LDFC demonstration. To verify the performance of spatial LDFC, we conducted three experiments as follows: (1) An experiment with individual speckles. We generated one to three speckles by introducing a linear combination of sine wave perturbations on the DM. The peak contrast of the generated speckles was ~1×10−4. (2) An experiment with static complex speckles. We introduced a static aberration on the DM to generate complex speckles in the DH. This static aberration was a random linear combination of the eigenmodes. (3) An experiment with quasi-static speckles. We introduced a random, temporally evolving phase aberration on the DM to simulate realistic AO residuals and NCPAs.

Figure 15 illustrates the results of the first experiment, in which we generated three types of simple speckles in the DH by introducing a linear combination of sine waves on the DM, as shown in the central panel of Fig. 15. Each speckle has a contrast of ~1×10−4, which is approximately 500 times brighter than the average contrast over the entire DH. We closed the spatial LDFC control loop for 100 iterations to remove these speckles. Spatial LDFC immediately started to remove the speckles and successfully restored the initial contrast. In the right panel of Fig. 15, the DH contrasts after spatial LDFC differ slightly from the initial images; this effect is caused by noise fluctuations of the camera and does not reflect a limitation of spatial LDFC.
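Planting such a speckle with a sine-wave DM command can be sketched by inverting Eq. (13): a wavefront ripple of amplitude h at spatial frequency f cycles per aperture produces a speckle pair at ±f λ/D with contrast ≈ (πh/λ)². The function below is an illustration under stated assumptions (actuator count, one-dimensional ripple replicated across the pupil); note that h is in nm of wavefront, so the surface stroke of a reflective DM would be half of it.

```python
import numpy as np

def speckle_command(n_act=45, f_cycles=10, target_contrast=1e-4,
                    wavelength_nm=1550.0):
    """Sine-wave DM map (nm of wavefront) giving speckles at +/- f_cycles lam/D."""
    # Invert Eq. (13): C = (pi*h/lambda)^2  ->  h = lambda*sqrt(C)/pi.
    h = wavelength_nm * np.sqrt(target_contrast) / np.pi
    x = np.linspace(0, 1, n_act, endpoint=False)
    ripple = h * np.sin(2 * np.pi * f_cycles * x)
    # Replicate the 1D ripple over the square actuator grid.
    return np.tile(ripple, (n_act, 1))

cmd = speckle_command()
# A ~1e-4 speckle at 1550 nm needs only a ~5 nm wavefront ripple.
```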

In the second experiment, we generated more complex speckles in the DH to verify that spatial LDFC can suppress multiple speckles. These complex speckles degraded the DH contrast by a factor of ~20. We introduced a random linear combination of the eigenmodes on the DM to simulate these static speckles; the amplitude of the introduced aberration was ~50 nm root-mean-square (RMS) WFE. As in the first experiment, we ran the spatial LDFC control loop for 100 iterations. The results are shown in Fig. 16. The random wavefront error was nearly fully removed by spatial LDFC, and the residual error was only 5.31 nm RMS WFE. The residual error was calculated by summing the introduced aberration and the reconstructed DM map. We also confirmed that the experimental result matches the simulation result well (Fig. 12). The simulation reaches a deeper contrast than the experiment because the actual experiment includes error sources such as read-out noise, surface errors of the optics, and NCPAs, which are not considered in the numerical simulation.

Last, we conducted an experiment to verify the capability of spatial LDFC to sense and compensate for AO residuals and NCPAs. To simulate these aberrations realistically, we generated a random quasi-static aberration (38 nm total variation over 1000 frames, where the total variation is the difference between the RMS WFE of the first and the last frame on the DM). This aberration was generated with the same method as used in Miller et al. (2021). We made an aberration data set in which each frame was the next step in the aberration evolution. This simulated data set includes slowly evolving NCPA components with a 1/fα temporal power spectrum, where α = 4, giving the aberration sequence a high temporal correlation. Furthermore, the spatial frequency content followed a power spectral density given by a 1/kβ law with β = 2.2 to ensure that quasi-static speckles were generated across the full DH. However, we excluded the fitting error (spatial frequencies beyond the DM sampling limit) because it generates speckles outside of the DH (more specifically, outside of the DM control region). The RMS amplitude of the frames was 42 nm on average. This amplitude is small compared to the ExAO residual WFE, but the experiment did not require a high amplitude because the goal of this test was to verify the spatial LDFC loop with quasi-static speckles. In the actual demonstration, each frame was applied directly to the DM and replayed, rather than being projected onto the LDFC control modes. The closed-loop speed was ~30 Hz, which included injecting the aberration, taking an image, and deriving and applying the DM command for the correction. Because we implemented the closed-loop tests in Python, the loop speed was limited to ~30 Hz. However, the loop speed can easily be increased to a few tens of kilohertz using the CACAO open-source package.
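A quasi-static aberration sequence of this kind can be sketched as follows: each spatial mode is weighted by a 1/k^β power law and evolves in time with a 1/f^α spectrum (α = 4, β = 2.2 as in the text). This is a hedged illustration of the general recipe, not the code of Miller et al. (2021); the mode basis, frame count, and normalization convention are assumptions of the sketch.

```python
import numpy as np

def temporal_track(n_frames, alpha=4.0, rng=None):
    """Unit-RMS time series with a 1/f^alpha PSD, via inverse FFT of shaped noise."""
    if rng is None:
        rng = np.random.default_rng(1)
    freqs = np.fft.rfftfreq(n_frames)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2)        # amplitude = sqrt of 1/f^alpha PSD
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    track = np.fft.irfft(amp * np.exp(1j * phases), n=n_frames)
    return track / track.std()

def aberration_cube(n_frames=100, n_modes=20, beta=2.2, rms_nm=42.0, rng=None):
    """(n_frames, n_modes) modal coefficients; mode k weighted by k^(-beta/2)."""
    if rng is None:
        rng = np.random.default_rng(2)
    weights = np.arange(1, n_modes + 1) ** (-beta / 2)   # 1/k^beta spatial PSD
    tracks = np.stack([temporal_track(n_frames, rng=rng)
                       for _ in range(n_modes)], axis=1)
    coeffs = tracks * weights
    # Normalize so the frames have rms_nm RMS WFE on average (orthonormal modes).
    return coeffs * rms_nm / np.sqrt((coeffs ** 2).sum(axis=1).mean())

cube = aberration_cube()
```

Replaying `cube` frame by frame on the DM reproduces the slowly evolving, spatially structured speckle field used in this test.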

For comparison, we show open- and closed-loop images, each an average of 1000 images, in Fig. 17. The quasi-static speckles are largely removed in the closed-loop case. Small-amplitude residual speckles remain; they are caused by noisy higher-order modes and can be reduced by optimizing the modal cutoff and binning. Overall, the contrast improvement in the DH was a factor of ~4. In Fig. 18, we compare the RMS WFE of the open and closed loop. The induced aberration has an average of 42 nm RMS WFE (dashed blue line in Fig. 18), but spatial LDFC reduces it to an average of 17 nm (dashed green line in Fig. 18). The RMS WFE after the correction was also maintained below 20 nm over the full 1000 iterations, which demonstrates the stability of the spatial LDFC loop.

Fig. 16

Spatial LDFC experimental results with complex static speckles. (a) Initial image, (b) degraded image by complex speckles before spatial LDFC, (c) restored image after spatial LDFC, (d) introduced aberration, (e) reconstructed DM map by spatial LDFC, and (f) residual error between introduced aberration and reconstructed DM map.

Fig. 17

Averages of 1000 images showing the resulting coronagraphic images when the LDFC loop is open (left) and closed (right).

6 Conclusions

We presented the first laboratory demonstration combining two FPWFS&C methods: iEFC and spatial LDFC. We verified that this method can deliver and maintain a deep contrast with a 100% science duty cycle. We first deployed iEFC and generated a DH with a contrast of ~2×10−7 between 7 λ/D and 17 λ/D. This demonstration was conducted with the CLC; a DH extending to smaller angular separations can be accomplished with more advanced coronagraphs. After the DH was generated, we deployed spatial LDFC to restore and maintain the DH contrast degraded by various aberrations. Using spatial LDFC, we successfully removed static and dynamic speckles and restored the contrast in the DH without field modulations such as pair-wise probing. We also verified that spatial LDFC maintained the contrast during the iterations. These results confirm that spatial LDFC operates with a 100% science duty cycle, without any interruption. Furthermore, we provided a detailed analysis and discussion of the practical implementation of the proposed method using numerical simulations.

Our main conclusion is that the method presented in this paper is a promising approach to achieving high-contrast imaging of exoplanets, which is one of the main science goals of current and next-generation ground-based large telescopes and space telescopes. Another strength of this method is that it can easily be deployed on multiple instruments without any knowledge of the optical system and without additional hardware: iEFC does not require a numerical model of the optical system and only uses intensities measured with the probes, and spatial LDFC does not require additional optical components either. The iEFC technique was initially developed and is actively used on MagAO-X with various coronagraphs, and the results have been published in Haffert et al. (2022).

We also discussed use-cases for the practical implementation of the proposed method. We propose three possible use-cases based on the presented experiments and analyses; the hardware mentioned here can be replaced with compatible components in other HCI systems. The use-cases we propose are as follows: (1) Use a fast science camera as the FPWFS, which is the ideal case for the proposed method. In this case, no NCPAs would arise because the same camera is used for both the FPWFS and science observations. However, this option requires a camera with a wide dynamic range for spatial LDFC. (2) Use a fast focal-plane camera for LDFC and a separate long-exposure camera or spectrograph for science. The two cameras would share the light: the focal-plane camera would record light outside of the science band, while the science camera simultaneously records the science band for science observations. This option is also known as spectral LDFC (Guyon et al. 2017) and allows for spectroscopy with an integral field spectrograph such as the Coronagraphic High Angular Resolution Imaging Spectrograph (CHARIS; Groff et al. 2017). (3) Add a half mirror at the focal plane and a fast camera for the FPWFS. In this concept, the half mirror would send the light of the bright field to the FPWFS for spatial LDFC, and the remaining light would be sent to the science camera. However, this case requires installing additional components in the instrument, which makes the system more complex. We anticipate that cases (2) and (3) can be useful for systems with a slow science camera. Their main drawback is that NCPAs could occur between the science camera and the FPWFS. We also propose that LDFC be used together with PSF subtraction, either using field rotation (ADI) or reference-star subtraction (RDI), in all use-cases, because LDFC keeps the PSF more stable and will thus improve the quality of the PSF subtraction.

Despite the encouraging results presented in this paper, future work is planned to make this method more robust and powerful. We only demonstrated the method with the narrow-band filter (λ = 1550 ± 25 nm). However, observing exoplanets with broadband filters is preferred to maximize the sensitivity because exoplanets are orders of magnitude fainter than their host stars. Therefore, we will shortly demonstrate this method with a broadband filter. We are also currently working on implementing this technique within the CACAO open-source package, enabling an operation fast enough to cancel out speckles induced by residual AO errors so that this method can be used in science observing runs.

Fig. 18

RMS WFE of both open- and closed-loop LDFC. The solid blue and green lines represent the applied aberration and the residual error, respectively. The dashed blue and green lines indicate the average of each data.

Acknowledgements

This work was supported by NASA’s Strategic Astrophysics Technology (SAT) exoplanet program (grant #80NSSC19K0121). The development of SCExAO was supported by the Japan Society for the Promotion of Science (Grant-in-Aid for Research #23340051, #26220704, #23103002, #19H00703 & #19H00695), the Astrobiology Center of the National Institutes of Natural Sciences, Japan, the Mt Cuba Foundation, and the director’s contingency fund at Subaru Telescope. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. K.A. acknowledges funding from the Heising-Simons Foundation. V.D. and N.S. acknowledge support from NASA funding (Grant #80NSSC19K0336). N.S. acknowledges support from the PSL Iris-OCAV project. S.H. acknowledges support from NASA funding through the NASA Hubble Fellowship grant #HST-HF2-51436.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.

References

  1. Bailey, V. P., Bottom, M., Cady, E., et al. 2018, in Space Telescopes and Instrumentation 2018: Optical, Infrared, and Millimeter Wave, 10698, SPIE, 1913
  2. Barman, T. S., Macintosh, B., Konopacky, Q. M., & Marois, C. 2011, ApJ, 735, L39
  3. Barman, T. S., Konopacky, Q. M., Macintosh, B., & Marois, C. 2015, ApJ, 804, 61
  4. Belikov, R., Pluzhnik, E., Witteborn, F. C., et al. 2011, in Techniques and Instrumentation for Detection of Exoplanets V, 8151, SPIE, 815102
  5. Belikov, R., Pluzhnik, E., Witteborn, F. C., et al. 2012, in Space Telescopes and Instrumentation 2012: Optical, Infrared, and Millimeter Wave, 8442, SPIE, 844209
  6. Beuzit, J.-L., Vigan, A., Mouillet, D., et al. 2019, A&A, 631, A155
  7. Bordé, P. J., & Traub, W. A. 2006, ApJ, 638, 488
  8. Bos, S. P., Vievard, S., Wilby, M. J., et al. 2020, A&A, 639, A52
  9. Bos, S., Miller, K., Lozi, J., et al. 2021, A&A, 653, A42
  10. Cady, E. J., Prada, C. A. M., An, X., et al. 2015, J. Astron. Telesc. Instrum. Syst., 2, 011004
  11. Chauvin, G., Desidera, S., Lagrange, A. M., et al. 2017, A&A, 605, A9
  12. Currie, T., Burrows, A., Itoh, Y., et al. 2011, ApJ, 729, 128
  13. Currie, T., Daemgen, S., Debes, J., et al. 2014, ApJ, 780, L30
  14. Currie, T., Pluzhnik, E., Guyon, O., et al. 2020, PASP, 132, 104502
  15. Currie, T., Biller, B., Lagrange, A.-M., et al. 2022a, ArXiv e-prints [arXiv:2205.05696]
  16. Currie, T., Lawson, K., Schneider, G., et al. 2022b, Nat. Astron., 6, 751
  17. Currie, T., Brandt, G. M., Brandt, T. D., et al. 2023, Science, 380, 198
  18. Delorme, J.-R., Jovanovic, N., Echeverri, D., et al. 2021, J. Astron. Telesc. Instrum. Syst., 7, 035006
  19. Feautrier, P., Gach, J.-L., Greffe, T., et al. 2017, in Image Sensing Technologies: Materials, Devices, Systems, and Applications IV, 10209, SPIE, 59
  20. Galicher, R., Baudoz, P., Rousset, G., Totems, J., & Mas, M. 2010, A&A, 509, A31
  21. Give’on, A., Belikov, R., Shaklan, S., & Kasdin, J. 2007, Opt. Express, 15, 12338
  22. Groff, T., Chilcote, J., Brandt, T., et al. 2017, in Techniques and Instrumentation for Detection of Exoplanets VIII, 10400, SPIE, 315
  23. Guyon, O. 2004, ApJ, 615, 562
  24. Guyon, O. 2005, ApJ, 629, 592
  25. Guyon, O., Miller, K., Males, J., Belikov, R., & Kern, B. 2017, ArXiv e-prints [arXiv:1706.07377]
  26. Guyon, O., Sevin, A., Gratadour, D., et al. 2018, in Adaptive Optics Systems VI, 10703, SPIE, 469
  27. Haffert, S. Y., Bohn, A. J., de Boer, J., et al. 2019, Nat. Astron., 3, 749
  28. Haffert, S. Y., Males, J. R., Van Gorkom, K., et al. 2022, in Adaptive Optics Systems VII, SPIE, 12185
  29. Huby, E., Baudoz, P., Mawet, D., & Absil, O. 2015, A&A, 584, A74
  30. Jovanovic, N., Martinache, F., Guyon, O., et al. 2015, PASP, 127, 890
  31. Jovanovic, N., Absil, O., Baudoz, P., et al. 2018, in Adaptive Optics Systems VI, 10703, SPIE, 107031U
  32. Kasper, M., Fedrigo, E., Looze, D. P., et al. 2004, JOSA A, 21, 1004
  33. Kuhn, J., Potter, D., & Parise, B. 2001, ApJ, 553, L189
  34. Lacy, B., Shlivko, D., & Burrows, A. 2019, AJ, 157, 132
  35. Lagrange, A. M., Bonnefoy, M., Chauvin, G., et al. 2010, Science, 329, 57
  36. Lopez-Morales, M., Currie, T., Teske, J., et al. 2019, BAAS, 51, 162
  37. Lozi, J., Guyon, O., Jovanovic, N., et al. 2018, in Adaptive Optics Systems VI, 10703, SPIE, 1070359
  38. Lozi, J., Jovanovic, N., Guyon, O., et al. 2019, PASP, 131, 044503
  39. Macintosh, B., Graham, J. R., Ingraham, P., et al. 2014, Proc. Natl. Acad. Sci., 111, 12661
  40. Males, J. R., Close, L. M., Haffert, S., et al. 2022, in Adaptive Optics Systems VIII, 12185, SPIE, 61
  41. Marois, C., Lafreniere, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, ApJ, 641, 556
  42. Marois, C., Macintosh, B., Barman, T., et al. 2008, Science, 322, 1348
  43. Martinache, F. 2013, PASP, 125, 422
  44. Martinez, P., Loose, C., Carpentier, E. A., & Kasper, M. 2012, A&A, 541, A136
  45. Miller, K., Guyon, O., & Males, J. 2017, J. Astron. Telesc. Instrum. Syst., 3, 049002
  46. Miller, K., Bos, S., Lozi, J., et al. 2021, A&A, 646, A145
  47. Minowa, Y., Hayano, Y., Oya, S., et al. 2010, in Adaptive Optics Systems II, 7736, SPIE, 77363N
  48. National Academies of Sciences, Engineering, and Medicine 2021, Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Washington, DC: The National Academies Press)
  49. N’Diaye, M., Dohlen, K., Fusco, T., & Paul, B. 2013, A&A, 555, A94
  50. Paul, B., Mugnier, L., Sauvage, J.-F., Dohlen, K., & Ferrari, M. 2013, Opt. Express, 21, 31751
  51. Por, E. H., Haffert, S. Y., Radhakrishnan, V. M., et al. 2018, in Adaptive Optics Systems VI, 10703, SPIE, 1112
  52. Potier, A., Baudoz, P., Galicher, R., Singh, G., & Boccaletti, A. 2020, A&A, 635, A192
  53. Potier, A., Mazoyer, J., Wahhaj, Z., et al. 2022, A&A, 665, A136
  54. Rameau, J., Chauvin, G., Lagrange, A.-M., et al. 2013, ApJ, 779, L26
  55. Ruane, G., Ngo, H., Mawet, D., et al. 2019, AJ, 157, 118
  56. Skaf, N., Guyon, O., Boccaletti, A., et al. 2021, in Techniques and Instrumentation for Detection of Exoplanets X, 11823, SPIE, 387
  57. Skaf, N., Guyon, O., Gendron, É., et al. 2022, A&A, 659, A170
  58. Sparks, W. B., & Ford, H. C. 2002, ApJ, 578, 543
  59. Traub, W. A., & Oppenheimer, B. R. 2010, in Exoplanets, ed. S. Seager, 111
  60. Trauger, J., Moody, D., Gordon, B., Krist, J., & Mawet, D. 2011, in Techniques and Instrumentation for Detection of Exoplanets V, 8151, SPIE, 81510G
  61. Wilby, M. J., Keller, C. U., Snik, F., Korkiakoski, V., & Pietrow, A. G. 2017, A&A, 597, A112
