EDP Sciences
A&A, Volume 545, September 2012
Article Number A18
Number of pages: 10
Section: Astronomical instrumentation
DOI: https://doi.org/10.1051/0004-6361/201117319
Published online: 30 August 2012

© ESO, 2012

1. Introduction

In the next few decades, the development of high angular-resolution instruments is expected to provide efficient tools to greatly improve our knowledge of compact objects in our Galaxy and beyond. In particular, optical long-baseline interferometry is a promising technique for reaching very high angular resolution in the milliarcsecond range (Perrin et al. 2000; Ridgway & Glindemann 2009; Absil & Mawet 2010).

In 1996, A. Labeyrie proposed a new type of long-baseline interferometric instrument called a hypertelescope (Labeyrie 1996). Such an aperture-synthesis instrument is dedicated to direct imaging at visible and near-infrared wavelengths. Unlike classical interferometers, which measure mutual coherence and phase closure (Lawson 1997), hypertelescopes allow high-contrast objects to be imaged by isolating the light of each resolved element of the object on one pixel of the image.

In Reynaud & Delage (2007), our team proposed an alternative version of this instrument called the temporal hypertelescope (THT). This new architecture makes the instrument very versatile and well-suited to a space-based mission. We then developed a THT experimental test-bench with the support of CNES and Thales Alenia Space (Bouyeron et al. 2010). The aim of this device is to demonstrate experimentally that a space-based hypertelescope mission will be realistic within a few years. In this context, an efficient co-phasing system with sub-micrometric accuracy is required to validate the direct-imaging abilities of this new kind of instrument.

In this paper, we present the first experimental THT co-phasing results obtained with our laboratory test-bench. In the first section, we provide a brief reminder of the hypertelescope concept and a presentation of the temporal hypertelescope (THT) test-bench. The second section deals with the co-phasing process, based on the joint implementation of the sub-aperture piston phase-diversity (SAPPD) method and a genetic algorithm (GA). The third section reports the most important experimental results obtained with our laboratory THT test-bench.

2. Reminder on the hypertelescope concept

We briefly recall the hypertelescope concept in its spatial and temporal versions. We then describe the design of the THT test-bench.

2.1. Spatial version

Fig. 1: Labeyrie’s hypertelescope design.

A hypertelescope is a multi-aperture interferometric instrument dedicated to direct imaging at optical wavelengths. Labeyrie’s initial design involves two main pupils (see Fig. 1). The input pupil consists of an array of telescopes simultaneously observing the same object. Distances between adjacent telescopes are much larger than their mirror diameters. Therefore, the input pupil is referred to as “diluted”. The array configuration determines the instrument interferometric function (Lardière et al. 2007). Its angular resolution R is given by

R = λ / Bmax,

with Bmax the largest array baseline.

Its clean field of view (CLF) is the maximal angular size of the object to be observed without any aliasing effect (and so to obtain a direct image). It is equal to

CLF = λ / Bmin,

where Bmin is the shortest array baseline.

According to the golden rule of interferometry (Traub 1986), direct imaging only operates if the output pupil configuration is homothetic to the input one. However, in the case of the hypertelescope, the ratio of Bmin to the mirror diameter D of the telescopes is not preserved over the output pupil. The densification ratio γ is defined as

γ = (d / bmin) × (Bmin / D),

where bmin and d are respectively the shortest baseline length and the sub-aperture diameter in the output pupil.

This densification process permits us to squeeze the angular size of the instrument diffraction envelope (see Fig. 2) from λ/D (no densification, the Fizeau interferometer case) down to λ/(γD) (densified pupil).

If the output pupil is fully densified (i.e. has maximum compactness), the diffraction envelope FWHM is equal to λ/Bmin. In these conditions, after passing through a focusing lens, the light collected by the input pupil is fully concentrated into the image CLF over a single interferometric order. Relative to the Fizeau interferometer design, this architecture results in an increase in the maximum of the instrument point spread function (PSF), and a signal-to-noise-ratio improvement by a factor of γ² (Patru et al. 2009).
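The interferometric function of such a diluted array can be illustrated numerically. The sketch below (not the authors' code) computes the 1D point spread function of a redundant linear array of eight point-like sub-apertures with equal amplitudes; the shortest-baseline value is an arbitrary assumption for this example. The pattern peaks at the center and repeats one clean field of view away, which is the aliasing the CLF definition refers to.

```python
import numpy as np

# Illustrative sketch: 1D interferometric function of a redundant linear array.
lam = 1.55e-6                      # working wavelength [m] (1550 nm, as on the bench)
b_min = 0.1                        # assumed shortest baseline [m]
N = 8                              # number of telescopes
positions = b_min * np.arange(N)   # evenly spaced array -> redundant baselines

clf = lam / b_min                  # clean field of view, CLF = lambda / b_min
theta = np.linspace(-clf, clf, 4001)

# PSF(theta) = |sum_k exp(2j*pi*B_k*theta/lambda)|^2, normalized to a unit peak
field = np.exp(2j * np.pi * np.outer(theta, positions) / lam).sum(axis=1)
psf = np.abs(field) ** 2 / N**2

print(psf[2000])   # central interference peak (theta = 0)
print(psf[-1])     # next interferometric order, exactly one CLF away
```

Both printed values are ~1: the interference pattern of an evenly spaced array is periodic with period λ/Bmin, which is why the densified envelope must be squeezed to λ/Bmin to keep a single order inside the image.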

Fig. 2: Pupil densification effect on the PSF shape. The reduction of the diffraction-envelope size concentrates the photon flux in the instrument CLF.

To conclude, the hypertelescope preserves the direct-imaging abilities of a synthesized aperture even with a highly diluted input pupil. Several prototypes (Patru et al. 2008; Le Coroller et al. 2012) have been developed to validate this instrumental concept experimentally.

2.2. From spatial to temporal approaches

In a first step, we consider the spatial configuration of a hypertelescope. When observing a point-like star located on the optical axis (see Fig. 3, left), the ideal instrument PSF is centered on this same axis. The PSF distribution is equal to the product of the telescope-array interferometric function and the diffraction envelope of a single aperture. We now consider a point-like source with an off-axis angle α (see Fig. 3, right). The plane wave falling on the hypertelescope input pupil is also tilted by the angle α. This results in an optical path difference (OPD) between the portions of plane waves collected by each telescope of the array

OPDkl = Bkl sin α ≈ Bkl α,

where Bkl is the baseline length between telescopes k and l. In the image plane, this results in an interferometric-function shift proportional to α. We note that, in the hypertelescope configuration proposed by Labeyrie, the diffraction envelope also moves, but its speed of motion is divided by γ (Lardière et al. 2007). Consequently, for a diluted array, we can assume that the envelope remains quasi-stationary. This property becomes strictly true if the collected waves are spatially single-mode filtered between the input and the output pupil, for example by using single-mode fibers (Lardière et al. 2007).

The temporal hypertelescope concept proposed in 2007 by our team (Reynaud & Delage 2007) exploits these properties. By using optical path modulators (OPMs) located between the input and the output pupil, we can generate OPDs identical to those observed for any value of α. The resulting equivalent tilt angle is denoted αT. We have

αT(t) = αTmax (2t/t0 − 1),

where t0 is the image acquisition time. In our experiment, t0 is equal to 100 ms. During this time, αT varies linearly between −αTmax and +αTmax with

αTmax = λ / (2Bmin).

The OPDs generated between arms k and l of the interferometer are given by

OPDkl(t) = Bkl αT(t).

This process results in a linear shift of the image in the image plane.

Consequently, by observing with a mono-pixel detector located at the center of this plane, one can scan the entire CLF and thus display the full image temporally (see Fig. 4). In addition, as the diffraction-envelope maximum remains aligned with the detector position, no diffraction effect from a single telescope is observed. In the temporal approach, the acquired image is therefore a pure convolution between the object spatial intensity distribution and the PSF. In the spatial case, this process is only a pseudo-convolution, as the diffraction envelope induces a slight distortion of the image.
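The temporal scanning described above can be sketched numerically: an OPD ramp applied by the OPMs is equivalent to a linearly varying tilt αT(t), so a single-pixel detector records the image as a time series. The array geometry, scan range, and the binary object below are illustrative assumptions, not the bench parameters.

```python
import numpy as np

lam = 1.55e-6
positions = 0.1 * np.arange(8)            # assumed telescope positions [m]
t0, n = 0.1, 2001                         # 100 ms scan, n time samples
t = np.linspace(0.0, t0, n)
alpha_max = lam / (2 * 0.1)               # scan half-range (half the CLF here)
alpha_t = alpha_max * (2 * t / t0 - 1)    # equivalent tilt alpha_T(t), linear in t

def psf(alpha):
    """Interferometric function of the array at tilt angle(s) alpha."""
    f = np.exp(2j * np.pi * np.outer(np.atleast_1d(alpha), positions) / lam).sum(axis=1)
    return np.abs(f) ** 2 / 64.0

# Mono-pixel signal for a binary object: incoherent sum of two shifted PSFs,
# i.e. the temporal image is the object convolved with the PSF, sampled in time.
alpha_comp, flux_ratio = 3e-6, 1e-2       # hypothetical faint companion
signal = psf(alpha_t) + flux_ratio * psf(alpha_t - alpha_comp)
```

The brightest sample occurs mid-scan, when αT(t) crosses the on-axis star, and a secondary bump appears later in the scan at the companion position.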

Finally, we note that in the temporal domain, we can replace the multi-axial interferometric combiner with a coaxial coupler (Hunsperger 1984). This configuration enables us to use single-mode optical components along all the interferometric arms, providing the drastic spatial filtering that is mandatory for acquiring high-contrast images.

Fig. 3: Point-spread function shift induced by a tilt angle α.

2.3. The temporal hypertelescope experimental setup

A test-bench of a temporal hypertelescope has been implemented at the XLIM laboratory to validate the potential of this new kind of instrument. Its main goal is to provide images with both high angular resolution and high contrast using a limited number of telescopes (fewer than ten). Its long-term target could be high-contrast binary systems such as a star-exoplanet system.

Our THT bench structure consists of three main parts: a binary star simulator, a telescope array, and a combining interferometer.

2.3.1. The binary star simulator

The star simulator consists of two independent distributed feedback (DFB) lasers located in the focal plane of an optical doublet. These sources provide linearly polarized light at 1550 nm. The angular separation of the two stars is 25 μrad, and their intensity ratio can be adjusted. In the framework of this first demonstration, we work only with monochromatic light.

2.3.2. The telescope array

Fig. 4: Principle of the temporal hypertelescope. The image is acquired as a function of time by artificially tilting the incident wavefront using optical path modulators.

Fig. 5: Temporal hypertelescope test-bench scheme. The setup is divided into three main parts: a) the unbalanced binary-star simulator, b) the telescope array, and c) the eight-arm interferometer.

The THT test-bench is made of an array of eight telescopes. Its design directly affects the PSF shape and, hence, the instrument imaging properties. This PSF can be characterized by a few parameters: the instrument clean field (CLF), the angular resolution (R), the dark field (DF), and the dynamic range (DR). The DF is the area of the CLF where the halo level is minimized and where a faint object could therefore be detected. The DR is defined as the ratio of the maximum intensity of the PSF, Imax, to the maximum intensity over the dark field, IDF (i.e. the maximum intensity of the PSF side lobes), and characterizes the capability of detecting a faint object:

DR = Imax / IDF.

In a previous study (Armand et al. 2009), the telescope spatial distribution in the THT input pupil and the relative photon flux collected by each sub-aperture were optimized to maximize the DR when imaging a high-contrast linear object, such as a star-planet system. The input pupil mapping is reported in Fig. 6. This configuration is equivalent to a redundant linear array when observing a one-dimensional (1D) vertical object. The instrument angular resolution R and the clean field of view CLF are respectively

This optimization was done for the dark field

The DR maximization is obtained by adjusting the flux level collected by each sub-aperture (i.e., apodization). On the THT test-bench, this is achieved using diaphragms located in front of each telescope. The relative intensity levels are adjusted to fit the theoretical optimized configuration (see Table 1).

Fig. 6: Theoretical (white disks) and experimental (black disks) telescope array configurations: owing to experimental constraints, the telescope array is not linear. However, the projection of the telescope baselines along the vertical axis is identical to the theoretical redundant configuration. The two configurations are therefore equivalent to a 1D vertical array when observing a 1D vertical object.

As we can see in the left panel of Fig. 7, this configuration allows us to produce a very high DR (5.6 × 10⁴) image with a limited number (<10) of telescopes.

2.3.3. The interferometer

As previously stated, the input pupil of a hypertelescope is assumed to be highly diluted. Consequently, each single telescope of the THT array does not resolve the astronomical target. The wavefront collected by each sub-aperture can therefore be reduced to a single-mode beam. This experimental configuration can benefit from the use of single-mode fibers without any loss of information about the astronomical target. Moreover, these fibers provide efficient spatial filtering and cancel the residual wavefront distortions due to environmental fluctuations or instrumental optical aberrations. Single-mode fibers and guided-wave optical components have therefore been used to implement most of the interferometric part of the test-bench.

Each interferometer arm includes a fiber delay line (Simohamed & Reynaud 1997) and a fiber optical path modulator (OPM). The first component enables a coarse adjustment of the optical path length (OPL), necessary to observe interferometric fringes. The second is a finer stage that applies the calibrated variation in the wave phase as a function of time. The applied OPL strokes depend on the telescope relative positions in the input pupil and vary from zero to seven λ in our experimental setup. These actuators allow an OPL resolution of about 3 nm (Reynaud & Delaire 1993).

Finally, fibers guide the light to an 8-to-1 coaxial recombiner based on integrated-optics technology. The temporal image is acquired pixel by pixel over 100 ms using a mono-pixel InGaAs photodiode.

An experimental validation of the THT principle was previously reported with this test-bench (Bouyeron et al. 2010). However, the DR value obtained during this first experiment was unstable as a function of time and reached a maximum value of only 300. That study demonstrated that this DR limitation cannot be explained by taking into account only the setup's intrinsic defects, namely:

  • apodization error,

  • detector noise,

  • differential effect of light polarization.

Assuming these limitations to be the only relevant ones, we inferred from a Monte Carlo statistical approach that the maximum DR that could be obtained on our test-bench would be about 10⁴ (see Fig. 7, right). To reach this performance, a servo-control of the OPLs is required.

Table 1

Relative intensity distribution derived from an optimization process (Armand et al. 2009).

3. The co-phasing problem

3.1. Problem statement

At a given wavelength λ, the image I obtained with our hypertelescope is described over the field of view by

I = PSF ⊗ O,

with PSF denoting the instrument point spread function, O the object angular intensity distribution, and ⊗ the convolution operator. The corresponding image spatial spectrum is equal to

ℱ(I) = OTF · ℱ(O),

where OTF denotes the instrument optical transfer function, ℱ(O) the object spatial spectrum, and ℱ(I) the Fourier transform of the image I.

As previously mentioned, the hypertelescope array sub-apertures cannot individually resolve the astronomical target. This allows us to consider each sub-aperture as point-like. Consequently, the hypertelescope OTF can be represented by a set of periodic Dirac delta functions. In our experiment, the related redundant array consists of eight evenly spaced telescopes. This array therefore samples eight spatial frequencies νi given by νi = i/CLF, for i = 0 to 7. The choice of this configuration was driven by the expected dynamic range DR (Armand et al. 2009).

When observing a point-like star, each sub-aperture of the array collects an optical field that is combined with the other contributions. At the instrument output, the total field can be expressed as

E = Σk ak exp(jϕk),

where ak and ϕk are the optical field amplitude and phase of arm k, respectively. The ak values are adjusted to those of Table 1 thanks to diaphragms positioned in front of each sub-aperture. The OTF spectral contribution related to a frequency νi is given by

OTF(νi) = Σk−l=i ak al exp[j(ϕk − ϕl)].

When observing a resolved object O, the image spectral power density at the νi spatial frequency is then given by

|ℱ(I)(νi)|² = |OTF(νi)|² |ℱ(O)(νi)|².

At the interferometric recombination level, the phase difference ϕk − ϕl between the optical fields collected by telescopes k and l is called the piston error, which is related to the optical path difference (OPD) by

ϕk − ϕl = (2π/λ)(OPLk − OPLl),

where OPLi refers to the total optical path from the source to the beam combiner through the whole ith interferometer arm. In the case of monochromatic point-like sources, the system is co-phased if ϕk − ϕl = 0 (mod 2π) ∀ (k, l).

As we can see in Fig. 8, the DR quickly decreases as the RMS value of the piston error increases. In the remainder of this paper, we propose a servo-control method to accurately stabilize the OPDs of our interferometer, allowing us to bring the PSF as close as possible to the theoretical co-phased model.
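The trend of Fig. 8 can be reproduced with a toy Monte Carlo simulation. Note that this sketch uses uniform amplitudes instead of the bench's optimized apodization, and an arbitrary array scale, so the absolute DR values are far below the bench's; only the decrease of the DR with piston RMS is meaningful.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.55e-6
positions = 0.1 * np.arange(8)                 # assumed array geometry [m]
alpha = np.linspace(-7.75e-6, 7.75e-6, 2001)   # one CLF of field
dark = np.abs(alpha) > 2 * lam / positions[-1] # crude "dark field": outside main lobes

def mean_dr(piston_rms_m, trials=200):
    """Average dynamic range of short-exposure PSFs for a given piston RMS."""
    drs = []
    for _ in range(trials):
        phi = 2 * np.pi * rng.normal(0.0, piston_rms_m, 8) / lam  # random pistons
        f = (np.exp(1j * phi)[None, :]
             * np.exp(2j * np.pi * np.outer(alpha, positions) / lam)).sum(axis=1)
        p = np.abs(f) ** 2
        drs.append(p.max() / p[dark].max())    # peak over dark-field maximum
    return float(np.mean(drs))

print(mean_dr(0.0) > mean_dr(50e-9) > mean_dr(200e-9))   # -> True
```

A few tens of nanometers of piston RMS already degrade the average DR noticeably, which motivates the sub-micrometric servo-control described next.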

Fig. 7: Left: theoretical PSF for an ideal co-phased instrument (ϕk − ϕl = 0 ∀ (k, l)). The DR is equal to 5.6 × 10⁴. Right: point spread function of the THT test-bench taking into account all experimental intrinsic defects except the phasing error. The amplitude of each defect (apodization error, detector noise, differential effect of light polarization, etc.) was evaluated experimentally. The image was then obtained by a Monte Carlo statistical approach.

Fig. 8: Evolution of the mean DR value of short-exposure images as a function of the root-mean-square piston error between interferometer arms, obtained numerically by a Monte Carlo statistical approach. Each point is evaluated over 10⁴ computed PSFs. The curve is computed using the actual test-bench characteristics (apodization coefficients, array configuration, working wavelength, etc.). The sole instrumental defect taken into account in this simulation is the piston error.

3.2. Co-phasing method

Co-phasing techniques have been developed to minimize the impact of atmospheric turbulence and instrumental instabilities on the instrument angular resolution. Shack-Hartmann wavefront sensing (Rousset et al. 2002) and curvature analysis (Takami et al. 2004) have been successfully applied to single-aperture instruments but are inefficient for multi-aperture arrays: these systems are unable to measure the piston error between the sub-apertures of a diluted array. The phase-retrieval method (Fienup 1978), which directly uses the image, does not suffer from this limitation. However, this technique is usually based on a comparison between the instrument's current PSF and the theoretical one, and requires a point-like object, or an object with a well-known geometry, in the field of view. Hypertelescopes provide direct images of the observed object with high angular resolution, but the instrument CLF and the number of resolved elements are relatively low. Consequently, with a hypertelescope, it is difficult to use a technique that requires both a reference star and the astronomical target in the instrument CLF simultaneously.

3.2.1. Phase diversity

Gonsalves (1982) proposed an evolution of the phase-retrieval technique called phase diversity. It was later demonstrated that this technique can be successfully applied to multi-aperture instruments (Paxman & Fienup 1988).

This method permits us to correct the phase aberrations of an instrument without using a reference object. The process involves the acquisition of two images of the same object. The first one, called the standard image Is, is the current image of the object, for which the piston errors are unknown and need to be determined. The second one, called the diversity image Id, is obtained by applying additional, well-known aberration errors to the instrument in its current state.

These two images are used to compute a phase criterion χref that does not depend on the target intensity distribution and is a relevant signature of the actual piston errors.

Kendrick et al. (1994) proposed four different metrics to define the χ criterion

where Ĩs and Ĩd are the Fourier spectra of the standard and diversity images, respectively, and z̄ denotes the complex conjugate of z.

The phase criterion is then computed with

We test the efficiency of these four metrics for driving our co-phasing system in Sect. 3.3.

The diversity image is usually obtained by defocusing the instrument. However, as proposed in Bolcar & Fienup (2005, 2009), the diversity function can also be generated as piston phase aberrations. This method is named sub-aperture piston phase-diversity (SAPPD). It is particularly well-suited to multi-aperture instruments and especially to the THT test-bench: as OPMs are already implemented in such interferometric instruments, no extra optical component is needed to implement SAPPD.
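To make the SAPPD idea concrete, the sketch below builds a standard image and a piston-diversity image of a point-like star, and forms a criterion from their Fourier spectra. The cross-spectrum phase used here is one plausible metric chosen for illustration only; it is not necessarily one of the four Kendrick et al. (1994) metrics, and the array geometry is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.55e-6
positions = 0.1 * np.arange(8)                     # assumed array geometry [m]
alpha = np.linspace(-7.75e-6, 7.75e-6, 512)

def image(phi):
    """Short-exposure image of a point-like star with piston phases phi."""
    f = (np.exp(1j * phi)[None, :]
         * np.exp(2j * np.pi * np.outer(alpha, positions) / lam)).sum(axis=1)
    return np.abs(f) ** 2

phi_err = rng.normal(0.0, 0.3, 8)                  # unknown piston errors [rad]
phi_div = rng.uniform(-np.pi / 10, np.pi / 10, 8)  # known diversity pistons

I_s = image(phi_err)               # standard image
I_d = image(phi_err + phi_div)     # diversity image

# Criterion: phase of the cross-spectrum at the lowest spatial-frequency bins.
# For an extended object O, F(I) = OTF * F(O), so the object spectrum cancels
# in the cross-spectrum phase: the criterion depends only on the pistons.
chi_ref = np.angle(np.fft.rfft(I_s)[:8] * np.conj(np.fft.rfft(I_d)[:8]))
```

With zero piston errors and zero diversity, the two images coincide and this criterion vanishes identically, as expected for a co-phased instrument.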

3.2.2. Problem solving method

Fig. 9: Co-phasing process scheme.

Fig. 10: Simulated evolution of the average PSF DR with the number of co-phasing cycles for the M1, M2, M3, and M4 metrics. For each metric, four piston diversity ranges were tested, the piston diversity values being randomly generated within each range.

The value of the phase criterion χref, which is experimentally obtained with phase diversity, is related to the piston errors in the instrument by non-linear relations, so the problem is not directly invertible. At present, only iterative methods have proved efficient in deriving OPDs from this criterion (Fienup 1982).

In the framework of our study, we chose a genetic algorithm (GA) to solve this problem. This technique, well known for avoiding local minima in optimization problems (Brady 1985), is often used to design and optimize phased-array antennas (Ares-Pena et al. 1999; Marcano & Durán 2000), a problem similar to ours (redundant array configuration, control of the PSF shape, etc.).

In our instrument, a co-phasing cycle proceeds through the following sequence (see Fig. 9):

  • image acquisition,

  • random generation of diversity pistons,

  • acquisition of the diversity image,

  • computation of the reference phase criterion χref,

  • sending χref and piston diversity values into the GA,

  • iterative evaluation of the piston errors with the GA,

  • OPL adjustment in the instrument.

The GA principle stems from Darwin's theory of evolution. A population of individuals is confronted with its natural environment. An individual consists of a set of parameters called genes; the entire set of genes of an individual defines its genotype. In the GA, we consider the destiny of an individual to be totally determined by its genotype. Consequently, the individuals best adapted to their environment have a greater chance of reproducing and thus of transmitting their genotype. New individuals are obtained by crossing the genotypes of two parents (i.e. selected individuals of the previous generation). The genotype of a new individual can finally diverge from its parents' through mutation: each gene can acquire a value slightly different from that of the parents.

In our case, we have to evaluate the interferometer piston errors to be corrected. We know the applied piston-diversity values and the χref value computed from the standard and diversity images. We consider the wave phases ϕi of the interferometer arms to be the only relevant free parameters defining the genotype of an individual.

The behavior of the THT test-bench was modeled. Consequently, for a given individual (i.e. a set of piston-error values), a current image and a diversity image can be simulated for a particular object (especially a point-like star). The related χsim criterion, which is independent of the object geometry, can then be computed. As χref is a specific signature of the piston errors in the real instrument, the best fit of χsim to χref yields an accurate piston-error estimate.

Therefore, the individual’s adaptation level is estimated by a fitness function defined as

F = ∥χref − χsim∥².

The smaller F, the better the individual's adaptation level. In our algorithm, only the best individuals reproduce, to maximize the algorithm's convergence speed. An iteration of the GA consists of the following sequence (see Fig. 9):

  • computation of individual responses χsim,

  • evaluation of the individual adaptation level by the fitness function F,

  • selection of the best individuals to be chosen as future parents,

  • crossover of the parents’ genotype to create new individuals,

  • mutation of the genotype of each new individual.

Experimentally, one iteration takes 0.2 ms. After a preset number of iterations, the GA gives an estimate of the piston errors. Finally, we apply to the test-bench interferometer an OPL adjustment that cancels the estimated piston errors.
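The GA iteration above can be sketched as follows, under simplifying assumptions: the forward model is reduced to a toy criterion (pairwise phase differences) rather than the full SAPPD imaging model, and the population size, crossover scheme, and annealed mutation schedule are illustrative choices, not the bench's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def chi_of(phi):
    # Toy stand-in for the SAPPD criterion: pairwise phase differences.
    return phi[1:] - phi[:-1]

true_phi = rng.uniform(-0.5, 0.5, 8)   # "unknown" piston errors to recover
chi_ref = chi_of(true_phi)

def fitness(phi):
    # F = ||chi_ref - chi_sim||^2 ; the smaller, the better adapted.
    return float(np.sum((chi_of(phi) - chi_ref) ** 2))

pop = [rng.uniform(-1.0, 1.0, 8) for _ in range(12)]   # initial genotypes
for gen in range(300):
    pop.sort(key=fitness)
    parents = pop[:2]                  # selection with elitism: keep the two best
    sigma = 0.1 * 0.99 ** gen          # annealed mutation amplitude [rad]
    children = []
    for _ in range(10):
        mask = rng.random(8) < 0.5     # uniform crossover of the two genotypes
        child = np.where(mask, parents[0], parents[1])
        children.append(child + rng.normal(0.0, sigma, 8))  # mutation
    pop = parents + children

best = min(pop, key=fitness)           # piston-error estimate
```

After a few hundred generations the best individual matches the reference criterion closely (up to the global phase offset, which the criterion cannot see, just as a global piston is irrelevant to the real instrument).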

3.3. Selecting the best metric

As previously said, the χref criterion can be defined in several ways. In this section, we evaluate the statistical properties of the four metrics defined above for the joint use of the GA and SAPPD, observing their DR evolution as a function of the number of co-phasing cycles. For each metric, the initial set of piston errors is the same. The piston diversity range (PDR) is the span over which the piston-diversity values are randomly drawn. The following GA parameters were chosen empirically:

  • 5 individuals per generation,

  • 2 parents per generation,

  • 10 GA iterations per co-phasing cycle,

  • mutation range of  ±0.005 rad per iteration,

  • PDR (four cases are investigated):

    •  [−π; +π] ,

    •  [−π/2; +π/2] ,

    •  [−π/4; +π/4] ,

    •  [−π/10; +π/10] .

The results of these simulations (see Fig. 10) show that the metric efficiency depends on the PDR. Both M2 and M3 require a large PDR ([−π; +π]), whereas M1 and M4 are more efficient for a small PDR ([−π/10; +π/10]). M4 is the most versatile metric with respect to the PDR and can be used with either a large ([−π; +π]) or a small ([−π/10; +π/10]) PDR. Figure 11 compares the results obtained with the four metrics at their best PDR. Metric performances are similar when a well-suited piston diversity range is used.

4. Experimental results

Fig. 11: Comparison of the simulated average PSF DR evolution obtained with M1, M2, M3, and M4 for their best PDR.

Fig. 12: Experimental PSF DR versus time in closed loop. Left: ten GA iterations per cycle. Right: one hundred GA iterations per cycle.

We now report the experimental results obtained by implementing our active co-phasing system on the THT test-bench. The four metrics selected in the previous section were tested experimentally. Figure 12 shows an example of the experimental DR evolution as a function of time. In each case, results were obtained for 10 and 100 GA iterations per co-phasing cycle.

Both M1 [−π/10; π/10] and M4 [−π/10; π/10] reach the best results, with an average DR of 4000, whereas M2 [−π; π] and M4 [−π; π] give a DR of 1000, and M3 [−π; π] achieves a DR of only 300 with a longer response time. A clear difference between the experimental behaviors appears when using a large piston diversity in the [−π; π] range: owing to a hysteresis effect in the piezoelectric modulators, the diversity pistons are applied with a lower accuracy than for a [−π/10; π/10] piston diversity range, resulting in a lower co-phasing efficiency. M3 [−π; π] is very sensitive to this effect and consequently cannot be used conveniently with our experimental test-bench.

In addition, Fig. 12 shows that, as long as the GA computation time is negligible compared with the exposure time, the algorithm convergence speed increases nearly linearly with the number of iterations per cycle. In our case, the exposure time is 100 ms and the computation time is 0.2 ms per GA iteration. By increasing the number of GA iterations per co-phasing cycle from 10 to 100, we significantly reduced the response time needed to reach a DR of 1000, from 25 s to 3 s.

Fig. 13: Top: experimental PSF DR evolution with time for short-exposure images. Bottom: comparison between the ideal PSF and long-exposure images (1000 s) obtained by adding 10⁴ short-exposure images, on logarithmic (left) and linear (right) scales. Error bars are given at 3σ.

Then, following these results, we used the M4 [−π/10; π/10] metric to obtain a long-exposure image by averaging 10⁴ short-exposure images. The average DR of the short-exposure images is stabilized at 6 × 10³ over 1000 s (see top of Fig. 13). The long-exposure image obtained by stacking these short-exposure images (see Fig. 13, bottom) exhibits a higher DR of 10⁴. This result demonstrates that the image DR can be improved by averaging short-exposure images: the residual phase defects differ from one short-exposure image to the next (random superposition), whereas the light related to the object is always localized at the same position in the image (deterministic superposition).
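This gain from stacking can be illustrated with a toy simulation: the residual pistons move the halo speckles around from one short exposure to the next, so the dark-field maximum of the averaged image is bounded by the mean of the individual dark-field maxima, while the object peak adds deterministically. Uniform amplitudes and the array scale are assumptions here, so the absolute DR values are not comparable to the bench's.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.55e-6
positions = 0.1 * np.arange(8)                 # assumed array geometry [m]
alpha = np.linspace(-7.75e-6, 7.75e-6, 1001)
dark = np.abs(alpha) > 2 * lam / positions[-1] # crude dark-field mask

def short_exposure(piston_rms_rad):
    phi = rng.normal(0.0, piston_rms_rad, 8)   # residual pistons of one exposure
    f = (np.exp(1j * phi)[None, :]
         * np.exp(2j * np.pi * np.outer(alpha, positions) / lam)).sum(axis=1)
    return np.abs(f) ** 2

shorts = [short_exposure(0.3) for _ in range(500)]
long_img = np.mean(shorts, axis=0)             # "long exposure" stack

dr_long = long_img.max() / long_img[dark].max()
mean_dr_short = float(np.mean([s.max() / s[dark].max() for s in shorts]))
print(dr_long > mean_dr_short)   # stacking improves the dynamic range here
```

The halo of the stacked image is smoother than any single realization because the speckle maxima of different exposures fall at different positions, which is exactly the random versus deterministic superposition argument above.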

According to Fig. 8, an average DR of 6 × 10³ in the short-exposure images corresponds to a 6 nm RMS OPD (i.e., λ/260). To validate this estimate, we simulated the long-exposure PSF we should have acquired with an instrument limited only by this defect (see Fig. 14, red dots). The comparison of the two images shows that the experimental results are better than the simulated ones (especially in the left part of the PSF). In this part of the field, the experimental image is well-fitted by a long-exposure PSF obtained with a 4 nm RMS OPD, i.e., λ/400 (see Fig. 14, green dots). In the right part of the field, the artifact remains experimentally stable regardless of the residual piston fluctuations.

We can infer from these results that the piston defect has been properly corrected and is no longer the main defect of the instrument. The residual defect on the right side of the PSF may be attributable to an instrumental flaw involving polarization behavior. This result is consistent with the maximum DR of 10⁴ predicted for our test-bench when the instrument's intrinsic defects are taken into account (see Fig. 7, right).

Fig. 14: Comparison between the experimental long-exposure image and a simulated image from an ideal instrument limited only by OPD errors, on logarithmic (left) and linear (right) scales. The long-exposure images are obtained by integrating 10⁴ short-exposure images.

Finally, we tested the ability of our instrument to image a highly unbalanced binary system (see Fig. 15). We obtained two long-exposure images: a reference PSF (single star) and an image of the binary system (a bright star plus its companion). By subtracting the normalized PSF from the normalized image, we can clearly detect the companion. A comparison between the object and the image of the companion alone confirms that its position (25 μrad) and intensity (magnitude difference between the main star and the companion in the H band: ΔH = 9.1) are well estimated. We note that the error-bar amplitudes are larger in the central area of the image (vertical lines between 8 and 12 μrad in the bottom panel of Fig. 15, corresponding to photon noise), but this does not limit the instrument, as we only intend to detect a companion in the clean-field area.

Fig. 15: Top: comparison between the long-exposure (2 min) PSF and the image of a binary system, on a logarithmic scale. Middle: same as top, on a linear scale. Bottom: comparison between an image of the faint companion alone and the difference between the binary-system image and the PSF. Error bars are given at 3σ.

5. Conclusion

We have proposed a co-phasing method for a diluted multi-aperture instrument, using the sub-aperture piston phase-diversity method and a genetic algorithm. We first optimized the algorithm parameters through simulations. We then implemented this system on an eight-sub-aperture hypertelescope prototype operating with quasi-monochromatic light at 1550 nm. We obtained a minimization and stabilization of the instrumental optical-path differences at the 4 nm level (i.e., λ/400). Thanks to this stability, we obtained a long-exposure point spread function over 1000 s. This image exhibits a dynamic range of 10⁴, which matches the best performance theoretically accessible to our test-bench. We finally obtained an image of an unbalanced binary system and retrieved its characteristics: a magnitude difference of ΔH = 9.1 and a 25 μrad angular separation.

Although our study is mainly dedicated to a space-borne mission, this method could also be applied to a ground-based instrument. In this case, the co-phasing system would have to operate with a cut-off frequency of at least a few kHz (compared to the 5 Hz THT closed loop). For this purpose, a major improvement in our setup would be the use of lithium-niobate optical-path modulators and high-speed electronics, so that the image acquisition time t0 could be drastically reduced. In addition, the computing time could be minimized by using dedicated hardware and parallel computing, which is well-suited to the GA concept. The global architecture of the servo-control system, however, would remain the same as in the present experiment.

The next step of our work will be dedicated to the validation of our co-phasing system in the photon-counting regime. Using the same experimental setup, we intend to decrease the artificial object intensity and determine the limiting magnitude that could be expected in a real observation. The final step will be to slightly modify the co-phasing algorithm so that it remains effective with a broadband source. In this case, an extension of the measured piston range to a few π is required.

Acknowledgments

The authors are grateful to S. Vergnole for valuable comments on the manuscript. This work is supported by Thales Alenia Space and the Centre National d’Études Spatiales (CNES).

All Tables

Table 1

Relative intensity distribution derived from an optimization process (Armand et al. 2009).

All Figures

Fig. 1

Labeyrie’s hypertelescope design.

Fig. 2

Pupil densification effect on the PSF shape. The reduction of the diffraction envelope size induces a concentration of the photon flux in the instrument CLF.

Fig. 3

Point-spread function shift induced by a tilt angle α.

Fig. 4

Principle of the temporal hypertelescope. The image is acquired as a function of time by artificially tilting the incident wavefront using optical path modulators.

Fig. 5

Temporal hypertelescope test-bench scheme. This setup is divided into three main parts: a) the unbalanced binary-star simulator, b) the telescope array, and c) the eight-arm interferometer.

Fig. 6

Theoretical (white disks) and experimental (black disks) telescope array configuration: owing to the experimental constraints, the telescope array is not linear. However, the projection of the telescope baseline along the vertical axis is fully identical to the theoretical redundant configuration. These two configurations are equivalent to a 1D vertical array when observing a 1D vertical object.

Fig. 7

Left: theoretical PSF for an ideal co-phased instrument (ϕ_k − ϕ_l = 0 ∀ (k,l)). The DR is equal to 5.6 × 10^4. Right: point spread function of the THT test-bench taking into account all experimental intrinsic defects except for the phasing error. The amplitude of each defect (apodization error, detector noise, differential effect of light polarization, etc.) was evaluated experimentally. The image was then obtained using a Monte Carlo statistical approach.

Fig. 8

Evolution of the mean DR value of short-exposure images as a function of the root mean square of the piston error between interferometer arms, obtained numerically with a Monte Carlo statistical approach. Each point is evaluated over 10^4 computed PSFs. The curve is computed using the actual test-bench characteristics (apodization coefficients, array configuration, working wavelength, etc.). The sole instrumental defect taken into account in this simulation is the piston error.

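The Monte Carlo procedure described in the Fig. 8 caption can be sketched with a simplified 1D redundant-array model. The Gaussian apodization, array geometry, trial count, and main-lobe exclusion window below are illustrative assumptions, not the actual bench parameters of Table 1:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                    # redundant 1D array of 8 sub-apertures
WAVELENGTH = 1550e-9     # working wavelength (m)
APOD = np.exp(-0.5 * ((np.arange(N) - 3.5) / 1.5) ** 2)  # toy apodization

def mean_dr(piston_rms, trials=200):
    """Average DR of short-exposure PSFs for a given rms piston error (m)."""
    theta = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
    k = np.arange(N)
    drs = []
    for _ in range(trials):
        phi = 2 * np.pi * rng.normal(0.0, piston_rms, N) / WAVELENGTH
        field = (APOD * np.exp(1j * (np.outer(theta, k) + phi))).sum(axis=1)
        psf = np.abs(field) ** 2
        i = int(psf.argmax())
        halo = np.ones(psf.size, dtype=bool)
        halo[max(0, i - 250):i + 250] = False   # mask the main lobe
        drs.append(psf.max() / psf[halo].max()) # peak over brightest sidelobe
    return float(np.mean(drs))

# As in Fig. 8, the mean DR collapses when the rms piston error grows.
print(mean_dr(0.0), mean_dr(150e-9))
```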
Fig. 9

Co-phasing process scheme.

Fig. 10

Simulated average PSF DR evolution with co-phasing cycle number for the M1, M2, M3, and M4 metrics. For each metric, we tested four piston diversity ranges (PDR), with the piston diversity values randomly generated within each range.

Fig. 11

Comparison of the simulated PSF DR average evolution obtained with M1, M2, M3, and M4 for their best PDR.

Fig. 12

Experimental PSF DR versus time evolution in closed loop. Left: ten GA iterations per cycle. Right: one hundred GA iterations per cycle.

Fig. 13

Top: experimental PSF DR evolution with time for short-exposure images. Bottom: comparison between the ideal PSF and long-exposure images (1000 s) obtained by adding 10^4 short-exposure images, on logarithmic (left) and linear (right) scales. Note that error bars are given at 3σ.

Fig. 14

Comparison between the experimental long-exposure image and a simulated image obtained with an ideal instrument limited only by OPD errors, in logarithmic (left) and linear (right) scales. The long-exposure images are obtained by integrating 10^4 short-exposure images.

