A&A, Volume 500, Number 2, June III 2009, pp. 647-655
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/200811061
Published online: 01 April 2009
Optimal point spread function modeling for weak lensing: complexity and sparsity
S. Paulin-Henriksson1 - A. Refregier1 - A. Amara2,3
1 - Service d'Astrophysique, CEA Saclay, Bâtiment 709, 91191 Gif-sur-Yvette Cedex, France
2 -
Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 16, 8093 Zürich,
Switzerland
3 -
Department of Physics and Center for Theoretical and Computational Physics, University of Hong Kong, Pok Fu Lam Road, Hong Kong
Received 30 September 2008 / Accepted 23 December 2008
Abstract
Context. We address the issue of controlling the systematic errors in shape measurements for weak gravitational lensing.
Aims. We take a step toward quantifying the impact of systematic errors in the modeling of the point spread function (PSF) of observations on the determination of cosmological parameters from cosmic shear.
Methods. We explore the impact of PSF fitting errors on cosmic shear measurements using the concepts of complexity and sparsity. Complexity, introduced in a previous paper, characterizes the number of degrees of freedom of the PSF. For instance, fitting an underlying PSF with a model of low complexity produces small statistical errors on the model parameters, although these parameters could be affected by large biases. Alternatively, fitting a large number of parameters (i.e. a high complexity) tends to reduce biases at the expense of increasing the statistical errors. We attempt to find a trade-off between scatters and biases by studying the mean squared error of a PSF model. We also characterize the model sparsity, which describes how efficiently the model is able to represent the underlying PSF using a limited number of free parameters. We present the general case and give an illustration for a realistic example of a PSF modeled by a shapelet basis set.
Results. We derive a relation between the complexity and the sparsity of the PSF model, the signal-to-noise ratio of stars and the systematic errors in the cosmological parameters. By insisting that the systematic errors are below the statistical uncertainties, we derive a relation between the number of stars required to calibrate the PSF and the sparsity of the PSF model. We discuss the impact of our results for current and future cosmic shear surveys. In the typical case where the sparsity can be represented by a power-law function of the complexity, we demonstrate that current ground-based surveys can calibrate the PSF with few stars, while future surveys will require hard constraints on the sparsity in order to calibrate the PSF with 50 stars.
Key words: gravitational lensing - cosmology: dark matter - cosmology: cosmological parameters - cosmology: observations
1 Introduction
Studying spatial correlations between galaxy shapes induced by gravitational lensing of the large scale structure (``Cosmic Shear'') is a powerful probe of dark energy and dark matter.
A number of current and planned surveys are dedicated to cosmic shear,
such as: the Canada-France-Hawaii-Telescope Legacy Survey (CFHTLS), the Kilo Degree Survey and the VISTA Kilo-Degree Infrared Galaxy Survey
(KIDS/VIKING), the Dark Energy Survey
(DES), the Panoramic Survey Telescope & Rapid Response System
(Pan-STARRS), the SuperNovae Acceleration Probe
(SNAP), the Large Synoptic Survey Telescope
(LSST) and the Dark UNiverse Explorer
(DUNE/Euclid).
The most efficient way of improving the statistical precision of cosmic shear analyses is to enlarge the surveys. As long as the median redshift is sufficiently high,
Amara & Refregier (2007a) demonstrate that
it is more advantageous to perform cosmic shear surveys in fields as wide as possible, rather than deep, in order to minimize the error bars on cosmological parameters.
To date, the largest data set optimized for cosmic shear is the Wide field of the
CFHTLS,
which covers 50 deg2 (Fu et al. 2008) and will eventually reach 170 deg2.
Another analysis has also been published that combines 4 surveys covering, together, an area of 100 deg2 (Benjamin et al. 2007).
In a few years, the KIDS/VIKING survey will cover 1500 deg2. Eventually, projects currently being planned, such as DUNE/Euclid, planned for 2017, will be able to perform cosmic shear measurements over the entire observable extragalactic sky.
For cosmic shear surveys to reach their full potential, it is necessary to ensure that systematic errors remain sub-dominant relative to statistical uncertainties. In particular, tight control of all the effects associated with shape measurements is required. To illustrate the difficulty of compiling accurate shear measurements, we begin with an overview of the ``forward process'' that describes how the original image of a galaxy is distorted in forming the final image that we measure. In the forward process, a galaxy image is: (i) sheared by gravitational lensing; (ii) convolved with a point spread function (PSF) originating in a number of sources (e.g. instruments and atmosphere); (iii) pixelated at the detector; and finally (iv) affected by noise.
Cosmic shear analyses involve the reverse process: we begin with the final image and move backward from step (iv) to (i) to recover the original lensing effect. A detailed and illustrated description of the forward and inverse processes is given in the GREAT08 Challenge Handbook (Bridle et al. 2008). The GREAT08 Challenge aims to draw a wide range of expertise into gravitational lensing by presenting the relevant issues in a clear way, so as to make them accessible to a broad range of disciplines, including the machine-learning and signal-processing communities. Other similar challenges have also been performed within the weak lensing community, as part of the STEP collaboration (Rhodes et al. 2008; Massey et al. 2007; Heymans et al. 2006), which focused mainly on understanding the systematic errors at play in current shear measurement methods. These challenges focus mainly on reducing the errors originating in the shape measurement method. However, even with a perfect method there are fundamental limits due to the statistical potential of a data set.
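The four steps of the forward process can be sketched numerically. The following toy simulation (numpy only; the grid sizes, the Gaussian galaxy and PSF profiles, and the noise level are our own illustrative choices, not taken from any survey) applies the shear, the PSF convolution, the pixelation, and the noise in turn:

```python
import numpy as np

def gaussian_image(xx, yy, r):
    """Round Gaussian profile of rms radius r on a coordinate grid."""
    return np.exp(-(xx**2 + yy**2) / (2 * r**2))

n = 128                                  # fine (oversampled) grid
x = np.linspace(-8, 8, n)
xx, yy = np.meshgrid(x, x)

# (i) shear: resample the round galaxy at coordinates remapped by the
# first-order inverse shear matrix (weak-shear regime)
g1, g2 = 0.05, 0.0
a = np.array([[1 - g1, -g2], [-g2, 1 + g1]])
xs = a[0, 0] * xx + a[0, 1] * yy
ys = a[1, 0] * xx + a[1, 1] * yy
galaxy = gaussian_image(xs, ys, r=1.0)

# (ii) convolve with the PSF via FFTs (circular wrap is negligible here
# because both profiles vanish well inside the box)
psf = gaussian_image(xx, yy, r=0.5)
image = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(galaxy) * np.fft.fft2(psf))))

# (iii) pixelate: sum the fine-grid flux into 4x4 blocks (coarser detector pixels)
pix = image.reshape(n // 4, 4, n // 4, 4).sum(axis=(1, 3))

# (iv) add Gaussian pixel noise
rng = np.random.default_rng(0)
observed = pix + rng.normal(0.0, 0.01 * pix.max(), pix.shape)
```

With g1 > 0 the resulting image is elongated along the x axis, which is what a moment-based shape measurement subsequently tries to recover after undoing steps (iv) to (ii).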
In Paulin-Henriksson et al. (2008) (P1 hereafter), we investigated the link between systematic errors in the power spectrum and uncertainties in the PSF correction phase.
The framework is the following. Since the PSF of an instrument varies on all scales, the PSF needs to be measured using the stars that surround the lensed galaxy.
Each star provides an image of the PSF that is pixelated and noisy,
which means that a number of stars is required to reach a given accuracy in the knowledge of the PSF.
We estimate the number of stars N* required to calibrate the PSF to a given accuracy, according to the stellar signal-to-noise ratio (SNR), the minimum galaxy size, the complexity of the PSF and the tolerated variance of the systematics, $\sigma^2_{\rm sys}$.
On the other hand, Amara & Refregier (2007b) estimated the upper limit to $\sigma^2_{\rm sys}$ when estimating cosmological parameters. By combining both papers, we derive the minimum number of stars required to reach a given accuracy.
For instance, analyses completed to date, which constrain cosmological parameters with an accuracy of 0.05, require $\sigma^2_{\rm sys}$ lower than a few $10^{-6}$ and the PSF to be calibrated using 5 stars; while for future ambitious surveys that will constrain w0 and wa to an accuracy of 0.02 and 0.1, respectively, $\sigma^2_{\rm sys}$ must be lower than $10^{-7}$, which requires at least 50 stars
(for stars with a signal-to-noise ratio of 500 and a PSF described by a few degrees of freedom, as can typically be achieved in space).
In P1 we use the same functional form for both the underlying PSF and the model used to fit it. This means that the PSF model is able to describe the underlying PSF perfectly. The errors in the fit due to noise cause a scatter of the fitted parameters around the truth. For instance, if the model is an orthogonal basis set, then the fitted parameters follow a Gaussian distribution around the truth. In this paper, we extend this investigation by studying the impact of fitting a PSF with a model that has a different form. This addresses the case in which the underlying PSF (unknown in practice) is estimated by fitting the parameters of an arbitrary model. This can lead to both a scatter in the fitted parameters and an offset of their average relative to the true value, i.e. a bias in the fitted parameters. We can therefore model a given PSF using either a complex model with small biases but large scatters, or a simpler model with smaller scatters but larger biases. To quantify these effects, we revisit the concept of complexity proposed in P1 and introduce the concept of sparsity.
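This interplay between bias and scatter can be reproduced in a toy experiment of our own design: fitting noisy realizations of a fixed 1-D profile (a stand-in for the PSF; the profile, basis, and noise levels are illustrative assumptions, not the paper's setup) with Legendre expansions of increasing order:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
# toy "underlying PSF": a Gaussian core with structure in the tails
true_psf = np.exp(-4 * x**2) * (1 + 0.3 * np.sin(6 * x))

def fit_stats(order, snr, trials=300):
    """Empirical squared bias and scatter of truncated Legendre fits
    to noisy realizations of the toy profile."""
    sigma = true_psf.max() / snr
    fits = []
    for _ in range(trials):
        data = true_psf + rng.normal(0.0, sigma, x.size)
        coef = np.polynomial.legendre.legfit(x, data, order)
        fits.append(np.polynomial.legendre.legval(x, coef))
    fits = np.array(fits)
    bias2 = np.mean((fits.mean(axis=0) - true_psf) ** 2)  # offset of the mean fit
    scatter = np.mean(fits.var(axis=0))                   # statistical scatter
    return bias2, scatter

b_lo, s_lo = fit_stats(order=4, snr=100)    # simple model: biased, stable
b_hi, s_hi = fit_stats(order=16, snr=100)   # complex model: unbiased, noisy
```

In this sketch the high-order fit has a much smaller bias but a larger scatter than the low-order fit, which is exactly the trade-off quantified in the following sections.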
This paper is organized as follows: first, in Sect. 2, we discuss the concepts of complexity and sparsity, which are the key concepts of this paper; Sect. 3 presents our notation; Sect. 4 presents the definition of optimal complexity, illustrates our formalism with a PSF example and uses the sparsity as a tool for optimizing the complexity; Sect. 5 derives the minimum number of stars required to calibrate the PSF, extending the results of P1; and finally, Sect. 6 summarizes our conclusions.
2 Complexity and sparsity
In P1, we introduced the concept of complexity; we demonstrated that a few complexity factors characterize the amount of information that needs to be collected about the PSF.
This is summarized and revisited in Sect. 2.1.
In Sect. 2.2, we introduce the concept of sparsity, which measures the ability of a PSF model to represent the underlying PSF with a small number of free parameters.
This allows us to explore how an optimal PSF model can be constructed to minimize $\sigma^2_{\rm sys}$.
2.1 Complexity
In P1, we define the complexity factors of the PSF, which represent the number of degrees of freedom (DoF) that are estimated from stars (in the limit of infinite resolution, i.e. infinitely small pixels): the higher the number of DoF, the larger the complexity factors.
Each PSF shape parameter is associated with a complexity factor that is related to the rms of its estimator.
In the simple formalism where we consider unweighted quadrupole moments, the PSF is characterized by only two complexity factors, associated with the 2-component PSF ellipticity $\epsilon_{\rm PSF}$ and the squared PSF rms radius $R^2_{\rm PSF}$, respectively (as defined in P1).
For a given star, the rms errors $\sigma[\epsilon_{\rm PSF}]$ and $\sigma[R^2_{\rm PSF}]$ of the estimators are inversely proportional to the stellar signal-to-noise ratio, with the complexity factors as the constants of proportionality (Eq. (1)); we write $\sigma[\epsilon_{{\rm PSF},i}]\equiv\sigma[\epsilon_{\rm PSF}]$ for either ellipticity component i.
If the PSF can be considered constant over several stars, or for particular representations of the PSF (for example with shapelet basis sets in the small ellipticity regime, see P1),
the complexity factors are spatially constant and Eq. (1) can be extended to a set of several stars (Eq. (2)): for a combination of stars, the single-star signal-to-noise ratio becomes the effective signal-to-noise ratio of the set, defined in Eq. (3).
We also show in P1 that the polar shapelet basis set, proposed by Massey & Refregier (2005), tested on simulated data in Massey et al. (2007), and used on real data by Bergé et al. (2008), is particularly convenient for modeling the PSF: in the small ellipticity regime, the complexity factors take a simple analytic form as a function of the maximum shapelet order (Eq. (4)).
Figure 1: PSF example (top panel) adopted in this paper and best fits of it (other panels) with 4 shapelet basis sets.
2.2 Sparsity
In this paper, we introduce the concept of sparsity of the PSF model, which describes how efficiently a model can represent the underlying PSF with a limited number of DoF (i.e. with a limited complexity).
Specifically, the sparsity quantifies how the residuals between the estimated and the underlying PSF decrease as the complexity of the PSF model increases.
With a high number of DoF, i.e. a high complexity, one might expect small residuals but large scatters in the fitted parameters.
On the other hand, with a small number of DoF, i.e. a low complexity, one might expect large residuals but small scatters in the fitted parameters.
The sparsity characterizes the slope of this relation and is thus an estimate of the amount of information that can be contained in a given number of DoF.
We show how to use sparsity in optimizing the complexity and minimizing $\sigma^2_{\rm sys}$.
Consider the shape parameters $\epsilon_{{\rm PSF},i}$ and $R^2_{\rm PSF}$ of the underlying PSF, as defined previously.
The differences $\delta\epsilon_{{\rm PSF},i}$ and $\delta R^2_{\rm PSF}$ between the underlying PSF (``true'' index) and its estimation (``est'' index) can be written:

$\delta\epsilon_{{\rm PSF},i} \equiv \epsilon_{{\rm PSF},i}^{\rm est}-\epsilon_{{\rm PSF},i}^{\rm true}, \qquad \delta R^2_{\rm PSF} \equiv R^{2,{\rm est}}_{\rm PSF}-R^{2,{\rm true}}_{\rm PSF}.$   (5)
These differences are of two types: the statistical scatters $\sigma[\epsilon_i]$ and $\sigma[R^2]$ relative to the average, and the biases $b[\epsilon_1]$, $b[\epsilon_2]$ and $b[R^2]$ of the average relative to the truth.
In P1, we address the zero-bias case $b[\epsilon_i]=b[R^2]=0$; in this paper, we consider the general case where both the biases and the scatters $\sigma[\epsilon]$ and $\sigma[R^2]$ contribute.
In the following, we define a ``sparsity parameter'' in the particular case where the biases are modeled as a power-law function of the PSF model complexity, and we study its impact on the number of stars required to calibrate the PSF. We thus revisit the main result of P1 by deriving N* (the number of stars required to calibrate the PSF) according to the sparsity parameter instead of the complexity of the underlying PSF.
Moreover, this new relation is optimized to minimize $\sigma^2_{\rm sys}$.
We emphasize that, in this paper, we propose to optimize the complexity of the PSF model within a given basis set. We do not address the issue of choosing the basis set itself. There is no doubt that, to optimize the PSF modeling, it is necessary to select this basis set carefully. For instance, generic basis sets such as shapelets, wavelets, or Fourier modes, although they have enormous advantages, are not optimal. This issue will be addressed in forthcoming works.
3 PSF calibration for shear measurement
When deconvolving the observed galaxy with the estimated PSF, the errors $\delta\epsilon_{\rm PSF}$ and $\delta R^2_{\rm PSF}$ propagate into an error in the estimation of the galaxy ellipticity. We denote by $R^2_{\rm gal}$ and $\epsilon_{\rm gal}$ the squared rms radius and the two-component ellipticity of the galaxy. When all of these quantities are defined using the unweighted moments of the flux, the propagation takes the form given in P1, in which the PSF errors enter weighted by the squared size ratio $R^2_{\rm PSF}/R^2_{\rm gal}$.
The spatial average of the squared ellipticity error gives the variance of the systematics, under the following assumptions:
1. the galaxy is not correlated with the PSF;
2. the error on the PSF ellipticity ($\delta\epsilon_{\rm PSF}$) and the PSF ellipticity itself ($\epsilon_{\rm PSF}$) are not correlated. This is warranted by the fact that, in the assumed small ellipticity regime, $\delta\epsilon_{\rm PSF}$ does not have any preferred direction, implying that $\left<\delta\epsilon_{\rm PSF}\cdot\epsilon_{\rm PSF}\right>=0$;
3. we neglect correlations between the ellipticity and the inverse squared radius of the galaxy. This is reasonable for the PSF calibration in the small ellipticity regime.
We develop a more compact expression by adopting a dedicated notation, which leads to a decomposition of the variance of the systematics, $\sigma^2_{\rm sys}$, into a statistical (scatter) contribution and a bias contribution B.
In P1, we considered only the scatters $\sigma[R^2_{\rm PSF}]$ and $\sigma[\epsilon_{\rm PSF}]$, i.e. the zero-bias case $b[\epsilon_i]=b[R^2]=0$, for which $\sigma^2[R^2_{\rm PSF}]\simeq\left<\left\vert\delta R^2_{\rm PSF}\right\vert^2\right>$ and $\sigma^2[\epsilon_{{\rm PSF},i}]\simeq\left<\left\vert\delta\epsilon_{{\rm PSF},i}\right\vert^2\right>$.
Although only the scatter contribution decreases as the number of stars increases, both contributions depend on the overall complexity of the model. Equations (19) and (20) then infer Eq. (21), from which we can see that increasing the complexity of the model reduces the bias B but increases the statistical scatter, so that a trade-off between the two must be found.
4 Optimal PSF model
In Sect. 4.1, we present a PSF example that we use in the remainder of this paper to illustrate our discussion.
In Sect. 4.2, we show the optimal complexity of the PSF model (that which minimizes $\sigma^2_{\rm sys}$) and apply this to the PSF example.
We then explore this optimization in more detail in Sect. 4.3 by examining a particular case in which the bias can be described by a power-law function of the complexity.
4.1 PSF example
To illustrate our discussion, we study a realistic example of a PSF with complex features in the tails, and investigate what happens when fitting it with various shapelet basis sets as a function of the SNR of the available stars. We also use a shapelet basis set (which differs from that used in the fits) to describe the underlying PSF. This use of shapelets for both the PSF model and the underlying PSF was chosen for three reasons:
- first, it allows pixelation issues, which are beyond the scope of this paper, to be ignored. Indeed, the description of the underlying PSF is performed using the continuous shapelet functions and the fits are performed at high resolution;
- second, it considerably simplifies both the calculations and the fitting process, due to the orthogonality of the shapelet functions (the average estimate of a fitted coefficient is the true value, independently of the other coefficients);
- third, it is a simple and convenient framework for illustrating the use of sparsity as a tool in optimizing the complexity of the PSF modeling.

In Fig. 1, we also show the 16 fits performed with the 4 shapelet basis sets (corresponding to four values of the maximum shapelet order, including 6, 10, and 20) and with the stellar signal-to-noise ratio (see Eq. (3)) equal to 100, 10^3, 10^4, or infinite (the latter being the ideal case of no background).
To determine the overall complexity of each model, which depends on the rms of galaxy ellipticities through the parameter of Eqs. (13) and (21), we adopt a typical value of this parameter, for which the maximum shapelet orders 6, 10, 20, and 34 correspond to overall complexities of 4.3, 7.8, 16.4, and 28.4, respectively.
Figure 1 illustrates that:
- when the stellar signal-to-noise ratio is sufficiently high, complex basis sets are required to model the complex tails, i.e. the amount of bias B decreases as the complexity of the model increases. For instance, a fit with a sufficiently high shapelet order and noise-free stars would allow one to recover our PSF example exactly, with B=0;
- a higher complexity requires a higher number of DoF to be fitted. Consequently, for a given signal-to-noise ratio, increasing the complexity of the model also increases the scatter in the estimated shape. Therefore, it is not always appropriate to use a complex fit model; it may be more robust to use a simplified (but more biased) fit model.
4.2 Optimizing the complexity of the PSF model
The optimal PSF model is that for which $\sigma^2_{\rm sys}$ is minimized when the complexity of the model is varied.
We define the optimal value $\sigma^2_{\rm opt}$ to be this minimal value of $\sigma^2_{\rm sys}$ and, in the same spirit, the optimal complexity to be the complexity at which the minimum is reached.
For instance, Fig. 2 illustrates the search for the optimal shapelet basis when our PSF example (see the previous section) is estimated with 50 stars.

Figure 2: Total variance $\sigma^2_{\rm sys}$ as a function of the complexity of the PSF model, for our PSF example.
For a given fit model, increasing n* reduces the scatters but not the biases in the model fitting (see Eq. (22)). Therefore, as n* increases, the optimal complexity increases and $\sigma^2_{\rm opt}$ decreases.
This is illustrated in Fig. 3, which shows B, the scatter contribution, and the total $\sigma^2_{\rm sys}$ (see Eqs. (17) to (19)) when our PSF example is estimated with n*=10, 50, or 200 stars.

Figure 3: Total variance $\sigma^2_{\rm sys}$ and its contributions when our PSF example is estimated with n*=10, 50, or 200 stars.
Figure 4 shows $\sigma^2_{\rm opt}$ as a function of n*.
The diamonds represent the curve for our PSF example illustrated in all previous plots, while the bold straight line without diamonds shows the ideal case (addressed in P1) of a PSF described perfectly by the model (i.e. B=0).
In this ideal case, $\sigma^2_{\rm opt}$ varies as 1/n*, as predicted by our scaling relation presented in P1.
The dotted and dashed lines are discussed in Sect. 4.3.

Figure 4: Optimal variance of the shape measurement systematics $\sigma^2_{\rm opt}$ as a function of the number of stars.
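The qualitative behavior of these curves can be reproduced with the same toy error budget used above (our own illustrative model, not the paper's exact equations): when a bias term is present, the optimized variance decreases with n* more slowly than the ideal 1/n* scaling.

```python
import numpy as np

def sigma2_opt(n_star, alpha, B0=1e-4, snr=500):
    """Minimum over complexity psi of the toy budget
    B0 * psi**(-2*alpha) + psi / (n_star * snr**2)."""
    psi = np.arange(1.0, 5000.0)
    total = B0 * psi ** (-2.0 * alpha) + psi / (n_star * snr ** 2)
    return total.min()

n_values = (10, 50, 200)
with_bias = {n: sigma2_opt(n, alpha=0.5) for n in n_values}   # toy biased model
ideal = {n: 1.0 / (n * 500 ** 2) for n in n_values}           # B = 0: pure 1/n*
```

In the ideal case the variance drops by exactly the factor by which n* grows, while the biased toy model gains less from extra stars because part of its error budget does not average down.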
4.3 Example of optimal complexity in the case of a power-law function
In this section, we derive the optimal complexity when the bias B is a power-law function of the complexity.
We investigate $\sigma^2_{\rm opt}$ as a function of n* and of the power-law slope.
We normalize the power-law function such that its amplitude B0 corresponds to the bias obtained for the least complex model (Eq. (24)); in our example of a PSF fitted with a shapelet basis set (see Sect. 4.1 and Fig. 1), this normalization is set by the smallest shapelet order considered.
This representation by a power-law function is particularly convenient because the slope can be identified with the sparsity: a high value of the slope means that the PSF model is efficient in representing the underlying PSF with a small number of free parameters. Conversely, a low value means the PSF model requires a large number of parameters to describe the underlying PSF without large residuals.
In the following, this slope is called the ``sparsity parameter''.
Together with this power-law representation (Eq. (24)), Eqs. (22) and (23) imply Eqs. (25) to (27).
Note that Eq. (27) expresses $\sigma^2_{\rm opt}$ (the minimum variance in the systematic errors in shear measurements that can be achieved, see Eqs. (9), (10), and (23)) in terms of a set of parameters that can be divided into 2 families:
1. parameters that are properties of the data set, such as the stellar signal-to-noise ratio and the bias normalization B0;
2. parameters that are properties of the analysis method, such as n* (i.e. the number of stars used to calibrate the PSF) and the sparsity parameter of the PSF model.
Figure 5: Overall bias B versus the complexity of the PSF model.
5 Required number of stars
As discussed in the introduction, an important issue for cosmic shear surveys is to ensure that the systematics are kept smaller than the statistical errors, by demanding an upper limit to $\sigma^2_{\rm sys}$.
Part of the systematics have their origin in the PSF calibration, which is imperfect due to the limited number of stars available.
In this section, we express N*, the number of stars required to calibrate the PSF, in terms of the tolerated level of systematic errors (note the capital ``N'', as opposed to ``n*'', which is the number of stars actually involved in the PSF calibration process: we need n* >= N* to ensure that systematic effects remain below the tolerated level, i.e. N* is the lower limit of n*).
In Sect. 5.1, we summarize the conclusions of P1 which apply when the underlying PSF and the PSF model have the same functional form (i.e. B=0) and we extend these conclusions to the general case of PSF modeling performed with any model (i.e. B not necessarily equal to 0).
In Sect. 5.2, we invert Eq. (27) (which holds when B is described by a power-law function of the complexity) and express N* as a function of the sparsity parameter and of the minimum systematic level $\sigma^2_{\rm opt}$ achievable when the complexity of the PSF modeling is optimal.
5.1 Generalised scaling relation
In the optimistic case where the PSF calibration is the only significant source of systematic errors, a given value of N* (i.e. a given number of stars involved in the PSF calibration) implies a value of $\sigma^2_{\rm sys}$.
This is presented in P1 in the form of a scaling relation that links N*, $\sigma^2_{\rm sys}$, the effective signal-to-noise ratio of stars, the ratio between the smallest galaxy size and the PSF size, and the complexity of the PSF. The factor 2 at the end of this relation reflects the two components of the ellipticity.
In the general case of a non-zero bias B, the relation is modified: taking B into account in the scaling relation translates into the new factor $1/\left[1-\mathcal{C}\frac{B}{\sigma_{\rm sys}^2}\right]$.
Figure 6: The dimensionless function entering Eq. (31), discussed in Sect. 5.2.
5.2 Application to the power-law model
Equation (27) can be inverted to provide N* (the number of stars required to calibrate the PSF) as a function of $\sigma^2_{\rm opt}$ (the minimum level of systematics achievable when optimizing the complexity of the PSF modeling), the sparsity parameter, the effective SNR of stars defined in Eq. (3), and the dimensionless factor defined in Eq. (12).
This inversion (Eq. (31)) involves a dimensionless function shown in Fig. 6; with the notation and scaling of Eq. (29), Eq. (31) is equivalent to Eq. (32).
This equation allows one to estimate the number of stars required to calibrate the PSF and thus, given the stellar density, the minimum scale on which the PSF calibration is possible. On smaller scales, stars provide insufficient information for calibrating the PSF variations. This implies that these smaller scales may be contaminated by systematics due to a poor correction of the PSF and should not be used to estimate cosmological parameters, unless the variabilities on small scales are known to be extremely small. As shown by Amara & Refregier (2007b) and discussed in P1, future all-sky cosmic shear surveys will need to achieve $\sigma^2_{\rm sys}$ of order $10^{-7}$ or below.
6 Conclusions
We explore the systematics induced in cosmic shear by the PSF calibration/correction process and
study how to optimize the PSF model to minimize the systematic errors in cosmological parameter estimations.
In this framework, we revisit the concept of the complexity of the PSF, defined in our previous paper (P1), and introduce the concept of the ``sparsity'' of the PSF model.
The complexity characterizes the number of degrees of freedom in the model.
A small number of degrees of freedom corresponds to a low complexity and a simple PSF model, which can be fitted to stellar observations of low signal-to-noise ratio, but is likely to be strongly biased.
On the other hand, a large number of degrees of freedom corresponds to a high complexity and a complex PSF model, which is expected to have a low bias but requires stellar observations of high signal-to-noise ratio to avoid large statistical scatters in the fitted parameters.
In P1, we related the complexity of the PSF model to the systematic errors in cosmological parameter estimations. In this paper, we show how the complexity can be optimized depending on the stars available, using the concept of sparsity.
The sparsity characterizes how quickly the residuals between the best-fit PSF model and the underlying PSF decrease as degrees of freedom are added to the model.
In the general case, we also extend the scaling relation, proposed in P1, between the number of stars used to calibrate the PSF and the systematic errors in the cosmological parameter estimations.
As discussed in P1, this relation, with the constraint of maintaining the systematics below the statistical uncertainties when estimating cosmological parameters, yields the number of stars N* required for the PSF calibration. N* corresponds, through the stellar density, to the minimum scale on which the PSF modeling is accurate: on scales smaller than this minimum, there is insufficient information in the data to calibrate the PSF variations. This implies that these smaller scales may be contaminated by systematics related to a poor PSF correction and should not be used when estimating cosmological parameters (unless the variabilities are known to be small, due, for instance, to the quality of the hardware).
We consider a realistic PSF example and model the amount of bias B between the PSF fit and the underlying PSF by a power-law function of the fitted complexity, whose slope is the sparsity parameter.
We find that, for this PSF, current cosmic shear analyses, given the areas they cover, need the sparsity parameter to be higher than 2, which is achievable with current analysis methods.
Thus, current cosmic shear analyses do not require a rigorous optimization of the PSF model.
On the other hand, future cosmic shear surveys that aim to measure w0 and wa to an accuracy of 0.02 and 0.1, respectively, will require hard constraints on the sparsity in order to calibrate the PSF with 50 stars.
This relation between the required number of stars N* and the accuracy of the calibration depends on the underlying PSF. This explains why these values, although corresponding to realistic orders of magnitude, cannot be assumed to represent a general result.
Two parameters drive this relation: the amount of bias B0 obtained when fitting the underlying PSF with a PSF model of low complexity (N* increasing with B0), and the sparsity parameter of the PSF modeling during the analysis.
It is thus possible to optimize cosmic shear surveys at two levels:
when optimizing the observational conditions, the PSF must be as simple and stable as possible in order to allow its description by a low-complexity model (this minimizes B0);
when analysing the data, the PSF modeling must be optimized to reach as high a value of the sparsity parameter as possible.
The approach suggested in this paper is a first step toward introducing the concept of sparsity into weak lensing shape measurements. We do not address issues related to pixelation. Moreover, although we only address the PSF calibration, this approach is also applicable to other topics, such as the description of galaxy shapes.
Acknowledgements
We thank Jean-Luc Starck and Jérome Bobin for useful discussions and insight on sparsity. We also thank Sarah Bridle and Lisa Voigt for an ongoing collaboration on weak lensing shape measurements. SPH is supported by the P2I program, contract number 102759. AA is supported by the Swiss Institute of Technology through a Zwicky Prize.
References
- Amara, A., & Refregier, A. 2007a, MNRAS, 381, 1018 [NASA ADS] [CrossRef] (In the text)
- Amara, A., & Refregier, A. 2007b, [arXiv:0710.5171] (In the text)
- Benjamin, J., Heymans, C., Semboloni, E., et al. 2007, MNRAS, 381, 702 [NASA ADS] [CrossRef] (In the text)
- Bergé, J., Pacaud, F., Réfrégier, A., et al. 2008, MNRAS, 385, 695 [NASA ADS] [CrossRef] (In the text)
- Bridle, S., Shawe-Taylor, J., Amara, A., et al. 2008, [arXiv:0802.1214] (In the text)
- Fu, L., Semboloni, E., Hoekstra, H., et al. 2008, A&A, 479, 9 [NASA ADS] [CrossRef] [EDP Sciences]
- Heymans, C., Van Waerbeke, L., Bacon, D., et al. 2006, MNRAS, 368, 1323
- Massey, R., & Refregier, A. 2005, MNRAS, 363, 197 [NASA ADS] [CrossRef] (In the text)
- Massey, R., Heymans, C., Bergé, J., et al. 2007, MNRAS, 376, 13 [NASA ADS] [CrossRef]
- Paulin-Henriksson, S., Amara, A., Voigt, L., Refregier, A., & Bridle, S. L. 2008, A&A, 484, 67 [NASA ADS] [CrossRef] [EDP Sciences]
- Rhodes, J., Refregier, A., & Groth, E. J. 2000, ApJ, 536, 79 [NASA ADS] [CrossRef] (In the text)
- Rhodes, J., Refregier, A., & Groth, E. J. 2008, in preparation
Footnotes
- ... Survey
- http://www.cfht.hawaii.edu/Science/CFHLS/
- ... Survey
- http://www.eso.org/sci/observing/policies/PublicSurveys/sciencePublicSurveys.html
- ... Survey
- http://www.darkenergysurvey.org
- ... System
- http://pan-starrs.ifa.hawaii.edu
- ... Probe
- http://snap.lbl.gov
- ... Telescope
- http://www.lsst.org
- ... Explorer
- http://www.dune-mission.net and http://www.esa.int/esaCP/index.html
Copyright ESO 2009