I. Tereno 1,3 - O. Doré2 - L. van Waerbeke1 - Y. Mellier1,4
1 - Institut d'Astrophysique de Paris, 98 bis boulevard Arago, 75014 Paris, France
2 - Department of Astrophysical Sciences, Princeton University, Princeton NJ 08544, USA
3 - Departamento de Física, Universidade de Lisboa, 1749-016 Lisboa, Portugal
4 - Observatoire de Paris, LERMA, 61 avenue de l'Observatoire, 75014 Paris, France
Received 15 April 2004 / Accepted 23 August 2004
Abstract
We present a prospective analysis of combined cosmic shear and
cosmic microwave background data sets,
focusing on a Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) type
lensing survey and the current WMAP first-year and CBI data.
We investigate the parameter degeneracies and error estimates of a
seven-parameter model, for lensing alone as well as for the combined
experiments. The analysis is performed using a Markov Chain
Monte Carlo calculation, allowing for a more realistic estimate
of errors and degeneracies than a Fisher matrix approach.
After a detailed discussion of the relevant statistical
techniques, the set of
the most relevant 2- and 3-dimensional lensing contours is given.
It is shown that the
combination of cosmic shear and CMB is particularly efficient at breaking some
parameter degeneracies.
The principal component
directions are computed and it is found that the most orthogonal contours
between the two experiments
are obtained for parameter pairs involving n_s and α_s,
which are, respectively, the slope
of the primordial mass power spectrum and the running of the spectral index.
It is shown, under the assumption of perfectly controlled systematics, that
an improvement of a factor of 2 is expected on the running of the spectral index
from the combined data sets.
Forecasts for
error improvements from a wide-field space telescope lensing survey
are also given.
Key words: cosmological parameters - large-scale structure of Universe - gravitational lensing
The Canada-France-Hawaii Telescope Legacy Survey
(CFHTLS)
is a long term wide field imaging project that started in early 2003
and should be completed by 2008.
The French and Canadian astronomical communities will
spend about 500 CFHT nights to carry out imaging surveys
with the new Megaprime/Megacam instrument recently mounted at the
CFHT prime focus.
About 160 nights will focus on the "CFHTLS-Wide'' survey
that will cover 170 deg², spread over 3 uncorrelated patches, in the
u*, g', r', i', z' bands, with typical exposure
times of about one hour per filter. The "CFHTLS-Wide'' survey design
and observing strategy are similar
to the VIRMOS-Descart cosmic shear survey
but it
will have a sky coverage 20 times larger. It is widely
seen as a typical second generation cosmic shear survey.
The exploration of the weak gravitational distortion produced by the large-scale
structures of the universe over fields of view as large as the "CFHTLS-Wide''
has an enormous potential for cosmology. Past experience based on
first-generation cosmic shear surveys (see for example reviews in
Réfrégier 2003; Van Waerbeke & Mellier 2003) has demonstrated that they can
constrain the dark matter properties (Ω_m, σ_8 and the shape
of the dark matter power spectrum) from a careful investigation of
the ellipticity induced by weak gravitational shear on distant galaxies.
For example, the most recent cosmic shear results from the
VIRMOS-Descart survey (Van Waerbeke et al. 2004) lead to
conservative limits on σ_8 and Ω_m (99% C.L.), which means an accuracy of
1-3% can be expected with the "CFHTLS-Wide'' for the same set of cosmological parameters. The CFHTLS-Wide will also
explore a broader wavenumber range (10^5-10^2) than
VIRMOS-Descart and will extend to linear scales, which
will considerably ease the cosmological interpretation of weak lensing data.
Second generation surveys will therefore allow a more thorough
investigation of different cosmological models, taking into
account a broad range of cosmological parameters. For instance,
Benabed & van Waerbeke (2003) stressed the use of the CFHTLS as a probe of dark energy
evolution.
The full scientific outcome of
the cosmic shear data from the "CFHTLS-Wide'' will only be complete
with a joint analysis with other data sets, like
Type Ia Supernovae, galaxy redshift surveys, Lyman-alpha forest,
or CMB observations. Contaldi et al. (2003) have used
the Red-Sequence Cluster Survey (RCS) cosmic shear data together with
CMB data. It was shown that the σ_8-Ω_m degeneracies
for lensing and CMB are nearly orthogonal, which makes this set
of parameters particularly relevant for such a combined analysis
(Van Waerbeke et al. 2002).
The search for orthogonal parameter
degeneracies between different observations
is one of the most important aspects of parameter measurement.
Ishak et al. (2004) recently argued that joint CMB-cosmic shear
surveys provide an optimal data set to explore the amplitude of
the running spectral index and probe inflation models. They used a
Fisher-Matrix analysis on WMAP+ACBAR+CBI plus a cosmic shear
"reference survey''. Their simulated survey
covers 400 deg2 with a depth corresponding to
a galaxy number density of lensed
sources of about 60 arcmin-2, and restricted their
analysis to 3000>l>20. They found that several parameters
can be significantly improved (like
,
,
)
and in particular that both the spectral
index
and the running
spectral index
errors are reduced by a factor of 2.
Their encouraging results show that joint CMB and weak lensing data
may provide interesting insights on inflation models.
Here, we investigate the 2-dimensional structure of
the parameter degeneracies between the
lensing and CMB data sets, and look at what can be
expected from a "CFHTLS-Wide''-like
survey design.
To explore the smaller scales probed by the "CFHTLS-Wide'', which
will provide cosmic shear information down to 20 arcsec,
it is preferable to avoid prior assumptions
regarding the Gaussian nature of the underlying distribution, and to
discard a Fisher matrix analysis. We use in this work the so-called
Markov Chain Monte Carlo (MCMC) method.
The MCMC computing time scales linearly with the number
of parameters, which eases the exploration of a large set of parameters
and a broad range of values for each.
Contaldi et al. (2003) already used this approach with the RCS survey
to map the σ_8-Ω_m parameter space, but marginalised
over a small set of cosmological parameters.
The goal of the present work is to map the parameter space that
describes cosmological models in order to extract series
of parameter combinations that would minimise intersections of
CMB and cosmic shear degeneracy tracks.
Compared to the Fisher-matrix approach, which produces
ellipses only, MCMC provides more details of the parameter
space and eventually a more realistic estimate of
the error improvements of the joint analyses.
The paper is organised as follows: Sect. 2 introduces gravitational lensing and defines the cosmic shear fiducial data and the parameter space investigated. Section 3 gives the details of our MCMC calculations, their limitations and convergence criteria. Section 4 shows the MCMC results for cosmic shear alone, assuming a lensing survey similar to the CFHTLS. In Sect. 5 we present the results of the parameter degeneracy analysis for the combined cosmic shear and cosmic microwave background observations. The assumptions made and the results obtained are discussed in Sect. 6, and we conclude in Sect. 7.
Propagation of galaxy light beams across
large-scale mass inhomogeneities produces distorted and (de)magnified
galaxy images (for reviews see Réfrégier 2003; Van Waerbeke & Mellier 2003; Bartelmann & Schneider 2001; Mellier 1999).
The gravitational lensing
magnification and shear are described by the
amplification matrix, which involves second-order derivatives of the
projected gravitational
potential φ: the shear,
γ = (γ_1, γ_2),
and the convergence, κ:

\[ A = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix} \qquad (1) \]
We parameterize cosmological models with a set of 13 parameters,
comprising the density parameters, the Hubble parameter h,
the normalization of the power spectrum, the scalar spectral index n_s
and its running α_s, the tensor-to-scalar ratio r,
the optical depth to reionization τ,
the dark energy equation of state w, and the source redshift.
Each model defines a point in the high-dimensional space where a value of
the likelihood of the model with respect to the data may be calculated.
We use the CAMB software (Lewis et al. 2000) to compute the dark
matter power spectrum and transfer function. The parameters are defined
in Eqs. (4) to (6).
Table 1: Cosmic shear: fiducial cosmological model.
From the power spectrum of the gravitational convergence, P_κ(l),
the top-hat shear variance inside a circle of radius θ_c
can be computed:

\[ \langle|\gamma|^2\rangle(\theta_c) = \frac{2}{\pi\theta_c^2}\int_0^\infty \frac{\mathrm{d}l}{l}\, P_\kappa(l)\, J_1^2(l\theta_c) \qquad (11) \]
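The top-hat variance integral can be evaluated numerically. Below is a minimal sketch assuming a toy power-law convergence spectrum; in the actual analysis P_κ(l) would come from CAMB and the survey model, so the spectrum here is purely illustrative:

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def shear_variance_tophat(p_kappa, theta_rad, l_min=1.0, l_max=1e5, n=4096):
    """Top-hat shear variance in a circle of radius theta (radians):
    <|gamma|^2> = (2 / (pi theta^2)) Int dl/l P_kappa(l) J_1(l theta)^2."""
    l = np.logspace(np.log10(l_min), np.log10(l_max), n)
    integrand = p_kappa(l) * j1(l * theta_rad) ** 2 / l
    # Trapezoidal integration on the logarithmic grid:
    val = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(l))
    return 2.0 * val / (np.pi * theta_rad ** 2)

# Toy convergence spectrum (illustrative only, not a fitted cosmology):
toy_p_kappa = lambda l: 1e-9 * (l / 1e3) ** -1.2

var10 = shear_variance_tophat(toy_p_kappa, np.radians(10.0 / 60.0))  # 10 arcmin
var30 = shear_variance_tophat(toy_p_kappa, np.radians(30.0 / 60.0))  # 30 arcmin
```

For a red spectrum like this toy one, the variance decreases with the smoothing scale, as in the bottom left panel of Fig. 1.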
Figure 1: Shear variance as a function of scale. The bottom left plot shows the fiducial model.
Table 2: Cosmic shear: survey specifications.
Table 2 summarizes the survey properties.
These are based on real observations;
the values for the intrinsic ellipticity dispersion
and the effective galaxy number density (once galaxy
selection is done) are those found in
cosmic shear surveys (Van Waerbeke et al. 2002), and the field size is the total
size of the CFHT Legacy Survey. To choose the upper limit on the
angular scale, we notice, from Eq. (10),
that the computation of the covariance of the shear dispersion at
a given scale involves an integration up to larger scales, and that
the integrations involved in the computation of C'_{++} need an extra factor.
We define the maximum angular scale to be such that it
corresponds to the largest wavelength
that can fit in a field of the survey area; given the approximate
size of the CFHTLS-Wide fields, this is of the order of the field side.
The lower limit on the scale probes the deep non-linear regime.
The solid line in the bottom left plot of Fig. 1 shows
the shear variance of the fiducial model (Table 1)
as a function of angular scale, along with
error bars computed from Eq. (10) at 20 angular scale points
spanning the probed range.
The error bars are smaller
at intermediate scales, slightly larger at the smallest scales, where they are
determined by statistical noise, and noticeably bigger at the
largest scales, which are cosmic variance dominated.
They are slightly optimistic at small angular scales because the
Gaussianity assumption, made in the derivation of the covariance, lacks
the non-linear enhancement of the signal.
The dashed line
shows the shear variance without the non-linear corrections.
The two lines become well separated at small angular scales.
The other panels in Fig. 1 illustrate the cosmic shear
sensitivity to different cosmological parameters.
From this figure, it is clear that cosmic shear is most sensitive to
Ω_m, σ_8 and the source redshift (with the cosmic shear signal
increasing with each of these parameters), and to h. It also shows that the
dependence on h is a stronger function of scale than for the other parameters.
This is in agreement with
theoretical expectations derived from linear
perturbation theory (Bernardeau et al. 1997).
The probability distribution function (PDF) of an m-dimensional parameter vector p
given the n-dimensional data vector d
(the posterior PDF P(p|d)) can
be calculated using Bayes' theorem
from the prior PDF P(p)
and the conditional PDF of the data
given the parameter vector (the likelihood L(d|p)):

\[ P(\mathbf{p}|\mathbf{d}) \propto \mathcal{L}(\mathbf{d}|\mathbf{p})\, P(\mathbf{p}) \qquad (13) \]
In practice, in order to obtain a more precise result, the problem is usually solved by computing the posterior at optimized sample points that pave the parameter space. The traditional approach uses a regular grid. This is a computer-intensive procedure with computation time rising exponentially with the space dimension, which limits the number of parameters that can be explored. Markov chain Monte Carlo sampling (Gilks et al. 1996) overcomes this limitation.
The use of the MCMC technique in cosmological parameter estimation was first implemented in Christensen et al. (2001), following the proposal of Christensen & Meyer (2000). Current tools like CosmoMC (Lewis & Bridle 2002) no longer evaluate the likelihood at fixed points but at selected positions of a Markov chain. Each chain point, p_{i+1}, is derived from the previous chain point, p_i, in such a way that the transition probability from p_i to p_{i+1} times the posterior PDF of p_i equals the product of the transition probability from p_{i+1} to p_i by the posterior PDF of p_{i+1}. Thus, after a relaxation time, the chain reaches equilibrium and constitutes a sample of the posterior. A clear advantage of this is that statistical properties of the distribution, like the mean of a parameter or a marginalised confidence interval, can be directly derived from discrete sample points, without needing to use the computed values of the likelihood. Different priors may also be introduced by adapting the weighting scheme defined by the sample, without needing to build a new chain.
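The sampling scheme above can be sketched minimally. The toy correlated Gaussian log-posterior below is an assumption standing in for the real lensing/CMB likelihood; for a symmetric Gaussian proposal the proposal-density ratio cancels and the Metropolis rule reduces to comparing posterior values:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(p):
    # Toy unnormalised log-posterior: a correlated 2D Gaussian,
    # standing in for the likelihood times the prior.
    inv_cov = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * p @ inv_cov @ p

def metropolis(log_post, p0, step, n_steps):
    chain = [np.asarray(p0, float)]
    for _ in range(n_steps):
        proposal = chain[-1] + rng.normal(0.0, step, size=len(p0))
        # Accept with probability min(1, post(new)/post(old)):
        if np.log(rng.random()) < log_post(proposal) - log_post(chain[-1]):
            chain.append(proposal)
        else:
            chain.append(chain[-1])  # rejected: current point is repeated
    return np.array(chain)

chain = metropolis(log_post, [2.0, -2.0], step=0.5, n_steps=20000)
```

After a relaxation time the chain samples the target: the sample mean approaches zero and the sample correlation approaches the 0.8 of the toy posterior.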
The computing time is determined by the number of points needed to converge to the equilibrium distribution. If the chain is built in an efficient way, CPU time scales linearly with the dimension of the parameter space. Computing time may be reduced by finding an analytical expression for the posterior. For example, the Markov chain data is used to fit the log-likelihood with a polynomial in Sandvik et al. (2004).
The MCMC code we developed is based on the Metropolis-Hastings algorithm (Metropolis et al. 1953; Hastings 1970), like CosmoMC (Lewis & Bridle 2002), Cog (Slosar & Hobson 2003) or the AnalyzeThis (Doran & Müller 2003) public software.
We start several chains at different initial positions chosen randomly inside the limited part of the 7-dimensional parameter space we aim to explore (Table 3).
Table 3: Parameters and exploration range investigated by the MCMC chain: the upper part of the table shows the 7 parameters used in the initial proposal density, along with their exploration range. The bottom part shows extra imposed limits to the MCMC exploration. Other independent cosmological parameters are kept constant at their fiducial values.
The next point is proposed using a proposal PDF,
q(p_{i+1}|p_i), and the
unnormalised posteriors of
both points are compared to decide whether the new point is acceptable.
The acceptance rule we use for the next step point p_{i+1} is the
Metropolis-Hastings probability,

\[ \alpha(p_{i},p_{i+1}) = \min\!\left(1,\; \frac{P(p_{i+1}|\mathbf{d})\, q(p_{i}|p_{i+1})}{P(p_{i}|\mathbf{d})\, q(p_{i+1}|p_{i})}\right). \]

The result is independent of the proposal density, q.
At the beginning of the chain, we use as q
a 2-dimensional Gaussian distribution centered on the current chain element.
Hence, only 2 of the 7 parameters (randomly chosen) change at each step.
The covariance matrix of q is chosen to be
of the order of the expected squared
error bars.
The expected error
value is used as the step definition criterion and guarantees that the
step amplitude has an adequate size. Were it too small,
the chain would move too slowly and could never leave the vicinity of the
best fit. This situation is known as poor mixing and leads
to underestimated confidence limits.
In contrast, if the proposed steps are too large, the acceptance rate
will be too small and once again the chain will move slowly.
In order to have an adequate initial proposal density, we derived approximate
errors from a Fisher matrix computation, applying Eq. (14)
to Eq. (8) (Tegmark et al. 1997).
In order to better explore the directions of degeneracy and consequently speed up the convergence, a non-diagonal proposal covariance matrix is needed. Hence, after 1000 steps, the covariance matrix of the chain in progress is computed. From it, a new set of 7 parameters, aligned with the eigenvectors of the evaluated correlation matrix, is defined. From that step on, the new sample points are built from a combination of 2 eigenvector directions, randomly chosen at each step. Though it defines the next direction, the step size does not necessarily need to match the corresponding eigenvalues. In fact, after 1000 steps the covariance matrix is smaller than it will be at its converged value, so we must scale it. We update the proposal covariance matrix periodically. However, since this process computes a new sample covariance matrix, it cannot be done too frequently; otherwise the progression of the chain depends too much on the previous elements and it would no longer be a Markov chain. After a few periods we freeze the proposal density and set the multiplicative correction factor to 1. The optimal scaling value depends on the number of dimensions probed (Gelman 1996).
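The eigenvector-aligned proposal can be sketched as follows. The toy chain history below is an assumption for illustration; the 2.4/√d scaling for Gaussian targets is the Gelman (1996) result referenced above:

```python
import numpy as np

rng = np.random.default_rng(1)

def eigen_proposal(current, chain_history, scale=None):
    """Propose a step along 2 randomly chosen eigenvector directions
    of the sample covariance of the chain so far."""
    d = len(current)
    cov = np.cov(np.asarray(chain_history).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    if scale is None:
        scale = 2.4 / np.sqrt(d)  # Gelman's optimal scaling for a Gaussian target
    dirs = rng.choice(d, size=2, replace=False)
    step = np.zeros(d)
    for i in dirs:
        # Step size along each chosen direction follows its eigenvalue:
        step += rng.normal(0.0, scale * np.sqrt(max(eigvals[i], 0.0))) * eigvecs[:, i]
    return current + step
```

Proposals built this way are elongated along the degeneracy direction of the chain so far, which raises the acceptance rate in strongly correlated parameter spaces.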
Let us consider a Markov chain of a given parameter composed of 2n iterations. Due to the random selection of the starting points, we expect m different chains to differ significantly at the beginning and to converge towards the same distribution as the number of iteration steps increases. The burn-in interval ends when the typical separation of the chain points at a given iteration is similar to the amplitude of the chains' internal fluctuations. At each iteration i, two quantities can be computed for each parameter:
\[ W_i = \frac{1}{m}\sum_{c=1}^{m} s_c^2(i) \qquad (18) \]

\[ \frac{B_i}{i} = \frac{1}{m-1}\sum_{c=1}^{m}\left(\bar{p}_c(i) - \bar{p}(i)\right)^2 \qquad (19) \]

\[ R = \sqrt{\frac{(1-1/i)\,W_i + B_i/i}{W_i}} \qquad (20) \]

where s_c²(i) and \bar{p}_c(i) are the variance and mean of chain c up to iteration i, and \bar{p}(i) is the mean over the m chains: W is the within-chain variance, B the between-chain variance, and R the potential scale reduction factor.
When, after 2n iterations, R is close to one for all the
quantities of interest, i.e.,
for all the parameters and derived parameters we
want to analyse, we assume the chain has converged.
The first n iterations are the burn-in period: they are
discarded, and the actual
marginalised posterior density of a parameter is drawn from its
frequency of appearance in each bin during iterations
n+1 to 2n.
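The R diagnostic can be sketched directly from its definition; the two synthetic sets of chains below (one converged, one deliberately offset) are assumptions for illustration:

```python
import numpy as np

def gelman_rubin(chains):
    """chains: array (m, n) of m parallel chains of one parameter,
    n iterations each. Returns the potential scale reduction factor R;
    R close to 1 indicates convergence."""
    chains = np.asarray(chains, float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    B = n * chain_means.var(ddof=1)              # between-chain variance
    var_hat = (1.0 - 1.0 / n) * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(2)
converged = rng.normal(0.0, 1.0, size=(4, 5000))      # 4 chains, same target
offset = converged + np.arange(4)[:, None] * 5.0      # 4 well-separated chains
```

For chains sampling the same distribution R ≈ 1, while chains stuck in separate regions give R well above 1.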
If after 2n iterations
there was not enough time to explore the tails of the distribution,
the errors of the target distribution are underestimated.
Therefore, it is useful to let the chains run for a longer period in
order to
get a better mixing. In the process, the value of the
estimate R may rise before getting smaller again. We show an example
of this situation in Fig. 2, where we
follow a chain's evolution.
Figure 2: Monitoring a chain.
Once a chain has stopped, we must remove the residual
correlation between consecutive elements.
For this reason it is recommended to thin the chain, i.e.,
to keep only 1 out of k consecutive elements.
The most widespread method in the literature to determine the
thinning factor, k, is the Raftery and Lewis method (RL) (Raftery & Lewis 1996).
This method starts by constructing several chains from the converged MCMC chain,
by thinning the latter with several different values.
A weight may be assigned to each one of
the thinned chains, according to its compatibility with an independence
chain (a chain with no correlation between its consecutive elements).
RL computes the weight of a chain from
the ratio between its evidence and the evidence of an independence
chain. The evidences are computed in the Bayesian Information
Criterion (BIC) form of Schwarz (1978), which is a Gaussian approximation
that may be derived from the Bayes formula (Eq. (13)); it involves
the G² statistic and the number n of elements in the chain. The G² statistic is a
likelihood-ratio statistic
that measures the fit of a chain to an independence chain.
To obtain G², one counts the number of transitions
between bins of the chain, in order to get the ratio between the probability
of the chain having a certain value at a certain step i given the
chain value at a previous step i-j, and the probability independently
of the value at i-j.
In practice, when counting the transitions, only 2 bins are assumed,
i.e., a chain element becomes a 0 or a 1 according to whether its parameter
values are
less or greater than a certain cut-off.
The greatest weight is attributed to the longest chain
verifying BIC < 0. Its thinning value is the obtained k factor, and
that chain is the best fit to an independence chain that can be obtained by
thinning the original chain.
Figure 3: The standard deviation as a function of the thinning factor.
There are other methods to estimate a thinning factor. In particular, Tegmark et al. (2004) obtain it by defining a more intuitive chain correlation length. Using RL, we obtained very large thinning factors for some of the MCMC chains (of order 100). However, we checked the dependence of the results on the thinning factor by computing the parameters' confidence levels using several chains, thinned from the original chain with different values of k, and found no appreciable difference in the results (see Fig. 3 for an example). Hence, in order not to lose so many chain elements, we did not use the Raftery and Lewis method results but a simpler criterion: for each chain we choose the thinning factor as the average multiplicity of the chain elements.
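The average-multiplicity criterion can be sketched as follows; the short toy chain is an assumption for illustration (in a Metropolis chain, rejected proposals repeat the current element, so the multiplicity of a distinct element is its number of consecutive repetitions):

```python
import numpy as np

def average_multiplicity(chain):
    """Average number of consecutive repetitions of each distinct
    element of a 1-D chain."""
    chain = np.asarray(chain)
    changes = chain[1:] != chain[:-1]          # True where a new run starts
    n_distinct = 1 + np.count_nonzero(changes)
    return len(chain) / n_distinct

def thin(chain, k):
    """Keep 1 out of (rounded) k consecutive elements."""
    return np.asarray(chain)[:: max(int(round(k)), 1)]

chain = np.array([0.0, 0.0, 0.0, 1.2, 1.2, 0.7, 0.7, 0.7, 0.7, 2.0])
k = average_multiplicity(chain)   # 10 elements in 4 runs -> 2.5
```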
The results in the next section are computed from 4 chains with 10^5
elements each, after discarding the burn-in and applying a thinning
factor of 5. The chains were merged, leading to a final sample of
about 50 000 elements.
Table 4:
Numerical results for the cosmic shear sample, including the
precision on
individual parameters and a principal components analysis.
I: confidence levels
in absolute value (first line) and in percentage of the corresponding
parameter fiducial value (second line).
II: the 7 eigenvectors of the correlation matrix, ordered by decreasing accuracy.
The column labeled σ_X
lists the dispersion of each X parameter defined
in Eq. (21), which is equivalent to the square root of the
corresponding eigenvalue. Each line i shows the coefficients a_ij of
Eq. (21), i.e., the projections of the corresponding X_i on each of the 7 parameters p_j labeled at the very top of the table.
Naturally, the derived parameter
is not used for the computation of the
eigendirections.
III: the 7 eigenvectors computed for mean-subtracted data normalized by the means.
The column labeled σ_Y
lists the dispersion of each Y parameter defined
in Eq. (22); the next 7 columns show the components b_ij of the Y_i.
The last column shows the relative contribution of the main parameter
involved in each principal component.
IV: each line shows the fractional error of each of the 7 parameters
computed
using a limited number of principal components. The first line
refers to using only the projections of Y_7; in the second, both
Y_7 and Y_6 are used. Using all the PCs, we recover, in the last line,
the full error values for all parameters.
We will now extract information about the cosmological parameters from the obtained sample of the posterior PDF.
We start by computing
one-dimensional confidence levels for
the parameters. Table 4(I) shows the standard deviations
obtained for a set of 8 parameters:
the 7 original ones and a derived one.
These values are computed
using all the sample points (hence being marginalised values) and
are shown in absolute value in the first line
of Table 4(I) and as a percentage of the parameters' fiducial
values on the second line.
Since MCMC probes a non-Gaussian posterior, asymmetric error intervals may
also be computed. We found that the positive and negative
confidence levels
do not differ much from the standard deviations, and we do not show them here.
Compared to the early VIRMOS-Descart results, the CFHTLS configuration
does not seem to increase the precision.
This is however misleading, since Van Waerbeke et al. (2002) carried out their
maximum likelihood analysis using only 4 parameters;
the actual improvement is therefore much larger.
One-dimensional confidence levels do not show the detailed statistical structure of the cosmological parameter space. In the following we describe the interest in using a principal components analysis in the cosmological parameter space, a technique pioneered in Efstathiou & Bond (1999).
The principal components of the sample, X_i, are derived from the
eigenvectors of the sample correlation matrix. The correlation matrix is the
covariance matrix of the sample of parameters in standardized form, which means
each parameter value is rescaled by subtracting the mean and dividing by the
dispersion. The principal components (PCs) can
be expressed as a linear combination of the 7 rescaled parameters as follows:

\[ X_i = \sum_{j=1}^{7} a_{ij}\, \frac{p_j - \langle p_j\rangle}{\sigma_{p_j}} \qquad (21) \]
Figure 4: 1 and 2σ contours for the cosmic shear sample.
Figure 5: 2-dimensional scatter plots colored by a third parameter, putting in evidence the multi-dimensionality of the degeneracies. Panels are numbered from top left to bottom right. Panels 1-3 illustrate the best determined principal component.
The most accurate PCs are the best determined quantities of the
CFHTLS-Wide cosmic shear experiment. In order to see to which combinations
of cosmological parameters they correspond,
one can look at the eigenvector components, i.e., a high coefficient a_ij means p_j strongly contributes to X_i.
However, since we are working with standardized parameters,
a direct
reading of the coefficients may be misleading. In fact, if we take any subset
of 2 parameters and compute its eigenvectors, both will have equal
components, which obviously does not mean each principal component
has equal contributions
from each parameter. Hence, for the
purpose of obtaining a set of
meaningful coefficients, it is adequate to rescale the parameters
differently. We rescale them
by subtracting the mean and dividing by the mean. This is referred to as the
fractional data in Chu et al. (2003).
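The two rescalings can be sketched in a single routine; the toy two-parameter degenerate sample below (a stand-in for a lensing-like degeneracy) is an assumption for illustration:

```python
import numpy as np

def principal_components(sample, rescale="std"):
    """sample: (N, d) MCMC sample. rescale='std' -> standardized parameters
    (eigenvectors of the correlation matrix); rescale='mean' -> fractional
    data, parameters rescaled by their means. Returns eigenvalues in
    decreasing order and the matching eigenvectors as columns."""
    sample = np.asarray(sample, float)
    mean = sample.mean(axis=0)
    denom = sample.std(axis=0, ddof=1) if rescale == "std" else mean
    z = (sample - mean) / denom
    eigvals, eigvecs = np.linalg.eigh(np.cov(z.T))
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(3)
# Toy degenerate sample: p2 anti-tracks p1, so p1 + p2 is well determined
# while the orthogonal direction is not.
p1 = rng.normal(1.0, 0.1, 10000)
p2 = 2.0 - p1 + rng.normal(0.0, 0.01, 10000)
vals, vecs = principal_components(np.column_stack([p1, p2]))
```

The last (smallest-eigenvalue) eigenvector is the best determined combination; here its two components have the same sign, recovering the p1 + p2 = const degeneracy built into the toy sample.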
The covariance matrix of the
rescaled sample relates to the original covariance matrix as
C^Y_{ij} = C_{ij} / (\bar{p}_i \bar{p}_j).
We compute a set of principal components, Y, from this fractional covariance matrix,
expressed as

\[ Y_i = \sum_{j=1}^{7} b_{ij}\, \frac{p_j - \bar{p}_j}{\bar{p}_j} \qquad (22) \]

The components of an eigenvector explicitly show the contribution of each parameter to a principal component.
The 2-dimensional projections of Fig. 5 use
colors to produce 3-dimensional plots that better describe the sensitivity
of cosmic shear to cosmological parameters. The comparison
of Tables 4(II) and 4(III)
shows, for example, that two of the listed degeneracies are equivalent.
A color scatter plot, like
the top panels of Fig. 5, illustrates this in a simple way:
we plot the sample points for the 3 possible pairs of
parameters, colored by the one left out. The continuous gradient along the
third parameter is obvious. In particular, a degeneracy that is hidden in
a 2-dimensional plane becomes evident once
the points are colored according to h. Likewise,
the cosmic shear
degeneracy pattern
is shown on the fourth panel,
with a continuous gradient along the third parameter.
The fifth panel shows how this
correlation is related to the curvature.
The color plots also help in understanding degeneracies derived from
the analysis of Table 4. The bottom panels of
Fig. 5 illustrate, for
the Y_2 term, the cases where a third parameter does not contribute to
the degeneracy (producing a mixture of colors) and where it does.
The analysis of Table 4 alone is indeed confusing:
while from 4(III) it is
not evident which parameter contributes more to the second principal component,
Table 4(II) shows what really happens.
The need for a careful interpretation of this table is of primary
importance when the fractional
covariance matrix is used for parameters with fiducial values close to zero.
The simple extraction
of eigenvector components to find degeneracies only provides qualitative
insight, since there is no unique set of principal components.
For these ambiguous cases, color plots are very useful: the bottom
right panels of Fig. 5 reveal the sensitivity of Y_2
much better than the tables do.
From the MCMC and the principal components analysis, it is
possible to describe some cosmic shear degeneracies with empirical laws,
using the best determined components Y_1 and Y_2.
Since the 2-dimensional contours of Y_1 and Y_2 are not
ellipses, we made a new eigenvector calculation using
the logarithm of the parameters. The laws are established
for two parameters only, marginalising over the others.
In this way we found power-law shapes in the corresponding
parameter planes (Eqs. (23) and (24)).
A similar analysis applies to the other principal components (Eq. (26)).
Table 5:
One-dimensional results from the joint sample.
The errors are standard deviations.
Above the horizontal line are the 7
explicitly changed MCMC parameters, while results for some
other popular parameters are shown under the line.
The column
labeled g1 shows the gain in the parameters' precision relative to
the values obtained with the CMB chains.
In the last column, the gain g2 is computed relative to available
CMB results (taken from Table 8 of Spergel et al. 2003, for the case WMAPext).
We produced a different set of chains, computing the joint likelihood of the models with respect to the same cosmic shear fiducial data and CMB data.
We used the WMAP first-year data:
the combined TT power spectrum (Hinshaw et al. 2003)
and the TE power spectrum (Kogut et al. 2003). The models' likelihoods with respect
to WMAP data were computed
using the WMAP likelihood code
(Verde et al. 2003). In order to have
information from smaller scales, we included CBI data:
the mosaic odd binning
(Pearson et al. 2003).
The number of independent parameters explored by the
Markov chains was kept at 7.
The normalization is now parameterized by the amplitude of the
primordial fluctuations rather than σ_8, with a
fiducial value corresponding, in our fiducial model, to the σ_8 of Table 1.
We restrict ourselves to flat models, hence the dark energy density
is no longer an
independent parameter, and we let τ, the
optical depth to reionization, vary in a restricted
region around its fiducial value.
We present results from a combination of 8 converged chains,
from which we rejected the initial burn-in elements.
The thinning factor is 8, leaving us with a final merged sample of
about 40 000 elements.
Figure 6: Marginalised distributions for the cosmological parameters from the joint likelihood chains (solid line) and the CMB chains (dashed line).
Table 5 shows one-dimensional marginalised results from the joint
sample. In order to explicitly see what can be gained by joining
cosmic shear data to CMB data, we also produced
CMB-only chains. One-dimensional
distributions from both the CMB and joint samples are shown in
Fig. 6.
The ratio between
the parameters' standard deviations obtained with the CMB sample and the
joint sample
tells us
for which parameters the combined analysis is most efficient.
These values are shown in column g1 of Table 5.
As a consistency check we show in column g2 the factor gained
by the joint sample
when compared with the most appropriate case of the published WMAP results.
The largest gain is on the cluster abundance scaling parameter.
As we saw, this parameter is roughly the first principal component of the
cosmic shear and its error is well determined by cosmic shear alone.
So what makes more sense here is to compute the gain with respect to
the cosmic shear sample and not to the CMB sample; in this case we find
the joint sample brings no gain (g1 = 1.1).
Keeping in mind that the
combined result of 2 independent experiments with
errors of the same order already has a gain of √2,
we consider the combined analysis to be efficient for a
certain parameter if g1 > √2.
Hence, the efficiency is higher for
the dark matter density, the Hubble parameter,
and the spectral indexes. To illustrate this result, we plot
in Fig. 7 the pairs of parameters where the orthogonality
between the CMB and CFHTLS contours is most striking
among all possible pairs.
These are contours of equal likelihood, containing 68% and 95%
of the sample.
All 4 cases involve only the efficient parameters.
Furthermore, they correspond to the well-constrained cosmic shear cases
related to Y_1 and Y_2 that we found in the previous section. Thus, we find that
projections of the best constrained cosmic shear principal components
are orthogonal to the corresponding CMB contours, which shows a complementarity
between cosmic shear and CMB.
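The effect of combining two constraints with nearly orthogonal degeneracy directions can be illustrated with a Gaussian toy model; the covariances below are assumptions for illustration, not the actual WMAP/CFHTLS contours. For independent Gaussian constraints, the inverse covariances (Fisher matrices) add:

```python
import numpy as np

def gaussian_combine(cov_a, cov_b):
    """Combine two independent Gaussian constraints on the same
    parameters: inverse covariances add."""
    inv = np.linalg.inv(cov_a) + np.linalg.inv(cov_b)
    return np.linalg.inv(inv)

# Toy 2-parameter constraints, degenerate along orthogonal directions
# (stand-ins for lensing and CMB contours of, e.g., n_s and alpha_s):
cov_lensing = np.array([[1.00, 0.95], [0.95, 1.00]])   # degenerate at +45 deg
cov_cmb = np.array([[1.00, -0.95], [-0.95, 1.00]])     # degenerate at -45 deg

cov_joint = gaussian_combine(cov_lensing, cov_cmb)
gain = np.sqrt(np.diag(cov_lensing) / np.diag(cov_joint))
```

Because the degeneracy directions are orthogonal, the joint errors shrink far more than the √2 expected for two experiments with parallel contours, and the residual correlation between the parameters cancels.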
To understand the origin of some of this complementarity, let us consider
the $n_{\rm s}$/$\alpha_{\rm s}$ case. The gain on both parameters is around 2,
even though, as we saw, the cosmic shear by itself is not very sensitive to
the running of the spectral index. In Fig. 8 we plot the
primordial power spectrum of Eq. (3) as a function of the linear
wavenumber. The spectral indexes parameterize the shape of the spectrum
and play no further role in its evolution; the opposite
behaviour of the CMB and cosmic shear responses to a change in the indexes
may thus be understood from these plots. The solid line in all panels is the
fiducial model. It bends away from a power
law ($\alpha_{\rm s}=0$, the dashed line in the upper left panel) from the
pivot point. The dotted lines are deviations from the fiducial model;
they correspond to the index values given in the panel titles.
In the top panels one parameter is changed at a time. On the left,
a change in $n_{\rm s}$ produces changes of opposite signs at the two ends of
the spectrum. On the right, a change in $\alpha_{\rm s}$ raises both ends of
the spectrum. The bottom panels show how it is possible to mimic the
fiducial spectrum at large (small) scales by changing both indexes in the
same (opposite) direction.
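This behaviour can be checked directly on a power spectrum with a running index. In the sketch below, the functional form, the pivot scale $k_0=0.05$ and the fiducial $n_{\rm s}=1$ are illustrative assumptions, not necessarily the exact values of the paper's Eq. (3):

```python
import math

def primordial_P(k, ns=1.0, alphas=0.0, k0=0.05, A=1.0):
    # Primordial spectrum with a running spectral index:
    # P(k) = A (k/k0)^(ns + (alphas/2) ln(k/k0)), pivot at k0.
    x = math.log(k / k0)
    return A * (k / k0) ** (ns + 0.5 * alphas * x)

# The two "ends" of the plotted spectrum (in h/Mpc, illustrative)
k_lo, k_hi = 1e-4, 10.0
base_lo, base_hi = primordial_P(k_lo), primordial_P(k_hi)

# Raising ns tilts the spectrum about the pivot: the two ends move
# in opposite directions. Raising alphas curves it: both ends move up.
```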
In the first panel, the solid horizontal lines show the scale ranges probed
by the CMB (the line on the left) and the cosmic shear (on the right)
data used in this work.
These intervals were found using the calculations of Tegmark & Zaldarriaga (2002),
in particular their fitting formulae.
Hence, the 2 bottom plots lead us to expect an upper-left to lower-right
degeneracy direction for the cosmic shear and an orthogonal one
for the CMB.
The shape of the $h$/$n_{\rm s}$
lensing degeneracy has a similar origin. The slope of
the power spectrum at the scales probed by the cosmic shear is negative.
An increase of h raises the power at small scales, which must be compensated by
a decrease in $n_{\rm s}$.
To view explicitly the scale dependence of the cosmic shear
spectral-index degeneracy, we produced a new set of 4 cosmic shear MCMC chains.
This time, we only allowed the scalar spectral indexes to vary
and kept the other 5 of the 7 original parameters (see Table 3)
at their fiducial values.
The chains were built following the
procedure detailed in Sect. 3. Due to the small number of parameters probed,
convergence was very rapid and a burn-in of 300 elements was enough to reach
equilibrium. The model shear dispersion and likelihood were computed for 4 cases, distinguished by the ranges of angular scales used (in arcminutes).
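The chain-building procedure (propose, accept with the Metropolis rule, discard the burn-in) can be sketched in one dimension. This is only an illustration of the kind of algorithm described in Sect. 3, not the paper's actual multi-parameter sampler:

```python
import math
import random

def metropolis(logpost, x0, step, n_samples, burn_in=300, seed=1):
    # Minimal 1-D Metropolis sampler: Gaussian proposals, accept with
    # probability min(1, posterior ratio), discard the burn-in phase.
    random.seed(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for i in range(n_samples + burn_in):
        prop = x + random.gauss(0.0, step)
        lp_prop = logpost(prop)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        if i >= burn_in:          # keep only post-burn-in elements
            chain.append(x)
    return chain

# Toy target: unit Gaussian posterior, started far from equilibrium.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, step=1.0,
                   n_samples=20000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```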
Figure 7: Marginalised 2-dimensional contours (68% and 95% C.L.) for the parameter pairs where the orthogonality between the CMB and CFHTLS constraints is most striking.
Figure 8: Primordial power spectrum parameterized by the spectral index and its running, as a function of the linear wavenumber.
There are parameters for which no gain was found; their measurement
is dominated by the CMB. For the optical depth to reionization, $\tau$,
even though the
cosmic shear is not sensitive to it, the introduction of cosmic shear data
strengthens the correlation with the amplitude, lowering the errors on
both parameters (Table 5). Thus, even though we do not
predict a gain on the measurement of $\tau$ from CFHTLS+CMB data,
future cosmic shear surveys, through a more precise measurement of
the amplitude $\sigma_8$,
will be helpful in its determination.
Figure 10 shows the correlation between
$\sigma_8$ and $e^{-2\tau}$, the factor by which the CMB at small scales
is damped after reionization. The information provided by the cosmic shear
is clear.
Figure 10: Marginalised 2-dimensional contours showing the correlation between $\sigma_8$ and $e^{-2\tau}$.
We studied the determination of cosmological parameters by a CFHTLS-Wide type of experiment. For this we made some assumptions that may not exactly match the real situation.
Firstly, real survey properties are generally degraded
with respect to initial goals.
In particular, we made the optimistic assumption that the CFHTLS will cover
its nominal area, while the effective size may drop substantially if
a large fraction of the survey must be masked. In order to check the impact of a
change of the survey area on the cosmological parameter determinations,
let us consider the Fisher-matrix approximation. In this regime,
the covariance matrix in parameter space depends linearly on
the data covariance matrix (see for example Huterer 2002).
From Eq. (10), it follows that the relative merit between 2 cosmic shear experiments of the same depth varies as the square root of the ratio of their survey areas (Eq. (27)).
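The area scaling can be made explicit in a short derivation, assuming all survey parameters other than the area $A$ are held fixed:

```latex
% At fixed depth, the shear data covariance scales inversely with
% the survey area:  C_{\rm data} \propto 1/A .
% The Fisher matrix then scales linearly with A:
%   F = D^{\rm T}\, C_{\rm data}^{-1}\, D \;\propto\; A ,
% so the parameter errors scale as
\sigma_i(A) \;\propto\; A^{-1/2},
% and the relative merit of two surveys of areas A_1 and A_2 is
\frac{\sigma_i(A_2)}{\sigma_i(A_1)} \;=\; \left(\frac{A_1}{A_2}\right)^{1/2}.
```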
Secondly, we assumed that the source redshift distribution was
perfectly known. In reality, there is
an extra source of error coming from the marginalisation over the
true source redshift distribution. The same happens with the marginalisation
over other cosmological parameters not taken into account in this study, such as
the equation of state of dark energy or the neutrino density.
If we take the single source redshift, $z_{\rm s}$,
as an extra free parameter to be
determined by the experiment, we find, using a Fisher-matrix calculation,
the precision with which $z_{\rm s}$ itself can be determined.
The presence of this extra parameter,
which is degenerate with some of the cosmological parameters, degrades
the latter determinations by a factor of
1.15-1.40, depending on the parameter.
Finally, the precision of the non-linear
mapping used in our calculations in the deep non-linear regime
is another source of error.
To check this point, we assumed a given
precision for HALOFIT and
changed our fiducial matter power spectrum by that amount.
The difference
between the new top-hat shear, computed from this power spectrum,
and our fiducial one was added in quadrature to the diagonal part of our data
covariance matrix. Figure 11 shows the relative sizes of
the HALOFIT uncertainty contribution to the error bars and of the
original error bars. The HALOFIT contribution dominates the statistical noise
on small scales.
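The quadrature addition of the modelling uncertainty to the data covariance can be sketched as follows (the function and argument names are ours):

```python
import numpy as np

def add_theory_error(data_cov, delta_signal):
    # Add the modelling uncertainty (the difference between the
    # perturbed and fiducial top-hat shear signals) in quadrature
    # to the diagonal of the data covariance matrix.
    cov = data_cov.copy()
    cov[np.diag_indices_from(cov)] += np.asarray(delta_signal) ** 2
    return cov

# Small check: diagonal variances grow by delta^2, off-diagonals untouched.
cov0 = np.eye(3) * 4.0
out = add_theory_error(cov0, np.array([1.0, 2.0, 3.0]))
```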
However, as the right panel of Fig. 11
shows, the degeneracy direction is robust.
As for the precisions of the individual
parameters, we found they are degraded by a factor of
1.15-1.35.
We should also keep in mind that the HALOFIT formula was tested with N-body
simulations using initial power-law spectra. However, this is probably not a significant
limitation for our study, since a running spectral index is only a first-order
deviation from a power-law spectrum.
Some parameter precisions are more stable than others against the inclusion
of the source redshift or the non-linear mapping uncertainties.
In particular, the result for the running of the spectral index, while
not much affected by the former, is the most affected by
the non-linear error bars. Combining the 3 degradation factors found
(sky coverage, source redshift and non-linear modelling), we end up with an
average overall degradation factor of 1.9 for each parameter.
It is important to note that this result refers to
parameter determinations using cosmic shear alone. We will come back to this
issue later on.
Figure 11: Left panel: the ratio between the shear top-hat error bars induced by the HALOFIT uncertainty and the original error bars. Right panel: the robustness of the degeneracy direction to this uncertainty.
On the other hand, it is important to stress that the 2-point correlation functions do not contain all the cosmic shear information. Higher order statistics, for example, have a different sensitivity to the cosmological parameters, allowing one to break degeneracies and improve measurements (Bernardeau et al. 1997). Also, we did not use lensing tomography, nor did we assume a source redshift distribution (integrating over redshift would not significantly increase the time of the Markov chain calculations). The joint use of power spectrum information with bispectrum and tomography would allow an average gain of a factor of 2 for a survey the size of the CFHTLS-Wide (Takada & Jain 2004).
We explored the cosmological parameter space using cosmic shear,
describing the results in the context of a principal
components analysis, and found a set of parameter degeneracies
orthogonal to the CMB ones. This led us to predict a gain of the order of 2 or 3 for
several parameters when combining CFHTLS-Wide data with WMAP and CBI data,
translating into correspondingly improved precisions on those parameters.
This result is consistent with the parameter determinations of
Contaldi et al. (2003), which combine CMB data with Red-Sequence Cluster Survey
(RCS) data, since from Eq. (27) the merit ratio
between CFHTLS-Wide and RCS is 1.8.
Figure 12: 68% and 95% C.L. for the 4 most orthogonal cases found in Sect. 5. Blue is WMAP+CBI data and red shows predictions for the wide field space telescope cosmic shear parameters of Table 6 (99% C.L. is also shown in this case).
Compared to the fiducial reference survey used by Ishak et al. (2004),
Eq. (27) shows that the relative merit between that
configuration and ours is about 4.
However, we find the same, or only slightly larger,
error values.
The main reason for this discrepancy is that
our degraded configuration, relative to the survey of Ishak et al. (2004),
is partly compensated by the inclusion of smaller angular scales.
In fact, we saw that the non-linear regime is the source not only of the
greatest sensitivity of cosmic shear to the cosmological parameters,
but also of its orthogonality to the CMB.
Furthermore, the gain estimator of Eq. (27)
applies only to cosmic shear results
and is an upper limit on the combined gain. This also means that the
factor lost in the joint measurement, when including the extra sources
of error, will be less severe than the estimated value of 1.9.
To check this point, we proceed as follows:
we start by computing the covariance matrix, C1, of
our cosmic shear PDF sample, determined from the cosmic shear Markov chains.
We assume C1 describes the parameter errors
as determined by the cosmic shear, i.e., we assume a Gaussian posterior
in parameter space. Then, we take the CMB sample and
weight each of its elements according to this cosmic shear
Gaussian posterior. This technique is known as importance sampling and
produces a good approximation of a joint cosmic shear + CMB sample
from separate cosmic shear and CMB samples, provided the widest sample
(in this case the cosmic shear one) samples well the region
covered by the narrowest one.
Then, we define a degraded cosmic shear covariance matrix as C2 = 4 C1,
since a degradation factor of 1.9 on the errors corresponds to a factor of
$1.9^2\approx 4$ on the covariance, and apply the importance sampling again.
This way, we obtain two joint cosmic shear + CMB samples and can compare
their results for the parameter precisions.
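The importance-sampling step can be sketched as follows. The toy check combines two unit-variance Gaussians, whose product should have variance 1/2; the array names and shapes are ours, not the paper's:

```python
import numpy as np

def importance_weights(cmb_samples, shear_mean, shear_cov):
    # Weight each CMB chain element by the Gaussian approximation of
    # the cosmic shear posterior, N(shear_mean, shear_cov).
    inv_cov = np.linalg.inv(shear_cov)
    d = cmb_samples - shear_mean
    chi2 = np.einsum('ni,ij,nj->n', d, inv_cov, d)
    w = np.exp(-0.5 * chi2)
    return w / w.sum()          # normalised weights

def weighted_cov(samples, w):
    # Covariance of the weighted (approximate joint) sample.
    mu = w @ samples
    d = samples - mu
    return (d * w[:, None]).T @ d

# Toy 1-parameter check: a unit-Gaussian "CMB" chain weighted by a
# unit-Gaussian "shear" posterior approximates the joint posterior,
# whose variance is 1/2.
rng = np.random.default_rng(0)
cmb = rng.normal(0.0, 1.0, size=(50000, 1))
w = importance_weights(cmb, np.zeros(1), np.eye(1))
joint_var = weighted_cov(cmb, w)[0, 0]
```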
We obtain a ratio between the results of the two samples
in the range 1.25-1.45, depending on the parameter.
This means that the inclusion of
an uncertainty on the non-linear mapping, plus leaving the redshift of
the sources as a free parameter, plus the reduction of the survey area due
to masking, only implies a loss of a factor of
1.25-1.45 in the joint
constraints. In particular, for the running of the spectral index, we find a factor
of 1.3.
The smaller impact on the joint results, as compared to the impact on the cosmic
shear results alone, comes from the fact that the
CMB contours are smaller than
the cosmic shear ones, and also from the complementarity between the two experiments.
Table 6: Specifications of the wide field space telescope cosmic shear illustration.
The CMB/cosmic shear complementarity opens
good prospects for the determination of cosmological parameters
by combining CMB and cosmic shear data sets. In fact, even for the CFHTLS, whose
contours are in general noticeably larger than the
WMAP+CBI ones (Fig. 7), we
predict non-negligible gains. Figure 12 shows what can be
expected with future space telescope data.
These results were produced with a cosmic shear Fisher matrix analysis,
using Eq. (17) and the fiducial model of Table 1 (except for
the redshift of the sources, which was increased).
For this illustration, the data covariance matrix
of Eq. (10) was computed for the configuration shown in
Table 6, which is close to the
SuperNova Accelerator Probe/Joint Dark Energy Mission (SNAP/JDEM)
"Wide+'' case of Réfrégier et al. (2003). The CMB ellipses are plotted from
the parameter covariance matrix found with our WMAP+CBI chains.
In summary, we found the parameter combinations best constrained by the
2-point cosmic shear correlation functions.
We have shown that the 2-dimensional degeneracies defined by these
parameters, plus another one defined by $n_{\rm s}$ and $\alpha_{\rm s}$,
are orthogonal to CMB degeneracies.
Due to this CMB/cosmic shear complementarity, current weak lensing surveys,
such as
the CFHTLS, already have the potential to improve the precision on several
cosmological parameters. In particular, a better knowledge of the running
of the spectral index will
have an impact on inflationary scenarios. The crucial information
provided by the
cosmic shear comes from the small scales it probes. Thus, it provides an
additional possibility, along with galaxy redshift surveys
and the Lyman-$\alpha$ forest, to combine with CMB data.
Acknowledgements
We thank D. Bond, F. Bernardeau, S. Prunet, C. Contaldi, D. Pogosyan, K. Benabed and R. Gavazzi for useful discussions. We thank CITA for the use of the DOLPHIN cluster, where the chain calculations were performed, and the TERAPIX data center for additional computing facilities. IT is supported by a Fundação para a Ciência e a Tecnologia (FCT) scholarship.