A&A, Volume 504, Number 3, September IV 2009, Pages 705–717

Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/200912424
Published online: 16 July 2009

Constrained correlation functions

P. Schneider and J. Hartlap

Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany

Received 5 May 2009 / Accepted 7 July 2009
Abstract
Measurements of correlation functions and their comparison with theoretical models are often employed in the natural sciences, including astrophysics and cosmology, to determine best-fitting model parameters and their confidence regions. For lack of a better description, the likelihood function of the correlation function is often assumed to be a multivariate Gaussian. Using different methods, we show that correlation functions have to satisfy constraint relations, owing to the non-negativity of the power spectrum of the underlying random process. Specifically, for any statistically homogeneous and (for more than one spatial dimension) isotropic random field with
correlation function ξ(x), we derive inequalities for the correlation coefficients r_n ≡ ξ(n x)/ξ(0) (for integer n) of the form r_n^low ≤ r_n ≤ r_n^up, where the lower and upper bounds on r_n depend on the r_j, with j < n.
Explicit expressions for the bounds are obtained for arbitrary n. We show that these constraints very significantly limit the set of possible correlation functions. For one particular example of a fiducial cosmic shear survey, we show that the Gaussian likelihood ellipsoid has a significant spill-over into the region of correlation functions forbidden by the aforementioned constraints, rendering the resulting best-fitting model parameters and their error region questionable and indicating the need for a better description of the likelihood function. We conduct some simple numerical experiments which explicitly demonstrate the failure of the Gaussian description for the likelihood of ξ.
Instead, the shape of the likelihood function of the correlation coefficients appears to follow approximately the shape of the bounds on the r_{n}, even
if the Gaussian ellipsoid lies well within the allowed region. Therefore, we define a nonlinear and coupled transformation of the r_{n}, based on these bounds. Some numerical experiments then
indicate that a Gaussian is a much better description of the likelihood in these transformed variables than of the original correlation coefficients; in particular, the full probability distribution then lies explicitly in the allowed region. For more than one spatial dimension of the random field, the explicit expressions for the bounds on the r_n are not optimal. We outline a geometrical method by which tighter bounds may be obtained in principle. We illustrate this method for a few simple cases; a more general treatment awaits future work.
Key words: methods: statistical – cosmological parameters – cosmology: miscellaneous
1 Introduction
One of the standard ways to obtain constraints on the model parameters of a stochastic process is the determination of its two-point correlation function ξ(x) from observational data, where x is the separation vector between pairs of points. This observed correlation function is then compared with the corresponding correlation function ξ(x; p) from a model, where p denotes the model parameter(s). A commonly used method for this comparison is the consideration of the likelihood function L(ξ | p), which yields the probability for observing the correlation function ξ for a given set of parameters p.
for a given set of parameters p. It is common (see Seljak & Bertschinger 1993, for an application to microwave background anisotropies; Fu et al. 2008, for a cosmic shear analysis; or Okumura et al. 2008, for an application to the spatial correlation function of galaxies) to approximate this likelihood by a Gaussian,
where it has been assumed that the random field is homogeneous and isotropic, so that the correlation function depends only on the absolute value of the separation vector. Furthermore, it has been assumed that the correlation function is obtained at discrete points x_{i}; for an actual measurement, one usually has to bin the separation of pairs of points, in which case x_{i} is the central value of the bin. In (1), is the covariance matrix of the correlation function between any pair of separations x_{i}, x_{j}.
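As a concrete illustration of how the Gaussian approximation (1) is evaluated in practice, the following sketch computes the corresponding log-likelihood for a toy data vector; the function name, the diagonal covariance, and all numbers are purely illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_loglike(xi_obs, xi_model, cov):
    """Gaussian log-likelihood of an observed correlation function
    xi_obs given a model xi_model and covariance cov, i.e. the log of
    the multivariate Gaussian in Eq. (1)."""
    d = xi_obs - xi_model
    sign, logdet = np.linalg.slogdet(cov)
    if sign <= 0:
        raise ValueError("covariance must be positive definite")
    chi2 = d @ np.linalg.solve(cov, d)
    return -0.5 * (chi2 + logdet + d.size * np.log(2.0 * np.pi))

# toy example: three separations, diagonal covariance (illustrative only)
xi_obs = np.array([0.8, 0.5, 0.2])
xi_model = np.array([0.7, 0.4, 0.1])
cov = np.diag([0.01, 0.01, 0.01])
ll = gaussian_loglike(xi_obs, xi_model, cov)
```

Note that this density has unbounded support in ξ, which is exactly the property called into question below.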
In Hartlap et al. (2009) we have investigated the likelihood function for the cosmic shear correlation function and found that it deviates significantly from a Gaussian. This study relied on numerical ray-tracing simulations through the density field obtained from N-body simulations of the large-scale structure in the Universe.
In this paper, we will show that the likelihood function of the correlation function cannot be a Gaussian. In particular, we show that any correlation function obeys strict constraints, which can be expressed as

r_n^low ≤ ξ(n x)/ξ(0) ≤ r_n^up   (2)

for arbitrary x and integer n; these constraints can be derived by several different methods. With one of these methods, one can derive explicit equations for the upper and lower bounds in (2) for arbitrary values of n. The basic reason for the occurrence of such constraints is the non-negativity of the power spectrum or, equivalently, the fact that covariance matrices of the values of a random field at different positions are positive (semi-)definite. Constraints of this kind were derived in a related context by Kurchan (2002).
The outline of the paper is as follows: in Sect. 2, we obtain bounds on the correlation function using the Cauchy–Schwarz inequality, as well as by making use of the positive semi-definiteness of the covariance matrix of random fields. It turns out that the latter method gives tighter constraints on the correlation function; in fact, these constraints are optimal for one-dimensional random fields. We show in Sect. 3 that these bounds significantly constrain the set of functions which can possibly be correlation functions. Whereas the bounds obtained in Sect. 2 are valid for any dimension of the random field, they are not optimal in more than one dimension; we consider generalizations to higher-dimensional random fields, and to arbitrary combinations of separations x_i, in Sect. 4. In Sect. 5 we introduce a nonlinear, coupled transformation of the correlation coefficients, based on these bounds; for the case of a one-dimensional field, all combinations of values of these transformed quantities correspond to allowed correlation functions (meaning that one can find a non-negative power spectrum yielding the corresponding correlations). Hence, a Gaussian probability distribution of these transformed variables appears more realistic than one of the correlation function itself. This expectation is verified with some numerical experiments, which are described in Sect. 6. Furthermore, we show that for a fiducial cosmological survey, the Gaussian likelihood for the correlation function can significantly overlap with the region forbidden by (2), depending on the survey size and the number of separations at which the correlation function is measured. We conclude with a discussion and an outlook on future work and open questions.
2 Upper and lower bounds on correlation functions
Consider an n-dimensional homogeneous and isotropic random field g(x), with vanishing expectation value, ⟨g(x)⟩ = 0, with power spectrum P(k) and correlation function

ξ(x) = ∫ d^n k (2π)^{−n} P(|k|) e^{i k·x},   (3)

which depends only on the absolute value of x, due to the assumed isotropy. This relation immediately shows that

|ξ(x)| ≤ ξ(0),   (4)

owing to P(k) ≥ 0 and |e^{i k·x}| ≤ 1. However, the lower bound in (4) is not an optimal one for more than one dimension. In two dimensions, the integral over the polar angle can be carried out, yielding

ξ(x) = ∫_0^∞ dk k (2π)^{−1} P(k) J_0(k x),   (5)

where J_0 is the Bessel function of the first kind of zero order. Since J_0(x) has an absolute minimum of ≈ −0.403 at x ≈ 3.83 (see also Abrahamsen 1997), the non-negativity of P(k) implies that ξ(x) ≥ −0.403 ξ(0). Similarly, in three dimensions one has

ξ(x) = ∫_0^∞ dk k² (2π²)^{−1} P(k) j_0(k x),   (6)

where j_0(x) = sin(x)/x is the spherical Bessel function of zero order. Since j_0(x) has an absolute minimum of ≈ −0.217 at x ≈ 4.49, the non-negativity of P(k) implies that ξ(x) ≥ −0.217 ξ(0).
In the following, we will concentrate mainly on the one-dimensional case and write

ξ(x) = ∫_0^∞ dk π^{−1} P(k) cos(k x).   (7)

However, higher dimensions of the random field are included in all that follows, since by restricting the separation vectors in (3) to one fixed direction, we find

ξ(x) = ∫_0^∞ dk π^{−1} P_1(k) cos(k x),   (8)

with an effective one-dimensional power spectrum P_1(k) ≥ 0, which thus takes the same form as (7). Thus, the n-dimensional case can be included in the same formalism as the one-dimensional case; note that for this argument, the random field is not restricted to be isotropic. However, as we shall discuss later, the resulting inequalities will not be optimal for isotropic fields of higher dimension.

In the foregoing equations, P(k) can correspond either to the power spectrum of the underlying random process, or to the sum of the underlying process and statistical noise. Furthermore, the power spectrum can also be the squared modulus of the Fourier transform of a realization of a random process in a finite sample volume. In all these cases, the non-negativity of P(k) applies, and the constraints on the corresponding correlation function derived below must hold.
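The statement above is easy to check numerically: any non-negative P(k), discretized on a grid, yields a correlation function obeying |ξ(x)| ≤ ξ(0). A minimal sketch (the exponential spectrum is an arbitrary illustrative choice, not one used in the paper):

```python
import numpy as np

def xi_from_power(P, k, x):
    """1-D correlation function from a sampled non-negative power
    spectrum, xi(x) = \int dk P(k) cos(k x), via the trapezoidal rule
    (constant prefactors drop out of the ratios r_n = xi(n x)/xi(0))."""
    integrand = P * np.cos(np.outer(x, k))      # shape (len(x), len(k))
    return np.sum(0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(k),
                  axis=1)

k = np.linspace(0.0, 20.0, 2001)
P = np.exp(-k)                   # arbitrary non-negative spectrum;
                                 # exact result: xi(x) = 1/(1 + x^2)
x = np.linspace(0.0, 10.0, 101)
xi = xi_from_power(P, k, x)

# non-negativity of P(k) enforces the bound (4): |xi(x)| <= xi(0)
assert np.all(np.abs(xi) <= xi[0] + 1e-12)
```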
2.1 Constraints from the Cauchy–Schwarz inequality
Making use of the Cauchy–Schwarz inequality,

⟨f g⟩² ≤ ⟨f²⟩ ⟨g²⟩,   (9)

we obtain, by setting f = g(0) + g(2x) and g = g(x), that

4 ξ²(x) ≤ 2 [ξ(0) + ξ(2x)] ξ(0),   (10)

where we made use of the identity ⟨[g(0) + g(2x)] g(x)⟩ = 2 ξ(x). Together with (4), we therefore obtain the constraint equation

2 ξ²(x)/ξ(0) − ξ(0) ≤ ξ(2x) ≤ ξ(0).   (11)

The interpretation of this constraint is better seen in terms of the correlation coefficients r_n ≡ ξ(n x)/ξ(0), which are defined for arbitrary x. Then, (11) reads

2 r_1² − 1 ≤ r_2 ≤ 1.   (12)

This result can be interpreted as follows: if two points separated by x are strongly correlated, r_1 ≈ 1, then the value of the field at position 2x must be equally strongly correlated with that at x, which implies that the correlation between the point 2x and the origin must also be large. If the field at x is strongly anti-correlated with that at the origin, r_1 ≈ −1, then the field at 2x must be similarly anti-correlated with that at x, again implying a strong correlation between the point 2x and the origin. The smaller |r_1|, the weaker is the constraint (12).
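Constraint (12) can be verified for any valid correlation function. As an illustration, take ξ(x) = 1/(1 + x²), whose one-dimensional power spectrum, proportional to exp(−|k|), is non-negative (this particular ξ is chosen here purely for convenience):

```python
import numpy as np

# xi(x) = 1/(1 + x^2) is a valid 1-D correlation function: its power
# spectrum, proportional to exp(-|k|), is non-negative
xi = lambda x: 1.0 / (1.0 + x ** 2)

def r(n, x, xi):
    """Correlation coefficient r_n = xi(n x)/xi(0)."""
    return xi(n * x) / xi(0.0)

# constraint (12) holds for every separation x
for sep in np.linspace(0.1, 5.0, 50):
    r1, r2 = r(1, sep, xi), r(2, sep, xi)
    assert 2.0 * r1 ** 2 - 1.0 <= r2 <= 1.0
```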
Making use of the identity

⟨[g(0) + g(3x)] [g(x) + g(2x)]⟩ = 2 [ξ(x) + ξ(2x)],

and applying the Cauchy–Schwarz inequality with f = g(0) + g(3x) and g = g(x) + g(2x), we find that

[ξ(x) + ξ(2x)]² ≤ [ξ(0) + ξ(3x)] [ξ(0) + ξ(x)].   (13)

A second inequality is obtained by using in a similar way the identity

⟨[g(0) − g(3x)] [g(x) − g(2x)]⟩ = 2 [ξ(x) − ξ(2x)].

Both of these inequalities are summarized in terms of the correlation coefficients as

(r_1 + r_2)²/(1 + r_1) − 1 ≤ r_3 ≤ 1 − (r_1 − r_2)²/(1 − r_1).   (14)

Further inequalities involving r_n, with integer n ≥ 3, can be derived in this way. Making use of the relations

⟨[g(0) ± g(n x)] [g(x) ± g((n−1) x)]⟩ = 2 [ξ(x) ± ξ((n−1) x)]

for n ≥ 3, and employing the Cauchy–Schwarz inequality in the same way as before, we find

(r_1 + r_{n−1})²/(1 + r_{n−2}) − 1 ≤ r_n ≤ 1 − (r_1 − r_{n−1})²/(1 − r_{n−2}),   (15)

where the special case n = 3 has been derived already. We have thus found a set of inequalities for all correlation coefficients r_n. In the next section, we will obtain bounds on the correlation function using a different method, and will show that they are stricter than those in (15).
2.2 Constraints from a covariance matrix approach
We now proceed in a different way, which is more straightforward. Consider a set of N points x_m = m x, with m integer and 0 ≤ m ≤ N − 1. The covariance matrix of the random field at these N points has the simple form

C_{mn} = ⟨g(x_m) g(x_n)⟩ = ξ(|m − n| x).   (16)

As is well known, the covariance matrix must be positive semi-definite, i.e., its eigenvalues must be non-negative. Dividing by ξ(0), we define the correlation matrix

R_{mn} = C_{mn}/ξ(0) = r_{|m−n|},   (17)

and the eigenvalues λ of R must obey λ ≥ 0. For N = 2, the eigenvalues read λ_± = 1 ± r_1, yielding

−1 ≤ r_1 ≤ 1,   (18)

i.e., we reobtain (4). For N = 3, the eigenvalues read

λ_1 = 1 − r_2,   λ_± = [2 + r_2 ± (r_2² + 8 r_1²)^{1/2}]/2,

and the conditions λ ≥ 0 can be solved for r_2, yielding (12). The four eigenvalues of R in the case N = 4 are

λ = [2 + r_1 + r_3 ± ((r_3 − r_1)² + 4 (r_1 + r_2)²)^{1/2}]/2,   λ = [2 − r_1 − r_3 ± ((r_3 − r_1)² + 4 (r_1 − r_2)²)^{1/2}]/2,

and the conditions λ ≥ 0 can, after some algebra, be brought into the form (14). For N ≥ 5, the eigenvalues of R have a more complicated form; they are obtained as solutions of higher-order polynomials (see below).
However, we do not need explicit expressions for the eigenvalues; we only need to ensure that they are non-negative. This condition can be formulated in a different way. The eigenvalues λ of the matrix R are given by the roots of the characteristic polynomial, which is the determinant of the matrix R − λ I. For a given N, this polynomial is of order N in λ, and of the form

p(λ) = (−λ)^N + b_{N−1} (−λ)^{N−1} + … + b_1 (−λ) + b_0.   (19)

The coefficients b_k of the polynomial are functions of the r_k, as obtained from calculating the determinant. On the other hand, they can be expressed through the roots λ_i of the polynomial; for example, for N = 3 one finds that

b_2 = λ_1 + λ_2 + λ_3,   b_1 = λ_1 λ_2 + λ_1 λ_3 + λ_2 λ_3,   b_0 = λ_1 λ_2 λ_3.

The condition that all λ_i ≥ 0 is then equivalent to the condition that b_0 ≥ 0, b_1 ≥ 0, and b_2 ≥ 0. It is easy to show that these conditions lead to the constraint (12), together with −1 ≤ r_1 ≤ 1.

This procedure can be generalized to arbitrary N: the condition that all eigenvalues of the correlation matrix are non-negative is equivalent to requiring that the coefficients of the characteristic polynomial, written in the form (19), are all non-negative (equivalently, written as a monic polynomial in λ, the coefficients must alternate in sign). The explicit calculation of the characteristic polynomial by hand becomes infeasible; hence we use the computer algebra program Mathematica (Wolfram 1996). For successively larger N, we calculate the characteristic polynomial. As we will show below, the characteristic polynomial factorizes into two polynomials: if N is even, it factorizes into two polynomials of order N/2, whereas if N is odd, it factorizes into polynomials of order (N−1)/2 and (N+1)/2. The condition that all eigenvalues are non-negative is equivalent to the condition that the roots of both polynomials are non-negative; this condition can be translated into inequalities for the coefficients of the polynomials in the same way as described above.
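For given numerical values of the correlation coefficients, the eigenvalue condition described above can also be checked directly; the helper below (an illustration of the criterion, not code from the paper) builds the Toeplitz correlation matrix and tests positive semi-definiteness:

```python
import numpy as np
from scipy.linalg import toeplitz

def is_valid_correlation(r):
    """Test whether r = [r_1, ..., r_{N-1}], measured at equidistant
    separations, can arise from a homogeneous random field: the
    Toeplitz correlation matrix with first row (1, r_1, ..., r_{N-1})
    must be positive semi-definite."""
    C = toeplitz(np.concatenate(([1.0], r)))
    return np.linalg.eigvalsh(C).min() >= -1e-12

# r_1 = 0.9, r_2 = 0 violates r_2 >= 2 r_1^2 - 1 = 0.62 ...
assert not is_valid_correlation([0.9, 0.0])
# ... while r_1 = 0.9, r_2 = 0.7 satisfies it
assert is_valid_correlation([0.9, 0.7])
```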
We illustrate this procedure for the case N = 5. The characteristic polynomial in this case factorizes as

p(λ) = (λ² + b_1 λ + b_0) (λ³ + c_2 λ² + c_1 λ + c_0),   (20)

with

b_1 = r_2 + r_4 − 2,
b_0 = (1 − r_2)(1 − r_4) − (r_1 − r_3)²,
c_2 = −3 − r_2 − r_4,
c_1 = 3(1 − r_1²) − 2 r_1 r_3 − r_3² + 2 r_4 + r_2 (2 + r_4) − 2 r_2²,
c_0 = (2 r_1² − 1 − r_2) r_4 + r_3² + 2 r_1 r_3 (1 − 2 r_2) + 2 r_2² (1 + r_2) − r_2 + r_1² (3 − 4 r_2) − 1.

The conditions b_1 ≤ 0, c_2 ≤ 0, and c_1 ≥ 0 are satisfied irrespective of the value of r_4, owing to |r_n| ≤ 1 for all n and to the inequalities obtained before for r_n, n ≤ 3. Thus, these three inequalities do not yield additional constraints on r_4. Those come from requiring b_0 ≥ 0 and c_0 ≤ 0. Both coefficients are linear in r_4, which allows us to write the conditions explicitly,

[ r_3² + 2 r_1 r_3 (1 − 2 r_2) + 2 r_2² (1 + r_2) − r_2 + r_1² (3 − 4 r_2) − 1 ] / (1 + r_2 − 2 r_1²) ≤ r_4 ≤ 1 − (r_1 − r_3)²/(1 − r_2).   (21)

Alternatively, one can apply the function InequalitySolve of Mathematica to the five inequalities for the coefficients, to obtain the same result.
We see that the upper bound in (21) agrees with that obtained from the Cauchy–Schwarz inequality, see (15), but the lower bounds differ. Hence, for n = 4 the bounds on r_n derived from the two methods are different, and this will be the case for all n ≥ 4. This is not surprising, since the Cauchy–Schwarz inequality makes no statement about how ``sharp'' the bounds are, i.e., whether the bounds can actually be attained. One can argue, using discrete Fourier transforms, that the bounds obtained from the covariance method are ``sharp'' (for a one-dimensional field), in the sense that for each combination of allowed r_n one can find a non-negative power spectrum corresponding to these correlation coefficients; the bounds can therefore not be narrowed further.
It should be noted that in the cases given above, the lower (upper) bound on r_n is a quadratic function of r_{n−1}, and its quadratic term c r_{n−1}² has a positive (negative) coefficient c. This implies that the allowed region in the r_{n−1}-r_n-plane is convex; indeed, as we will show below, the allowed n-dimensional region for the r_i is convex.
The procedure illustrated for the case N = 5 carries over to larger N as well: only the zero-order coefficients of the two factor polynomials yield new constraints on the correlation coefficient r_{N−1}, and these two coefficients are linear in r_{N−1}, so the constraints on r_{N−1} can be obtained explicitly. This then leads us to conclude that the positive semi-definiteness of the matrix R for a given N is equivalent to the positivity of the determinants of all submatrices which are obtained by considering only the first n rows and columns of R, with n ≤ N. We will make use of this property in the next subsection.
For larger N, the upper and lower bounds are given by quite complicated expressions, so we refrain from writing them down; however, using the output from Mathematica, they can be used for subsequent numerical calculations. A much more convenient method for calculating the upper and lower bounds explicitly will be obtained below.
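Where the algebraic expressions become unwieldy, the bounds can also be bracketed numerically: scan trial values of r_n and keep those for which the correlation matrix remains positive semi-definite. A simple sketch (grid resolution and tolerance are arbitrary choices of ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def bounds_on_next(r_prev, grid=4001, tol=1e-10):
    """Bracket the allowed interval for the next correlation
    coefficient r_n, given r_1, ..., r_{n-1}, by testing for which
    trial values the Toeplitz correlation matrix stays positive
    semi-definite."""
    trial = np.linspace(-1.0, 1.0, grid)
    ok = [t for t in trial
          if np.linalg.eigvalsh(
              toeplitz(np.concatenate(([1.0], r_prev, [t])))).min() >= -tol]
    return min(ok), max(ok)

# bounds on r_2 given r_1 = 0.8; compare with 2 r_1^2 - 1 <= r_2 <= 1
lo, up = bounds_on_next([0.8])
assert abs(lo - (2 * 0.8 ** 2 - 1)) < 1e-3 and abs(up - 1.0) < 1e-3
```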
To summarize, the procedure outlined above gives us constraints of the form

r_n^low ≤ r_n ≤ r_n^up,   (22)

where the lower and upper bounds are functions of the r_k, k < n, and satisfy −1 ≤ r_n^low ≤ r_n^up ≤ 1. For n ≤ 4, the bounds r_n^low and r_n^up have been written down explicitly above.
The existence of these upper and lower bounds on r_n, and thus on the correlation function, immediately implies that the likelihood function of the correlation function cannot be a Gaussian, since the support of the latter is unbounded. This does not necessarily imply that the Gaussian approximation is bad: if the Gaussian probability ellipsoid described by (1) is ``small'', in the sense that it is well contained inside the allowed region, the existence of the bounds alone yields no estimate of the accuracy of the Gaussian approximation. We will come back to this issue in Sect. 6.
Whereas the expressions for r_n^low and r_n^up become quite long for larger n, we found a remarkable result for the difference r_n^up − r_n^low. This is most easily expressed by defining the functions given in (23); in terms of these, the difference takes the closed form (24) for odd n, and (25) for even n. This result was obtained by guessing, and subsequently verified with Mathematica up to n = 16, using the explicit expressions for the bounds derived next. We will make use of these properties in Sect. 3.
2.3 Explicit expression for the bounds
We will now derive explicit expressions for the upper and lower bounds on the r_n. In doing so, we first show that the determinant of the matrix R for N points factorizes into two polynomials, both of which are linear in r_{N−1}. Then we make use of the fact that the positive semi-definiteness of R is equivalent to the non-negativity of the determinants of all submatrices obtained from R by considering only the first n rows and columns, n ≤ N; these submatrices of R correspond to the matrix R for lower numbers of points, and the non-negativity of their determinants is equivalent to the upper and lower bounds on the r_n for n < N − 1. Hence, by increasing N successively, the constraints on r_{N−1} are obtained from requiring det R ≥ 0.
The determinant of a matrix is unchanged if a multiple of one row (column) is added to another row (column). We make use of this fact for the calculation of det R, by carrying out the following four steps:

 1. We add the first row to the last one.
 2. We subtract the last column from the first one.
 3. The second row is added to the (N−1)st one, the third row is added to the (N−2)nd one, and so on; for N odd, this proceeds up to row (N−1)/2, whereas for N even, it extends to row (N−2)/2.
 4. Finally, the (N−1)st column is subtracted from the second one, the (N−2)nd column is subtracted from the third one, and so on; for odd N this proceeds up to column (N−1)/2, whereas for even N it extends to column (N−2)/2.
Figure 1: For N=7, the original covariance matrix is shown, together with four transformations of it, described in the text, which leave the determinant unchanged. 

These steps are illustrated for the case N = 7 in Fig. 1. For N even [odd], this results in a matrix which contains in the lower left corner an N/2 × N/2 [(N−1)/2 × (N+1)/2] submatrix with all elements zero. Therefore, the determinant of this matrix factorizes into the determinants of two N/2 × N/2 matrices [of an (N−1)/2 × (N−1)/2 and of an (N+1)/2 × (N+1)/2 matrix]. Thus, det R^{(N)} factorizes as in (26), where we explicitly indicate the number N of points, and thus the dimensionality of R, with a superscript; for even N, the two factor matrices A^{(N)} and B^{(N)} are N/2 × N/2 matrices, with elements given in (27), whereas for odd N, A^{(N)} and B^{(N)} are (N−1)/2 × (N−1)/2 and (N+1)/2 × (N+1)/2 matrices, respectively, with elements given in (28).

Since the submatrix obtained by cancelling the first column and row of A^{(N)} [of dimension (N/2 − 1) × (N/2 − 1) for N even] is just A^{(N−2)}, we can write det A^{(N)} in the form (29), where A_0^{(N)} is the matrix obtained from A^{(N)} by setting r_{N−1} = 0. The upper bound for r_{N−1} is found by setting det A^{(N)} = 0, which yields (30), where we set n = N − 1. Analogously, the final element of B^{(N)} contains r_{N−1}, where m = N/2 for even N, and m = (N+1)/2 for odd N; therefore, (31) holds, where B_0^{(N)} is obtained from B^{(N)} by setting r_{N−1} = 0. The lower bound for r_{N−1} is then obtained by setting this expression to zero, as in (32).

Since det A^{(N)} is a linear function of r_{N−1}, it must be of the form c (d − r_{N−1}), where the coefficients c, d are independent of r_{N−1}; the value of d yields the root of det A^{(N)}, and is thus the upper bound on r_{N−1}. The coefficient c follows from (29); therefore, one obtains the recursion relation (33), and the analogous result (34) holds for det B^{(N)}. These recursion relations then yield the explicit expressions (35) for N even and (36) for N odd. This yields for the determinant of the matrix R the explicit expression (37), for even and odd N, respectively. Accordingly, the width of the allowed range of r_n then becomes (38), where we made use of (24) and (25).
3 How strong are the constraints?
We shall now investigate how constraining the bounds on the correlation function or, equivalently, on the correlation coefficients are. To quantify this, we imagine that the correlation function is measured on a regular grid of separations n x, and from it we define, as before, r_n = ξ(n x)/ξ(0). Due to (4), −1 ≤ r_n ≤ 1 for all n. We therefore consider the set of functions defined by their values on the grid points, and allow only values in the interval [−1, 1] for all n. Let M be the largest value of n that we consider; the functions under consideration are then defined by a point in the M-dimensional space spanned by the r_n. This M-dimensional cube has side length 2 and thus a volume of 2^M. We will now investigate which fraction of this volume corresponds to correlation functions satisfying the constraints derived in the previous section.
A related question that can be answered is: suppose the values of r_k, k ≤ n, are given; what fraction of the (M−n)-dimensional subspace, spanned by the r_k with n < k ≤ M, corresponds to allowed correlation functions? For example, if r_1 = 1, then all other r_k = 1, and hence the condition r_1 = 1 is maximally constraining: the volume fraction of the subspace spanned by the r_k with k ≥ 2 is zero in this case.
Mathematically, we define these volume fractions V_{M−n} as in (39); the factor 1/2 in each integration accounts for the side length of the (M−n)-dimensional cube, so that the V_{M−n} are indeed fractional volumes. V_{M−n} depends on the r_k with k ≤ n; in particular, V_M is the fractional volume which is allowed if all constraints are taken into account. From the definition of the V_{M−n} it is obvious that the recursion (40) holds. We can therefore calculate the V_{M−n} iteratively, starting with (41), where we have skipped the integration limits; here and in the following, an integral over r_n always extends from r_n^low to r_n^up. Here we made use of (24) or (25), depending on M. The dots in (41) indicate that factors are added until the lowest-order coefficients are reached.
It should be noted that the only dependence of V_{M−(M−1)} on r_{M−1} is through the factor appearing in (41). Therefore, the next recursion step can be carried out analytically, as given in (42), where B(a, b) denotes the beta function, and we used the relation (43).
For the next step, we notice that the only dependence of V_{M−(M−2)} on r_{M−2} is through the function appearing in (42); this lets us write the next recursion step in the form (44), where we again made use of (43). Based on these results, we can now obtain a general expression, given in (45), which can be proved by induction. In particular, the V_M are obtained as (46).
The values of V_M up to M = 20 are shown in Fig. 2. As can be seen from the upper panel, the admissible volume decreases very quickly as M increases. For comparison, in the lower panel we show V_M^{1/M}, i.e., the typical diameter of the allowed region.
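The volume fractions can also be estimated by brute force, which provides an independent check of such results: sample the cube uniformly and count the points whose Toeplitz correlation matrix is positive semi-definite. A Monte Carlo sketch (sample sizes and the seed are arbitrary choices of ours; for M = 2 the exact value is 2/3):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(42)

def volume_fraction(M, samples=20_000):
    """Monte Carlo estimate of the fraction V_M of the cube [-1, 1]^M
    (coordinates r_1, ..., r_M) that corresponds to valid correlation
    coefficients, i.e., to a positive semi-definite Toeplitz
    correlation matrix."""
    hits = 0
    for _ in range(samples):
        r = rng.uniform(-1.0, 1.0, M)
        C = toeplitz(np.concatenate(([1.0], r)))
        if np.linalg.eigvalsh(C).min() >= 0.0:
            hits += 1
    return hits / samples

# the admissible volume fraction shrinks rapidly with M
v2 = volume_fraction(2)
v3 = volume_fraction(3)
```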
Figure 2: Upper panel: volume fraction V_M (Eq. (46)) as a function of the number M of separations for which the correlation function is measured. Lower panel: V_M^{1/M} as a function of M. One sees that the typical linear dimension of the allowed region decreases with M, meaning that the strong decrease of V_M with M is not just an effect of the dimensionality of the volume considered.

4 Generalizations
The considerations of the previous sections were restricted to correlation functions measured at equidistant points x_n = n x. In some cases, however, it may be advantageous to drop this restriction, e.g., to consider the correlation function at logarithmically spaced points. Furthermore, as mentioned before, tighter constraints on the correlation function are expected to hold in the multi-dimensional case. We shall consider these aspects in this section, presenting another method for deriving constraints, which yields the optimal constraints for arbitrarily spaced points x_n in one- or higher-dimensional fields. Before that, we briefly consider the covariance method for higher dimensions.
4.1 Higherdimensional fields: the covariance method
As noted above, the inequalities for the correlation functions have been obtained with a one-dimensional random field in mind. Whereas we have shown that all the bounds on correlation functions are also valid for higher-dimensional fields, they are not expected to be ``optimal'': the reason is that the equivalent one-dimensional power spectrum defined in (8) was assumed to be completely arbitrary, apart from being non-negative, whereas for isotropic fields in higher dimensions it obeys further constraint relations, due to the (n−1)-dimensional angular integration leading to (8). A first indication that the bounds can be improved was seen in the lower bounds on the ratio ξ(x)/ξ(0), which turned out to be larger for two and three dimensions than for a one-dimensional field. The foregoing constraints on the correlation function are recovered with the covariance matrix approach in higher dimensions if the set of points is placed equidistantly along one direction. New (and stronger) constraints are expected from this method if the distribution of points makes use of these higher dimensions.
As a first example, we consider a two-dimensional (or higher-dimensional) field and place three points on an equilateral triangle of side length x. The separation between any pair of points is then x, and the covariance matrix of these three points, normalized by ξ(0) and written in terms of r(x) = ξ(x)/ξ(0), reads

R = [[1, r, r], [r, 1, r], [r, r, 1]],   r ≡ r(x).

The eigenvalues of this matrix are 1 + 2r (once) and 1 − r (twice), and requiring their non-negativity leads to

−1/2 ≤ r(x) ≤ 1   (47)

for all x. The lower bound is somewhat smaller than the one obtained earlier for two-dimensional random fields, r(x) ≥ −0.403, but significantly larger than that obtained for the 1D case, r(x) ≥ −1. Next we consider a set of three points forming a triangle of which two sides have length x, while the third side has length y, with 0 < y ≤ 2x. The corresponding covariance matrix reads

R = [[1, r(x), r(x)], [r(x), 1, r(y)], [r(x), r(y), 1]],

and its eigenvalues are 1 − r(y) and [2 + r(y) ± (r²(y) + 8 r²(x))^{1/2}]/2. Non-negativity of the eigenvalues leads to the constraints

2 r²(x) − 1 ≤ r(y) ≤ 1,   (48)

where we also used (47).
Finally, we consider a square of side length x, for which the covariance matrix reads

R = [[1, r(x), r(√2 x), r(x)], [r(x), 1, r(x), r(√2 x)], [r(√2 x), r(x), 1, r(x)], [r(x), r(√2 x), r(x), 1]],

with the eigenvalues 1 − r(√2 x) (twice) and 1 + r(√2 x) ± 2 r(x). Their non-negativity, combined with (47), yields

2 |r(x)| − 1 ≤ r(√2 x) ≤ 1,   (49)

which should be compared with the constraint obtained from (48) for y = √2 x. Thus we see that the choice of the geometrical configuration of points affects the resulting constraints on the correlation function. It is by no means clear how to find a set of configurations that yields the ``optimal'' constraints in two (or more) dimensions. In fact, methods other than the covariance matrix approach need to be considered to obtain optimal constraints (see below).
Finally, we consider the simplest case in three dimensions, namely a set of four points arranged in a regular tetrahedron of side length x. The resulting covariance matrix is

R = [[1, r, r, r], [r, 1, r, r], [r, r, 1, r], [r, r, r, 1]],   r ≡ r(x),

with eigenvalues 1 + 3r (once) and 1 − r (three times), yielding

−1/3 ≤ r(x) ≤ 1,   (50)

a constraint stronger than the ones obtained for one and two dimensions, but falling short of the one derived from the global minimum of the spherical Bessel function, r(x) ≥ −0.217.
Figure 3: Left panel: the curve c(y) = (J_0(y), J_0(2y)), together with its convex envelope. The latter consists of two segments of the curve and two straight lines, one of which is tangent to the curve at two points, while the other is tangent to the curve at one point and intersects it. The tangent points are found as follows: the requirement that the lower straight-line segment is tangent to the curve at two points y_1, y_2 leads to the conditions c'(y_i) = a_i [c(y_2) − c(y_1)] for i = 1, 2, where the a_i are scalars. These four scalar equations for the four unknowns a_i, y_i have several solutions; the relevant one is the ``outermost'' one. For the upper straight-line segment, one employs the condition that the tangent at a point y_3 passes through c(0) = (1, 1), i.e., c(0) = c(y_3) + a c'(y_3); of the multiple solutions of these two scalar equations for the two unknowns a and y_3, one takes the ``outermost'' one. Middle panel: in a similar way, the curve (j_0(y), j_0(2y)) for a three-dimensional field is plotted, together with its convex envelope. Right panel: comparison of the allowed regions in the r_1-r_2-plane in 1-3 dimensions.

4.2 Optimal constraints in the general case
We now describe a general method for deriving constraints on correlation functions of homogeneous and isotropic random fields, allowing for arbitrary values of separations x_{n} and arbitrary dimensions. However, it should be said right at the beginning that we were unable to obtain the corresponding bounds on the correlation functions explicitly.
We can write the general relation (3) in the form

ξ(x) = ∫_0^∞ dk q(k) u(k x),   (51)

where q(k) ≥ 0 and u(0) = 1. For a one-dimensional field, q(k) = P(k)/π and u(y) = cos y; for two dimensions, q(k) = k P(k)/(2π) and u(y) = J_0(y); and for three dimensions, q(k) = k² P(k)/(2π²) and u(y) = j_0(y). Next we consider a quadrature formula for the integral, and write

ξ(x) ≈ Σ_j w_j q(k_j) u(k_j x) = Σ_j Q_j u(k_j x),   (52)

where the w_j are (positive) weights corresponding to the quadrature formula, and in the last step we defined Q_j = w_j q(k_j) ≥ 0. This approximation can be made arbitrarily accurate by letting the number of quadrature points tend to infinity. Defining the correlation coefficients as before, we obtain

r_n = Σ_j p_j u(k_j x_n),   (53)

where the coefficients

p_j = Q_j / Σ_i Q_i   (54)

satisfy p_j ≥ 0 and Σ_j p_j = 1.
If we now consider a set of N points x_n, together with the correlation coefficients r_n, then we see that a point in the N-dimensional space of the (r_1, …, r_N) can be described as a weighted sum of points lying along the curve

c(y) = (u(y x_1), u(y x_2), …, u(y x_N)),   y ≥ 0,

i.e.,

r = Σ_j p_j c(k_j) → ∫_0^∞ dy p(y) c(y),   (55)

where we considered the transition from the discrete points k_j to a continuous variable y. Since p_j ≥ 0 and Σ_j p_j = 1, the point r must be located inside the convex volume containing all points on the curve c(y). It is clear that this convex envelope of the curve yields indeed the optimal general bounds on the r_i: every point within the convex envelope can be realized by choosing a set of points on the curve, with appropriate weights. Since the function u(y) depends on the dimension of the random field, the constraints will be different for different numbers of dimensions. Furthermore, the curve c depends on the choice of the x_n, and therefore the constraints will also depend on the choice of separations for which the correlation function is measured.
Unfortunately, we have not found a way to describe the convex envelope of the curve c(y) algebraically, and thus to obtain explicit expressions for the upper and lower bounds on the r_i. For now, we therefore present just a few simple examples.
For the one-dimensional case with two points, x_2 = 2 x_1, the curve reads c(y) = (cos(y x_1), cos(2 y x_1)); the same set of points is described by the curve (using trigonometric identities) c(φ) = (cos φ, 2 cos² φ − 1), with 0 ≤ φ ≤ π. Thus, the convex envelope of c is the region between the parabola r_2 = 2 r_1² − 1 and the line r_2 = +1, and we reobtain the bounds (11). For the choice x_2 = 3 x_1, the set of points of c is equivalent to that of the curve (cos φ, 4 cos³ φ − 3 cos φ), 0 ≤ φ ≤ π. The convex envelope can then be described by bounds on r_2, where the lower bound differs from −1 for r_1 > 1/2, and the upper bound differs from +1 for r_1 < −1/2. If we instead choose x_2 = α x_1, then for a generic (irrational) α, the curve fills the whole square −1 ≤ r_1 ≤ 1, −1 ≤ r_2 ≤ 1, which then also coincides with its convex envelope.
As the next example, we consider a two-dimensional field with x_2 = 2 x_1. In the left panel of Fig. 3, the curve c(y) = (J_0(y), J_0(2y)) is plotted. The convex envelope in this case can be constructed explicitly: the boundary of the smallest convex region which contains the curve is composed of four parts: (1) a section of the curve; (2) the straight line connecting two points of the curve and tangent to it at both; (3) a second section of the curve; and (4) the straight line that is tangent to the curve and passes through the point c(0) = (1, 1). In a similar way, the convex envelope can be constructed for a three-dimensional random field; see Fig. 3, middle panel.
Unfortunately, we have not yet found a systematic way to construct the convex envelope for n > 2, or to obtain explicit bounds on the correlation coefficients in these cases, although it is clear that the allowed region, at least in the r_{1}-r_{2}-plane, shrinks as one goes to higher-dimensional fields (see right panel of Fig. 3). The development of explicit bounds in higher dimensions is therefore of great interest.
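The shrinking of the allowed region with dimension can be illustrated numerically. The sketch below (an illustration, not the paper's code) compares the generating curves u(y) = cos y for a one-dimensional field and u(y) = sin(y)/y for a three-dimensional field; since sampling a 3-D isotropic field along a line yields a valid 1-D homogeneous field, every point of the 3-D curve must also satisfy the 1-D bound r_{2} >= 2 r_{1}^{2} - 1.

```python
import numpy as np

# Generating curves in the r1-r2 plane for x_2 = 2 x_1:
# 1-D field: u(y) = cos(y);  3-D field: u(y) = sin(y)/y
# (the 2-D case would use the Bessel function J_0 instead).
t = np.linspace(1e-6, 40.0, 20000)
r1_1d, r2_1d = np.cos(t), np.cos(2.0 * t)
r1_3d, r2_3d = np.sin(t) / t, np.sin(2.0 * t) / (2.0 * t)

# A 3-D isotropic field sampled along a line is a valid 1-D field, so the
# 1-D constraint r2 >= 2 r1^2 - 1 must hold on the 3-D curve as well.
assert np.all(r2_3d >= 2.0 * r1_3d**2 - 1.0 - 1e-9)

# The 3-D curve never reaches the corner r2 = -1 that the 1-D curve attains,
# consistent with the allowed region shrinking with dimension (Fig. 3).
assert r2_3d.min() > r2_1d.min()
```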
5 Transformation of variables
The finite bounds on the correlation coefficients clearly show that the likelihood of the correlation function cannot be a Gaussian. However, the bounds on r_{n} suggest that the Gaussian approximation for the likelihood could be better in terms of transformed variables, in which the allowed range of each r_{n} is mapped onto the real axis. Such a transformation would simplify the parametrization of the likelihood from numerical simulations. Defining

x_{n} = [2 r_{n} - r_{n}^{up} - r_{n}^{low}] / [r_{n}^{up} - r_{n}^{low}],   (56)

where r_{n}^{low} and r_{n}^{up} denote the lower and upper bounds on r_{n}, the allowed range of r_{n} is mapped onto -1 < x_{n} < 1. It should be noted that (56) is a coupled non-linear transformation of variables, since the bounds on r_{n} depend on the r_{k}, k < n. The mapping onto the real axis is then obtained by any function that maps (-1, 1) onto the real line; we choose

y_{n} = artanh(x_{n}) = (1/2) ln[(1 + x_{n})/(1 - x_{n})].   (57)
Thus, no bounds on the correlation coefficients are violated, even if the probability distribution of the y_{n} follows a multivariate Gaussian.
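As an illustration, the transformation can be sketched in code. The explicit forms of Eqs. (56) and (57) are reconstructed here under the assumption that (56) is the affine map of the allowed interval onto (-1, 1) and (57) is the artanh (Fisher z-type) mapping mentioned in Sect. 6; the helper `bounds_1d` encodes the one-dimensional bounds (11) and is purely illustrative.

```python
import numpy as np

def transform(r, bounds):
    """Map correlation coefficients r_n to unbounded variables y_n.

    bounds(n, prev) must return the (lower, upper) bound on r_n given the
    preceding coefficients prev = r[:n]; the bounds are coupled, as in (56).
    """
    y = []
    for n, rn in enumerate(r):
        lo, hi = bounds(n, r[:n])
        x = (2.0 * rn - hi - lo) / (hi - lo)   # affine map onto (-1, 1)
        y.append(np.arctanh(x))                # Fisher z-type map onto the reals
    return np.array(y)

def bounds_1d(n, prev):
    # Illustrative 1-D bounds: -1 <= r1 <= 1 and 2 r1^2 - 1 <= r2 <= 1 (Eq. 11).
    if n == 0:
        return -1.0, 1.0
    return 2.0 * prev[0] ** 2 - 1.0, 1.0

y = transform([0.3, 0.5], bounds_1d)
assert y.shape == (2,) and np.all(np.isfinite(y))
```

Because the bounds for index n are evaluated from the already-processed coefficients, the mapping is applied sequentially, mirroring the coupled nature of (56).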
We next consider the relation between the likelihood of the correlation function and the corresponding likelihood of the y_{n}. To shorten the notation, we drop the explicit dependence on the model parameters p in the following. Then,

L(ξ̂) = p(ξ̂(0)) p(ξ̂_{1}, ..., ξ̂_{n} | ξ̂(0)) = p(ξ̂(0)) L_{r}(r_{1}, ..., r_{n} | ξ̂(0)) = p(ξ̂(0)) L_{y}(y_{1}, ..., y_{n} | ξ̂(0)),   (58)

where in the first step we have used the product rule of probability theory and introduced the conditional probability density p(ξ̂_{1}, ..., ξ̂_{n} | ξ̂(0)), and where the next two steps define the likelihood in terms of the r_{n} and the y_{n}, respectively. Thus, the distributions of the r_{n} and y_{n} can depend explicitly on the value of ξ̂(0).
The relation between L_{r} and L_{y} is obtained from

L_{r}(r_{1}, ..., r_{n} | ξ̂(0)) = L_{y}(y_{1}, ..., y_{n} | ξ̂(0)) |det J|,   (59)

where the y_{i} are functions of the r_{j}, and the transformation matrix J is given by

J_{ij} = ∂y_{i} / ∂r_{j}.   (60)
Note that the partial derivatives ∂y_{i}/∂r_{j} vanish for j > i, which means that J is lower triangular, and thus its determinant is given by the product of the diagonal elements,

det J = ∏_{i=1}^{n} ∂y_{i}/∂r_{i}.   (61)
Thus, the transformation between the probability distribution of the r_{i} and that of the y_{i} is rather complicated, implying that the behaviour of these two probability distributions can be considerably different. In particular, even if L_{y} could be approximated by a Gaussian, the corresponding L_{r} would have a significantly different functional form; also the covariances of the two distributions will differ significantly.
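The determinant property used in Eq. (61) is quickly verified numerically: because ∂y_{i}/∂r_{j} = 0 for j > i, the Jacobian is triangular, and the determinant of a triangular matrix is the product of its diagonal entries. A minimal check, with a random matrix standing in for J:

```python
import numpy as np

# A random lower-triangular matrix stands in for the Jacobian J of Eq. (60),
# whose entries above the diagonal vanish because y_i depends only on the
# r_j with j <= i.
rng = np.random.default_rng(3)
J = np.tril(rng.normal(size=(5, 5)))

# det of a triangular matrix = product of its diagonal elements (Eq. (61)).
assert np.isclose(np.linalg.det(J), np.prod(np.diag(J)))
```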
6 Simulations
In order to illustrate the analytical results discussed in the previous sections and to explore the effects of the constraints on the shape of the likelihood of the correlation function, we have conducted some simple numerical experiments. We have generated realizations of a periodic one-dimensional Gaussian random field g on a regular grid according to

g_{j} = Σ_{k=0}^{N-1} ĝ_{k} e^{2π i j k / N},   (62)

where ĝ_{k} is the discrete Fourier transform of the random field and has a normal distribution of zero mean, with real and imaginary parts drawn from N(0, sqrt(P_{k}/2)), where P_{k} is the discretized power spectrum and N(μ, σ) is the Gaussian distribution with mean μ and standard deviation σ. From each realization, we measure the correlation function using the estimator

ξ̂_{k} = (1/N) Σ_{j=0}^{N-1} g_{j} g_{(j+k) mod N},   (63)

and calculate the correlation coefficients r_{i} = ξ̂_{i}/ξ̂_{0}. Note that this estimator of the correlation function explicitly employs the periodicity of the discrete random field. Defined in this way, the estimated correlation function is the Fourier transform of a power spectrum P'_{i} proportional to the squared amplitudes of the ĝ_{k}. Hence, P'_{i} is non-negative, and we thus expect the correlation function of each realization to obey the constraints derived before; indeed, this is the case.
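This experiment is easy to reproduce in outline. The sketch below (with an assumed grid size and power-spectrum shape, not the authors' actual code) draws a periodic Gaussian field via the FFT, applies the wrap-around estimator of Eq. (63), and checks that the implied power spectrum is non-negative, so that constraint (11) is respected.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
k = np.arange(N // 2 + 1)
P = np.exp(-0.5 * (k / 8.0) ** 2)    # assumed (Gaussian-shaped) power spectrum

# Fourier modes with zero mean and variance ~ P_k; irfft builds a real,
# periodic field on the grid (cf. Eq. (62)).
ghat = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(P / 2.0)
g = np.fft.irfft(ghat, n=N)

# Periodic ("wrap-around") estimator of Eq. (63).
xi = np.array([np.dot(g, np.roll(g, -j)) for j in range(N)]) / N

# Its DFT equals |fft(g)|^2 / N, i.e. a non-negative power spectrum P'.
P_est = np.fft.fft(xi).real
assert np.all(P_est >= -1e-12)

# Consequently the correlation coefficients obey the constraints, e.g. (11):
r = xi / xi[0]
assert 2.0 * r[1] ** 2 - 1.0 <= r[2] + 1e-12 and r[2] <= 1.0 + 1e-12
```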
Figure 4: Constraints in the r_{1}-r_{2}-plane (left panel) and in the r_{2}-r_{3}-plane (right panel), where r_{1} was constrained to -0.41 < r_{1} < -0.39. Each point corresponds to a correlation function measured from a realization of a one-dimensional Gaussian random field with a randomly drawn power spectrum and N=16 modes. Plotted as lines are the analytically determined constraints as given by Eqs. (12) and (14).
We illustrate the constraints in the r_{1}r_{2}plane (Eq. (12)) and in the r_{2}r_{3}plane for fixed r_{1} (Eq. (14)) in Fig. 4. In order to fully populate the allowed regions with data points, we do not use a single power spectrum for all realizations of the random field, but draw for each realization random positive values, uniformly distributed in [0,1], for each P_{i}. Note that the data points do not completely fill out the regions allowed by Eqs. (12) and (14), but are constrained to a slightly smaller region with piecewise linear boundaries. The reason for this is that we have used a random field with N=16 modes, whereas Eqs. (12) and (14) have been derived without reference to the specific way the random field is created and are valid also for an infinite number of modes.
The covariance matrix of the correlation function estimator (63) is given by
With this, we can compare the true distribution of the correlation function (as estimated from a large number of realizations of the Gaussian field) to the commonly used Gaussian approximation of the likelihood. As an example, we show the marginalized likelihoods of two pairs of correlation-function components in Fig. 6, where we have marginalized over all other components of the correlation function. We have used a random field with N=64 modes and a single Gaussian power spectrum. Even though the estimated likelihood and the Gaussian prediction have identical first and second moments, a Gaussian likelihood is clearly a bad approximation of the distribution of the correlation function.
Figure 5: Distribution of (r_{1}, r_{2}) for correlation functions measured from simulated Gaussian random fields with N=256 and random power spectra (similar to Fig. 4). Even though the points lie well inside the admissible range, the shape of the distribution is similar to the shape of the allowed region. 
We now investigate whether the transformation of variables from r_{n} to y_{n} given by Eqs. (56) and (57) leads to a likelihood of the y_{n} that is better described by a Gaussian. We find that this is indeed the case, as illustrated in Figs. 7 and 8 for the marginalized distributions in the r_{1}-r_{2}- and r_{2}-r_{3}-planes. In the left panels, we show the estimated distributions of the correlation coefficients, whereas the right panels contain the distributions after the transformation. The probability distribution in r-space ``feels'' the presence of the boundary between allowed and forbidden regions, even at the innermost contours, which lie quite far from the boundary. We also show the contours of the best-fitting bivariate Gaussian for comparison and see that the distribution in y-space is much better approximated by a Gaussian than the distribution of the original correlation coefficients. In addition, we note that the transformation seems to reduce the correlations between most of the components of the correlation function. The markedly more Gaussian distribution in the transformed variables may be related to the choice of the transformation, whose functional form is that of Fisher's z-transformation (Fisher 1915).
Figure 6: Marginalized likelihood of the correlation function components and ( left panel) and and ( right panel) as estimated from 10^{5} realizations of a onedimensional Gaussian random field with N=64 modes and a Gaussian power spectrum (black contours). The corresponding Gaussian likelihood with covariance matrix as predicted from the power spectrum (Eq. (64)) is shown by the red dashed contours. 
Figure 7: Marginalized distribution of r_{1} and r_{2} for a Gaussian random field with N=64 modes and Gaussian power spectrum (left panel), and the distribution of the corresponding transformed variables y_{1} and y_{2} (right panel). Shown as red dashed contours are the best-fitting bivariate Gaussian distributions; in the left panel, the constraint given by Eq. (12) is shown as a solid blue line.
Figure 8: Distribution of r_{2} and r_{3} for a Gaussian random field with N=64 modes and Gaussian power spectrum (left panel), where we set 0.05 < r_{1} < 0.15 and have marginalized over all other components of the correlation function, and the distribution of the corresponding transformed variables y_{2} and y_{3} (right panel). Shown as red dashed contours are the best-fitting bivariate Gaussian distributions; in the left panel, the constraint given by Eq. (14) for r_{1}=0.1 is shown as a solid blue line.
Finally, we wish to assess the importance of the constraints in a more realistic, two-dimensional setting, and consider as an example a weak lensing survey. Our strategy is as follows: we draw a large number of realizations of the shear correlation function ξ_{+}(θ), which is the two-dimensional Fourier transform of the convergence power spectrum (Kaiser 1992; Bartelmann & Schneider 2001), from a multivariate Gaussian likelihood. The covariance matrix for the likelihood function is computed under the assumption that the convergence is a Gaussian random field, using the methodology described in Joachimi et al. (2008). For each of these realizations, we compute the matrix
defined in Eq. (17). We then use this sample of correlation functions to compute a Monte-Carlo estimate of the integral

f = ∫ d^{n}ξ̂ L(ξ̂) H(ξ̂),   (65)

where L(ξ̂) denotes the Gaussian likelihood as given by Eq. (1), and H(ξ̂) = 1 if the matrix (17) is positive semidefinite and H(ξ̂) = 0 otherwise. We test for positive semidefiniteness using the Cholesky decomposition (Press et al. 1992). The fraction f measures the overlap of the Gaussian distribution with the allowed region. If f < 1, the Gaussian likelihood assigns a finite probability to regions containing correlation functions that do not correspond to a non-negative power spectrum and are therefore forbidden by the constraints discussed in this paper. We can therefore use f as a rough indicator of the validity of the assumption of a Gaussian likelihood. However, note that even if f is close to unity, the shape of the true likelihood might deviate significantly from a Gaussian distribution.
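A toy version of this Monte-Carlo estimate is sketched below. It is not the survey computation of the paper: the fiducial correlation function, its Gaussian "noise" covariance, and the Toeplitz form used for the matrix of Eq. (17) are all assumptions chosen only to demonstrate the Cholesky-based positive-semidefiniteness test and the counting of admissible draws.

```python
import numpy as np

def is_positive_semidefinite(C, eps=1e-12):
    """Cholesky-based test; a tiny jitter eps admits the semidefinite boundary."""
    try:
        np.linalg.cholesky(C + eps * np.eye(C.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

rng = np.random.default_rng(2)
n = 8
xi_mean = 0.8 ** np.arange(n)   # assumed fiducial correlation function
cov = 0.05 * np.eye(n)          # assumed covariance of the Gaussian likelihood

# Draw correlation functions from the Gaussian and build a Toeplitz matrix
# C_ij = xi_|i-j| from each (a stand-in for the matrix of Eq. (17)).
draws = rng.multivariate_normal(xi_mean, cov, size=2000)
admissible = [
    is_positive_semidefinite(
        np.array([[xi[abs(i - j)] for j in range(n)] for i in range(n)]))
    for xi in draws
]
frac = np.mean(admissible)      # Monte-Carlo estimate of the overlap integral
assert 0.0 <= frac <= 1.0
```

Cholesky decomposition fails exactly when the matrix is not positive definite, which makes it a cheap and robust admissibility test, as in the paper (Press et al. 1992).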
For the numerical experiment, we choose a WMAP5-like cosmology to compute the shear correlation function and its covariance matrix (approximating the shear field as Gaussian), and a redshift distribution of the source galaxies similar to the one found for the CFHTLS-Wide (Benjamin et al. 2007). The results of the numerical experiment are shown in Fig. 9, where we plot f as a function of the number of bins n (linear binning), keeping constant the maximum angular scale to which the correlation function is evaluated. We display the results for three different survey sizes (1, 10 and ). The constraints are more important for smaller survey areas and a larger number of bins. This is because, as the number of bins increases, the constraints on the admissible values of subsequent components of the correlation function become tighter, so that the correlation function is essentially determined by its first few components. Increasing the noise (small area) or increasing the number of bins therefore decreases the fraction of positive semidefinite correlation functions that can be drawn from the Gaussian likelihood. These results indicate that the likelihood of the correlation function might deviate significantly from a Gaussian even in realistic situations. It is expected that for real shear fields, which are not Gaussian on the angular scales considered here, the overlap fraction will deviate from unity even more. We speculate that the non-Gaussianity found in Hartlap et al. (2009) for the shear correlation functions might at least partly be caused by the constraint of positive semidefiniteness.
Figure 9: Fraction (Eq. (65)) of admissible (i.e. positive semidefinite) shear correlation functions drawn from a Gaussian likelihood plotted versus the number of linear angular bins in the interval . Solid, dashed and dotted curves are for surveys with survey areas of 1, 10 and . 
7 Conclusions and outlook
We have considered constraints that a correlation function needs to satisfy in order to correspond to a non-negative power spectrum. Using the covariance matrix method, we have derived explicit expressions for the upper and lower bounds on the correlation coefficients r_{n}; these were derived for the case that the spatial sampling occurs at points x_{j} = j x_{1}. This method yields optimal constraints for the correlation function of one-dimensional random fields, whereas the constraints are not optimal for higher-dimensional homogeneous and isotropic random fields. We have indicated a method with which such optimal constraints can in principle be derived for higher-dimensional fields and for non-linear spacing of grid points, and presented a few simple applications of this method; however, we have not yet obtained a systematic way of deriving explicit upper and lower bounds on the r_{n} in these cases. Finding those will be of considerable interest, since they are expected to be tighter than the corresponding bounds derived for the 1D case.
Using cosmic shear as an example, we have demonstrated that the Gaussian probability ellipsoid, which is obtained under the assumption that the likelihood function of the correlation function is a Gaussian characterized by the covariance matrix, spills over significantly into the forbidden region of correlation functions. This effect is even more serious than indicated here, since we have used for this analysis the constraints from Sect. 2, which are not optimal, as shown in Sect. 4. Hence, from this argument alone we conclude that the assumption of a Gaussian likelihood is not very realistic and probably leads to erroneous estimates of parameters and their confidence regions.
Even if the Gaussian likelihood ellipsoid is contained inside the allowed region, the true likelihood deviates from a Gaussian, as simple numerical experiments with one-dimensional Gaussian random fields have shown. Surprisingly, the shape of the resulting likelihood contours bears some resemblance to the shape of the boundary of the allowed region, even if the extent of the probability distribution is considerably smaller than the allowed region. The origin of this result is not understood. Most likely, the likelihood of the correlation function depends not only on its covariance matrix, but also on higher-order moments.
A non-linear coupled transformation of the correlation coefficients leads to a distribution that appears much more Gaussian (in the transformed variables); this fact may be connected to the Independent Components analyzed in Hartlap et al. (2009). Indeed, in the transformed variables the likelihood is not only much more Gaussian, but also less correlated, which supports the hypothesis of a connection between the constraints derived here and the ICA analysis of Hartlap et al. (2009). More extensive numerical tests may yield better insight into this connection. It must be stressed that such a result, if it can be established, would be of great importance, given that the determination of multivariate probability distributions from numerical simulations is prohibitively expensive.
An alternative route for understanding the connection between the constraints derived here and the shape of the likelihood function is the explicit calculation of the multivariate probability distribution of the correlation function for a Gaussian random field. The constraints on the r_{j} should be explicitly present in this probability distribution. Work on these issues is currently in progress.
The results of this paper can most likely be generalized to random fields which are not scalars, e.g., the polarization of radiation, or the orientation of objects. The cosmic shear correlation function ξ_{+} (e.g., Kaiser 1992; Schneider et al. 2002), which has been considered in Sect. 6, is equivalent to the correlation function of the underlying surface mass density κ, and thus behaves in the same way as that of a scalar field. However, the other cosmic shear correlation function, ξ_{-}, is qualitatively different, being a spin-4 quantity, for which the filter function J_{0}(kx) in (5) is replaced by J_{4}(kx).
The foremost aim of this paper was the derivation of exact constraints on correlation functions; we have not considered methods for measuring a correlation function from data. For example, in many cases the correlation function cannot be measured at zero lag, so that the correlation coefficients cannot be determined directly. Furthermore, one derives the correlation function from data in a given volume, and thus the measured correlation function will deviate from the ensemble average, even in the absence of noise. This effect has two different aspects: suppose for a moment that the observed field is one-dimensional and forced (or assumed) to be periodic. If the correlation function of such a field is measured using the definition (63), the measured correlation function will deviate from the ensemble average, but it will still correspond to a non-negative power spectrum, given by the squared Fourier transform of the realization of the field. Hence, in such a case, every measured correlation function will satisfy the constraints derived in Sect. 2. If the field has more than one dimension, but is still periodic, the correlation function measured by a generalization of (63) to higher dimensions will still obey the constraints from Sect. 2, for the same reason. However, the power spectrum of the realization of the field will in general not be isotropic, and thus the considerations of Sect. 4 do not apply strictly. In the more realistic case where periodicity cannot be assumed, one cannot measure the correlation function by ``wrapping around'' as in (63); in this case, there are ``boundary effects'', which become stronger as the separation approaches the size of the data field. Then there does not necessarily exist a non-negative power spectrum related to the measured correlation function through (3), and the constraints may not apply strictly. How important these effects are needs to be studied with a more extended set of simulations.
Acknowledgements
We thank Martin Kilbinger and Cristiano Porciani for constructive comments on the manuscript. This work was supported by the Deutsche Forschungsgemeinschaft in the framework of the Priority Programme 1177 on ``Galaxy Evolution'' and of the Transregional Cooperative Research Centre TRR 33 ``The Dark Universe'', and by the BonnCologne Graduate School of Physics and Astronomy.
References
Abrahamsen, P. 1997, Technical Report 917, Norwegian Computing Center, http://publications.nr.no/917_Rapport.pdf
Bartelmann, M., & Schneider, P. 2001, Phys. Rep., 340, 291
Benjamin, J., Heymans, C., Semboloni, E., et al. 2007, MNRAS, 381, 702
Fisher, R. A. 1915, Biometrika, 10, 507
Fu, L., Semboloni, E., Hoekstra, H., et al. 2008, A&A, 479, 9
Hartlap, J., Schrabback, T., Simon, P., & Schneider, P. 2009, A&A, 504, 689
Joachimi, B., Schneider, P., & Eifler, T. 2008, A&A, 477, 43
Kaiser, N. 1992, ApJ, 388, 272
Kurchan, J. 2002, Phys. Rev. E, 66, 017101
Okumura, T., Matsubara, T., Eisenstein, D. J., et al. 2008, ApJ, 676, 889
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes (Cambridge: Cambridge University Press)
Schneider, P., van Waerbeke, L., & Mellier, Y. 2002, A&A, 389, 729
Seljak, U., & Bertschinger, E. 1993, ApJ, 417, L9
Wolfram, S. 1996, The Mathematica Book (Cambridge: Cambridge University Press)
All Figures
Figure 1: For N=7, the original covariance matrix is shown, together with four transformations of it, described in the text, which leave the determinant unchanged. 
Figure 2: Upper panel: volume fraction V_{M} (Eq. (46)) as function of the number M of separations for which the correlation function is measured. Lower panel: V_{M}^{1/M} as function of M. One sees that the typical linear dimension of the allowed region decreases with M, meaning that the strong decrease of V_{M} with M is not just an effect of the dimensionality of the volume considered. 
Figure 3: Left panel: the curve, together with its convex envelope. The latter consists of two segments of the curve and two straight lines, one of which is tangent to the curve at two points, while the other is tangent to the curve at one point and intersects it at another. The corresponding values of the curve parameter are found as follows: the requirement that the lower straight-line segment be tangent to the curve at two points leads to four scalar equations for four unknowns, which have several solutions; the relevant one is the ``outermost'' one. For the upper straight-line segment, one employs the condition that the tangent at one point of the curve passes through another point of it; of the multiple solutions of these two scalar equations for the two unknowns, one takes the ``outermost'' one. Middle panel: in a similar way, the curve for a three-dimensional random field is plotted, together with its convex envelope. Right panel: comparison of the allowed regions in the r_{1}-r_{2}-plane in 1-3 dimensions.
Copyright ESO 2009