A&A 392, 1153-1174 (2002)
DOI: 10.1051/0004-6361:20020965

Smooth maps from clumpy data: Covariance analysis

M. Lombardi - P. Schneider

Institut für Astrophysik und Extraterrestrische Forschung, Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany

Received 23 January 2002 / Accepted 27 June 2002

Interpolation techniques play a central role in Astronomy, where irregularly sampled data often need to be converted into a smooth map. In a previous article (Lombardi & Schneider 2001, hereafter Paper I), we considered a widely used smoothing technique and evaluated the expectation value of the smoothed map under a number of natural hypotheses. Here we carry this analysis further and consider the variance of the smoothed map, represented by a two-point correlation function. We show that two main sources of noise contribute to the total error budget, and we derive several interesting properties of these two noise terms. The expressions obtained are also specialized to the limiting cases of low and high densities of measurements. A number of examples illustrate the results in practice.

Key words: methods: statistical - methods: analytical - methods: data analysis - gravitational lensing

1 Introduction

Raw astronomical data are very often discrete, in the sense that measurements are performed along a finite number of directions on the sky. In many cases, the discrete data are believed to be single measurements of a smooth underlying field. In such cases, it is desirable to reconstruct the original field using interpolation techniques. A typical example of the general situation just described is given by weak lensing mass reconstructions in clusters of galaxies. In this case, thousands of noisy estimates of the tidal field of the cluster (shear) can be obtained from the observed shapes of background galaxies whose images are distorted by the gravitational field of the cluster. All these measurements can then be combined to produce a smooth map of the cluster shear, which in turn is subsequently converted into a projected density map of the cluster mass distribution.

One of the most widely used interpolation techniques in Astronomy is based on a weighted average. More precisely, a positive weight function, describing the relative weight of a datum at the position $\vec \theta + \vec\phi$ on the point $\vec\theta$, is introduced. The weight function is often chosen to be of the form $w \bigl( \vert \vec\phi \vert \bigr)$, i.e. it depends only on the separation $\vert \vec\phi \vert$ of the two points considered. Normally, w is also a decreasing function of $\vert \vec\phi \vert$, in order to ensure that the largest contributions to the interpolated value at $\vec\theta$ come from nearby measurements. The data are then averaged using a weighted mean with the weights given by the function w. More precisely, calling $\hat f_n$ the nth datum, obtained at the position $\vec\theta_n$, the smooth map is defined as

\begin{displaymath}
\tilde f(\vec\theta) \equiv \frac{\sum_{n=1}^N \hat f_n w(\vec\theta - \vec\theta_n)}{\sum_{n=1}^N w(\vec\theta - \vec\theta_n)} ,
\end{displaymath} (1)

where N is the total number of objects. In a previous paper (Lombardi & Schneider 2001, hereafter Paper I) we evaluated the expectation value of this expression under the hypotheses that the $\hat f_n$ are unbiased estimates of $f(\vec\theta_n)$ (Eq. (2)) and that the positions $\{ \vec\theta_n \}$ are independent, uniformly distributed random variables (Eqs. (3) and (4)). In Paper I we have shown that

\begin{displaymath}
\bigl\langle \tilde f(\vec\theta) \bigr\rangle = \int f(\vec\theta') \, w_{\rm eff}(\vec\theta - \vec\theta') ~ {\rm d}^2 \theta' .
\end{displaymath} (5)

Thus, $\bigl\langle \tilde f \bigr\rangle$ is the convolution of the unknown field f with an effective weight $w_{\rm eff}$ which, in general, differs from the weight function w. We have also shown that $w_{\rm eff}$ has a shape "similar'' to that of w and converges to w when the object density $\rho $ is large; for finite $\rho $, however, $w_{\rm eff}$ is broader than w.
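As a practical illustration (ours, not part of the original analysis), the estimator (1) is straightforward to implement numerically; the function name, the Gaussian weight, and all numerical values below are arbitrary choices:

```python
import numpy as np

def smooth_map(theta, f_hat, theta_n, w):
    """Weighted-average estimator of Eq. (1) at the point theta.

    theta   : (2,) evaluation point
    f_hat   : (N,) measured data hat{f}_n
    theta_n : (N, 2) measurement positions
    w       : weight function of the separation |theta - theta_n|
    """
    weights = w(np.linalg.norm(theta_n - theta, axis=1))
    norm = weights.sum()
    if norm == 0.0:            # all weights vanish: f~ is undefined (cf. Sect. 4)
        return np.nan
    return np.dot(f_hat, weights) / norm

# example: Gaussian weight of scale 0.1; a constant field is reproduced exactly,
# since the weights cancel between numerator and denominator
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(50, 2))
w = lambda r: np.exp(-r**2 / (2 * 0.1**2))
value = smooth_map(np.array([0.5, 0.5]), np.full(50, 3.0), pos, w)
```

The constant-field check illustrates why Eq. (5) must have $\int w_{\rm eff} = 1$: the ratio in Eq. (1) is invariant under rescaling of the data by the weights.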

Here we proceed further with the statistical analysis by obtaining an expression for the two-point correlation function (covariance) of this estimator. More precisely, given two points $\vec\theta_A$ and $\vec\theta_B$, we consider the two-point correlation function of $\tilde f$, defined as

\begin{displaymath}
{\rm Cov}(\tilde f; \vec\theta_A, \vec\theta_B) \equiv \bigl\langle \tilde f(\vec\theta_A) \tilde f(\vec\theta_B) \bigr\rangle - \bigl\langle \tilde f(\vec\theta_A) \bigr\rangle \bigl\langle \tilde f(\vec\theta_B) \bigr\rangle .
\end{displaymath} (6)

In our calculations, similarly to Paper I, we assume that the $\hat f_n$ are unbiased and mutually independent estimates of $f(\vec\theta_n)$ (cf. Eq. (2)). We also assume that the $\{ \hat f_n \}$ have fixed variance $\sigma^2$, so that

\begin{displaymath}
\bigl\langle \bigl[ \hat f_n - f(\vec\theta_n) \bigr] \bigl[ \hat f_m - f(\vec\theta_m) \bigr] \bigr\rangle = \sigma^2 \delta_{nm} .
\end{displaymath} (7)
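Before turning to the analytic evaluation, it can be instructive to estimate the covariance (6) by brute force. The following Monte Carlo sketch (ours; the box geometry, Gaussian weight, and parameter values are arbitrary choices) averages over random Poisson configurations and noise realizations, discarding configurations where the estimator is undefined (the prescription discussed in Sect. 4):

```python
import numpy as np

def mc_covariance(theta_A, theta_B, f, w, rho, sigma,
                  half=1.0, n_real=2000, seed=1):
    """Monte Carlo estimate of Cov(f~; theta_A, theta_B), Eq. (6).

    Positions form a Poisson process of density rho on [-half, half]^2;
    configurations where f~ is undefined at either point are skipped.
    """
    rng = np.random.default_rng(seed)
    area = (2 * half)**2
    fa, fb = [], []
    for _ in range(n_real):
        N = rng.poisson(rho * area)
        pos = rng.uniform(-half, half, size=(N, 2))
        f_hat = f(pos) + sigma * rng.standard_normal(N)   # Eqs. (2) and (7)
        wa = w(np.linalg.norm(pos - theta_A, axis=1))
        wb = w(np.linalg.norm(pos - theta_B, axis=1))
        if wa.sum() == 0.0 or wb.sum() == 0.0:
            continue
        fa.append(np.dot(f_hat, wa) / wa.sum())
        fb.append(np.dot(f_hat, wb) / wb.sum())
    fa, fb = np.array(fa), np.array(fb)
    return np.mean(fa * fb) - fa.mean() * fb.mean()

# flat field f = 1: the signal-dependent noise vanishes and the covariance at
# theta_A = theta_B reduces to the measurement-error term, bounded by sigma^2
w = lambda r: np.exp(-r**2 / (2 * 0.3**2))
origin = np.array([0.0, 0.0])
var0 = mc_covariance(origin, origin, lambda p: np.ones(len(p)), w,
                     rho=10.0, sigma=1.0)
```

Such a brute-force estimate is what the analytic expressions derived below can be checked against.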

The paper is organized as follows. In Sect. 2 we summarize the results obtained in this paper. In Sect. 3 we derive the general expression for the covariance of the interpolation technique and we show that two main noise terms contribute to the total error. These results are then generalized in Sect. 4 to include the case of weight functions that are not strictly positive. A useful high-density expansion of the covariance is obtained in Sect. 5. Section 6 is devoted to the study of several interesting properties of the expressions obtained in the paper. Finally, in Sect. 7 we consider three simple weight functions and derive (analytically or numerically) the covariance for these cases. Four appendices on more technical topics complete the paper.

2 Summary

As mentioned in the introduction, the primary aim of this paper is the evaluation of the covariance (two-point correlation function) of the smoothing estimator (1) under the hypotheses that the measurements $\hat f_n$ are unbiased estimates of a field $f(\vec\theta)$ (Eq. (2)) and that the measurement positions $\{ \vec\theta_n \}$ are independent, uniformly distributed variables with density $\rho $. Hence, we do not allow for angular clustering of the positions $\{ \vec\theta_n \}$, and we also do not include the effects of a finite field in our calculations (these effects are expected to play a role at points close to the boundary of the region where data are available). Moreover, we suppose that the noise on the measurements $\{ \hat f_n \}$ is uncorrelated with the signal (i.e., that the variance $\sigma^2$ is constant on the field), and that the measurements are uncorrelated with each other. Finally, we stress that throughout the paper we assume a non-negative (i.e., positive or vanishing) weight function $w(\vec\theta) \ge 0$. Surprisingly, weight functions with arbitrary sign cannot be studied in our framework (see the discussion at the end of Sect. 4).

The results obtained in this paper can be summarized in the following points.

We evaluate analytically the two-point correlation function of $\tilde f(\vec\theta)$, showing that it is composed of two main terms:

\begin{displaymath}
{\rm Cov}(\tilde f; \vec\theta_A, \vec\theta_B) \equiv \bigl\langle \tilde f(\vec\theta_A) \tilde f(\vec\theta_B) \bigr\rangle - \bigl\langle \tilde f(\vec\theta_A) \bigr\rangle \bigl\langle \tilde f(\vec\theta_B) \bigr\rangle = T_\sigma + T_{\rm P} .
\end{displaymath} (8)

The term $T_\sigma $ is proportional to $\sigma^2$ and can thus be interpreted as the contribution to the covariance from measurement errors; the term $T_{\rm P}$ depends on the signal $f(\vec\theta)$ and can be interpreted as Poisson noise. These terms can be evaluated using the following set of equations:
\begin{displaymath}
Q(s_A, s_B) = \int_\Omega \bigl[ {\rm e}^{-s_A w_A(\vec\theta) - s_B w_B(\vec\theta)} - 1 \bigr] ~ {\rm d}^2 \theta ,
\end{displaymath} (9)

\begin{displaymath}
Y(s_A, s_B) = \exp \bigl[ \rho Q(s_A, s_B) \bigr] ,
\end{displaymath} (10)

\begin{displaymath}
C(w_A, w_B) = \rho^2 \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{-s_A w_A - s_B w_B} Y(s_A, s_B) = \rho^2 \mathcal L[Y](w_A, w_B) ,
\end{displaymath} (11)

\begin{displaymath}
T_\sigma = \frac{\sigma^2}{\rho} \int_\Omega {\rm d}^2 \theta ~ w_A(\vec\theta) w_B(\vec\theta) C \bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) ,
\end{displaymath} (12)

\begin{displaymath}
T_{\rm P} = \frac{1}{\rho} \int_\Omega {\rm d}^2 \theta ~ \bigl[ f(\vec\theta) \bigr]^2 w_A(\vec\theta) w_B(\vec\theta) C \bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) + \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2) \times \Bigl[ C \bigl( w_A(\vec\theta_1) + w_A(\vec\theta_2), w_B(\vec\theta_1) + w_B(\vec\theta_2) \bigr) - C_A \bigl( w_A(\vec\theta_1) \bigr) C_B \bigl( w_B(\vec\theta_2) \bigr) \Bigr] .
\end{displaymath} (13)

In the last two equations we used the notation $w_A(\vec\theta) = w(\vec\theta_A - \vec\theta)$, and similarly for $w_B(\vec\theta)$; moreover, the two functions $C_A$ and $C_B$ can be obtained from the following limits:

\begin{displaymath}
C_A(w_A) = \frac{1}{\rho} \lim_{w_B \rightarrow \infty} w_B C \bigl( w_A, w_B \bigr) , \qquad C_B(w_B) = \frac{1}{\rho} \lim_{w_A \rightarrow \infty} w_A C \bigl( w_A, w_B \bigr) .
\end{displaymath} (14)

We show that the quantity C(wA, wB) of Eq. (11), in the limit of high density $\rho $, converges to

\begin{displaymath}
C(w_A, w_B) \simeq \frac{\rho^2}{(\rho + w_A)(\rho + w_B)} + \frac{\rho^3 S_{20}}{(\rho + w_A)^3 (\rho + w_B)} + \frac{\rho^3 S_{11}}{(\rho + w_A)^2 (\rho + w_B)^2} + \frac{\rho^3 S_{02}}{(\rho + w_A)(\rho + w_B)^3} ,
\end{displaymath} (15)

where the $S_{ij}$ are the moments of the weight functions $w_A$ and $w_B$:

\begin{displaymath}
S_{ij} \equiv \int_\Omega {\rm d}^2 \theta ~ \bigl[ w_A(\vec\theta) \bigr]^i \bigl[ w_B(\vec\theta) \bigr]^j .
\end{displaymath} (16)

We derive a number of properties for the noise terms and the function C(wA, wB). In particular, we show (1) that $w_A(\vec\theta) w_B(\vec\theta) C\bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) \le \rho^2$ at every point $\vec\theta$; (2) that the measurement error has the upper bound $T_\sigma \le \sigma^2$; (3) that the same error has as lower bound the overlap integral $(\sigma^2/\rho) \int w_{{\rm eff}A}(\vec\theta) w_{{\rm eff}B}(\vec\theta) ~ {\rm d}^2 \theta$ of the two effective weights $w_{{\rm eff}A}(\vec\theta) = w_A(\vec\theta) C_A\bigl( w_A(\vec\theta) \bigr)$ and $w_{{\rm eff}B}(\vec\theta) = w_B(\vec\theta) C_B\bigl( w_B(\vec\theta) \bigr)$ (cf. Lombardi & Schneider 2001); (4) that the measurement noise converges to $T_\sigma \simeq \sigma^2$ at low densities ($\rho \rightarrow 0$) and to $T_\sigma \simeq \sigma^2 S_{11} / \rho$ at high densities ($\rho \rightarrow \infty$).
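The expansion (15) can be checked symbolically by inserting the second-order expansion of Q into Eqs. (10) and (11) and carrying out the two elementary Laplace integrals. The sketch below (ours, not from the paper) assumes a weight normalized so that $S_{10} = S_{01} = 1$, the convention under which Eq. (15) is written, and keeps the quadratic moments to first order:

```python
import sympy as sp

sA, sB, wA, wB, rho = sp.symbols('s_A s_B w_A w_B rho', positive=True)
S20, S11, S02 = sp.symbols('S_20 S_11 S_02', positive=True)

# Q to second order in (sA, sB) with S10 = S01 = 1:
#   Q ~ -sA - sB + sA^2 S20/2 + sA sB S11 + sB^2 S02/2,
# hence, to first order in the quadratic moments, Eq. (10) gives
Y = sp.exp(-rho * (sA + sB)) * (1 + rho * (sA**2 * S20 / 2
                                           + sA * sB * S11
                                           + sB**2 * S02 / 2))

# Eq. (11): C = rho^2 * double Laplace transform of Y, evaluated at (wA, wB)
C = rho**2 * sp.integrate(
        sp.integrate(sp.exp(-sA * wA - sB * wB) * Y, (sA, 0, sp.oo)),
        (sB, 0, sp.oo))

C_expected = (rho**2 / ((rho + wA) * (rho + wB))
              + rho**3 * S20 / ((rho + wA)**3 * (rho + wB))
              + rho**3 * S11 / ((rho + wA)**2 * (rho + wB)**2)
              + rho**3 * S02 / ((rho + wA) * (rho + wB)**3))

assert sp.simplify(C - C_expected) == 0   # reproduces Eq. (15)
```

Each term of Eq. (15) thus comes from one elementary integral of the form $\int_0^\infty s^k {\rm e}^{-s(w+\rho)} {\rm d}s = k!/(w+\rho)^{k+1}$.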

3 Evaluation of the covariance

3.1 Preliminaries

Before starting the analysis, let us introduce a simpler notation. In the following we will often drop the arguments $\vec\theta_A$ and $\vec\theta_B$ in ${\rm Cov}(\tilde f; \vec\theta_A, \vec\theta_B)$ and other related quantities. Note, in fact, that the problem is completely defined with the introduction of the two "shifted'' weight functions $w_A(\vec\theta) \equiv w(\vec\theta_A - \vec\theta)$ and $w_B(\vec\theta) \equiv w(\vec\theta_B - \vec\theta)$. We also call $\tilde f_A \equiv \tilde f(\vec\theta_A)$ and $\tilde f_B \equiv
\tilde f(\vec\theta_B)$ the values of $\tilde f$ at the two points of interest $\vec\theta_A$ and $\vec\theta_B$, so that

\begin{displaymath}
\tilde f_X = \frac{\sum_{n=1}^N \hat f_n w_X(\vec\theta_n)}{\sum_{n=1}^N w_X(\vec\theta_n)} \cdot
\end{displaymath} (17)

Hence, Eq. (6) can be rewritten in this notation as

\begin{displaymath}
{\rm Cov}(\tilde f) = \bigl\langle \tilde f_A \tilde f_B \bigr\rangle - \bigl\langle \tilde f_A \bigr\rangle \bigl\langle \tilde f_B \bigr\rangle .
\end{displaymath} (18)

Note that, using this notation, we are not taking advantage of the invariance upon translation of $w(\vec\theta)$ in Eq. (1); in other words, we are not using the fact that wA and wB are basically the same function shifted by $\vec\theta_A - \vec\theta_B$. Actually, all calculations can be carried out without using this property; however, we will explicitly point out simplifications that can be made using the invariance upon translation.

We would also like to say a few words about averages. Note that, as anticipated in Sect. 1, we need to carry out two averages, one with respect to $\{ \hat f_n \}$ (Eqs. (2) and (7)), and one with respect to $\{ \vec\theta_n \}$ (Eqs. (3) and (4)). Taking $\{ \vec\theta_n \}$ to be random variables is often reasonable because in Astronomy one does not have direct control over the positions where observations are made (measurements are normally performed in the direction of astronomical objects such as stars and galaxies, and thus in "almost random'' directions); it also has the advantage of letting us obtain general results, independent of any particular configuration of positions. Note, however, that taking $\{ \vec\theta_n \}$ to be independent variables is a strong simplification which might produce inaccurate results in some contexts (e.g., in the case of a direction-dependent density, or in the case of clustering; see Lombardi et al. 2001). Finally, since the number of observations N is itself a random variable, we need to perform first the average on $\{ \hat f_n \}$ and then the one on $\{ \vec\theta_n \}$.

In closing this section, we observe that in this paper, similarly to Paper I, we will almost always consider the smoothing problem on the plane, i.e. we will assume that the positions $\{ \vec\theta_n \}$ are vectors of $\mathbb R^2$. We proceed this way because in Astronomy the smoothing process often takes place on small regions of the celestial sphere, and thus on sets that can be well approximated by subsets of the plane. However, we stress that all the results stated here can be easily applied to smoothing processes that take place on different sets, such as the real axis $\mathbb R$ or the space $\mathbb R^3$.

3.2 Analytical solution

Let us now focus on the first term on the r.h.s. of Eq. (18). We have

\begin{displaymath}
\bigl\langle \tilde f_A \tilde f_B \bigr\rangle = \frac{1}{A^N} \int_\Omega {\rm d}^2 \theta_1 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \frac{\bigl\langle \bigl[ \sum_n \hat f_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m \hat f_m w_B(\vec\theta_m) \bigr] \bigr\rangle}{\bigl[ \sum_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m w_B(\vec\theta_m) \bigr]} \cdot
\end{displaymath} (19)

Note that the average in the r.h.s. of this equation is only with respect to $\{ \hat f_n \}$. Expanding the numerator in the integrand of this equation, we obtain N2 terms, N of which have n = m and N (N - 1) have $n \neq m$. We can then rewrite Eq. (19) above as

\begin{displaymath}
\bigl\langle \tilde f_A \tilde f_B \bigr\rangle = T_1 + T_2 ,
\end{displaymath} (20)

\begin{displaymath}
T_1 \equiv \frac{1}{A^N} \int_\Omega {\rm d}^2 \theta_1 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \frac{\sum_n \bigl\langle \hat f_n^2 \bigr\rangle w_A(\vec\theta_n) w_B(\vec\theta_n)}{\bigl[ \sum_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m w_B(\vec\theta_m) \bigr]} ,
\end{displaymath} (21)

\begin{displaymath}
T_2 \equiv \frac{1}{A^N} \int_\Omega {\rm d}^2 \theta_1 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \frac{\sum_{n \neq m} \bigl\langle \hat f_n \hat f_m \bigr\rangle w_A(\vec\theta_n) w_B(\vec\theta_m)}{\bigl[ \sum_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m w_B(\vec\theta_m) \bigr]} \cdot
\end{displaymath} (22)

Despite the apparent differences, these two terms can be simplified in a similar manner. Let us consider first T1. Using Eq. (7), we can evaluate the average $\langle \hat f^2_n \rangle = \sigma^2 + \bigl[ f(\vec\theta_n) \bigr]^2$. Since the positions $\{ \vec\theta_n \}$ appear as "dummy variables'' in Eq. (21), we can relabel them as follows:

\begin{displaymath}
T_1 = \frac{N}{A^N} \int_\Omega {\rm d}^2 \theta_1 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \frac{\bigl[ \sigma^2 + f^2(\vec\theta_1) \bigr] w_A(\vec\theta_1) w_B(\vec\theta_1)}{\bigl[ \sum_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m w_B(\vec\theta_m) \bigr]} \cdot
\end{displaymath} (23)

In order to simplify this equation, we use a technique similar to the one adopted in Paper I. More precisely, we split the two sums in the denominator of the integrand of Eq. (23), taking away the terms $w_A(\vec\theta_1)$ and $w_B(\vec\theta_1)$. Hence, we write

\begin{displaymath}
T_1 = \frac{1}{\rho} \int_\Omega {\rm d}^2 \theta_1 ~ \bigl[ \sigma^2 + f^2(\vec\theta_1) \bigr] w_A(\vec\theta_1) w_B(\vec\theta_1) C \bigl( w_A(\vec\theta_1), w_B(\vec\theta_1) \bigr) ,
\end{displaymath} (24)

where C(wA, wB) is a corrective factor given by

\begin{displaymath}
C(w_A, w_B) \equiv \frac{N^2}{A^{N+1}} \int_\Omega {\rm d}^2 \theta_2 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \frac{1}{\bigl[ w_A + \sum_{n=2}^N w_A(\vec\theta_n) \bigr] \bigl[ w_B + \sum_{m=2}^N w_B(\vec\theta_m) \bigr]} \cdot
\end{displaymath} (25)

The additional factor $\rho = N/A$ has been introduced to simplify some of the following equations. Note that in the definition of C, $w_A$ and $w_B$ are formally taken to be two real variables (instead of two real functions of the argument $\vec\theta_1$).

The definition of C above suggests defining two new random variables yA and yB:

\begin{displaymath}
y_X \equiv \sum_{n=2}^N w_X(\vec\theta_n) , \qquad {\rm with}\ X \in \{ A, B \} .
\end{displaymath} (26)

Note that the sum runs from n = 2 to n = N. If we could evaluate the combined probability distribution function py(yA, yB) of yA and yB, our problem would be solved: in fact, we could use this probability to write C(wA, wB) as follows

\begin{displaymath}
C(w_A, w_B) = \rho^2 \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ \frac{p_y(y_A, y_B)}{(w_A + y_A)(w_B + y_B)} \cdot
\end{displaymath} (27)

To obtain the probability distribution py(yA, yB), we need the combined probability distribution pw(wA, wB) of wA and wB. This distribution is implicitly defined by requiring that the probability that $w_A(\vec\theta)$ lies in the range $[ w_A, w_A + {\rm d}w_A ]$ and $w_B(\vec\theta)$ lies in the range $[ w_B, w_B + {\rm d}w_B ]$ is $p_w(w_A, w_B) ~ {\rm d}w_A ~ {\rm d}w_B$. We can evaluate pw(wA, wB) using

\begin{displaymath}
p_w(w_A, w_B) = \frac{1}{A} \int_\Omega \delta\bigl( w_A - w_A(\vec\theta) \bigr) \, \delta\bigl( w_B - w_B(\vec\theta) \bigr) ~ {\rm d}^2 \theta .
\end{displaymath} (28)

Turning back to (yA, yB), we can write a similar expression for py:

\begin{displaymath}
p_y(y_A, y_B) = \frac{1}{A^{N-1}} \int_\Omega {\rm d}^2 \theta_2 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \delta( y_A - w_{A2} - \dots - w_{AN} ) \, \delta( y_B - w_{B2} - \dots - w_{BN} ) ,
\end{displaymath} (29)

where for simplicity we have called $w_{Xn} = w_X(\vec\theta_n)$. Note that inserting this equation into Eq. (27) we recover Eq. (25), as expected. Actually, for our purposes it is more useful to consider yX to be the sum of the N-1 random variables $\{ w_{Xn} \}$. In other words, we consider the set of couples $\bigl\{ (w_{An}, w_{Bn}) \bigr\}$, made of the two weight functions at the various positions, as a set of N-1 independent two-dimensional random variables (wA, wB) with probability distribution pw(wA, wB). (Hence, similarly to Eq. (25), we consider the weight functions wX to be real variables instead of real functions; the independence of the positions $\vec\theta_n$ then implies the independence of the couples (wAn, wBn).) Taking this point of view, we can rewrite Eq. (29) as
\begin{displaymath}
p_y(y_A, y_B) = \int_0^\infty {\rm d}w_{A2} \int_0^\infty {\rm d}w_{B2} ~ p_w( w_{A2}, w_{B2} ) \cdots \int_0^\infty {\rm d}w_{AN} \int_0^\infty {\rm d}w_{BN} ~ p_w( w_{AN}, w_{BN} ) \, \delta( y_A - w_{A2} - \dots - w_{AN} ) \, \delta( y_B - w_{B2} - \dots - w_{BN} ) .
\end{displaymath} (30)

It is well known in Statistics that the sum of independent random variables with the same probability distribution can be better studied using Markov's method (see, e.g., Chandrasekhar 1989; see also Deguchi & Watson 1987 for an application to microlensing studies). This method is based on the use of Fourier transforms of the probability distributions pw and py. However, since we are dealing with non-negative quantities (we recall that we assumed $w(\vec\theta) \ge 0$), we can replace the Fourier transform with the Laplace transform, which turns out to be more appropriate for our problem (see Appendix D for a summary of the properties of Laplace transforms). Hence, we define W(sA, sB) and Y(sA, sB) to be the Laplace transforms of pw(wA, wB) and py(yA, yB), respectively. Note that, since both functions pw and py have two arguments, we need two arguments for the Laplace transforms as well:
$\displaystyle W(s_A, s_B) \equiv \mathcal L[p_w](s_A, s_B) = \int_0^\infty \rm dw_A
\int_0^\infty \rm dw_B {\rm e}^{-s_A w_A - s_B w_B} p_w(w_A, w_B) ,$     (31)
$\displaystyle Y(s_A, s_B) \equiv \mathcal L[p_y](s_A, s_B) = \int_0^\infty \rm dy_A
\int_0^\infty \rm dy_B {\rm e}^{-s_A y_A - s_B y_B} p_y(y_A, y_B) .$     (32)

We now use in these expressions Eq. (28) for pw and Eq. (30) for py, thus obtaining
\begin{displaymath}
W(s_A, s_B) = \frac{1}{A} \int_\Omega {\rm e}^{-s_A w_A(\vec\theta) - s_B w_B(\vec\theta)} ~ {\rm d}^2 \theta ,
\end{displaymath} (33)

\begin{displaymath}
Y(s_A, s_B) = \frac{1}{A^{N-1}} \int_\Omega {\rm d}^2 \theta_2 \cdots \int_\Omega {\rm d}^2 \theta_N \exp \biggl[ -s_A \sum_{n=2}^N w_{An} - s_B \sum_{m=2}^N w_{Bm} \biggr] = \bigl[ W(s_A, s_B) \bigr]^{N-1} .
\end{displaymath} (34)

Hence, py can in principle be obtained from the following scheme. First, we evaluate W(sA, sB) using Eq. (33), then we calculate Y(sA, sB) from Eq. (34), and finally we back-transform this function to obtain py(yA, yB).
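The factorization $Y = W^{N-1}$ in Eq. (34) is simply the statement that the Laplace transform of a sum of independent, identically distributed variables is the product of the individual transforms. A quick Monte Carlo check of this property (our sketch; the field size, weight profile, and value of s are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

A, n_sum = 10.0, 5                    # field size and number of summands (N - 1)
w = lambda t: np.exp(-t**2 / 2)       # Gaussian weight profile

# N - 1 iid positions per realization; w_n = w(theta_n) are then iid weights
theta = rng.uniform(-A / 2, A / 2, size=(200_000, n_sum))
wn = w(theta)

s = 0.7
Y_mc = np.exp(-s * wn.sum(axis=1)).mean()   # <exp(-s y)>, y = sum of the w_n
W_mc = np.exp(-s * wn[:, 0]).mean()         # <exp(-s w)> from a single summand
```

Within the Monte Carlo error, `Y_mc` agrees with `W_mc**n_sum`, which is the content of Eq. (34) for one Laplace argument; the two-argument case works identically.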

Actually, another, more convenient technique is viable. Following the path of Paper I, we now take the "continuous limit'' and treat N as a random variable. As explained in Sect. 1, we can take this limit using two equivalent approaches: (i) we can take the number N of objects to be a Poisson random variable with mean $\rho A$ (cf. Eq. (3)); or (ii) we can let N and A go to infinity while keeping the density $\rho = N/A$ constant.

The equivalence of the two methods can be shown as follows. Let us consider a large area $A' \supset A$, and let us suppose that the number $N' = \rho A'$ of objects inside A' is fixed. Since objects are randomly distributed inside A', the probability for each object to fall inside A is just A / A'. Hence N, the number of objects inside A, follows a binomial distribution:

\begin{displaymath}
p_N(N) = \binom{N'}{N} \left( \frac{A}{A'} \right)^N \left( \frac{A' - A}{A'} \right)^{N' - N} \cdot
\end{displaymath} (35)

If we now let N' go to infinity with $N' / A' = \rho$ fixed, the probability distribution for N converges (see, e.g. Eadie et al. 1971) to the Poisson distribution in Eq. (3).
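This convergence is easy to see numerically. The following sketch (ours; the values of $\rho$, A, and A' are arbitrary) compares the probability mass function of Eq. (35) with the Poisson law as A' grows at fixed $N'/A' = \rho$:

```python
import numpy as np
from scipy import stats

rho, A = 2.0, 5.0                      # density and area of the field of interest
k = np.arange(0, 40)
poisson_pmf = stats.poisson(rho * A).pmf(k)

errs = []
for A_prime in (50.0, 500.0, 5000.0):  # increasingly large surrounding field A'
    N_prime = int(rho * A_prime)       # fixed total number of objects in A'
    binom_pmf = stats.binom(N_prime, A / A_prime).pmf(k)   # Eq. (35)
    errs.append(np.abs(binom_pmf - poisson_pmf).max())
```

The maximum pointwise difference between the two distributions shrinks steadily as $A' \rightarrow \infty$, as expected.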

We will follow here the second strategy, i.e. we will take the limit $N, A \rightarrow \infty$ keeping $\rho = N/A$ constant. In the limit $A \rightarrow \infty$ the quantity W(sA, sB) goes to unity and thus is not useful for our purposes. Instead, it is convenient to define

\begin{displaymath}
Q(s_A, s_B) \equiv \int_\Omega \bigl[ {\rm e}^{-s_A w_A(\vec\theta) - s_B w_B(\vec\theta)} - 1 \bigr] ~ {\rm d}^2 \theta = A \bigl[ W(s_A, s_B) - 1 \bigr] .
\end{displaymath} (36)

This definition is sensible because, this way, Q remains finite for $A \rightarrow \infty$. In the continuous limit, Eq. (34) becomes

\begin{displaymath}
Y(s_A, s_B) = \lim_{N \rightarrow \infty} \left[ 1 + \frac{\rho Q(s_A, s_B)}{N} \right]^{N-1} = {\rm e}^{\rho Q(s_A, s_B)} .
\end{displaymath} (37)

In order to evaluate C(wA, wB), we rewrite its definition (27) as

C(w_A, w_B) = \rho^2 \int_0^\infty \rm dx_A \int_0^\infty \rm dx_B
\frac{\zeta_w(x_A, x_B)}{x_A x_B} ,
\end{displaymath} (38)

where $x_X \equiv y_X + w_X$ and

\zeta_w(x_A, x_B) \equiv {\rm H}(x_A - w_A) {\rm H}(x_B - w_B)
p_y(x_A - w_A, x_B - w_B) .
\end{displaymath} (39)

Here ${\rm H}(x_X - w_X)$ are Heaviside step functions centred at the positions wX, i.e.

\begin{displaymath}
{\rm H}(x) = \left\{ \begin{array}{ll} 0 & {\rm if}\ x < 0 , \\ 1 & {\rm otherwise.} \end{array} \right.
\end{displaymath} (40)

Note that $\zeta_w$ is basically a "shifted'' version of py. Looking back at Eq. (38), we can interpret the integration present in this equation as a very particular case of Laplace transform with vanishing argument. In other words, we can write

C(w_A, w_B) = \rho^2 \mathcal L[\zeta_w/x_A x_B](0, 0) .
\end{displaymath} (41)

Thus our problem is solved if we can obtain the Laplace transform of $\zeta_w / x_A x_B$ evaluated at sA = sB = 0. From the properties of Laplace transform (cf. Eq. (D.7)) we find

\begin{displaymath}
\mathcal L[\zeta_w(x_A, x_B) / x_A x_B](s_A, s_B) = \int_{s_A}^\infty {\rm d}s'_A \int_{s_B}^\infty {\rm d}s'_B ~ Z_w(s'_A, s'_B) ,
\end{displaymath} (42)

where Zw is the Laplace transform of $\zeta_w$:

Z_w(s_A, s_B) \equiv \mathcal L[\zeta_w](s_A, s_B) = {\rm e}^{-s_A w_A - s_B w_B}
Y(s_A, s_B) .
\end{displaymath} (43)

Combining together Eqs. (41)-(43) we finally obtain

\begin{displaymath}
C(w_A, w_B) = \rho^2 \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{-s_A w_A - s_B w_B} Y(s_A, s_B) = \rho^2 \mathcal L[Y](w_A, w_B) .
\end{displaymath} (44)
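Equation (44) can be cross-checked against the defining average (27) by direct simulation: sample Poisson configurations, form yA and yB, and average $\rho^2 / [(w_A + y_A)(w_B + y_B)]$. A one-dimensional sketch (ours; the formalism applies on $\mathbb R$ as noted in Sect. 3.1; the Gaussian weight, parameters, and quadrature grids are hand-picked choices):

```python
import numpy as np

rng = np.random.default_rng(5)
rho, a, A = 2.0, 1.0, 30.0
w = lambda t: np.exp(-t**2 / (2 * a**2))
xA, xB = 0.0, 0.5
wa, wb = 1.0, 1.0                     # evaluate C at the (arbitrary) point wA = wB = 1

# Monte Carlo side, Eq. (27): C = rho^2 <1 / ((wA + yA)(wB + yB))>
n_real, acc = 50_000, 0.0
for _ in range(n_real):
    N = rng.poisson(rho * A)
    t = rng.uniform(-A / 2, A / 2, size=N)
    acc += 1.0 / ((wa + w(t - xA).sum()) * (wb + w(t - xB).sum()))
C_mc = rho**2 * acc / n_real

# Laplace-transform side, Eq. (44): C = rho^2 L[Y](wA, wB), Y = exp(rho Q)
s = np.linspace(0.0, 12.0, 241)
x = np.linspace(-A / 2, A / 2, 1501)
wAx, wBx = w(x - xA), w(x - xB)
EB = np.exp(-np.outer(s, wBx))
Q = np.array([np.trapz(np.exp(-si * wAx) * EB - 1.0, x, axis=-1) for si in s])
Y = np.exp(rho * Q)
SA, SB = np.meshgrid(s, s, indexing='ij')
C_quad = rho**2 * np.trapz(np.trapz(np.exp(-SA * wa - SB * wb) * Y, s, axis=1), s)
```

The two estimates agree to within the Monte Carlo noise, confirming that the double Laplace transform of Y reproduces the average over the random configurations.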

In summary, the equations that can be used to evaluate $T_1$ are

Q(s_A, s_B) = \int_\Omega \bigl[ {\rm e}^{-s_A w_A(\vec\theta)
- s_B w_B(\vec\theta)} - 1 \bigr] ~ \rm d^2 \theta ,
\end{displaymath} (45)

Y(s_A, s_B) = \exp \bigl[ \rho Q(s_A, s_B) \bigr] ,
\end{displaymath} (46)

\begin{displaymath}
C(w_A, w_B) = \rho^2 \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{-s_A w_A - s_B w_B} Y(s_A, s_B) = \rho^2 \mathcal L[Y](w_A, w_B) ,
\end{displaymath} (47)

\begin{displaymath}
T_1 = \frac{1}{\rho} \int_\Omega {\rm d}^2 \theta ~ \bigl[ \sigma^2 + f^2(\vec\theta) \bigr] w_A(\vec\theta) w_B(\vec\theta) C \bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) .
\end{displaymath} (48)

These equations solve completely the first part of our problem, the determination of T1.
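As a concrete illustration (ours, not from the paper), the chain (45)-(48) can be evaluated numerically. The sketch below works in one dimension (cf. Sect. 3.1: the results carry over to $\mathbb R$), uses a Gaussian weight, sets $f \equiv 0$ so that only the noise part of $T_1$ survives, and replaces every integral by a trapezoidal sum on hand-chosen grids:

```python
import numpy as np

rho, sigma, a = 2.0, 1.0, 1.0          # density, noise, Gaussian weight scale
xA, xB = 0.0, 0.5                      # the two evaluation points

x = np.linspace(-12.0, 12.0, 481)      # spatial grid (the field "Omega")
s = np.linspace(0.0, 40.0, 401)        # Laplace-variable grid
w = lambda t: np.exp(-t**2 / (2 * a**2))
wA, wB = w(x - xA), w(x - xB)

# Eq. (45): Q(sA, sB) = int [exp(-sA wA(x) - sB wB(x)) - 1] dx
EB = np.exp(-np.outer(s, wB))                      # e^{-s_j wB(x)}
Q = np.array([np.trapz(np.exp(-si * wA) * EB - 1.0, x, axis=-1) for si in s])
Y = np.exp(rho * Q)                                # Eq. (46); Y[i, j] = Y(s_i, s_j)
SA, SB = np.meshgrid(s, s, indexing='ij')

def C(wa, wb):
    """Eq. (47): C(wa, wb) = rho^2 * double Laplace transform of Y at (wa, wb)."""
    inner = np.trapz(np.exp(-SA * wa - SB * wb) * Y, s, axis=1)
    return rho**2 * np.trapz(inner, s)

# Eq. (48) with f = 0: T1 = (sigma^2 / rho) * int wA wB C(wA, wB) dx
Cvals = np.array([C(va, vb) for va, vb in zip(wA, wB)])
T1_noise = sigma**2 / rho * np.trapz(wA * wB * Cvals, x)
```

Consistently with the properties listed in Sect. 2, the result satisfies $0 < T_1 \le \sigma^2$ in this pure-noise case, and $w_A w_B C \le \rho^2$ holds (up to quadrature error) at every grid point.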

Let us now consider the second term of Eq. (20), namely T2 (see Eq. (22)). We first evaluate the average over $\{ \hat f_n \}$ that appears in the numerator of the integrand of Eq. (22), obtaining $\langle \hat f_n \hat f_m \rangle = f(\vec\theta_n) f(\vec\theta_m)$ (cf. Eq. (7) with $n \neq m$). Then we relabel the "dummy'' variables $\{ \vec\theta_n \}$ similarly to what has been done for T1, thus obtaining

\begin{displaymath}
T_2 = \frac{N (N - 1)}{A^N} \int_\Omega {\rm d}^2 \theta_1 \cdots \int_\Omega {\rm d}^2 \theta_N ~ \frac{f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2)}{\bigl[ \sum_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m w_B(\vec\theta_m) \bigr]} \cdot
\end{displaymath} (49)

We now split off, in the two sums in the denominator, the terms $w_A(\vec\theta_1) + w_A(\vec\theta_2)$ and $w_B(\vec\theta_1) + w_B(\vec\theta_2)$, and define the new random variables

\begin{displaymath}
z_X \equiv \sum_{n=3}^N w_X(\vec\theta_n) , \qquad {\rm with}\ X \in \{ A, B \} .
\end{displaymath} (50)

Again, if we know the combined probability distribution pz(zA, zB) of zA and zB our problem is solved, since we can write (cf. Eqs. (24) and (27))
\begin{displaymath}
T_2 = \frac{N (N - 1)}{A^2} \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2) \int_0^\infty {\rm d}z_A \int_0^\infty {\rm d}z_B ~ \frac{p_z(z_A, z_B)}{\bigl[ w_A(\vec\theta_1) + w_A(\vec\theta_2) + z_A \bigr] \bigl[ w_B(\vec\theta_1) + w_B(\vec\theta_2) + z_B \bigr]} \cdot
\end{displaymath} (51)

Actually, in the continuous limit, zX is indistinguishable from yX (zX differs from yX only in that it is the sum of N-2 "weights'' instead of N-1; however, N goes to infinity in the continuous limit, and thus yX and zX converge to the same quantity). Thus we can rewrite Eq. (51) as

\begin{displaymath}
T_2 = \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2) \, C \bigl( w_A(\vec\theta_1) + w_A(\vec\theta_2), w_B(\vec\theta_1) + w_B(\vec\theta_2) \bigr) ,
\end{displaymath} (52)

where C is still given by Eq. (47).

Finally, in order to evaluate ${\rm Cov}(\tilde f)$, we still need the simple averages $\bigl\langle \tilde f_A \bigr\rangle$ and $\bigl\langle \tilde f_B \bigr\rangle$. These can be obtained directly using the technique described in Paper I, where we have shown that the set of equations to be used is

 \begin{displaymath}Q_X(s_X) \equiv \int_\Omega \bigl[ {\rm e}^{-s_X
w_X(\vec\theta)} - 1 \bigr] \; ~ \rm d^2 \theta ,
\end{displaymath} (53)

 \begin{displaymath}Y_X(s_X) \equiv \exp \bigl[ \rho Q_X(s_X) \bigr] ,
\end{displaymath} (54)

 \begin{displaymath}C_X(w_X) \equiv \rho \int_0^\infty \rm ds_X {\rm e}^{-s_X w_X}
Y_X(s_X) ,
\end{displaymath} (55)

 \begin{displaymath}
\bigl\langle \tilde f_X \bigr\rangle = \int_\Omega {\rm d}^2 \theta ~ f(\vec\theta) \, w_X(\vec\theta) \, C_X \bigl( w_X(\vec\theta) \bigr) .
\end{displaymath} (56)

We recall that in Paper I we called the combination $w_{{\rm eff}X}(\vec\theta) = w_X(\vec\theta) C_X \bigl( w_X(\vec\theta) \bigr)$ the effective weight (cf. Eq. (5) in the introduction). Alternatively, we can use the quantities Y(sA, sB) and C(wA, wB) to calculate the correcting factors CA and CB. From Eqs. (53) and (54) we immediately find
\begin{displaymath}
Q_A(s_A) = Q(s_A, 0) , \qquad Y_A(s_A) = Y(s_A, 0) ,
\end{displaymath} (57)

\begin{displaymath}
Q_B(s_B) = Q(0, s_B) , \qquad Y_B(s_B) = Y(0, s_B) .
\end{displaymath} (58)

Then, using the properties of Laplace transforms (cf. Eq. (D.10)), and comparing the definition of C(wA, wB) (Eq. (44)) with that of CX(wX) (Eq. (55)), we find

\begin{displaymath}
C_A(w_A) = \frac{1}{\rho} \lim_{w_B \rightarrow \infty} w_B C \bigl( w_A, w_B \bigr) , \qquad C_B(w_B) = \frac{1}{\rho} \lim_{w_A \rightarrow \infty} w_A C \bigl( w_A, w_B \bigr) .
\end{displaymath} (59)

We now have at our disposal the complete set of equations that can be used to determine the covariance of $\tilde f$.

In closing this subsection we make a few comments on the translation invariance of wX (see Sect. 3.1). Since $w_A(\vec\theta)$ and $w_B(\vec\theta)$ differ by an angular shift only, the two functions QA and QB are the same, so that $C_A$ coincides with $C_B$. Not surprisingly, the two effective weights $w_{{\rm eff}A}$ and $w_{{\rm eff}B}$ also differ only by a shift.
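For a single point, the chain (53)-(55) can also be evaluated numerically; a useful sanity check is that the resulting effective weight integrates to unity, so that Eq. (5) preserves a constant field. A one-dimensional sketch (ours; Gaussian weight, grids and truncations chosen by hand):

```python
import numpy as np

rho, a = 2.0, 1.0
x = np.linspace(-15.0, 15.0, 1501)          # spatial grid
s = np.linspace(0.0, 60.0, 3001)            # Laplace-variable grid
wx = np.exp(-x**2 / (2 * a**2))             # Gaussian weight w_X

# Eq. (53) on the real line: Q_X(s) = int [exp(-s w_X(x)) - 1] dx
Q = np.trapz(np.exp(-np.outer(s, wx)) - 1.0, x, axis=1)
Y = np.exp(rho * Q)                          # Eq. (54)

# Eq. (55): C_X(w) = rho int_0^inf exp(-s w) Y(s) ds, then w_eff = w_X C_X(w_X)
C_X = rho * np.trapz(np.exp(-np.outer(wx, s)) * Y, s, axis=1)
w_eff = wx * C_X
total = np.trapz(w_eff, x)                   # should be 1: <f~> = f for constant f
```

The normalization $\int w_{\rm eff} = 1$ comes out to high accuracy, and the second moment of $w_{\rm eff}$ exceeds that of the unit-normalized weight, in agreement with the broadening of $w_{\rm eff}$ at finite $\rho$ found in Paper I.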

3.3 Noise contributions

A simple preliminary analysis of Eqs. (48) and (52) allows us to recognize two main sources of noise. In fact, one term in Eq. (48) is proportional to $\sigma^2$ and is clearly related to the measurement errors on f, namely

\begin{displaymath}
T_\sigma \equiv \frac{\sigma^2}{\rho} \int_\Omega {\rm d}^2 \theta ~ w_A(\vec\theta) w_B(\vec\theta) C \bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) .
\end{displaymath} (60)

Other factors entering ${\rm Cov}(\tilde f)$ can be interpreted as Poisson noise. Hence, we call $T_{\rm P1} \equiv T_1 - T_\sigma$, $T_{\rm P2} \equiv T_2$, and $T_{\rm P3} \equiv \bigl\langle \tilde f_A \bigr\rangle \bigl\langle \tilde f_B \bigr\rangle$, so that the total Poisson noise is $T_{\rm P} \equiv T_{\rm P1} + T_{\rm P2} - T_{\rm P3}$. Note that the Poisson noise $T_{\rm P}$, in contrast to the measurement noise $T_\sigma $, depends strongly on the signal $f(\vec\theta)$.

The noise term $T_\sigma $ is quite intuitive and does not require a long explanation. We note here only that this term is independent of the field $f(\vec\theta)$ because we assumed measurements $\hat f_n$with fixed variance $\sigma^2$ (see Eq. (7)).

The Poisson noise $T_{\rm P}$ can be better understood with a simple example. Suppose that $f(\vec\theta)$ is not constant, and let us focus on a point where this function has a strong gradient. Then, when measuring $\tilde f$ at this point, we could obtain an excess of signal because of an overdensity of objects in the region where $f(\vec\theta)$ is large; the opposite happens if we have an overdensity of objects in the region where $f(\vec\theta)$ is small. This noise source, called Poisson noise, vanishes if the function $f(\vec\theta)$ is flat.
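The signal dependence of $T_{\rm P}$ can be demonstrated directly: switch off the measurement noise ($\sigma = 0$) and compare the scatter of $\tilde f$ for a flat field with that for a field with a gradient. A one-dimensional Monte Carlo sketch (ours; all parameter values are arbitrary):

```python
import numpy as np

def var_ftilde(f, rho=3.0, L=10.0, a=0.5, n_real=4000, seed=3):
    """Monte Carlo variance of f~(0) with noise-free data (sigma = 0):
    whatever scatter remains is pure Poisson noise T_P."""
    rng = np.random.default_rng(seed)
    w = lambda t: np.exp(-t**2 / (2 * a**2))
    vals = []
    for _ in range(n_real):
        N = rng.poisson(rho * L)
        t = rng.uniform(-L / 2, L / 2, size=N)
        wn = w(t)
        if wn.sum() == 0.0:
            continue
        vals.append(np.dot(f(t), wn) / wn.sum())
    return np.var(vals)

flat  = var_ftilde(lambda t: np.ones_like(t))   # flat field: T_P vanishes
steep = var_ftilde(lambda t: t)                 # strong gradient across theta = 0
```

For the flat field the estimator returns the constant exactly in every realization, while for the gradient the random clustering of objects on either side of the point produces a finite scatter.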

In the rest of this paper we will study the properties of the two-point correlation function. Before proceeding, however, we need to consider an important generalization of the results obtained here to the case of vanishing weights.

4 Vanishing weights

So far we have implicitly assumed that both wA and wB are always positive. In some cases, however, it might be interesting to consider vanishing weight functions (for example, functions with finite support). We need then to modify accordingly our equations.

When using vanishing weights, we might encounter situations where the denominator of Eq. (1) vanishes because all weight functions $w(\vec\theta - \vec\theta_n)$ vanish as well. In this case, the estimator $\tilde f(\vec\theta)$ cannot even be defined (we encounter the ratio 0 / 0), and any further statistical analysis is meaningless. In practice, when smoothing data using a vanishing weight function, one could just ignore the points $\vec\theta$ where the smoothed function $\tilde f(\vec\theta)$ is not defined, i.e. the points $\vec\theta$ for which $w(\vec\theta - \vec\theta_n) = 0$ for every n. This simple approach leads to smoothed maps with "holes'', i.e. maps defined only on subsets of the plane. Hence, if we choose this approach we need to modify accordingly the statistical analysis carried out in this paper.

This problem was already encountered in Paper I, where we used the following prescription. When using a finite-field weight function, we discard, for every configuration of measurement points $\{ \vec\theta_n \}$, the points $\vec\theta$ on the plane for which the smoothing $\tilde f(\vec\theta)$ is not defined. Then, when taking the average with respect to all possible configurations $\{ \vec\theta_n \}$ of $\tilde f(\vec\theta)$, we just exclude these configurations. We stress that, this way, the averages $\bigl\langle \tilde f(\vec\theta) \bigr\rangle$ and $\bigl\langle \tilde f(\vec\theta') \bigr\rangle$ of the smoothing (1) at two different points $\vec\theta$ and $\vec\theta'$ are effectively carried out using different ensembles: in one case we exclude the "bad configurations'' for $\vec\theta$, in the other case those for $\vec\theta'$.

The same prescription is also adopted here to evaluate the covariance of our estimator. Hence, when performing the ensemble average to estimate the covariance ${\rm Cov}(\tilde f; \vec\theta_A, \vec\theta_B)$, we explicitly exclude configurations where either $\tilde f_A$ or $\tilde f_B$ cannot be evaluated. This is implemented with a slight change in the definition of py, which in turn implies a change in Eq. (46) for Y. A rigorous generalization of the relevant equations can now be carried out without significant difficulties. However, the equations obtained are quite cumbersome and present some technical peculiarities. Hence, we prefer to postpone a complete discussion of vanishing weights until Appendix A; we report here only the main results.

As mentioned above, the basic problem with vanishing weights is that in some cases the estimator is not defined. Hence, it is convenient to define three probabilities, namely PA and PB, the probabilities, respectively, that $\tilde f_A$ and $\tilde f_B$ are not defined, and PAB, the probability that both quantities are not defined. Note that, because of the invariance upon translation of w, we have PA = PB. These probabilities can be estimated without difficulty. In fact, the quantity $\tilde f_X$ is not defined if and only if there is no object inside the support of wX. Since the number of points inside the support of wX follows a Poisson distribution, we have $P_X = \exp(-\rho \pi_X)$, where $\pi_X$ is the area of the support of wX. Similarly, calling $\pi_{A \cup B}$ the area of the union of the supports of wA and wB, we find $P_{AB} = \exp(-\rho \pi_{A \cup B})$. Using Eqs. (45) and (46) we can also verify the following relations
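These exponential expressions can be verified with a short Monte Carlo experiment. The sketch below (not part of the original analysis) uses an illustrative 1-d geometry: two unit-length supports that overlap by 1/2, so that $\pi_A = \pi_B = 1$ and $\pi_{A \cup B} = 3/2$, and the illustrative density $\rho = 2$.

```python
import numpy as np

# Monte Carlo check of P_X = exp(-rho pi_X) and P_AB = exp(-rho pi_{A u B})
# for a 1-d Poisson point process; supports and density are illustrative.
rng = np.random.default_rng(0)
rho, n_trials = 2.0, 500_000

# independent Poisson counts in the three disjoint pieces A\B, A^B, B\A
# (each of length 1/2, hence mean rho/2)
n1, n2, n3 = rng.poisson(rho * 0.5, size=(3, n_trials))
empty_A = (n1 + n2) == 0          # no object in the support of w_A
empty_B = (n2 + n3) == 0          # no object in the support of w_B

P_A_mc = empty_A.mean()                   # should approach exp(-rho * 1)
P_AB_mc = (empty_A & empty_B).mean()      # should approach exp(-rho * 3/2)
print(P_A_mc, np.exp(-rho))
print(P_AB_mc, np.exp(-1.5 * rho))
```

The agreement improves as the number of trials grows, as expected for independent Poisson counts on disjoint regions.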

\begin{displaymath}
P_{AB} = \lim_{s_A \rightarrow \infty,\ s_B \rightarrow \infty} Y(s_A, s_B) ,
\end{displaymath} (61)
\begin{displaymath}
P_A = \lim_{s_A \rightarrow 0^+,\ s_B \rightarrow \infty} Y(s_A, s_B) , \qquad
P_B = \lim_{s_A \rightarrow \infty,\ s_B \rightarrow 0^+} Y(s_A, s_B) .
\end{displaymath} (62)

Appendix A better clarifies the relationship between the limiting values of Y and the probabilities defined above. In the following we will use a simplified notation for limits, and we will write something like $P_A = Y(0^+, \infty)$ for the left equation in (62).

The only significant modification to the equations obtained above for vanishing weights is an overall factor in Eq. (47), which now becomes

\begin{displaymath}
C(w_A, w_B) = \frac{\rho^2}{1 - P_A - P_B + P_{AB}} \mathcal{L}[Y](w_A, w_B) .
\end{displaymath} (63)

The factor $1/(1 - P_A - P_B + P_{AB})$ is basically a renormalization; more precisely, it is introduced to take into account the fact that we are discarding cases where either $\tilde f_A$ or $\tilde f_B$ is not defined. Note, in fact, that in agreement with the inclusion-exclusion principle, $(1 - P_A - P_B + P_{AB})$ is the probability that both $\tilde f_A$ and $\tilde f_B$ are defined. Since the combination $(1 - P_A - P_B + P_{AB})$ enters several equations, we define

\begin{displaymath}
\nu \equiv \frac{1}{1 - P_A - P_B + P_{AB}} \cdot
\end{displaymath} (64)

Equation (63) is the most important correction to take into account for vanishing weights. Actually, there are also a number of peculiarities to consider when dealing with the probability py and its Laplace transform Y. Fortunately, however, these peculiarities have no significant consequence for our purpose and thus we can still safely use Eqs. (45) and (46). Again, we refer to Appendix A for a complete explanation.

In closing this section, we spend a few words on weight functions with arbitrary sign (i.e., functions $w(\vec\theta)$ that can be positive, vanishing, or negative depending on $\vec\theta$). As mentioned in Sect. 2, in this case a statistical study of the smoothing (1) cannot be carried out using our framework. In order to understand why this happens, let us consider the weight function

\begin{displaymath}
w(\vec\theta) = \left( 1 - \vert\vec\theta\vert^2 \right) \exp \left( - \vert\vec\theta\vert^2 \right) .
\end{displaymath} (65)

This function is continuous, positive for $\vert\vec\theta\vert < 1$, and quickly vanishes for large $\vert\vec\theta\vert$. Let us then consider separately the numerator and denominator of Eq. (1). The denominator can clearly be positive or negative; more precisely, the denominator is positive for points $\vec\theta$ close to at least one of the locations $\vec\theta_n$, and negative for points $\vec\theta$ which are in "voids'' (i.e., far away from the locations $\{ \vec\theta_n \}$). Hence, the lines where the denominator vanishes separate the regions of high density of locations from the regions of low density. Note that, even for very large average densities $\rho$, we still expect to find "voids'' of arbitrary size (in other words, for every finite density $\rho$, there is a non-vanishing probability of having no point inside an arbitrarily large region). As a result, there will always be regions where the denominator vanishes. The discussion for the numerator is similar but, in this case, we also need to take into account the field $f(\vec\theta)$. Hence, we still expect to have regions where the numerator is positive and regions where it is negative but, clearly, these regions will in general be different from the analogous regions for the denominator. As a result, when evaluating the ratio between the numerator and the denominator, we will obtain arbitrarily large values close to the lines where the denominator vanishes. Note also that these lines will change for different configurations of locations $\{ \vec\theta_n \}$. In summary, if the weight function is allowed to be negative, the denominator of Eq. (1) is no longer guaranteed to be positive, and infinities are expected when performing the ensemble average.

5 Moments expansion

In most applications, the density of objects is rather large. Hence, it is interesting to obtain an expansion for C(wA, wB) valid at high densities.

In Paper I we already obtained an expansion for CA(wA) (or, equivalently, CB(wB)) for $\rho \rightarrow \infty$:

\begin{displaymath}
C_A(w_A) \simeq \frac{\rho}{\rho + w_A} + \frac{\rho^2 S_{20}}{(\rho + w_A)^3} - \frac{\rho^2 S_{30}}{(\rho + w_A)^4} + \frac{\rho^2 S_{40} + 3 \rho^3 S_{20}^2}{(\rho + w_A)^5} \cdot
\end{displaymath} (66)

In this equation, Sij are the moments of the functions (wA, wB), defined as

\begin{displaymath}
S_{ij} \equiv \int_\Omega {\rm d}^2 \theta ~ \bigl[ w_A(\vec\theta) \bigr]^i \bigl[ w_B(\vec\theta) \bigr]^j .
\end{displaymath} (67)

Clearly, only the moments $S_{i0}$ enter Eq. (66), since the form of wB is not relevant for CA(wA). Similarly, the expression for CB(wB) contains only the moments $S_{0j}$. Note that for weight functions invariant upon translation we have $S_{ij} = S_{ji}$.
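The moments (67) are straightforward to evaluate numerically. The following sketch (with an illustrative separation $\delta = 0.8$, not taken from the text) uses two 1-d Gaussian weights and checks both the normalization $S_{10} = S_{01} = 1$ and the symmetry $S_{ij} = S_{ji}$ that holds for such translated, symmetric profiles.

```python
import numpy as np

# Numerical moments S_ij of Eq. (67) for two unit-variance 1-d Gaussian
# weights separated by delta (an illustrative choice).
delta = 0.8
x = np.linspace(-10.0, 10.0 + delta, 40001)
dx = x[1] - x[0]
w_A = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
w_B = np.exp(-(x - delta)**2 / 2) / np.sqrt(2 * np.pi)

def S(i, j):
    # rectangle-rule approximation of int w_A^i w_B^j dx
    return dx * np.sum(w_A**i * w_B**j)

print(S(1, 0), S(0, 1))          # both ~1: the weights are normalized
print(abs(S(2, 1) - S(1, 2)))    # ~0: S_ij = S_ji for these profiles
```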

Figure 1: The moment expansion of C(wA, wB) for 1-dimensional Gaussian weight functions wA(x) = wB(x) centered on 0 and with unit variance. The plot shows the various order approximations obtained using the method described in Sect. 5 (equations for the orders n=3 and n=4 are not explicitly reported in the text; see however Table B.1 in Appendix B). The density used is $\rho = 1$.

Figure 2: The function C(wA, wB) is monotonically decreasing with wA and wB, while wA wB C(wA, wB) (scaled in this plot) is monotonically increasing. The parameters used for this figure are the same as in Fig. 1. Note that, since PA = PB = 0, we have $\lim_{w_A \rightarrow 0^+} w_A w_B C(w_A, w_B) = \lim_{w_B \rightarrow 0^+} w_A w_B C(w_A, w_B) = 0$ in agreement with Eqs. (82) and (83); moreover $w_A w_B C(w_A, w_B) < \rho^2 = 1$ as expected from Eq. (84).

A similar expansion can be obtained for C(wA, wB). The calculations are basically a generalization of what was done in Paper I for C(w) and can be found in Appendix B. Here we report only the final result:

\begin{displaymath}
C(w_A, w_B) \simeq \frac{\rho^2}{(\rho + w_A) (\rho + w_B)} + \frac{\rho^3 S_{20}}{(\rho + w_A)^3 (\rho + w_B)} + \frac{\rho^3 S_{11}}{(\rho + w_A)^2 (\rho + w_B)^2} + \frac{\rho^3 S_{02}}{(\rho + w_A) (\rho + w_B)^3} \cdot
\end{displaymath} (68)

We note that using this expansion and Eqs. (59) we can recover the first terms of Eq. (66), as expected. Figure 1 shows the results of applying this expansion to a Gaussian weight. For clarity, we have considered in this figure (and in others shown below) a 1-dimensional smoothing instead of the 2-dimensional case discussed in the text, and we have used x as the spatial variable instead of $\vec\theta$. The figure refers to two identical Gaussian weight functions with vanishing average and unit variance. A comparison of this figure with Fig. 2 of Paper I shows that the convergence here is much slower. Nevertheless, Eq. (68) will be very useful to investigate some important limiting cases in the next section.
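The expansion (68) can also be cross-checked against a brute-force evaluation of $C(w_A, w_B) = \rho^2 \mathcal L[Y](w_A, w_B)$ built directly from Eqs. (45)-(47). The sketch below is an illustration only: it assumes two identical unit-variance 1-d Gaussian weights (as in Fig. 1) but the illustrative density $\rho = 5$, where the first-order expansion should already be reasonably accurate; grid sizes are ad-hoc choices.

```python
import numpy as np

rho = 5.0
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # w_A(x) = w_B(x)

# midpoint grid in the Laplace variables s_A, s_B
ds, n_s = 0.01, 400
s = (np.arange(n_s) + 0.5) * ds
E = np.exp(-np.outer(s, w))                    # e^{-s_i w(x_k)}
Q = dx * (E @ E.T - x.size)                    # Q(s_i, s_j), 1-d analogue of Eq. (45)
Y = np.exp(rho * Q)                            # Eq. (46)

# direct double Laplace transform at w_A = w_B = w(0)
w0 = w[np.argmin(np.abs(x))]
C_direct = rho**2 * ds**2 * np.sum(np.exp(-w0 * np.add.outer(s, s)) * Y)

# first-order moment expansion, Eq. (68); here S_20 = S_11 = S_02
S20 = dx * np.sum(w**2)
C_expansion = rho**2 / (rho + w0)**2 + 3 * rho**3 * S20 / (rho + w0)**4
print(C_direct, C_expansion)
```

The two numbers agree to within a few percent at this density, while at $\rho = 1$ (the value of Fig. 1) the discrepancy is much larger, in line with the slow convergence noted above.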

6 Properties

In this section we will study in detail the two noise terms $T_\sigma$ and $T_{\rm P}$ introduced in Sect. 3.3, showing their properties and considering several limiting cases. The results obtained are of clear interest in themselves; for example, we will derive here upper and lower limits for the measurement error $T_\sigma$ that can be used at low and high densities. Moreover, this section helps us understand the results obtained so far, and in particular the peculiarities of vanishing weights.

6.1 Normalization

A simple normalization property for C(wA, wB) can be derived, similarly to what we have already done for the average of $\tilde f$ in Paper I. Suppose that $f(\vec\theta) = 1$ and that no errors are present on the measurements, so that $\sigma^2 = 0$. In this case we will always measure $\tilde f(\vec\theta) = 1$ (see Eq. (1)), so that $\bigl\langle \tilde f_A \bigr\rangle = \bigl\langle \tilde f_B \bigr\rangle = 1$, $\bigl\langle \tilde f_A \tilde f_B \bigr\rangle = 1$, and no error is expected on $\tilde f$. This result can also be recovered using the analytical expressions obtained so far. Let us first consider the simpler case of non-vanishing weights.

Using Eqs. (47) and (48), we can write the term $T_{\rm P1}$ in the case $f(\vec\theta) = 1$ as

\begin{displaymath}
T_{\rm P1} = \rho \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{\rho Q(s_A, s_B)} \int_\Omega {\rm d}^2 \theta ~ w_A(\vec\theta) w_B(\vec\theta) {\rm e}^{-s_A w_A(\vec\theta) - s_B w_B(\vec\theta)} .
\end{displaymath} (69)

The last integrand in this equation can be rewritten as $\partial^2 Q / \partial s_A \partial s_B$ (cf. the definition of Q, Eq. (45)):

\begin{displaymath}
T_{\rm P1} = \rho \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{\rho Q(s_A, s_B)} \frac{\partial^2 Q(s_A, s_B)}{\partial s_A \partial s_B} \cdot
\end{displaymath} (70)

Analogously, for $T_{\rm P2}$ we obtain (cf. Eq. (52))
\begin{eqnarray*}
T_{\rm P2} & = & \rho^2 \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{\rho Q(s_A, s_B)} \int_\Omega {\rm d}^2 \theta_1 ~ w_A(\vec\theta_1) {\rm e}^{-s_A w_A(\vec\theta_1) - s_B w_B(\vec\theta_1)} \int_\Omega {\rm d}^2 \theta_2 ~ w_B(\vec\theta_2) {\rm e}^{-s_A w_A(\vec\theta_2) - s_B w_B(\vec\theta_2)} \\
& = & \rho^2 \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{\rho Q(s_A, s_B)} \frac{\partial Q(s_A, s_B)}{\partial s_A} \frac{\partial Q(s_A, s_B)}{\partial s_B} \cdot
\end{eqnarray*} (71)

We can integrate this expression by parts taking $\rho {\rm e}^{\rho Q} ~ (\partial Q / \partial s_B) = \partial \exp(\rho Q) / \partial s_B$ as the differential term:

\begin{displaymath}
T_{\rm P2} = \rho \int_0^\infty {\rm d}s_A \biggl\{ \biggl[ \frac{\partial Q(s_A, s_B)}{\partial s_A} {\rm e}^{\rho Q(s_A, s_B)} \biggr]_{s_B = 0}^{s_B = \infty} - \int_0^\infty {\rm d}s_B ~ {\rm e}^{\rho Q(s_A, s_B)} \frac{\partial^2 Q(s_A, s_B)}{\partial s_A \partial s_B} \biggr\} \cdot
\end{displaymath} (72)

We now observe that the last term in Eq. (72) is, apart from its sign, identical to what we found in Eq. (70). Hence, the sum $T_{\rm P1} + T_{\rm P2}$ is
\begin{eqnarray*}
T_{\rm P1} + T_{\rm P2} & = & \rho \int_0^\infty {\rm d}s_A ~ \biggl[ \frac{\partial Q(s_A, s_B)}{\partial s_A} {\rm e}^{\rho Q(s_A, s_B)} \biggr]_{s_B = 0}^{s_B = \infty} = \biggl[ \bigl[ {\rm e}^{\rho Q(s_A, s_B)} \bigr]_{s_A = 0}^{s_A = \infty} \biggr]_{s_B = 0}^{s_B = \infty} \\
& = & Y(\infty, \infty) - Y(\infty, 0^+) - Y(0^+, \infty) + Y(0^+, 0^+) = 1 .
\end{eqnarray*} (73)

The last equation holds because, for non-vanishing weights, Y(0+, 0+) = 1 and all other terms vanish (cf. Eqs. (61)-(62)). Hence, as expected, $\bigl\langle \tilde f_A \tilde f_B \bigr\rangle = T_{\rm P1} + T_{\rm P2} = 1 = \bigl\langle \tilde f_A \bigr\rangle \bigl\langle \tilde f_B \bigr\rangle$.

In the case of vanishing weights, we can still use Eqs. (70) and (72) with an additional factor $\nu$ (due to the extra factor in Eq. (63)). The last step in Eq. (73) then becomes

\begin{displaymath}
T_{\rm P1} + T_{\rm P2} = \nu \bigl[ Y(\infty, \infty) - Y(\infty, 0^+) - Y(0^+, \infty) + Y(0^+, 0^+) \bigr] = 1 .
\end{displaymath} (74)

The last equality holds since now Y does not vanish for large (sA, sB) (see again Eqs. (61)-(62)).

6.2 Scaling

Similarly to what was already shown in Paper I, for all expressions encountered so far some scaling invariance properties hold.

First, we note that, although we have assumed that the weight functions wA and wB are normalized to unity, all results are clearly independent of their actual normalization. Hence, a trivial scaling property holds: All results (and in particular the final expression for ${\rm Cov}(\tilde f)$) are left unchanged by the transformation $w(\vec\theta) \mapsto k w(\vec\theta)$ or, equivalently,

\begin{displaymath}
w_A(\vec\theta) \mapsto k w_A(\vec\theta) , \qquad w_B(\vec\theta) \mapsto k w_B(\vec\theta) .
\end{displaymath} (75)

A more interesting scaling property is the following. Consider the transformation

\begin{displaymath}
w(\vec\theta) \mapsto k^2 w(k \vec\theta) , \qquad \rho \mapsto k^2 \rho ,
\end{displaymath} (76)

where the exponent in both factors $k^2$ must be changed according to the dimension of the $\vec\theta$ vector space (two in our case). If we apply this transformation, then the expression for ${\rm Cov}(\tilde f)$ is transformed according to

\begin{displaymath}
{\rm Cov}(\tilde f; \vec\theta_A, \vec\theta_B) \mapsto {\rm Cov}(\tilde f; k \vec\theta_A, k \vec\theta_B) .
\end{displaymath} (77)

This invariance suggests that the shape of ${\rm Cov}(\tilde f)$ is controlled by the expected number of objects for which the two weight functions are significantly different from zero. Hence, similarly to what was done in Paper I, we define the two weight areas $\mathcal{A}_A$ and $\mathcal{A}_B$ as

\begin{displaymath}
\mathcal{A}_X \equiv \biggl[ \int_\Omega \bigl[ w_X(\vec\theta) \bigr]^2 ~ {\rm d}^2 \theta \biggr]^{-1} =
\begin{cases}
S_{20}^{-1} & {\rm if} \ X = A , \\
S_{02}^{-1} & {\rm if} \ X = B .
\end{cases}
\end{displaymath} (78)

For weight functions invariant upon translation we have $\mathcal{A}_A = \mathcal{A}_B$. We call $\mathcal{N}_X \equiv \rho \mathcal{A}_X$ the weight number of objects (again, $\mathcal{N}_A = \mathcal{N}_B$ because of the invariance upon translation). Note that this quantity is left unchanged by the scaling (76). Similar definitions hold for the effective weight $w_{{\rm eff}X}(\vec\theta) \equiv w_X(\vec\theta) C_X\bigl( w_X(\vec\theta) \bigr)$ and the effective number of objects $\mathcal{N}_{{\rm eff}X} \equiv \rho \mathcal{A}_{{\rm eff}X}$.

6.3 Behavior of C

In order to better understand the properties of C, it is useful to briefly consider its behavior as a function of the weights wA and wB.

We observe that, since Y(sA, sB) > 0 for every (sA, sB) (see Eq. (46)), C(wA, wB) decreases if either wA or wB increases. In order to study the behavior of the quantity wA wB C(wA, wB) that enters the noise term T1, we consider the quantity wA C(wA, wB):

\begin{displaymath}
w_A C(w_A, w_B) = \nu \rho^2 \int_{0^-}^\infty {\rm d}s_B ~ \biggl[ Y(0^-, s_B) + \int_{0^-}^\infty {\rm d}s_A \biggl( \frac{\partial Y(s_A, s_B)}{\partial s_A} \biggr) {\rm e}^{-s_A w_A} \biggr] {\rm e}^{-s_B w_B} .
\end{displaymath} (79)

This equation can be shown by integrating by parts the integral over sA. The partial derivative required in Eq. (79) can be evaluated from Eq. (46):

\begin{displaymath}
\frac{\partial Y(s_A, s_B)}{\partial s_A} = \rho \frac{\partial Q(s_A, s_B)}{\partial s_A} {\rm e}^{\rho Q(s_A, s_B)} \le 0 .
\end{displaymath} (80)

Since this derivative is non-positive, we can deduce that the integral over sA in Eq. (79) increases with wA, and thus wA C(wA, wB) also increases as wA increases. Similarly, it can be shown that wB C(wA, wB) increases as wB increases. In summary, the quantity wA wB C(wA, wB) behaves as wA wB, in the sense that its partial derivatives have the same sign as the partial derivatives of wA wB (see Fig. 2). Also, since C(wA, wB) decreases if either wA or wB increases, we can deduce that wA wB C(wA, wB) is "broader'' than wA wB.

Since $C\bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr)$ is positive, the function $w_A(\vec\theta) w_B(\vec\theta) C\bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr)$ shares the same support as $w_A(\vec\theta) w_B(\vec\theta)$. It is also interesting to study the limits of wA wB C(wA, wB) at high and low values of wA and wB. From the properties of the Laplace transform (see Eq. (D.10)), we have

\begin{displaymath}
\lim_{w_A \rightarrow 0^+,\ w_B \rightarrow 0^+} w_A w_B C(w_A, w_B) = \nu \rho^2 \lim_{s_A \rightarrow \infty,\ s_B \rightarrow \infty} Y(s_A, s_B) = \nu \rho^2 P_{AB} ,
\end{displaymath} (81)

where Eq. (61) has been used in the second equality. Hence, the quantity wA wB C(wA, wB) goes to zero only if PAB = 0. In other cases, we expect a discontinuity at wA = wB = 0. Similarly, using Eqs. (61)-(62) we find
\begin{displaymath}
\lim_{w_A \rightarrow \infty,\ w_B \rightarrow 0^+} w_A w_B C(w_A, w_B) = \nu \rho^2 \lim_{s_A \rightarrow 0^+,\ s_B \rightarrow \infty} Y(s_A, s_B) = \nu \rho^2 P_A ,
\end{displaymath} (82)
\begin{displaymath}
\lim_{w_A \rightarrow 0^+,\ w_B \rightarrow \infty} w_A w_B C(w_A, w_B) = \nu \rho^2 \lim_{s_A \rightarrow \infty,\ s_B \rightarrow 0^+} Y(s_A, s_B) = \nu \rho^2 P_B ,
\end{displaymath} (83)
\begin{displaymath}
\lim_{w_A \rightarrow \infty,\ w_B \rightarrow \infty} w_A w_B C(w_A, w_B) = \nu \rho^2 \lim_{s_A \rightarrow 0^+,\ s_B \rightarrow 0^+} Y(s_A, s_B) = \nu \rho^2 .
\end{displaymath} (84)

Since wA wB C(wA, wB) increases with both wA and wB, the last equation above places an upper limit on this quantity:

\begin{displaymath}
w_A w_B C(w_A, w_B) \le \nu \rho^2 .
\end{displaymath} (85)

6.4 Large separations

Suppose that the two points $\vec\theta_A$ and $\vec\theta_B$ are far away from each other, so that $w_A(\vec\theta) w_B(\vec\theta)$ is very close to zero everywhere. In this situation we can greatly simplify our equations.

If $\vec\theta_A$ is far away from $\vec\theta_B$, then $w_A(\vec\theta)$ and $w_B(\vec\theta)$ are never significantly different from zero at the same position $\vec\theta$. In this case, the integral in the definition of Q(sA, sB) (see Eq. (45)) can be split into two integrals that correspond to QA and QB (Eq. (53)):

\begin{displaymath}
Q(s_A, s_B) \simeq Q_A(s_A) + Q_B(s_B) , \qquad Y(s_A, s_B) \simeq Y_A(s_A) Y_B(s_B) , \qquad C(w_A, w_B) \simeq C_A(w_A) C_B(w_B) .
\end{displaymath} (86)

Hence, if the two weight functions wA and wB do not have significant overlap, the function C(wA, wB) reduces to the product of the two correcting functions CA and CB.
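The additivity $Q(s_A, s_B) \simeq Q_A(s_A) + Q_B(s_B)$ at large separations is easy to verify numerically. The sketch below (an illustration with hypothetical parameters: 1-d Gaussian weights, separation $\delta = 8$, probe values $s_A = 1.5$, $s_B = 2.5$) evaluates the overlap term of Eq. (88), which is non-negative and essentially zero when the weights barely overlap.

```python
import numpy as np

delta = 8.0                      # large separation: negligible overlap
x = np.linspace(-8.0, 16.0, 4801)
dx = x[1] - x[0]
w_A = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
w_B = np.exp(-(x - delta)**2 / 2) / np.sqrt(2 * np.pi)

def Q(sA, sB):                   # 1-d analogue of Eq. (45)
    return dx * np.sum(np.exp(-sA * w_A - sB * w_B) - 1.0)

sA, sB = 1.5, 2.5
# Eq. (88): Q - Q_A - Q_B >= 0, and ~0 for well-separated weights
overlap = Q(sA, sB) - Q(sA, 0.0) - Q(0.0, sB)
print(overlap)
```

For smaller separations the same quantity becomes clearly positive, in agreement with the inequality (89) derived below.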

In general, it can be shown that $C(w_A, w_B) \ge C_A(w_A) C_B(w_B)$. In fact, we have

\begin{displaymath}
C(w_A, w_B) - C_A(w_A) C_B(w_B) = \rho^2 \int_0^\infty {\rm d}s_A \int_0^\infty {\rm d}s_B ~ {\rm e}^{-s_A w_A - s_B w_B} \bigl[ {\rm e}^{\rho Q(s_A, s_B)} - {\rm e}^{\rho Q_A(s_A) + \rho Q_B(s_B)} \bigr] .
\end{displaymath} (87)

We now observe that

\begin{displaymath}
Q(s_A, s_B) - Q_A(s_A) - Q_B(s_B) = \int_\Omega \bigl[ {\rm e}^{-s_A w_A(\vec\theta)} - 1 \bigr] \bigl[ {\rm e}^{-s_B w_B(\vec\theta)} - 1 \bigr] ~ {\rm d}^2 \theta \ge 0 .
\end{displaymath} (88)

Hence, $Q(s_A, s_B) \ge Q_A(s_A) + Q_B(s_B)$, and the difference between the two sides of this inequality is an indication of the overlap between the two weight functions wA and wB. Since the exponential function is monotonic, we find $Y(s_A, s_B) \ge Y_A(s_A) Y_B(s_B)$ and thus

\begin{displaymath}
C(w_A, w_B) \ge C_A(w_A) C_B(w_B) .
\end{displaymath} (89)

6.5 Upper and lower limits for $T_\sigma$

The normalization property shown in Sect. 6.1 can also be used to obtain an upper limit for $T_\sigma$. We observe, in fact, that $T_\sigma$ is indistinguishable from $\sigma^2 T_{\rm P1}$ for a constant function $f(\vec\theta) = 1$. This case, however, has already been considered above in Sect. 6.1: There we have shown that $T_{\rm P1} + T_{\rm P2} = 1$. Since $T_{\rm P2} \ge 0$, we find the relation $T_\sigma \le \sigma^2$.

The property just obtained has a simple interpretation. As shown by Eq. (60), $T_\sigma$ is proportional to $1/\rho$ and thus we might expect this quantity to be unbounded from above. In reality, even when we are dealing with a very small density of objects, the estimator (1) "forces'' us to use at least one object. This point has already been discussed in Paper I, where we showed that the number of effective objects, $\mathcal{N}_{\rm eff}$, is always larger than unity. The upper limit found for $T_\sigma$ can be interpreted using the same argument. Note that this result also holds for weight functions with finite support.
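The bound $T_\sigma \le \sigma^2$ can be illustrated by a Monte Carlo sketch (not the paper's computation; the density, box size, and Gaussian weight below are illustrative choices). For $f = 0$ the smoothed value at a point is a weighted mean of pure measurement noise, so its variance over Poisson configurations, with undefined cases discarded as prescribed, must stay below $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, L, sigma = 1.0, 20.0, 1.0   # illustrative density, box length, noise
n_trials = 20_000

est = []
while len(est) < n_trials:
    n = rng.poisson(rho * L)
    if n == 0:
        continue                  # prescription: discard undefined configurations
    x_n = rng.uniform(-L / 2, L / 2, size=n)
    w_n = np.exp(-x_n**2 / 2)     # Gaussian weight, evaluated at theta = 0
    f_n = rng.normal(0.0, sigma, size=n)  # noisy measurements of f = 0
    est.append(np.sum(w_n * f_n) / np.sum(w_n))   # Eq. (1) at theta = 0

T_sigma = np.var(est)
print(T_sigma)                    # below sigma^2 = 1, even at this low density
```

For each configuration the conditional variance is $\sigma^2 \sum_n w_n^2 / (\sum_n w_n)^2 \le \sigma^2$, which is the mechanism behind the bound.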

A lower limit for $T_\sigma $, instead, can be obtained from the inequality (89):

\begin{displaymath}
T_\sigma \ge \frac{\sigma^2}{\rho} \int_\Omega w_A(\vec\theta) C_A\bigl( w_A(\vec\theta) \bigr) w_B(\vec\theta) C_B\bigl( w_B(\vec\theta) \bigr) ~ {\rm d}^2 \theta = \frac{\sigma^2}{\rho} \int_\Omega w_{{\rm eff}A}(\vec\theta) ~ w_{{\rm eff}B}(\vec\theta) ~ {\rm d}^2 \theta .
\end{displaymath} (90)

Hence, the error $T_\sigma $ is larger than a convolution of the two effective weight functions. In case of finite-field weight functions, the limit just obtained must be corrected with a factor $\nu$. The argument to derive Eq. (90) is then slightly more complicated because of the presence of the PX probabilities. However, using the relation $P_A P_B \le P_{AB}$, we can recover Eq. (90) with the aforementioned corrective factor.

6.6 Limit of low and high densities

In the limit $\rho \rightarrow 0$ we can obtain simple expressions for the noise terms. If $\rho $ vanishes, we have Y(sA, sB) = 1 (cf. Eq. (46)) and thus

\begin{displaymath}
C(w_A, w_B) \simeq \frac{\nu \rho^2}{w_A w_B} \cdot
\end{displaymath} (91)

In this equation we have assumed wA wB > 0. Note that we have reached here the upper limit indicated by Eq. (85). In the same limit, $\rho \rightarrow 0$, $P_X \simeq 1 - \pi_X \rho$ and $\nu \simeq 1 / \rho \pi_{A \cap B}$, where $\pi_{A \cap B} = \pi_A + \pi_B - \pi_{A \cup B}$ is the area of the intersection of the supports of wA and wB. Hence we find

\begin{displaymath}
C(w_A, w_B) \simeq \frac{\rho}{\pi_{A \cap B} w_A w_B} \cdot
\end{displaymath} (92)

Analogously, in the same limit, we have found in Paper I

\begin{displaymath}
C_X(w_X) \simeq \frac{1}{\pi_X w_X} ,
\end{displaymath} (93)

where wX > 0 has been assumed. We can then proceed to evaluate the various terms. For $T_\sigma $ we obtain the expression

\begin{displaymath}
T_\sigma \simeq \frac{\sigma^2}{\rho} \int_{\pi_{A \cap B}} \frac{\rho}{\pi_{A \cap B}} ~ {\rm d}^2 \theta = \sigma^2 .
\end{displaymath} (94)

Note that the integral has been evaluated only on the subset of the plane where wA wB > 0; the case where this product vanishes, in fact, need not be considered because the quantity wA wB C(wA, wB) vanishes as well. Exactly the same result holds for weight functions with infinite support. Hence, when $\rho \rightarrow 0$ we reach the upper limit discussed in Sect. 6.5 for $T_\sigma$.

Equation (94) can be better appreciated with the following argument. As the density $\rho$ approaches zero, the probability of having two objects on $\pi_{A \cup B}$ vanishes. Because of the prescription regarding vanishing weights (cf. beginning of Sect. 4), the ensemble average in our limit is performed with one and only one object in $\pi_{A \cap B}$. Since we have only one object, it is basically used with unit weight in the average (17), and thus the measurement noise is just given by $T_\sigma = \sigma^2$.

Let us now consider the limit at low densities of the Poisson noise, which, we recall, has been split into three terms, $T_{\rm P1}$, $T_{\rm P2}$, and $T_{\rm P3}$ (see Sect. 3.3). Inserting Eq. (92) into Eq. (24), we find for $T_{\rm P1}$

\begin{displaymath}
T_{\rm P1} \simeq \frac{1}{\rho} \int_{\pi_{A \cap B}} f^2(\vec\theta) \frac{\rho}{\pi_{A \cap B}} ~ {\rm d}^2 \theta = \bigl\langle f^2 \bigr\rangle_{\pi_{A \cap B}} ,
\end{displaymath} (95)

where $\bigl\langle f^2 \bigr\rangle_{\pi_{A \cap B}}$ denotes the simple average of f2 on the set $\pi_{A \cap B}$. Hence, $T_{\rm P1}$ converges to the average of f2 on the intersection of the supports of wA and wB. Again, we can explain this result using an argument similar to the one used for Eq. (94). Regarding $T_{\rm P2} \equiv T_2$, we observe that this term is of first order in $\rho$ because C(wA, wB) is of first order (cf. Eqs. (92) and (52)). We can then safely ignore this term in our limit $\rho \rightarrow 0$. Finally, as shown in Paper I, at low densities the expectation value for $\tilde f_X$ is a simple average of f on the support of wX, i.e. $\langle \tilde f_X \rangle \simeq \langle f \rangle_{\pi_X}$. Hence, $T_{\rm P3} = \langle f \rangle_{\pi_A} \langle f \rangle_{\pi_B}$ and the Poisson noise in the limit of small densities is given by

\begin{displaymath}
T_{\rm P} = \bigl\langle f^2 \bigr\rangle_{\pi_{A \cap B}} - \langle f \rangle_{\pi_A} \langle f \rangle_{\pi_B} .
\end{displaymath} (96)

In case of a constant function $f(\vec\theta)$, this expression vanishes as expected. Surprisingly, in general, we cannot say that $T_{\rm P} \ge 0$. Rather, if $w_A \neq w_B$, and if in particular the two weight functions have different supports, we might have a negative $T_{\rm P}$. Suppose, for example, that f vanishes on the intersection of the two supports $\pi_{A \cap B}$, but is otherwise positive. In this case, the first term in the r.h.s. of Eq. (96) vanishes, while the second term contributes with a negative sign, and thus $T_{\rm P} < 0$. On the other hand, if wA = wB then $T_{\rm P}$ has to be non-negative.

We now consider the opposite limiting case, namely high density. In this case, it is convenient to use the moment expansion (68). Since $T_\sigma$ and $T_{\rm P1}$ have an overall factor $1/\rho$ in their definitions (cf. Eq. (60)), we can simply take the 0th order for C(wA, wB), thus obtaining

\begin{displaymath}
T_\sigma \simeq \frac{\sigma^2}{\rho} \int w_A(\vec\theta) w_B(\vec\theta) ~ {\rm d}^2 \theta , \qquad
T_{\rm P1} \simeq \frac{1}{\rho} \int w_A(\vec\theta) w_B(\vec\theta) f^2(\vec\theta) ~ {\rm d}^2 \theta .
\end{displaymath} (97)

For $T_{\rm P2}$ and $T_{\rm P3}$, instead, we need to use a first-order expansion in $1/\rho$ for C(wA, wB). This can be done by using the first terms of the series (66) and (68), and by expanding all fractions in powers of $1/\rho$. Inserting the result into the definitions of $T_{\rm P2}$ and $T_{\rm P3}$ we obtain
\begin{displaymath}
T_{\rm P2} \simeq \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2) \biggl[ 1 - \frac{w_A(\vec\theta_1) + w_A(\vec\theta_2)}{\rho} - \frac{w_B(\vec\theta_1) + w_B(\vec\theta_2)}{\rho} + \frac{S_{20} + S_{11} + S_{02}}{\rho} \biggr] ,
\end{displaymath} (98)
\begin{displaymath}
T_{\rm P3} \simeq \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2) \biggl[ 1 - \frac{w_A(\vec\theta_1)}{\rho} - \frac{w_B(\vec\theta_2)}{\rho} + \frac{S_{20} + S_{02}}{\rho} \biggr] \cdot
\end{displaymath} (99)

Note that we have dropped, in these equations, terms of order higher than $1/\rho$. The difference $T_{\rm P2} - T_{\rm P3}$ is

\begin{displaymath}
T_{\rm P2} - T_{\rm P3} \simeq \frac{1}{\rho} \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2) \bigl[ S_{11} - w_A(\vec\theta_2) - w_B(\vec\theta_1) \bigr] .
\end{displaymath} (100)

Using Eqs. (100) and (97), we can verify that $T_{\rm P}$ vanishes if f is constant, as expected:
\begin{eqnarray*}
T_{\rm P1} + T_{\rm P2} - T_{\rm P3} & \simeq & \frac{S_{11}}{\rho} + \frac{1}{\rho} \int_\Omega {\rm d}^2 \theta_1 \int_\Omega {\rm d}^2 \theta_2 ~ \bigl[ S_{11} w_A(\vec\theta_1) w_B(\vec\theta_2) - w_A(\vec\theta_1) w_A(\vec\theta_2) w_B(\vec\theta_2) - w_A(\vec\theta_1) w_B(\vec\theta_2) w_B(\vec\theta_1) \bigr] \\
& = & \frac{S_{11}}{\rho} + \frac{1}{\rho} \bigl[ S_{11} - S_{11} - S_{11} \bigr] = 0 ,
\end{eqnarray*} (101)

where the normalization of w has been used. Also, it is apparent that all noise sources, including Poisson noise, are proportional to $1/\rho$ at high densities.

In order to further investigate the properties of Poisson noise at high densities, we write it in a more compact form. Let us define the average of a function $g(\vec\theta)$ weighted with $q(\vec\theta)$ as

\begin{displaymath}
\langle g \rangle_q \equiv \biggl[ \int_\Omega g(\vec\theta) q(\vec\theta) ~ {\rm d}^2 \theta \biggr] \biggm/ \biggl[ \int_\Omega q(\vec\theta) ~ {\rm d}^2 \theta \biggr] .
\end{displaymath} (102)

Using this definition we can rearrange Eqs. (97) and (100) in the form

\begin{displaymath}
T_{\rm P} = \frac{S_{11}}{\rho} \Bigl[ \bigl\langle f^2 \bigr\rangle_{w_A w_B} - \langle f \rangle_{w_A w_B}^2 + \bigl( \langle f \rangle_{w_A w_B} - \langle f \rangle_{w_A} \bigr) \bigl( \langle f \rangle_{w_A w_B} - \langle f \rangle_{w_B} \bigr) \Bigr] .
\end{displaymath} (103)

This expression suggests that the Poisson noise is actually made of two different terms, $\bigl\langle f^2 \bigr\rangle_{w_A w_B} - \langle f \rangle_{w_A w_B}^2$ and $\bigl( \langle f \rangle_{w_A w_B} - \langle f \rangle_{w_A} \bigr) \bigl( \langle f \rangle_{w_A w_B} - \langle f \rangle_{w_B} \bigr)$. The first term is proportional to the difference between two averages of f2 and f; both averages are performed using wA wB as weight. Hence, this term is controlled by the "internal scatter'' of f on points where both weight functions are significantly different from zero; it is always non-negative. The second term is made of averages of f using different weight functions. It can be either positive or negative if $w_A \neq w_B$. Actually, as already seen in the limiting case $\rho \rightarrow 0$, the overall Poisson noise need not be positive, and anti-correlation can be present in some cases.

6.7 Limit of high and low frequencies

The strong dependence of the Poisson noise on the function $f(\vec\theta)$ makes an analytical estimate of this noise contribution extremely difficult in the general case. However, it is still possible to study the behavior of $T_{\rm P}$ in two important limiting cases, which we now describe.

Suppose that the function $f(\vec\theta)$ does not change significantly on the scale length of the weight functions $w_A(\vec\theta)$ and $w_B(\vec\theta)$ (or, in other words, that the power spectrum of f has a peak at significantly lower frequencies than the power spectra of wA and wB). In this case, we can take the function f as a constant in the integrals of Eq. (13), and apply the results of Sect. 6.1. Hence, in the limit of low frequencies, the Poisson noise vanishes.

Suppose now, instead, that the function $f(\vec\theta)$ does not have any general trend on the scale length of the weight functions, but that instead changes at significantly smaller scales (again, this behavior is better described in terms of power spectra: We require here that the power spectrum of f has a peak at high frequencies, while it vanishes for the frequencies where the power spectra of wA and wB are significantly different from zero). In this case, we can assume that integrals such as

\begin{displaymath}
\int_\Omega f(\vec\theta) w_X(\vec\theta) ~ {\rm d}^2 \theta \simeq 0 , \qquad X \in \{ A, B \} ,
\end{displaymath} (104)

vanish approximately, because the average of f on large scales vanishes (remember that we are assuming that f has no general trend on large scales). Similarly, the integrals that appear in $T_{\rm P2}$ and $T_{\rm P3}$ vanish as well. In this case, then, the only contribution to the Poisson noise arises from $T_{\rm P1}$, which can be easily evaluated:

\begin{displaymath}
T_{\rm P} \simeq T_{\rm P1} \simeq \frac{\bigl\langle \vert f \vert^2 \bigr\rangle}{\rho} \int_\Omega w_A(\vec\theta) w_B(\vec\theta) C\bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) ~ {\rm d}^2 \theta ,
\end{displaymath} (105)

where we have denoted with $\bigl\langle \vert f \vert^2 \bigr\rangle$ the average of |f|2 on large scales. Hence we finally obtain

\begin{displaymath}
T_{\rm P} \simeq \frac{\bigl\langle \vert f \vert^2 \bigr\rangle}{\sigma^2} T_\sigma .
\end{displaymath} (106)

The results discussed in this section can also be numerically verified in simple cases. Figure 8, for example, shows the Poisson noise expected in the measurement of a periodic field when using two Gaussian weight functions (see Sect. 7.2 for details). From this figure, we see that the Poisson noise increases with the frequency of the field f, and quickly attains a maximum value at high frequencies. Moreover, the same figure shows that, in agreement with Eq. (106), the Poisson noise at the maximum is simply related to the measurement noise $T_\sigma $ (cf. Fig. 7 for $\rho = 2$).

7 Examples

Similarly to what has been done in Paper I, in this section we consider three typical weight functions, namely a top-hat, a Gaussian, and a parabolic weight. For simplicity, we will consider 1-dimensional cases only; this also has some advantages when representing the results in figures. Hence, we will use x instead of $\vec\theta$ as the spatial variable.

7.1 Top-hat

Figure 3: The value of $C(1,1)/\rho $ for top-hat weights as a function of the density $\rho $. Both weight functions wA and wB are top-hats (see Eq. (107)) centered on zero. Using Eq. (108), we can use this graph to obtain $T_\sigma $ as a function of the density and the point separation xA - xB.

Figure 4: The noise term $T_\sigma $ for two top-hat weights as a function of the point separation $\delta = \vert x_A - x_B \vert$ for two densities, $\rho = 2$ and $\rho = 5$. The plot also shows the quantity $S_{11} / \rho $, which at high densities approximates $T_\sigma $ (since then $C \simeq 1$). Note that S11 for a top-hat function is just given by $S_{11} = ( 2 - \delta ) / 4$.

The simplest weight that we can consider is a top-hat function, defined as

\begin{displaymath}
w(x) = \begin{cases}
1 & {\rm if}\ \vert x \vert < 1/2 , \\[2mm]
0 & {\rm otherwise.}
\end{cases}
\end{displaymath} (107)

Since w is either 1 or 0, we just need to consider C(1,1) to evaluate $T_\sigma $. Regarding the Poisson noise, from Eq. (52) we deduce that C(1,2), C(2,1), and C(2,2) are also required.

Figure 3 shows C(1,1) and $C(1,1)/\rho $ as functions of the density $\rho $ for two identical top-hat weight functions centered on the origin. From this plot we can recognize some of the limiting cases studied above. In particular, the fact that $C(1,1)/\rho $ goes to unity at low densities is related to Eq. (92); similarly, the limit of C(1,1) at high densities is consistent with Eq. (68). The same figure also shows the moments expansion of C(1,1) up to fourth order. As expected, the expansion completely fails at low densities, while it is quite accurate for $\rho > 5$.

The curves in Fig. 3 have been calculated using the standard approach described by Eqs. (45), (46) and (63). Actually, in the simple case of top-hat weight functions, we can evaluate C(1,1) using a more direct statistical argument. We start by observing that in our case, for $x_A = x_B$, we have

\begin{displaymath}
T_\sigma = \sigma^2 C(1,1) / \rho .
\end{displaymath} (108)

On the other hand, a top-hat weight function basically acts by taking a simple average over all objects that fall inside its support. This suggests that, for $x_A = x_B$, we can evaluate its measurement noise as

\begin{displaymath}
T_\sigma = \sigma^2 \sum_{N=1}^\infty \frac{p(N)}{N} ,
\end{displaymath} (109)

where p(N) is the probability of having N objects inside the support. This probability is basically a Poisson probability distribution with average $\rho $. However, since we are adopting the prescription of "avoiding'' weight functions without objects in their support, we must explicitly discard the case N = 0 and consequently renormalize the probability. In summary, we have

\begin{displaymath}
p(N) = \frac{{\rm e}^{-\rho} \rho^N}{N!} \biggm/ \bigl[ 1 - {\rm e}^{-\rho} \bigr] .
\end{displaymath} (110)

This expression combined with Eq. (109) allows us to evaluate $C(1,1) = \rho T_\sigma / \sigma^2$:

\begin{displaymath}
C(1,1) = \frac{{\rm e}^{-\rho}}{1 - {\rm e}^{-\rho}} \sum_{N=1}^\infty \frac{\rho^{N + 1}}{N! ~ N} \cdot
\end{displaymath} (111)

We can directly verify this result using Eqs. (45), (46) and (63). In fact, for the top-hat function we find
Q(s_A, s_B) = \bigl[ {\rm e}^{-s_A - s_B} - 1 \bigr] , (112)

Y(s_A, s_B) = {\rm e}^{\rho Q(s_A, s_B)} = {\rm e}^{-\rho} \sum_{k=0}^\infty \frac{{\rm e}^{-k (s_A + s_B)} \rho^k}{k!} , (113)

C(1, 1) = \frac{\rho^2}{1 - {\rm e}^{-\rho}} \int_0^\infty \!{\rm d}s_A \int_0^\infty \!{\rm d}s_B ~ {\rm e}^{-s_A - s_B} Y(s_A, s_B) = \frac{\rho^2 {\rm e}^{-\rho}}{1 - {\rm e}^{-\rho}} \sum_{k=0}^\infty \int_0^\infty \!{\rm d}s_A \int_0^\infty \!{\rm d}s_B ~ \frac{{\rm e}^{-(k+1)(s_A + s_B)} \rho^k}{k!} = \frac{{\rm e}^{-\rho}}{1 - {\rm e}^{-\rho}} \sum_{k=0}^\infty \frac{\rho^{k + 2}}{k! ~ (k+1)^2} \cdot (114)

Finally, with a change of the dummy variable $k \mapsto n - 1$ we recover Eq. (111).
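The equivalence just shown is easy to check numerically. The sketch below (an editorial illustration, with $\rho = 2$ and all other parameters our choices) evaluates the series (111) and (114) and compares them with a direct Monte Carlo estimate of $\rho \langle 1/N \rangle$ over Poisson realizations with $N \geq 1$, i.e. the statistical argument behind Eqs. (109) and (110).

```python
import math
import random

def c11_eq111(rho, nmax=200):
    """Eq. (111) as written, summing over N."""
    pref = math.exp(-rho) / (1.0 - math.exp(-rho))
    total, term = 0.0, rho ** 2          # term = rho^{N+1}/N! at N = 1
    for N in range(1, nmax):
        total += term / N
        term *= rho / (N + 1)
    return pref * total

def c11_eq114(rho, kmax=200):
    """Eq. (114) as written, summing over k = N - 1."""
    pref = math.exp(-rho) / (1.0 - math.exp(-rho))
    total, term = 0.0, rho ** 2          # term = rho^{k+2}/k! at k = 0
    for k in range(kmax):
        total += term / (k + 1) ** 2
        term *= rho / (k + 1)
    return pref * total

def c11_montecarlo(rho, trials=200000, seed=1):
    """rho <1/N> over Poisson(rho) realizations with N >= 1 (cf. Eqs. (108)-(110))."""
    rng = random.Random(seed)
    limit = math.exp(-rho)
    acc = cnt = 0
    for _ in range(trials):
        n, p = 0, 1.0
        while True:                      # Knuth's Poisson sampler
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        if n >= 1:
            acc += 1.0 / n
            cnt += 1
    return rho * acc / cnt
```

The two series are identical term by term (as the change of variable shows), and the Monte Carlo estimate should agree with them to within its statistical error.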

Figure 5: Numerical calculations for 1-dimensional Gaussian weight functions $w_A = w_B$ centered on 0 and with unit variance. The various curves show the function $w_A w_B C(w_A, w_B)$ for different densities $\rho $. Note that, as expected, $C(w_A, w_B)$ approaches unity for large densities.

Figure 6: Same as Fig. 5, but for two Gaussian weight functions centered on 0 and 1 and with unit variance.

Figure 7: The noise term $T_\sigma $ for two Gaussian weights (of unit variance) as a function of their separation. Similarly to Fig. 4, the plot also shows the high-density approximation $S_{11} / \rho $. Note that in this case $S_{11}$ is also a Gaussian (with double variance).

Figure 8: The Poisson noise $T_{\rm P}$ for two Gaussian weights (of unit variance) for a periodic function of the form $f(x) = \sin k x$ as a function of the weight separation $\delta = \vert x_A - x_B \vert$, for a density $\rho = 2$. Note that, as expected, the Poisson noise increases with k and approaches the limit discussed in Sect. 6.7 for high frequencies. More precisely, since for a sine function we have $\langle \sin^2 x \rangle = 1/2$, Eq. (106) gives $T_{\rm P} \simeq T_\sigma / (2 \sigma ^2)$ (this can indeed be verified by a comparison with Fig. 7). Note also that, while $T_{\rm P}$ is strictly positive for $\delta = 0$, it can become negative (see curve for k = 0.5) at larger separations.

The other terms needed for the Poisson noise can be evaluated using a calculation similar to the one performed in Eq. (114). Actually, it can be shown that for any positive integers wA and wB we have

\begin{displaymath}
C(w_A, w_B) = \frac{{\rm e}^{-\rho}}{1 - {\rm e}^{-\rho}} \sum_{k=0}^\infty \frac{\rho^{k + 2}}{k! ~ (k + w_A) (k + w_B)} \cdot
\end{displaymath} (115)

Figure 4 shows the expected measurement noise $T_\sigma $ as a function of the point separation $\delta = \vert x_A - x_B \vert$. Note that, for densities of order $\rho = 5$ or larger, a good approximation is obtained by just taking $C(w_A, w_B) = 1$ (cf. the moments expansion (68)), so that $T_\sigma \simeq S_{11} / \rho$; we also observe that, for a top-hat weight function, $S_{11}$ is a linear function of the separation $\delta$.
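As a cross-check of this behaviour, the measurement noise for top-hat weights can also be simulated directly: scatter Poisson points on an interval covering both supports, keep the realizations where both supports are populated, and average $N_{A \cap B} / (N_A N_B)$. The sketch below is an editorial illustration (parameters are our choices); at $\delta = 0$ and $\rho = 2$ the result should match $\sigma^2 C(1,1)/\rho$ from Eqs. (108) and (111).

```python
import math
import random

def c11_series(rho, nmax=200):
    """Eq. (111): C(1,1) = e^{-rho}/(1 - e^{-rho}) sum_{N>=1} rho^{N+1}/(N! N)."""
    pref = math.exp(-rho) / (1.0 - math.exp(-rho))
    total, term = 0.0, rho ** 2          # term = rho^{N+1}/N! at N = 1
    for N in range(1, nmax):
        total += term / N
        term *= rho / (N + 1)
    return pref * total

def t_sigma_mc(rho, delta, trials=100000, seed=2):
    """MC estimate of T_sigma/sigma^2 for unit top-hats centered at 0 and delta,
    discarding realizations where either support is empty (the paper's prescription)."""
    rng = random.Random(seed)
    lo, hi = -0.5, delta + 0.5           # interval covering both supports
    lam = rho * (hi - lo)
    limit = math.exp(-lam)
    acc = cnt = 0
    for _ in range(trials):
        n, p = 0, 1.0
        while True:                      # Knuth's Poisson sampler
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        na = nb = nab = 0
        for _ in range(n):
            x = rng.uniform(lo, hi)
            in_a = abs(x) < 0.5
            in_b = abs(x - delta) < 0.5
            na += in_a
            nb += in_b
            nab += in_a and in_b
        if na and nb:
            acc += nab / (na * nb)
            cnt += 1
    return acc / cnt
```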

7.2 Gaussian

Frequently, a Gaussian weight function of the form

\begin{displaymath}
w(x) = \frac{1}{\sqrt{2 \pi}} {\rm e}^{-x^2/2}
\end{displaymath} (116)

is used. Although it is not possible to carry out analytical calculations and obtain $C(w_A, w_B)$, numerical integrations do not pose any problem. Figure 5 shows, for different densities, the function $w_A w_B C(w_A, w_B)$ for two identical weights $w_A = w_B$ centered on zero; Fig. 6 shows the same quantity when one of the weight functions is centered at unity. Note that, in this last figure, the largest covariance is at x = 0.5, as expected.

Figure 7 shows the expected measurement noise $T_\sigma $ as a function of the weight separation. Similarly to the top-hat weight, an approximation valid at high densities is $T_\sigma = S_{11}/\rho$. Figure 8 shows, instead, the Poisson noise $T_{\rm P}$ expected for a field f of the form $f(x) = \sin k x$, for different values of k. Note that the noise, as expected, increases with k, and quickly reaches the "saturation'' value discussed in Sect. 6.7. Note also that the noise is, at the lowest density, negative for $\delta \simeq 2.5$.
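The high-density behaviour in Fig. 7 can be reproduced with a direct simulation. The following sketch (the density $\rho = 20$, window, and trial count are illustrative choices of ours) estimates $T_\sigma / \sigma^2$ for two unit-variance Gaussian weights by Monte Carlo and compares it with $S_{11}/\rho$, where $S_{11}(\delta) = {\rm e}^{-\delta^2/4} / (2 \sqrt{\pi})$ is the doubled-variance Gaussian mentioned in the caption of Fig. 7.

```python
import math
import random

def s11(delta):
    """S_11 for unit-variance Gaussian weights: a Gaussian of doubled variance."""
    return math.exp(-delta * delta / 4.0) / (2.0 * math.sqrt(math.pi))

def t_sigma_gaussian(rho, delta, half_width=6.0, trials=5000, seed=3):
    """MC estimate of T_sigma/sigma^2 for unit-variance Gaussian weights
    centered at 0 and delta (the weight normalization cancels in the ratio)."""
    rng = random.Random(seed)
    lam = rho * (2.0 * half_width + delta)
    limit = math.exp(-lam)
    acc = cnt = 0
    for _ in range(trials):
        n, p = 0, 1.0
        while True:                      # Knuth's Poisson sampler
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        sa = sb = sab = 0.0
        for _ in range(n):
            x = rng.uniform(-half_width, half_width + delta)
            wa = math.exp(-x * x / 2.0)
            wb = math.exp(-(x - delta) ** 2 / 2.0)
            sa += wa
            sb += wb
            sab += wa * wb
        if sa > 0.0 and sb > 0.0:
            acc += sab / (sa * sb)
            cnt += 1
    return acc / cnt
```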

7.3 Parabolic weight

Figure 9: Numerical calculations for 1-dimensional parabolic weight functions $w_A = w_B$ centered on 0 and with unit variance. The various curves show the function $w_A w_B C(w_A, w_B)$ for different densities $\rho $.

Figure 10: The noise term $T_\sigma $ for two parabolic weights as a function of their separation (see also Figs. 4 and 7).

Finally, we study a parabolic weight function of the form

\begin{displaymath}
w(x) = \begin{cases}
3 (1 - x^2) / 4 & {\rm if}\ \vert x \vert < 1 , \\[2mm]
0 & {\rm otherwise.}
\end{cases}
\end{displaymath} (117)

This function illustrates well some of the peculiarities of finite support weight functions. Figure 9 shows the results of numerical integrations for wA wB C(wA, wB) at different densities $\rho $. A first interesting point to note is the discontinuity observed at x = 1, which is in agreement with Eq. (81). Moreover, as expected from Eq. (92), the function plotted clearly approaches a constant at low densities $\rho $. Finally, the measurement noise $T_\sigma $ is plotted in Fig. 10.

8 Conclusions

In this article we have studied in detail the covariance of a widely used smoothing technique. The main results obtained are summarized in the following items.

The covariance is composed of two main terms, $T_\sigma $ and $T_{\rm P}$, representing measurement errors and Poisson noise, respectively; the latter one depends on the field f on which the smoothing is performed.
Expressions to compute $T_\sigma $ and $T_{\rm P}$ have been provided. In particular, it has been shown that both terms can be obtained in terms of a kernel C(wA, wB), which in turn can be evaluated from the weight function $w(\vec\theta)$.
We have obtained an expansion of the kernel C(wA, wB) valid at high densities $\rho $.
We have shown that $T_\sigma $ has an upper limit, given by $\sigma^2$, and a lower limit, provided by Eq. (90).
We have evaluated the form of the noise contributions in the limiting cases of high and low densities.
We have considered three typical cases of weight functions and we have evaluated C(wA, wB) for them.
Finally, we note that although the smoothing technique considered in this paper is by far the most widely used in Astronomy, alternative methods are available. A statistical characterization of these methods, using a completely different approach, will be presented in a future paper (Lombardi & Schneider, in preparation).

This work was partially supported by a grant from the Deutsche Forschungsgemeinschaft, and the TMR Network "Gravitational Lensing: New constraints on Cosmology and the Distribution of Dark Matter.''

Appendix A: Vanishing weights

In Sect. 3.2 we have obtained the solution of the covariance problem under the hypothesis that the weight function $w(\vec\theta)$ is strictly positive. In this appendix we will generalize the results obtained there to non-negative weight functions (see also Sect. 4).

If wA is allowed to vanish, then we might have a finite probability that yA vanishes, i.e. a finite probability that no point $\vec\theta_n$ is inside the support of wA. A finite probability in a probability distribution function appears as a Dirac's delta distribution. Since this point is quite important for our discussion, let us consider a simple example. Suppose that $\xi$ is a real random variable with the following characteristics: $\xi$ vanishes with probability 1/3, and otherwise follows an exponential distribution with unit mean. Then we can write the probability distribution function for $\xi$ as

\begin{displaymath}
p_\xi(\xi) = \frac{1}{3} \delta(\xi) + \frac{2}{3} {\rm H}(\xi) \exp(-\xi) ,
\end{displaymath} (A.1)

where ${\rm H}$ is the Heaviside function (see Eq. (40)). In other words, the probability distribution for $\xi$ includes the contribution from a Dirac's delta distribution centered on $\xi = 0$. If $p_\xi$ is known, the probability that $\xi$ is exactly zero (1/3 in this example) can be obtained using

\begin{displaymath}
P(\xi = 0) = \int_{0^-}^{0^+} p_\xi(\xi') ~ {\rm d}\xi' = \lim_{\xi \rightarrow 0^+} \int_{0^-}^\xi p_\xi(\xi') ~ {\rm d}\xi' .
\end{displaymath} (A.2)
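A quick numerical illustration of this mixed distribution (an editorial sketch, not part of the original appendix): sampling $\xi$ from an atom at zero of weight 1/3 plus a unit-mean exponential of weight 2/3, as in Eq. (A.1), recovers $P(\xi = 0) = 1/3$ and $\langle \xi \rangle = 2/3$.

```python
import random

def sample_xi(rng):
    """Draw xi from Eq. (A.1): an atom at zero with probability 1/3,
    otherwise a unit-mean exponential."""
    if rng.random() < 1.0 / 3.0:
        return 0.0
    return rng.expovariate(1.0)

def mixed_distribution_demo(trials=100000, seed=4):
    """Return (empirical P(xi = 0), empirical mean of xi)."""
    rng = random.Random(seed)
    draws = [sample_xi(rng) for _ in range(trials)]
    p_zero = sum(1 for x in draws if x == 0.0) / trials
    return p_zero, sum(draws) / trials
```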

Let us now turn to our problem. As mentioned above, for vanishing weights we expect that yA might vanish, i.e. its probability might include the contribution from a delta distribution centered on yA = 0; similarly, if wB is allowed to vanish, the probability distribution for yB might include a delta centered on yB = 0. For a given yB, the probability PA(yB) that yA vanishes is given by

\begin{displaymath}
P_A(y_B) \equiv \lim_{y_A \rightarrow 0^+} \int_{0^-}^{y_A} p_y(y_A', y_B) ~ {\rm d}y_A' = \lim_{s_A \rightarrow \infty} \mathcal L_A\bigl[ p_y( \cdot, y_B) \bigr](s_A) ,
\end{displaymath} (A.3)

where the properties of the Laplace transform have been used in the last equality (see Appendix D). A similar equation holds for the probability that yB vanishes, PB(yA). Note that the Laplace transform in Eq. (A.3) is performed only with respect to the first variable. The joint probability $P_{AB}$ that both yA and yB vanish is (cf. Eq. (61))

\begin{displaymath}
P_{AB} \equiv \lim_{\substack{y_A \rightarrow 0^+ \\ y_B \rightarrow 0^+}} \int_{0^-}^{y_A} {\rm d}y_A' \int_{0^-}^{y_B} {\rm d}y_B' ~ p_y(y_A', y_B') = \lim_{\substack{s_A \rightarrow \infty \\ s_B \rightarrow \infty}} \mathcal L[p_y](s_A, s_B) = Y(\infty, \infty) .
\end{displaymath} (A.4)

We then also define (cf. Eqs. (62))
P_A \equiv \int_0^\infty P_A(y_B) ~ {\rm d}y_B = \mathcal L[p_y](\infty, 0^+) = Y(\infty, 0^+) , (A.5)

P_B \equiv \int_0^\infty P_B(y_A) ~ {\rm d}y_A = \mathcal L[p_y](0^+, \infty) = Y(0^+, \infty) . (A.6)

Using Eq. (45), we find $P_A = \exp (-\rho \pi_A)$, $P_B = \exp (-\rho \pi_B)$, and $P_{AB} = \exp(-\rho \pi_{A \cup B})$, where $\pi_A$ is the area of the support of wA, $\pi_B$ is the area of the support of wB, and $\pi_{A \cup B}$ is the area of the union of the two supports. This result is of course not surprising and has already been derived in the paragraph before Eq. (61) using a different approach.
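The relation $P_A = \exp(-\rho \pi_A)$ is straightforward to verify by simulation: scatter a Poisson process of density $\rho$ and record how often the support of $w_A$ contains no point. A one-dimensional editorial sketch (the window size, density and trial count are illustrative choices):

```python
import math
import random

def empty_support_prob(rho, window=10.0, trials=50000, seed=5):
    """Scatter a 1-d Poisson process of density rho on [0, window] and measure
    the fraction of realizations leaving the support [0, 1] of w_A empty."""
    rng = random.Random(seed)
    lam = rho * window
    limit = math.exp(-lam)
    empty = 0
    for _ in range(trials):
        n, p = 0, 1.0
        while True:                      # Knuth's Poisson sampler
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        # short-circuits at the first point landing in the support
        if all(rng.uniform(0.0, window) >= 1.0 for _ in range(n)):
            empty += 1
    return empty / trials
```

For $\rho = 2$ and a support of unit length the result should be close to ${\rm e}^{-2} \simeq 0.135$.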

For vanishing weights, we decided to use the following prescription: We discard, in the ensemble average for ${\rm Cov}(\tilde f; \vec\theta_A, \vec\theta_B)$, the configurations $\{ \vec\theta_n \}$ for which the function $\tilde f$ is not defined either at $\vec\theta_A$ or at $\vec\theta_B$. In order to implement this prescription, we can explicitly modify the probability distribution py and exclude "by hand'' cases where the denominator of Eq. (19) vanishes; to this end, we consider separately the cases where wA or wB vanish. We define a new probability distribution for (yA, yB) which accounts for vanishing weights:

\begin{displaymath}
\bar p_y(y_A, y_B) \equiv
\begin{cases}
p_y(y_A, y_B) & {\rm if}\ w_A > 0,\ w_B > 0 , \\[2mm]
\bigl[ p_y(y_A, y_B) - \delta(y_A) P_A(y_B) \bigr] / (1 - P_A) & {\rm if}\ w_A = 0,\ w_B > 0 , \\[2mm]
\bigl[ p_y(y_A, y_B) - \delta(y_B) P_B(y_A) \bigr] / (1 - P_B) & {\rm if}\ w_A > 0,\ w_B = 0 , \\[2mm]
\nu \bigl[ p_y(y_A, y_B) - \delta(y_A) P_A(y_B) - \delta(y_B) P_B(y_A) + \delta(y_A) \delta(y_B) P_{AB} \bigr] & {\rm if}\ w_A = 0,\ w_B = 0 .
\end{cases}
\end{displaymath} (A.7)

We recall that $\nu = 1/(1 - P_A - P_B + P_{AB})$. In constructing this probability, first we have explicitly removed the degenerate situations, then we have renormalized the resulting probability. Note that the normalization factor in the last case, namely 1 - PA - PB + PAB, comes from the so-called "inclusion-exclusion principle'' ( 1 - PA - PB + PAB is the probability that both fA and fB are defined). Using this new probability distribution in the definition (32) for Y we obtain

\begin{displaymath}
Y(s_A, s_B) =
\begin{cases}
{\rm e}^{\rho Q(s_A, s_B)} & {\rm if}\ w_A > 0,\ w_B > 0 , \\[2mm]
\bigl[ {\rm e}^{\rho Q(s_A, s_B)} - {\rm e}^{\rho Q(\infty, s_B)} \bigr] / (1 - P_A) & {\rm if}\ w_A = 0,\ w_B > 0 , \\[2mm]
\bigl[ {\rm e}^{\rho Q(s_A, s_B)} - {\rm e}^{\rho Q(s_A, \infty)} \bigr] / (1 - P_B) & {\rm if}\ w_A > 0,\ w_B = 0 , \\[2mm]
\nu \bigl[ {\rm e}^{\rho Q(s_A, s_B)} - {\rm e}^{\rho Q(\infty, s_B)} - {\rm e}^{\rho Q(s_A, \infty)} + {\rm e}^{\rho Q(\infty, \infty)} \bigr] & {\rm if}\ w_A = 0,\ w_B = 0 .
\end{cases}
\end{displaymath} (A.8)

Finally, we need to change the normalization factor in Eq. (47) in order to account for cases where yA or yB vanish. Indeed, the correcting factor C(wA, wB) has been obtained by assuming that all objects can populate the whole plane with uniform probability distribution (cf. Eq. (25)); now, however, a fraction (PA + PB - PAB) of configurations have been excluded. Hence we have

\begin{displaymath}
C(w_A, w_B) = \frac{\rho^2}{1 - P_A - P_B + P_{AB}} \mathcal L[Y](w_A, w_B) .
\end{displaymath} (A.9)

This completes the discussion of vanishing weights.

Appendix B: Moments expansion

In Sect. 5 we have written the moments expansion for C(wA, wB). Here we complete the discussion by providing a proof for that result.


Table B.1: Moments Mij up to the fourth order. The table shows, for each row, the values of Mij with (i + j), the order, fixed; within a row of order n, the entry for index j is $M_{n-j, j}$. Hence, for example, the row for order 2 shows M20, M11, and M02 in sequence.

Order 0: $1$
Order 1: $0$, $0$
Order 2: $\rho S_{20}$, $\rho S_{11}$, $\rho S_{02}$
Order 3: $\rho S_{30}$, $\rho S_{21}$, $\rho S_{12}$, $\rho S_{03}$
Order 4: $\rho S_{40} + 3 \rho^2 S_{20}^2$, $\rho S_{31} + 3 \rho^2 S_{20} S_{11}$, $\rho S_{22} + \rho^2 S_{02} S_{20} + 2 \rho^2 S_{11}^2$, $\rho S_{13} + 3 \rho^2 S_{02} S_{11}$, $\rho S_{04} + 3 \rho^2 S_{02}^2$

At high densities, yA and yB are basically Gaussian random variables with average values $\bar y_A$ and $\bar y_B$ (we anticipate here that these averages are both given by the density $\rho $). Hence, we can expand the denominators around these averages in the definition of C(wA, wB):

C(w_A, w_B) = \rho^2 \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ \frac{p_y(y_A, y_B)}{(w_A + y_A) (w_B + y_B)} = \rho^2 \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ \frac{p_y(y_A, y_B)}{(w_A + \bar y_A) (w_B + \bar y_B)} \biggl[ \sum_{i=0}^\infty \left( \frac{\bar y_A - y_A}{w_A + \bar y_A} \right)^i \biggr] \biggl[ \sum_{j=0}^\infty \left( \frac{\bar y_B - y_B}{w_B + \bar y_B} \right)^j \biggr] = \rho^2 \sum_{i, j=0}^\infty (-1)^{i+j} \frac{M_{ij}}{( \bar y_A + w_A)^{i+1} (\bar y_B + w_B)^{j+1}} , (B.1)

where Mij are the "centered'' moments of py:

\begin{displaymath}
M_{ij} \equiv \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ p_y(y_A, y_B) ~ (y_A - \bar y_A)^i (y_B - \bar y_B)^j .
\end{displaymath} (B.2)

The centered moments can be expressed in terms of the "un-centered'' ones, defined as

\begin{displaymath}
\mathcal{M}_{ij} \equiv \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ p_y(y_A, y_B) ~ y_A^i y_B^j = (-1)^{i+j} Y^{(i,j)}(0, 0) .
\end{displaymath} (B.3)

Here $Y^{(i,j)}(0, 0)$ is the ith partial derivative with respect to $s_A$ and the jth partial derivative with respect to $s_B$ of $Y(s_A, s_B)$, evaluated at (0, 0). These, in turn, can be expressed as derivatives of Q. For the first terms we have

Y^{(0,0)}(0,0) = Y(0,0) = 1 , \qquad Y^{(1,0)}(0,0) = \rho Q^{(1,0)}(0,0) , \qquad Y^{(0,1)}(0,0) = \rho Q^{(0,1)}(0,0) , (B.4)

Y^{(2,0)}(0,0) = \rho Q^{(2,0)}(0,0) + \bigl[ \rho Q^{(1,0)}(0,0) \bigr]^2 , (B.5)

Y^{(1,1)}(0,0) = \rho Q^{(1,1)}(0,0) + \bigl[ \rho Q^{(1,0)}(0,0) \bigr] \bigl[ \rho Q^{(0,1)}(0,0) \bigr] , (B.6)

Y^{(0,2)}(0,0) = \rho Q^{(0,2)}(0,0) + \bigl[ \rho Q^{(0,1)}(0,0) \bigr]^2 . (B.7)

Finally, the derivatives of Q can be evaluated as

Q^{(i,j)}(0, 0) = (-1)^{i+j} S_{ij} , (B.8)

where Sij, we recall, is given by Eq. (67). Note that S01 = S10 = 1 because of the normalization of wA and wB, and thus, as already anticipated, $\bar y_A = \bar y_B = \rho$. In summary, we find
M_{00} = 1 , \qquad M_{10} = M_{01} = 0 , \qquad M_{20} = \mathcal{M}_{20} - (\mathcal{M}_{10})^2 = \rho S_{20} , (B.9)

M_{11} = \mathcal{M}_{11} - \mathcal{M}_{10} \mathcal{M}_{01} = \rho S_{11} , \qquad M_{02} = \mathcal{M}_{02} - (\mathcal{M}_{01})^2 = \rho S_{02} . (B.10)

We stress that, in general, it is not true that $M_{ij} = \rho S_{ij}$ (more complex expressions are encountered for higher order terms; cf. the last term in Eq. (66)). Finally, we can write the expansion of C(wA, wB):

\begin{displaymath}
C(w_A, w_B) \simeq \frac{\rho^2}{(\rho + w_A) (\rho + w_B)} + \frac{\rho^3 S_{20}}{(\rho + w_A)^3 (\rho + w_B)} + \frac{\rho^3 S_{11}}{(\rho + w_A)^2 (\rho + w_B)^2} + \frac{\rho^3 S_{02}}{(\rho + w_A) (\rho + w_B)^3} \cdot
\end{displaymath} (B.11)

This is precisely Eq. (68). Using the same technique and a little more perseverance, we can also obtain higher order terms. In particular, Table B.1 reports the moments Mij defined in Eq. (B.2) up to the fourth order. This table, together with Eq. (B.1), can be used to write an accurate moment expansion of C(wA, wB).
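The quality of the expansion can be gauged in the top-hat case of Sect. 7.1, where the exact kernel is known from Eq. (111) and, for two identical unit top-hats, the moments $S_{20} = S_{11} = S_{02} = 1$. The editorial sketch below (densities are illustrative choices) compares the exact series with the second-order expansion evaluated at $w_A = w_B = 1$: good agreement at high density, complete failure at low density, as stated in Sect. 7.1.

```python
import math

def c11_exact(rho, nmax=400):
    """Exact kernel for two identical unit top-hats, Eq. (111)."""
    pref = math.exp(-rho) / (1.0 - math.exp(-rho))
    total, term = 0.0, rho ** 2          # term = rho^{N+1}/N! at N = 1
    for N in range(1, nmax):
        total += term / N
        term *= rho / (N + 1)
    return pref * total

def c11_expansion(rho):
    """Second-order moments expansion (B.11) with w_A = w_B = 1 and
    S20 = S11 = S02 = 1, as appropriate for a unit top-hat."""
    return (rho ** 2 / (rho + 1.0) ** 2
            + 3.0 * rho ** 3 / (rho + 1.0) ** 4)
```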

Appendix C: Varying weights

In Paper I we have considered a modified version of the estimator (1) which allows for the use of supplementary weights. Suppose that we measure a given field $f(\vec\theta)$ at some positions $\vec\theta_n$ of the sky. Suppose also that we use a weight un for each object observed, so that we replace Eq. (1) with

\begin{displaymath}
\tilde f(\vec\theta) \equiv \frac{\sum_{n=1}^N \hat f_n u_n w(\vec\theta - \vec\theta_n)}{\sum_{n=1}^N u_n w(\vec\theta - \vec\theta_n)} \cdot
\end{displaymath} (C.1)

For example, if we have at our disposal some error estimate $\sigma_n$ for each object, we might use the weighting scheme $u_n = 1/\sigma_n^2$ in order to minimize the noise of the estimator (C.1).
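The variance reduction from inverse-variance weights can be illustrated with a toy simulation. All choices below (two noise populations $\sigma_n \in \{0.5, 2\}$, a Gaussian spatial weight, a zero underlying field, and the density) are ours, not the paper's; the point is simply that the estimator (C.1) with $u_n = 1/\sigma_n^2$ has a markedly smaller variance than the unweighted one.

```python
import math
import random

def estimator_variance(inverse_variance, rho=5.0, half_width=6.0,
                       trials=10000, seed=6):
    """MC variance at x = 0 of the weighted estimator (C.1) for a zero field,
    with per-object noise sigma_n drawn uniformly from {0.5, 2.0}."""
    rng = random.Random(seed)
    lam = rho * 2 * half_width
    limit = math.exp(-lam)
    vals = []
    for _ in range(trials):
        n, p = 0, 1.0
        while True:                      # Knuth's Poisson sampler
            p *= rng.random()
            if p <= limit:
                break
            n += 1
        num = den = 0.0
        for _ in range(n):
            x = rng.uniform(-half_width, half_width)
            sigma_n = 0.5 if rng.random() < 0.5 else 2.0
            u = 1.0 / sigma_n ** 2 if inverse_variance else 1.0
            uw = u * math.exp(-x * x / 2.0)      # u_n w(x_n), Gaussian spatial weight
            num += rng.gauss(0.0, sigma_n) * uw  # \hat f_n for a true field f = 0
            den += uw
        if den > 0.0:
            vals.append(num / den)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)
```

With these parameters the inverse-variance estimator should have well under half the variance of the unweighted one.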

A statistical study of the expectation value of this estimator has already been carried out in Paper I. Here we proceed further and study its covariance under the same assumptions as the ones used for the study of Eq. (1) in the main text. However, since one of the main reasons to use weights is some knowledge on the variance of each object, we use a generalized form of Eq. (7):

\begin{displaymath}
\bigl\langle \bigl[ \hat f_n - f(\vec\theta_n) \bigr] \bigl[ \hat f_m - f(\vec\theta_m) \bigr] \bigr\rangle = \sigma^2(u_n) \delta_{nm} .
\end{displaymath} (C.2)

Note, in particular, that the variance is assumed to depend on $u_n$ (or, equivalently, the weight is assumed to depend on the variance). Similarly to Paper I, we also assume that, for each object n, the weight un is independent of the position $\vec\theta_n$ and of the measured signal $\hat f_n$, and that each un follows a known probability distribution pu.

In Paper I we have shown that the average value of $\tilde f(\vec\theta)$ can be calculated using the equations

R_X(s_X) \equiv \int_\Omega {\rm d}^2\theta \int_0^\infty {\rm d}u ~ p_u(u) \bigl[ {\rm e}^{-s_X u w_X(\vec\theta)} - 1 \bigr] = \int_0^\infty p_u(u) Q_X(u s_X) ~ {\rm d}u , (C.3)

Y_X(s_X) \equiv \exp\bigl[ \rho R_X(s_X) \bigr] , (C.4)

B_X(v_X) \equiv \rho \mathcal L[Y_X](v_X) , (C.5)

\langle \tilde f_X \rangle = \int_\Omega {\rm d}^2\theta \int_0^\infty {\rm d}u ~ p_u(u) f(\vec\theta) B_X \bigl( u w_X(\vec\theta) \bigr) u w_X(\vec\theta) . (C.6)

We now evaluate the covariance of the estimator (C.1) using a technique similar to the one used in Sect. 3. We have

\begin{displaymath}
\bigl\langle \tilde f_A \tilde f_B \bigr\rangle = \frac{1}{A^N} \int_\Omega {\rm d}^2\theta_1 \int_0^\infty {\rm d}u_1 ~ p_u(u_1) \cdots \int_\Omega {\rm d}^2\theta_N \int_0^\infty {\rm d}u_N ~ p_u(u_N) \frac{\bigl\langle \bigl[ \sum_n \hat f_n u_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m \hat f_m u_m w_B(\vec\theta_m) \bigr] \bigr\rangle}{\bigl[ \sum_n u_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m u_m w_B(\vec\theta_m) \bigr]} \cdot
\end{displaymath} (C.7)

As usual we consider separately the cases n = m and $n \neq m$, thus obtaining the two terms T1 and T2:
T_1 = \frac{N}{A^N} \int_\Omega {\rm d}^2\theta_1 \int_0^\infty {\rm d}u_1 ~ p_u(u_1) \cdots \int_\Omega {\rm d}^2\theta_N \int_0^\infty {\rm d}u_N ~ p_u(u_N) \frac{\bigl[ \sigma^2(u_1) + f^2(\vec\theta_1) \bigr] u_1^2 w_A(\vec\theta_1) w_B(\vec\theta_1)}{\bigl[ \sum_n u_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m u_m w_B(\vec\theta_m) \bigr]} , (C.8)

T_2 = \frac{N (N - 1)}{A^N} \int_\Omega {\rm d}^2\theta_1 \int_0^\infty {\rm d}u_1 ~ p_u(u_1) \cdots \int_\Omega {\rm d}^2\theta_N \int_0^\infty {\rm d}u_N ~ p_u(u_N) \frac{f(\vec\theta_1) f(\vec\theta_2) u_1 w_A(\vec\theta_1) u_2 w_B(\vec\theta_2)}{\bigl[ \sum_n u_n w_A(\vec\theta_n) \bigr] \bigl[ \sum_m u_m w_B(\vec\theta_m) \bigr]} \cdot (C.9)

Let us introduce new variables $v_{Xn} = u_n w_X(\vec\theta_n)$ (with $X \in \{ A, B \}$) for the combination of weights, and let us define, similarly to Eq. (26), $y_X \equiv \sum_{n=2}^N v_{Xn}$. Then the probability distributions for vXn and yX can be evaluated using the set of equations
p_v(v_A, v_B) = \frac{1}{A} \int_\Omega {\rm d}^2\theta \int_0^\infty {\rm d}u ~ p_u(u) ~ \delta \bigl( v_A - u w_A(\vec\theta) \bigr) \delta \bigl( v_B - u w_B(\vec\theta) \bigr) , (C.10)

p_y(y_A, y_B) = \int_0^\infty {\rm d}v_{A2} \int_0^\infty {\rm d}v_{B2} ~ p_v(v_{A2}, v_{B2}) \cdots \int_0^\infty {\rm d}v_{AN} \int_0^\infty {\rm d}v_{BN} ~ p_v(v_{AN}, v_{BN}) ~ \delta(y_A - v_{A2} - \cdots - v_{AN}) ~ \delta(y_B - v_{B2} - \cdots - v_{BN}) . (C.11)

Again, it is convenient to consider the Laplace transforms of these two probability distributions:
V(s_A, s_B) = \mathcal L[p_v](s_A, s_B) = \frac{1}{A} \int_0^\infty {\rm d}u ~ p_u(u) \int_\Omega {\rm d}^2\theta ~ {\rm e}^{-s_A u w_A(\vec\theta) - s_B u w_B(\vec\theta)} = \int_0^\infty p_u(u) W(u s_A, u s_B) ~ {\rm d}u , (C.12)

Y(s_A, s_B) = \mathcal L[p_y](s_A, s_B) = \bigl[ V(s_A, s_B) \bigr]^{N-1} . (C.13)

In the continuous limit we define instead
R(s_A, s_B) \equiv \int_\Omega {\rm d}^2\theta \int_0^\infty {\rm d}u ~ p_u(u) \bigl[ {\rm e}^{-s_A u w_A(\vec\theta) - s_B u w_B(\vec\theta)} - 1 \bigr] = \int_0^\infty p_u(u) Q(u s_A, u s_B) ~ {\rm d}u , (C.14)

Y(s_A, s_B) = \exp\bigl[ \rho R(s_A, s_B) \bigr] . (C.15)

Finally, the equivalent of the correcting factor C(wA, wB) (cf. Eq. (11)) is, in our case, the quantity

\begin{displaymath}
B(v_A, v_B) \equiv \rho^2 \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ \frac{p_y(y_A, y_B)}{(v_A + y_A) (v_B + y_B)} = \rho^2 \mathcal L[Y](v_A, v_B) .
\end{displaymath} (C.16)

The quantity B can be used to evaluate T1: In fact, we have
T_1 = \frac{N}{A} \int_\Omega {\rm d}^2\theta_1 \int_0^\infty {\rm d}u_1 ~ p_u(u_1) \bigl[ \sigma^2(u_1) + f^2(\vec\theta_1) \bigr] u_1^2 w_A(\vec\theta_1) w_B(\vec\theta_1) \int_0^\infty {\rm d}y_A \int_0^\infty {\rm d}y_B ~ \frac{p_y(y_A, y_B)}{\bigl[ u_1 w_A(\vec\theta_1) + y_A \bigr] \bigl[ u_1 w_B(\vec\theta_1) + y_B \bigr]} = \frac{1}{\rho} \int_\Omega {\rm d}^2\theta_1 \int_0^\infty {\rm d}u_1 ~ p_u(u_1) \bigl[ \sigma^2(u_1) + f^2(\vec\theta_1) \bigr] u_1^2 w_A(\vec\theta_1) w_B(\vec\theta_1) B \bigl(u_1 w_A(\vec\theta_1), u_1 w_B(\vec\theta_1) \bigr) . (C.17)

Similarly, for T2 we obtain
T_2 = \int_\Omega {\rm d}^2\theta_1 \int_0^\infty {\rm d}u_1 ~ p_u(u_1) \int_\Omega {\rm d}^2\theta_2 \int_0^\infty {\rm d}u_2 ~ p_u(u_2) f(\vec\theta_1) f(\vec\theta_2) u_1 w_A(\vec\theta_1) u_2 w_B(\vec\theta_2) \times B \bigl(u_1 w_A(\vec\theta_1) + u_2 w_A(\vec\theta_2), u_1 w_B(\vec\theta_1) + u_2 w_B(\vec\theta_2) \bigr) . (C.18)

The final evaluation of ${\rm Cov}(\tilde f)$ then proceeds similarly to what was done in the main text for the estimator (1).

Appendix D: Properties of the Laplace transform

For the convenience of the reader, we summarize in this appendix some useful properties of the Laplace transform. Proofs of the results stated here can be found in any advanced analysis book (e.g. Arfken 1985). Although in this paper we have been dealing mainly with Laplace transforms of two-argument functions, we write the properties below for the case of a function of a single argument for two main reasons: (i) The generalization to functions of several arguments is in most cases trivial; (ii) Several properties can be better understood in the simpler case considered here.

Suppose that a function f(x) of a real argument x is given. Its Laplace transform is defined as

\begin{displaymath}
\mathcal L[f](s) \equiv \lim_{x \rightarrow 0^-} \int_x^\infty {\rm d}x' ~ f(x') {\rm e}^{-s x'} = \int_{0^-}^\infty {\rm d}x ~ f(x) {\rm e}^{-s x} .
\end{displaymath} (D.1)

Note that we use 0- as lower integration limit in this definition.

The Laplace transform is a linear operator; hence, if $\alpha$ and $\beta$ are two real numbers and g(x) is a function of real argument x, we have $\mathcal L[\alpha f + \beta g] = \alpha \mathcal L[f] +
\beta \mathcal L[g]$.

The Laplace transform of the derivative of f can be expressed in terms of the Laplace transform of f. In particular, we have

\begin{displaymath}
\mathcal L\bigl[ f' \bigr](s) = s \mathcal L[f](s) - f(0^-) .
\end{displaymath} (D.2)

This equation can be generalized to higher order derivatives. Calling f(n) the nth derivative of f, we have

\begin{displaymath}
\mathcal L\bigl[ f^{(n)} \bigr](s) = s^n \mathcal L[f](s) - \sum_{i=0}^{n-1} s^{n-i-1} f^{(i)}(0^-) .
\end{displaymath} (D.3)

Surprisingly, this equation holds if, for n negative, we consider f(n) to be the -nth integral of f; note that in this case the summation disappears. Hence, for example, we have

\begin{displaymath}
\mathcal L\biggl[ \int_{0^-}^x f(x') ~ {\rm d}x' \biggr](s) = \frac{1}{s} \mathcal L[f](s) .
\end{displaymath} (D.4)

Often, properties of the Laplace transform come in pairs: For every property there is a similar one where the roles of f and $\mathcal L[f]$ are swapped. Here is the "dual'' of property (D.2):

\begin{displaymath}
\mathcal L\bigl[ x f(x) \bigr](s) = - \frac{{\rm d} \mathcal L[f](s)}{{\rm d}s} ,
\end{displaymath} (D.5)

or, more generally,

\begin{displaymath}
\mathcal L\bigl[ x^n f(x) \bigr](s) = (-1)^n \frac{{\rm d}^n \mathcal L[f](s)}{{\rm d}s^n} \cdot
\end{displaymath} (D.6)

A similar equation holds for "negative'' derivatives, i.e. integrals of the Laplace transform. In this case, however, it is convenient to change the integration limits to $(s, \infty)$. In summary, we can write

\begin{displaymath}
\mathcal L\bigl[ f(x) / x \bigr](s) = \int_s^\infty \mathcal L[f](s') ~ {\rm d}s' .
\end{displaymath} (D.7)

Given a positive number a, the Laplace transform of the function f shifted by a is given by

\begin{displaymath}
\mathcal L\bigl[ f(x - a) {\rm H}(x - a) \bigr](s) = \mathcal L[f](s) {\rm e}^{- s a} ,
\end{displaymath} (D.8)

where ${\rm H}$ is the Heaviside function defined in Eq. (40). A dual of this property can also be written:

\begin{displaymath}
\mathcal L\bigl[ f(x) {\rm e}^{b x} \bigr](s) = \mathcal L[f](s - b) .
\end{displaymath} (D.9)

Finally, we consider two useful relationships between limiting values of f and $\mathcal L[f]$:
\begin{displaymath}
\lim_{x \rightarrow 0^+} f(x) = \lim_{s \rightarrow \infty} s \mathcal L[f](s) , \qquad \lim_{x \rightarrow \infty} f(x) = \lim_{s \rightarrow 0^+} s \mathcal L[f](s) .
\end{displaymath} (D.10)
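These properties are easy to spot-check numerically. The editorial sketch below (the test function $f(x) = {\rm e}^{-x}$, for which $\mathcal L[f](s) = 1/(s+1)$, and all tolerances are our choices) verifies the derivative rule (D.6) for n = 1, the exponential-shift rule (D.9), and the initial-value limit in (D.10) with a crude midpoint quadrature.

```python
import math

def laplace(f, s, upper=40.0, steps=80000):
    """Crude midpoint-rule approximation of int_0^upper f(x) e^{-s x} dx."""
    h = upper / steps
    return h * sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
                   for i in range(steps))

f = lambda x: math.exp(-x)         # L[f](s) = 1/(s + 1)
xf = lambda x: x * math.exp(-x)    # (D.6), n = 1: L[x f](s) = 1/(s + 1)^2
fb = lambda x: math.exp(-2.0 * x)  # f(x) e^{bx} with b = -1; (D.9): L[fb](s) = L[f](s + 1)
```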



Copyright ESO 2002