
A&A 407, 385-392 (2003)
DOI: 10.1051/0004-6361:20030849

Smooth maps from clumpy data: Generalizations[*]

M. Lombardi - P. Schneider

Institut für Astrophysik und Extraterrestrische Forschung, Universität Bonn, Auf dem Hügel 71, 53121 Bonn, Germany

Received 30 September 2002 / Accepted 28 May 2003

Abstract
In a series of papers (Lombardi & Schneider 2002, 2001) we studied in detail the statistical properties of an interpolation technique widely used in astronomy. In particular, we considered the average interpolated map and its covariance under the hypotheses that the map is obtained by smoothing unbiased measurements of an unknown field, and that the measurements are uniformly distributed on the sky. In this paper we generalize the results obtained to the case of observations carried out only on a finite field and distributed on the field with a non-uniform density. These generalizations, which are required in many astronomically relevant cases, still allow an exact, analytical solution of the problem. We also consider a number of properties of the interpolated map, and provide asymptotic expressions for the average map and the two-point correlation function which are valid at high densities.

Key words: methods: statistical - methods: analytical - methods: data analysis - gravitational lensing

   
1 Introduction

Interpolation techniques play a central role in many physical sciences. In fact, experimental data can often only be obtained at discrete points, while quantitative, global analyses can normally be performed only on a field. A classical example of such a situation is given by meteorological data, such as temperature, pressure, or humidity: these data are collected by a large number of ground-based weather stations, and then need to be interpolated in order to obtain a continuous field.

The situation is, apparently, very similar in Astronomy. Indeed, many astronomical observations are carried out "discretely,'' i.e. data are available only on some locations of the sky (typically corresponding to some astronomically significant objects, such as stars, galaxies, quasars). If there is some reason to think that the data represent discrete measurements of a continuous field, then the observer will want to interpolate the data in order to obtain a smooth map of the quantity being investigated.

In reality, astronomical observations have a characteristic that makes them quite peculiar with respect to other physical experiments: in most cases, it is not possible to choose where to perform the measurements. A meteorologist, for example, can always decide to put a weather station in a particular location, or to have weather stations regularly spaced; this, clearly, is impossible for an astronomer. As a result, it is sensible to perform a statistical analysis of interpolation techniques by considering the measurement locations as random variables, i.e. by performing an ensemble average on the positions of the astronomical objects used for the analysis (this technique has already been used by several authors; see, e.g., Lombardi & Bertin 1998; van Waerbeke 2000).

In a series of previous papers (Lombardi & Schneider 2001, hereafter Paper I, and Lombardi & Schneider 2002, hereafter Paper II), we analyzed the statistical properties of a widely used interpolation technique. In particular, we considered a set of measurements  $\{ \hat f_n \}$ performed at locations  $\{ \vec\theta_n\}$. The measurements were taken to be unbiased estimates of a field  $f(\vec\theta)$ at the respective positions, i.e.

 \begin{displaymath}
\langle \hat f_n \rangle = f(\vec\theta_n) \; ,
\end{displaymath} (1)

where the brackets $\langle \cdot \rangle$ denote the expectation value of the enclosed quantity. The discrete measurements  $\{ \hat f_n \}$ were then interpolated using the following technique. First, we introduced a positive function  $w( \vec \phi )$, which describes the "influence'' of measurements performed at $\vec\theta' = \vec\theta + \vec\phi$ on the interpolated field  $\protect\tilde{f}(\vec\theta)$. This field was defined as

 \begin{displaymath}
\tilde{f}(\vec\theta) \equiv \frac{\sum_{n=1}^N \hat f_n w(\vec\theta - \vec\theta_n)}{\sum_{n=1}^N w(\vec\theta - \vec\theta_n)} \; ,
\end{displaymath} (2)

where N is the total number of observations. Equation (2), indeed, is a standard interpolation technique (see, e.g. Lam 1983; Cressie 1993) called "moving weights,'' "moving average,'' or "distance weighted average'' (this last name is due to the fact that normally the weight function w used in Eq. (2) depends only on the distance $\Vert \vec\theta - \vec\theta_n \Vert$). Although other techniques are clearly available (see, e.g. Bernardeau & van de Weygaert 1996; Lombardi 2002; Schaap & van de Weygaert 2000), this is probably the interpolation method most often used in Astronomy.
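
As a purely illustrative aid (not part of the original analysis), the following Python sketch implements the estimator of Eq. (2) directly; the Gaussian weight, the test field and all numerical values are arbitrary choices made for this example.

import numpy as np

def moving_average(theta, theta_n, f_hat, w):
    """Distance-weighted average of Eq. (2) at the point theta."""
    d = theta[None, :] - theta_n          # offsets theta - theta_n, shape (N, 2)
    weights = w(d)                        # weight of each measurement
    return np.sum(f_hat * weights) / np.sum(weights)

def w_gauss(d, scale=1.0):
    # weight depending only on the distance ||theta - theta_n||
    return np.exp(-0.5 * np.sum(d**2, axis=-1) / scale**2)

rng = np.random.default_rng(0)
theta_n = rng.uniform(0.0, 10.0, size=(200, 2))             # measurement locations
f_hat = np.sin(theta_n[:, 0]) + rng.normal(0.0, 0.1, 200)   # unbiased, noisy measurements
print(moving_average(np.array([5.0, 5.0]), theta_n, f_hat, w_gauss))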

In this paper we study the expectation value and the two-point correlation of the smoothed map  $\protect\tilde{f}(\vec\theta)$ under the hypotheses that:

1.
The measurements $\{ \hat f_n \}$ are unbiased estimates of the field f (cf. Eq. (1)) with errors. The errors $\epsilon_n = \hat f_n - f(\vec\theta_n)$ are taken to be independent random variables with vanishing mean (this is clearly equivalent to the unbiasedness of  $\{ \hat f_n \}$):

 \begin{displaymath}
\bigl\langle \epsilon_n \bigr\rangle = 0 \; , \qquad
\bigl\langle \epsilon_n \epsilon_m \bigr\rangle = \delta_{nm} \sigma^2(\vec\theta_n) \; .
\end{displaymath} (3)

2.
The measurement locations are independent random variables distributed according to a known density field  $\rho(\vec\theta)$ inside a given observation area $\Omega$ (i.e., they form a non-homogeneous Poisson process).
Hence, this work generalizes the results obtained in Papers I and II in three directions: (i) a non-constant variance $\sigma^2(\vec\theta)$ is considered; (ii) the density of measurement locations $\rho$ can vary on the field; (iii) no restriction is put on the size of the observation area $\Omega$, which can be finite. Surprisingly, although these generalizations significantly widen the applicability of our results, they do not make our method more complicated at all. Indeed, as we will see below, the problem seems to find a very natural description in the more general framework used here.

It should be stressed that the generalizations carried out here are very important in the astrophysical context. Indeed, astronomical data will normally be available only on a limited area of the sky, and so boundary effects have to be taken into account. Moreover, often the measurements will not be uniformly distributed on the observed area. This happens, for example, for data based on stars, which have a higher density when one approaches the galactic equator. However, even for astronomical objects that are, in principle, uniformly distributed on the sky (e.g., distant galaxies or quasars), we might need to deal with a non-uniform distribution because of observational effects (e.g., because of a non-constant sensitivity on the field of the detector used, of dithering patterns, or of the presence on the field of bright objects that interfere with the measurements).

The paper is organized as follows. In Sect. 2 we carry out the various generalizations in turn. The properties of the average and of the two-point correlation function of the smoothed field are discussed in Sect. 3. Finally, we summarize the results of this paper in Sect. 4.

   
2 Main results

   
2.1 Position dependent weight function

Looking at Eq. (2), we can note that the interpolation point  $\vec\theta$ does not directly enter the problem, but is basically only used to "label'' the interpolated point. For our analysis, indeed, it is convenient to fix that point to, say, $\vec\theta = \vec\theta_A$, and to rewrite Eq. (2) as

 \begin{displaymath}
\tilde{f} _A \equiv \frac{\sum_{n=1}^N \hat f_n
w_A(\vec\theta_n)}{\sum_{n=1}^N w_A(\vec\theta_n)},
\end{displaymath} (4)

where we have defined $f_A \equiv f(\vec\theta_A)$ and $w_A(\vec\theta') \equiv w(\vec\theta_A - \vec\theta')$. In this new notation, under the hypotheses of Paper I (homogeneous Poisson process with uniform density $\rho$ for the measurement locations, infinite field), the expectation value  $\bigl\langle \tilde{f}_A \bigr\rangle$ can be evaluated exactly from the equations
    
                                             $\displaystyle Q_A(s) = \rho \int_\Omega \bigl[ {\rm e}^{-s w_A(\vec\theta)} - 1
\bigr] ~ {\rm d}^2 \theta \; ,$ (5)
    $\displaystyle Y_A(s) = \exp \bigl[ Q_A(s) \bigr] \; ,$ (6)
    $\displaystyle C_A(w_A) = \frac{1}{1 - P_A} \int_0^\infty {\rm e}^{-w_A s} Y_A(s)
~ {\rm d}s \; ,$ (7)
    $\displaystyle \bigl\langle \tilde{f} _A \bigr\rangle = \rho \int_\Omega
f(\vec\theta) w_A(\vec\theta) C_A\bigl( w_A(\vec\theta) \bigr) ~
{\rm d}^2 \theta \; ,$ (8)

where $\Omega$ is the observation area (taken to be very large compared with the typical scale of the weight function $w_A$) and $P_A$ is the probability of having no point inside the support of $w_A$ (i.e. $P_A = \exp(-\rho \pi_A)$, with $\pi_A$ the area of the support of $w_A$).
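
To make Eqs. (5)-(8) concrete, the following Python sketch evaluates them numerically on a grid for a Gaussian weight function centred on $\vec\theta_A$ and a uniform density (so that the support of $w_A$ is the whole plane and $P_A = 0$); the density, grid sizes and test field below are arbitrary choices for this illustration, not values taken from the paper.

import numpy as np

rho = 2.0                                   # uniform density of measurement locations
L, n = 8.0, 201                             # half-size of the (large) field and grid points
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0])**2                       # surface element for the d^2 theta integrals

w_A = np.exp(-0.5 * (X**2 + Y**2))          # Gaussian weight centred on theta_A = (0, 0)
f = 1.0 + 0.3 * X                           # a smooth test field

s = np.linspace(0.0, 60.0, 2001)
ds = s[1] - s[0]
Q_A = rho * dA * np.array([np.sum(np.expm1(-si * w_A)) for si in s])   # Eq. (5)
Y_A = np.exp(Q_A)                                                      # Eq. (6)

# Eq. (7), tabulated over weight values (here P_A = 0, so the prefactor is unity)
w_tab = np.linspace(0.0, w_A.max(), 400)
C_tab = np.array([np.sum(np.exp(-wi * s) * Y_A) * ds for wi in w_tab])
C_A = np.interp(w_A, w_tab, C_tab)

mean_f_A = rho * dA * np.sum(f * w_A * C_A)                            # Eq. (8)
print("predicted <f~_A> =", mean_f_A)       # ~f(0,0) = 1: f is linear and w_A is symmetric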

Equations (4)-(8) show explicitly that the location  $\vec\theta$ does not enter our problem. As a result, looking again at Eq. (2), there is no need to take the weight function to be of the form  $w(\vec\theta - \vec\theta_n)$, but we can instead consider in the same framework the more general form $w(\vec\theta, \vec\theta_n)$. As we will see below, the trivial generalization described in this subsection is actually a fundamental step for more interesting results.

   
2.2 Finite fields

We now focus on a slightly different generalization, namely the use of finite fields. We observe that having no data outside a given field is totally equivalent to having data everywhere and using a vanishing weight for locations outside the field. In other words, even if our observations are confined to a small part of the sky, we can always imagine having data on the whole sky, by generating arbitrary locations and values for the missing data, and then discarding these arbitrary data by assigning them a vanishing weight. In turn, from the form of the integrands in Eqs. (5) and (8), we see that the integrals actually need to be performed only on the domain of the weight function (the integrands, indeed, vanish at the points where $w_A$ vanishes). As a result, Eqs. (5)-(8) can still be used, provided we interpret $\Omega$ as the observation field.

   
2.3 Non-uniform density

We can now finally generalize Eqs. (5)-(8) to non-constant densities. We first observe that the remaining spatial variable  $\vec\theta'$ that appears in the definition (4) is a dummy variable. Indeed, $\vec\theta'$ is basically used to "name'' locations, but does not really play any role in the interpolation process. For example, performing an arbitrary bijective (i.e., one-to-one) mapping  $\vec\theta \mapsto \vec\eta$ described by the function  $\vec\eta(\vec\theta)$ will not change the value of  $\tilde{f}_A$, provided that we use the "mapped'' weight function $w_A^{(\vec\theta)}(\vec\theta) = w_A^{(\vec\eta)}\bigl( \vec\eta(\vec\theta) \bigr)$. On the other hand, when performing the transformation  $\vec\theta \mapsto \vec\eta$, we are bound to change the density distribution of objects. More precisely, if the objects are uniformly distributed on the plane $\vec\eta$ with density  $\rho^{(\vec\eta)}$, they will be distributed according to a non-uniform density  $\rho^{(\vec\theta)}(\vec\theta)$ on the $\vec\theta$ plane. The final density, indeed, is given by

 \begin{displaymath}
\rho^{(\vec\theta)}(\vec\theta) = \left\vert \det \left( \frac{\partial \vec\eta}{\partial \vec\theta} \right)\right\vert ~ \rho^{(\vec\eta)} .
\end{displaymath} (9)

This observation suggests a possible way to study non-uniform densities. Suppose that we intend to study the expectation value of  $\tilde{f}_A$ of Eq. (4) when the locations on the $\vec\theta$ plane are distributed according to a non-uniform density  $\rho^{(\vec\theta)}(\vec\theta)$. Then, we can look for a one-to-one mapping  $\vec\theta \mapsto \vec\eta$ such that the corresponding density  $\rho^{(\vec\eta)}$, evaluated according to Eq. (9), is uniform; we also modify the weight function accordingly. Now, since in the $\vec\eta$ plane the locations are uniformly distributed, we are in a position to study the problem using the technique developed in Paper I. Moreover, since the value of  $\tilde{f}_A$ does not depend on the coordinates $\vec\eta$ or  $\vec\theta$ used, we can directly use the result obtained.

The method described above clearly allows us to solve a much more general problem, but it also presents two main difficulties. From the theoretical side, one has to show that it is possible to find a one-to-one mapping that satisfies our needs (namely, that  $\rho^{(\vec\eta)}$ is uniform). From the practical side, it might be non-trivial to find the function  $\vec\eta(\vec\theta)$; moreover, for every point  $\vec\theta_A$, one needs to transform the weight function  $w_A^{(\vec\theta)}(\vec\theta) = w(\vec\theta_A, \vec\theta)$ into a weight function on the $\vec\eta$ plane (see above). We now address both problems, showing that our equations can be reformulated in a way that naturally allows for non-uniform densities.

First, we explicitly show that, for every density distribution  $\rho^{(\vec\theta)}(\vec\theta)$ it is always possible to find a one-to-one function  $\vec\eta(\vec\theta)$ such that the corresponding  $\rho^{(\vec\eta)}$, evaluated from Eq. (9), is constant. Let us, in fact, consider the transformation

 \begin{displaymath}
\eta_i(\vec\theta) =
\left\{\begin{array}{ll}
\displaystyle \int_{-\infty}^{\theta_1} \rho^{(\vec\theta)}(\theta'_1, \theta_2, \ldots, \theta_M) ~ {\rm d}\theta'_1 & \textrm{if } i = 1 \; , \\
\theta_i & \textrm{otherwise} \; ,
\end{array}\right.
\end{displaymath} (10)

where M is the number of dimensions of  $\vec\theta$ (typically, in Astrophysics, M=2). This transformation is one-to-one (since $\eta_1$ is strictly increasing with $\theta_1$ wherever $\rho^{(\vec\theta)} > 0$) and has Jacobian determinant equal to  $\rho^{(\vec\theta)}(\vec\theta)$, so that, from Eq. (9), the corresponding density  $\rho^{(\vec\eta)}$ is constant (and equal to unity). This proves the existence of a one-to-one function  $\vec\eta(\vec\theta)$ with the required properties (note that there are many possible choices for  $\vec\eta(\vec\theta)$, but they are all equivalent for our purposes).
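
A one-dimensional numerical sketch may clarify the construction: mapping each location through the cumulative integral of the density, as in Eq. (10), turns points drawn with a non-uniform density $\rho$ into points with unit density. The density, interval and random seed below are arbitrary choices for this illustration.

import numpy as np

rng = np.random.default_rng(1)
a, b = 0.0, 200.0
rho = lambda t: 1.0 + 0.8 * np.sin(t)        # a non-uniform, strictly positive density

# draw a non-homogeneous Poisson process on [a, b] by thinning a homogeneous one
rho_max = 1.8
n_raw = rng.poisson(rho_max * (b - a))
t_raw = rng.uniform(a, b, n_raw)
theta = np.sort(t_raw[rng.uniform(0.0, rho_max, n_raw) < rho(t_raw)])

# eta(theta) = cumulative integral of rho, the 1D analogue of Eq. (10)
grid = np.linspace(a, b, 20001)
pieces = 0.5 * (rho(grid[1:]) + rho(grid[:-1])) * np.diff(grid)
cum = np.concatenate([[0.0], np.cumsum(pieces)])
eta = np.interp(theta, grid, cum)

# in the eta coordinate the points have (approximately) unit density
print("points per unit eta :", len(eta) / cum[-1])     # ~1
print("mean spacing in eta :", np.mean(np.diff(eta)))  # ~1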

We now turn to the second problem, namely the practical difficulties in applying the technique discussed in this section. Suppose again that we are interested in evaluating the expectation value of  $\tilde{f}_A$ of Eq. (4) with a non-uniform density  $\rho^{(\vec\theta)}(\vec\theta)$. Then, we can use Eq. (10) to convert the problem into the $\vec\eta$ plane, so that the corresponding density is unity. We can then finally apply Eqs. (5)-(8) on $\vec\eta$, using $\rho^{(\vec\eta)} = 1$. In particular, for Eq. (5) we have

 
$\displaystyle Q_A(s) = \rho^{(\vec\eta)} \int_{\vec\eta(\Omega)} \left[ {\rm e}^{-s w^{(\vec\eta)}_A(\vec\eta)} - 1 \right] ~ {\rm d}^2 \eta$
$\displaystyle \phantom{Q_A(s)} = \int_\Omega \left[ {\rm e}^{-s w^{(\vec\theta)}_A(\vec\theta)} - 1 \right] \left\vert \det \left( \frac{\partial \vec\eta}{\partial \vec\theta} \right)\right\vert \rho^{(\vec\eta)} ~ {\rm d}^2 \theta$
$\displaystyle \phantom{Q_A(s)} = \int_\Omega \left[ {\rm e}^{-s w^{(\vec\theta)}_A(\vec\theta)} - 1 \right] \rho^{(\vec\theta)}(\vec\theta) ~ {\rm d}^2 \theta \; .$ (12)

Note that in the second line we have performed a change of variables from $\vec\eta$ back to  $\vec\theta$; note also that we have used the fact that $w^{(\vec\theta)}(\vec\theta) = w^{(\vec\eta)}\bigl( \vec\eta(\vec\theta) \bigr)$. Hence, we can still use an equation very close to Eq. (5): we just need to include the non-uniform density  $\rho^{(\vec\theta)}$ in the integrand. Similarly, for Eq. (8) we find
 
$\displaystyle \bigl\langle \tilde{f}_A \bigr\rangle = \rho^{(\vec\eta)} \int_{\vec\eta(\Omega)} f^{(\vec\eta)}_A(\vec\eta) w_A^{(\vec\eta)}(\vec\eta) C_A\Bigl( w_A^{(\vec\eta)}(\vec\eta) \Bigr) ~ {\rm d}^2 \eta$
$\displaystyle \phantom{\bigl\langle \tilde{f}_A \bigr\rangle} = \int_\Omega f^{(\vec\theta)}_A(\vec\theta) w_A^{(\vec\theta)}(\vec\theta) C_A\Bigl( w_A^{(\vec\theta)}(\vec\theta) \Bigr) \left\vert \det \left( \frac{\partial \vec\eta}{\partial \vec\theta} \right)\right\vert \rho^{(\vec\eta)} ~ {\rm d}^2 \theta$
$\displaystyle \phantom{\bigl\langle \tilde{f}_A \bigr\rangle} = \int_\Omega f^{(\vec\theta)}_A(\vec\theta) w_A^{(\vec\theta)}(\vec\theta) C_A\Bigl( w_A^{(\vec\theta)}(\vec\theta) \Bigr) ~ \rho^{(\vec\theta)}(\vec\theta) ~ {\rm d}^2 \theta \; .$ (13)

Equations (6) and (7) remain unchanged. Hence, by simply using a change of variables back to  $\vec\theta$, we have been able to obtain a solution of the problem that does not involve $\vec\eta$ any more. This shows once more that the problem, as expected, does not depend on the details of the choice of the transformation  $\vec\theta \mapsto \vec\eta$ (see the comment at the end of the previous paragraph). In the following, therefore, we will drop the superscript  $(\vec\theta)$ used in this section, and we will always assume that all functions are evaluated in the coordinate system defined by  $\vec\theta$. Finally, note that the expressions in the last lines of Eqs. (12) and (13) are still valid for vanishing densities; in other words, we can either include the finite-field effects in the definition of $\Omega$, or just put  $\rho^{(\vec\theta)}(\vec\theta) = 0$ outside the observation field.

Before closing this subsection, we consider the case of a non-continuous density field  $\rho^{(\vec\theta)}$. We recall, indeed, that with Eq. (10) we have been able to provide a one-to-one, continuous transformation  $\vec\eta(\vec\theta)$ only under the hypothesis that  $\rho^{(\vec\theta)}$ be continuous. Although this hypothesis is not needed, it is non-trivial to exhibit a mapping with the required properties in the general case of a non-continuous density. In reality, this problem is only apparent. Note, in fact, that the density  $\rho^{(\vec\theta)}$ enters Eqs. (13) and (15) only as a term inside an integral, and thus the continuity of this function does not play any role in our problem. For example, if we convolve a discontinuous density  $\rho^{(\vec\theta)}$ with a Gaussian,

 \begin{displaymath}
\rho^{(\vec\theta)}_{\rm s}(\vec\theta) = \int \frac{1}{2 \pi a^2} \exp \left( - \frac{\Vert \vec\theta' \Vert^2}{2 a^2} \right) \rho^{(\vec\theta)}(\vec\theta - \vec\theta') ~ {\rm d}^2 \theta' \; ,
\end{displaymath} (14)

we obtain a smooth function  $\rho^{(\vec\theta)}_{\rm s}(\vec\theta)$. This function, then, can be used in Eqs. (13) and (15) in place of the density. Note that, since  $\rho^{(\vec\theta)}_{\rm s}(\vec\theta)$ is smooth for any a > 0, we can apply the transformation (10) without any problem. Finally, we take the limit $a \to 0^+$, so that the results of the integrations (13) and (15) are not modified by the use of  $\rho^{(\vec\theta)}_{\rm s}$ instead of  $\rho^{(\vec\theta)}$.

   
2.4 Average map


Figure 1: The effect of a non-constant density on the effective weight. The plot shows, in the 1D case, the quantity  $\rho (x) w_{\rm eff}(x)$ for three different densities, $\rho _1(x) = 1$, $\rho _2(x) = 1 + H(x)/4$, and  $\rho_3(x) = 1 - (\cos x) / 2$. In all cases the original weight function has been chosen to be such that the combination  $w(x) \rho (x)$ is a Gaussian (see solid line plot).


Figure 2: Effective weight function in the presence of boundaries. Three Gaussian weight functions (solid lines) centered on different parts of the field  $\Omega = [-5, 5]$ produce significantly different effective weights (dashed lines). The weight functions have been normalized according to Eq. (20); $\rho (x) = 1.5$ is constant for this plot.

We summarize here the results obtained in this section. We have shown that the expectation value of  $\tilde{f}_A$ can be evaluated from the set of equations

    
$\displaystyle Q_A(s) = \int_\Omega \bigl[ {\rm e}^{-s w_A(\vec\theta)} - 1 \bigr] \rho(\vec\theta) ~ {\rm d}^2 \theta \; ,$ (15)
    $\displaystyle Y_A(s) = \exp \bigl[ Q_A(s) \bigr] \; ,$ (16)
    $\displaystyle C_A(w_A) = \frac{1}{1 - P_A} \int_0^\infty {\rm e}^{-w_A s} Y_A(s) ~ {\rm d}s \; ,$ (17)
    $\displaystyle \bigl\langle \tilde{f}_A \bigr\rangle = \int_\Omega f(\vec\theta) w_A(\vec\theta) C_A\bigl( w_A(\vec\theta) \bigr) \rho(\vec\theta) ~ {\rm d}^2 \theta \; ,$ (18)

where the probability PA can be evaluated from

 \begin{displaymath}
P_A = \exp \biggl[ - \int_{\pi_{w_A}} \rho(\vec\theta) ~ {\rm d}^2
\theta \biggr] \; .
\end{displaymath} (19)

Appendix A reports an alternative method to derive this set of equations. In the rest of the paper we will assume, without loss of generality, that the weight function satisfies, for each point  $\vec\theta$, the (position dependent) normalization property

 \begin{displaymath}
\int_\Omega w(\vec\theta, \vec\theta') \rho(\vec\theta') ~ {\rm d}^2 \theta' = \int_\Omega w_A(\vec\theta') \rho(\vec\theta') ~ {\rm d}^2 \theta' = 1 \; .
\end{displaymath} (20)

Indeed, since only relative values of the weight function matter in our problem, we can always assume that the weight function has been normalized according to Eq. (20).

Similarly to what was done in Paper I, we call $w_{{\rm eff}A} =
w_A C_A(w_A)$ the effective weight function, so that we can write Eq. (18) as

 \begin{displaymath}
\bigl\langle \tilde{f}_A \bigr\rangle = \int_\Omega f(\vec\theta) w_{{\rm eff}A}(\vec\theta) \rho(\vec\theta) ~ {\rm d}^2 \theta \; .
\end{displaymath} (21)

Note that, in contrast to Paper I, in the definition of the effective weight function we have not included the density  $\rho(\vec\theta)$, which thus must be explicitly added in the integration of Eq. (21). This choice is convenient for two reasons: the effective weight is then a function of the value of the weight function and not (directly) of the position  $\vec\theta$, and, as we will see in Sect. 3.1, its normalization property becomes similar to the normalization (20) of the original weight. The effects of a non-constant density and of a finite field on the effective weight are shown in Figs. 1 and 2.
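
For illustration, the following Python sketch evaluates the effective weight from the one-dimensional analogue of Eqs. (15)-(19) in the setting of Fig. 2 (field $\Omega = [-5, 5]$, constant $\rho = 1.5$, Gaussian weights normalized as in Eq. (20)), and checks numerically the normalization of the effective weight discussed in Sect. 3.1.1; all numerical choices are arbitrary.

import numpy as np

Omega = np.linspace(-5.0, 5.0, 1001)                  # finite 1D field
dx = Omega[1] - Omega[0]
rho = np.full_like(Omega, 1.5)

def effective_weight(x_A, width=1.0):
    w = np.exp(-0.5 * (Omega - x_A)**2 / width**2)
    w /= np.sum(w * rho) * dx                         # normalization of Eq. (20)
    s = np.linspace(0.0, 200.0, 4001)
    ds = s[1] - s[0]
    Q = dx * np.array([np.sum(np.expm1(-si * w) * rho) for si in s])   # Eq. (15)
    Ys = np.exp(Q)                                                     # Eq. (16)
    P_A = np.exp(-np.sum(rho) * dx)                                    # Eq. (19); support = Omega
    C = np.array([np.sum(np.exp(-wi * s) * Ys) * ds for wi in w]) / (1.0 - P_A)  # Eq. (17)
    return w * C                                      # effective weight w_eff = w_A C_A(w_A)

for x_A in (0.0, 3.0, 4.5):
    w_eff = effective_weight(x_A)
    print(x_A, "int w_eff rho dx =", np.sum(w_eff * rho) * dx)   # ~1 (cf. Sect. 3.1.1)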

   
2.5 Final solution: Two-point correlation function

We now turn to the generalization of the results of Paper II concerning the covariance of  $\protect\tilde{f}$, i.e. its two-point correlation function. Since the generalization procedure closely follows the one used in Sect. 2 for the average, we skip many details here and mainly report the final results.

We first recall that in Paper II we defined $\tilde{f}_B$ and $w_B$ similarly to  $\tilde{f}_A$ and $w_A$, with the only difference that these quantities are calculated with respect to a different point  $\vec\theta_B$. We then defined the two-point correlation function of  $\protect\tilde{f}$ as

 \begin{displaymath}
{\rm Cov}(\tilde{f}; \theta_A, \theta_B) = \langle \tilde{f}_A \tilde{f}_B \rangle - \langle \tilde{f}_A \rangle \langle \tilde{f}_B \rangle \; ,
\end{displaymath} (22)

and we have shown that this quantity is composed of two terms, ${\rm Cov}(\tilde{f}; \theta_A, \theta_B) = T_\sigma + T_{\rm P}$, where $T_\sigma $ is the noise due to measurement errors and $T_{\rm P} = T_{P1} + T_{P2} - T_{P3}$ is the Poisson noise (split, in turn, into three terms; see Eqs. (27)-(29) below).

Using an argument similar to the one adopted in Sect. 2, we can generalize the results of Paper II to the hypotheses discussed in the items of Sect. 1. We show here only the final results and skip the proof, which is a trivial repetition of what was done above for the average.

       
$\displaystyle Q(s_A, s_B) = \int_\Omega \bigl[ {\rm e}^{-s_A w_A(\vec\theta) - s_B w_B(\vec\theta)} - 1 \bigr] ~ \rho(\vec\theta) ~ {\rm d}^2 \theta \; ,$ (23)
    $\displaystyle Y(s_A, s_B) = \exp \bigl[ Q(s_A, s_B) \bigr] \; .$ (24)
    $\displaystyle C(w_A, w_B) = \nu \int_0^\infty \! {\rm d}s_A \int_0^\infty \! {\rm d}s_B ~ {\rm e}^{-s_A w_A - s_B w_B} Y(s_A, s_B) \; ,$ (25)
    $\displaystyle T_\sigma = \int_\Omega {\rm d}^2 \theta ~ \rho(\vec\theta) \sigma^2(\vec\theta) w_A(\vec\theta) w_B(\vec\theta) C \bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) \; ,$ (26)
    $\displaystyle T_{P1} = \int_\Omega {\rm d}^2 \theta ~ \rho(\vec\theta) \bigl[ f(\vec\theta) \bigr]^2 w_A(\vec\theta) w_B(\vec\theta) C\bigl( w_A(\vec\theta), w_B(\vec\theta) \bigr) \; ,$ (27)
    $\displaystyle T_{P2} = \int_\Omega \! {\rm d}^2 \theta_1 ~ \rho(\vec\theta_1) \int_\Omega \! {\rm d}^2 \theta_2 ~ \rho(\vec\theta_2) f(\vec\theta_1) f(\vec\theta_2) w_A(\vec\theta_1) w_B(\vec\theta_2)$
    $\displaystyle \phantom{T_{P2} = {}} \times C\bigl( w_A(\vec\theta_1) + w_A(\vec\theta_2), w_B(\vec\theta_1) + w_B(\vec\theta_2) \bigr) \; ,$ (28)
    $\displaystyle T_{P3} = \langle \tilde{f}_A \rangle \langle \tilde{f}_B \rangle$
    $\displaystyle \phantom{T_{P3}} = \biggl[ \int_\Omega {\rm d}^2 \theta_1 ~ \rho(\vec\theta_1) f(\vec\theta_1) w_A(\vec\theta_1) C_A\bigl( w_A(\vec\theta_1) \bigr) \biggr]$
    $\displaystyle \phantom{T_{P3} = {}} \times \biggl[ \int_\Omega {\rm d}^2 \theta_2 ~ \rho(\vec\theta_2) f(\vec\theta_2) w_B(\vec\theta_2) C_B\bigl( w_B(\vec\theta_2) \bigr) \biggr] \; .$ (29)

In Eq. (25) we have defined $\nu = 1/(1 - P_A - P_B + P_{AB})$, where the probabilities $P_A$ and $P_B$ can be evaluated from Eq. (19), and $P_{AB}$ (the probability of having no points inside the union of the supports of $w_A$ and $w_B$) is given by

 \begin{displaymath}
P_{AB} = \exp \biggl[ - \int_{\pi_{w_A} \cup \pi_{w_B}}
\rho(\vec\theta) ~ {\rm d}^2 \theta \biggr] \; .
\end{displaymath} (30)
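
As a rough numerical illustration (one-dimensional, with arbitrary parameters), the sketch below evaluates the measurement-noise term $T_\sigma$ of Eq. (26), computing $C(w_A, w_B)$ from the double Laplace transform of Eq. (25) on a grid; the two Gaussian weights are normalized as in Eq. (20) and cover the whole field, so that $P_A = P_B = P_{AB}$.

import numpy as np

x = np.linspace(-5.0, 5.0, 301)
dx = x[1] - x[0]
rho = 1.5 * np.ones_like(x)
sigma2 = 0.04 * np.ones_like(x)                         # constant measurement variance

def norm_weight(x_c, width=1.0):
    w = np.exp(-0.5 * (x - x_c)**2 / width**2)
    return w / (np.sum(w * rho) * dx)                   # Eq. (20)

w_A, w_B = norm_weight(0.0), norm_weight(0.7)           # two nearby interpolation points

s = np.linspace(0.0, 80.0, 301)
ds = s[1] - s[0]
SA, SB = np.meshgrid(s, s, indexing="ij")
Q = np.array([[np.sum(np.expm1(-sa * w_A - sb * w_B) * rho) * dx for sb in s]
              for sa in s])                             # Eq. (23)
Ygrid = np.exp(Q)                                       # Eq. (24)

P_A = P_B = P_AB = np.exp(-np.sum(rho) * dx)            # both supports cover the whole field
nu = 1.0 / (1.0 - P_A - P_B + P_AB)

def C(wa, wb):
    # Eq. (25): double Laplace transform of Y(s_A, s_B)
    return nu * np.sum(np.exp(-wa * SA - wb * SB) * Ygrid) * ds * ds

T_sigma = dx * sum(sigma2[i] * w_A[i] * w_B[i] * C(w_A[i], w_B[i]) * rho[i]
                   for i in range(len(x)))              # Eq. (26)
print("T_sigma =", T_sigma, "  sigma^2_max =", sigma2.max())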

   
3 Properties

In this section we will consider some interesting properties of the average (Sect. 3.1) and of the two-point correlation function (Sect. 3.2) of  $\protect\tilde{f}(\vec\theta)$. Hence, here we basically generalize Sect. 5 of Paper I and Sect. 6 of Paper II.

   
3.1 Average map

   
3.1.1 Normalization

By construction, for a constant field $f(\vec\theta) = 1$ the smoothed function  $\tilde{f}_A$ defined in Eq. (4) returns on average 1, a property related to the normalization of the effective weight (see Lombardi 2002). Indeed, if $f(\vec\theta) = 1$, we find

 
$\displaystyle I \equiv \langle \tilde{f}_A \rangle = \int_\Omega w_{{\rm eff}A}(\vec\theta) \rho(\vec\theta) ~ {\rm d}^2 \theta$
    $\displaystyle \phantom{I} = \int_\Omega w_A(\vec\theta) C_A\bigl( w_A(\vec\theta) \bigr) \rho(\vec\theta) ~ {\rm d}^2 \theta$
    $\displaystyle \phantom{I} = \frac{1}{1 - P_A} \int_0^\infty {\rm d}s ~ {\rm e}^{Q_A(s)} \int_\Omega {\rm d}^2 \theta ~ \rho(\vec\theta) w_A(\vec\theta) {\rm e}^{-s w_A(\vec\theta)} \; ,$ (31)

where Eqs. (15)-(18) have been used in the last step. The last integral is just $-Q_A'(s)$, so that

$\displaystyle I = - \frac{1}{1 - P_A} \int_0^\infty Q_A'(s) {\rm e}^{Q_A(s)} ~ {\rm d}s = \left. -\frac{1}{1 - P_A} {\rm e}^{Q_A(s)} \right\vert _0^\infty = 1 \; .$ (32)

Hence, the effective weight has the same normalization property as the weight function wA (see Eq. (20)).

   
3.1.2 Scaling

Suppose we rescale the weight function $w_A(\vec\theta)$ into  $k^2 w_A(k
\vec\theta)$, and at the same time the density  $\rho(\vec\theta)$ into  $k^2
\rho(\vec\theta)$; then we can verify using Eqs. (15)-(18) that the effective weight is rescaled similarly to wA, i.e. $w_{{\rm eff}A}(\vec\theta) \mapsto k^2 w_{{\rm eff}A}(k \vec\theta)$.

This scaling property suggests the following definition:

 \begin{displaymath}
\mathcal{N}_A \equiv \biggl[ (1 - P_A) \int_\Omega \bigl[ w_A(\vec\theta) \bigr]^2 \rho(\vec\theta) ~ {\rm d}^2 \theta \biggr]^{-1} \; .
\end{displaymath} (33)

This quantity represents the number of "relevant'' points used in the average, i.e. the expected number of locations for which the weight $w_A$ is significantly different from zero. The $(1 - P_A)$ term in Eq. (33) is introduced in order to compensate for finite-field effects (cf. the similar factor in Eq. (7)); for example, for a top-hat weight function, this guarantees that $\mathcal{N}_A = \rho \pi_{w_A} / \bigl[ 1 - \exp(-\rho \pi_{w_A}) \bigr] > 1$ for any density. In any case, the above definition finds its main justification in the properties that the quantity  $\mathcal{N}_A$ so defined enjoys (see below). Following Paper I, we call  $\mathcal{N}_A$ the weight number of objects; similarly, we define the effective weight number of objects $\mathcal{N}_{{\rm eff}A}$ using the effective weight  $w_{{\rm eff}A}$ instead of $w_A$ in Eq. (33). Note that the related weight area $\mathcal{A}$ and effective weight area $\mathcal{A}_{\rm eff}$, also introduced in Paper I (see Eq. (39) there), cannot be defined in the general case of a non-uniform density.
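
As a quick sanity check of the top-hat example quoted above (in one dimension, with the support area $\pi_{w_A}$ replaced by the support length $\ell$), one can verify numerically that the definition (33) indeed gives $\rho \ell / [1 - \exp(-\rho \ell)]$; the density and support length below are arbitrary.

import numpy as np

rho, ell = 1.2, 2.5                              # constant density and top-hat support length
x = np.linspace(-ell / 2.0, ell / 2.0, 10001)
dx = x[1] - x[0]
w = np.ones_like(x)
w /= np.sum(w * rho) * dx                        # normalization (20): w = 1 / (rho * ell)

P_A = np.exp(-rho * ell)                         # Eq. (19) for a top-hat weight
N_A = 1.0 / ((1.0 - P_A) * np.sum(w**2 * rho) * dx)   # Eq. (33), 1D analogue
print(N_A, rho * ell / (1.0 - np.exp(-rho * ell)))    # the two numbers agree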

   
3.1.3 Behavior of $w_A C_A(w_A)$

A study of $w_A C_A(w_A)$ can be carried out using the same technique adopted in Paper I. Since $Y_A(s) > 0$ for every s, $C_A(w_A)$ decreases as $w_A$ increases. Regarding  $w_A C_A(w_A)$, from the properties of the Laplace transform (see, e.g., Arfken 1985; see also Appendix D of Paper II) we have

 \begin{displaymath}
w_{{\rm eff} A} = w_A C_A(w_A) = Y_A(0) + \mathcal L[Y'_A](w_A) \; ,
\end{displaymath} (34)

where $\mathcal L[ \cdot ]$ indicates the Laplace transform. Since $Y'_A(s) = Q'_A(s) Y_A(s) < 0$, we find that $w_{{\rm eff}A}(w_A)$ increases with $w_A$. This implies that there must be a value $\bar w_A$ of the weight function $w_A$ such that $C_A(w_A) > 1$ if $w_A < \bar w_A$, and $C_A(w_A) < 1$ if $w_A > \bar w_A$. Indeed, since $C_A$ is monotonic, the equation $w_{{\rm eff}A}(w_A) = w_A$ can have at most one solution; however, this equation must have at least one solution because both $w_A$ and  $w_{{\rm eff}A}$ satisfy the same normalization property (cf. Eqs. (20) and (32)). The quantity

 \begin{displaymath}
D \equiv \int_\Omega \bigl[ w_A(\vec\theta) + w_{{\rm eff}A}(\vec\theta) - 2 \bar w_A \bigr] \bigl[ w_A(\vec\theta) - w_{{\rm eff}A}(\vec\theta) \bigr] \rho(\vec\theta) ~ {\rm d}^2 \theta \geq 0
\end{displaymath} (35)

is non-negative because the integrand is non-negative everywhere: where $w_A > \bar w_A$ one has both $w_{{\rm eff}A} > \bar w_A$ (since $w_{{\rm eff}A}$ is increasing and $w_{{\rm eff}A}(\bar w_A) = \bar w_A$) and $w_{{\rm eff}A} < w_A$ (since there $C_A < 1$), so that both factors are positive; where $w_A < \bar w_A$ both factors are negative. By expanding the integrand we find
 
$\displaystyle 0 \leq D = \int_\Omega \bigl[ w_A(\vec\theta) \bigr]^2 \rho(\vec\theta) ~ {\rm d}^2 \theta - \int_\Omega \bigl[ w_{{\rm eff}A}(\vec\theta) \bigr]^2 \rho(\vec\theta) ~ {\rm d}^2 \theta - 2 \bar w_A \int_\Omega \bigl[ w_A(\vec\theta) - w_{{\rm eff}A}(\vec\theta) \bigr] \rho(\vec\theta) ~ {\rm d}^2 \theta$
    $\displaystyle \phantom{0 \leq D} = \frac{1}{1 - P_A} \left( \frac{1}{\mathcal{N}_A} - \frac{1}{\mathcal{N}_{{\rm eff}A}} \right) \; ,$ (36)

where the normalization of $w_A$ and of $w_{{\rm eff}A}$ has been used (the last term of the expansion vanishes because both weights integrate to unity). Hence, we find  $\mathcal{N}_{{\rm eff}A} \geq \mathcal{N}_A$.

We now consider the limits of $w_{{\rm eff}A}(w_A)$ for small and large values of wA,

 \begin{displaymath}
\lim_{w_A \to \infty} w_A C_A(w_A) = \lim_{s \to
0^+} \frac{Y(s)}{1 - P_A} = \frac{1}{1 - P_A} \cdot
\end{displaymath} (37)

Since $w_{{\rm eff}A}(w_A)$ is monotonic, $(1 - P_A)^{-1}$ is an upper limit for the effective weight function. This property, in turn, can be used inside the definition of  $\mathcal{N}_{{\rm eff}A}$ to obtain
 
$\displaystyle \mathcal{N}_{{\rm eff}A}^{-1} = (1 - P_A) \int_\Omega \bigl[ w_{{\rm eff}A}(\vec\theta) \bigr]^2 \rho(\vec\theta) ~ {\rm d}^2 \theta < \int_\Omega w_{{\rm eff}A}(\vec\theta) \rho(\vec\theta) ~ {\rm d}^2 \theta = 1 \; .$ (38)

In other words, the effective number of objects will always exceed unity, independently of the weight function used.

Regarding the other limit we have

 \begin{displaymath}
\lim_{w_A \to 0^+} w_A C_A(w_A) = \lim_{s \to \infty}
\frac{1}{1 - P_A} Y_A(s) = \frac{P_A}{1 - P_A} \cdot
\end{displaymath} (39)

Hence, since $w_{{\rm eff}A}(w_A)$ vanishes if wA = 0, the effective weight has a discontinuity at 0 if  $P_A \neq 0$.

   
3.1.4 Limit of high and low densities

At high densities ( $\rho \to \infty$) only values of $Q_A(s)$ close to s = 0 are important, because for large s, $Y_A(s)$ vanishes. Hence, we expand $Q_A(s)$ by writing

 \begin{displaymath}
Q_A(s) = \sum_{k=1}^\infty \frac{(-1)^k s^k S_{Ak}}{k!} \; ,
\end{displaymath} (40)

where $S_{Ak}$ is the kth moment of $w_A$:

 \begin{displaymath}
S_{Ak} \equiv \int_\Omega \bigl[ w_A(\vec\theta) \bigr]^k
\rho(\vec\theta) ~ {\rm d}^2 \theta \; .
\end{displaymath} (41)

The normalization (20) implies $S_{A1} = 1$, and so to first order  $Y_A(s) \simeq {\rm e}^{-s}$. We then have

 \begin{displaymath}
C_A(w_A) \simeq \frac{1}{1 - P_A} \int_0^\infty {\rm e}^{-s w_A} {\rm e}^{-s} ~ {\rm d}s = \frac{1}{1 - P_A} \frac{1}{1 + w_A} \cdot
\end{displaymath} (42)

In the limit of low densities ( $\rho \to 0^+$), instead, $Y_A(s) \to 1$ and

 \begin{displaymath}
C_A(w_A) \simeq \frac{1}{1 - P_A} \frac{1}{w_A} \cdot
\end{displaymath} (43)

Expanding Eq. (19) to first order in $\rho$ we find, for wA > 0,

 \begin{displaymath}
w_{{\rm eff}A} = w_A C_A(w_A) \simeq \biggl[ \int_{\pi_{w_A}} \rho(\vec\theta) ~ {\rm d}^2 \theta \biggr]^{-1} \; .
\end{displaymath} (44)

Hence, the effective weight converges to a top-hat function normalized to unity.

   
3.1.5 Moments expansion

At large densities $\rho$, we can expand $C_A(w_A)$ in terms of the moments of $w_A$ defined in Eq. (41). The calculations are basically identical to those provided in Paper I (see Eq. (66) of that paper), with only minor corrections due to the different definition of $C_A$. Hence, we skip the derivation and report here only the final result (up to the fifth-order term):

 \begin{displaymath}
(1 - P_A) C_A(w_A) \simeq \frac{1}{1 + w_A} + \frac{S_{A2}}{(1 + w_A)^3} - \frac{S_{A3}}{(1 + w_A)^4} + \frac{S_{A4} + 3 S_{A2}^2}{(1 + w_A)^5} \cdot
\end{displaymath} (45)
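
The expansion (45) is easy to test numerically. The one-dimensional sketch below compares, for an arbitrary Gaussian weight at a fairly high density, the exact $(1 - P_A) C_A(w_A)$ obtained from Eqs. (15)-(17) with the truncated series of Eq. (45); the two should agree closely for the weight values probed here. All numerical choices are illustrative.

import numpy as np

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
rho = 5.0 * np.ones_like(x)                        # fairly high density
w = np.exp(-0.5 * x**2)
w /= np.sum(w * rho) * dx                          # normalization (20), so that S_A1 = 1

S = {k: np.sum(w**k * rho) * dx for k in (2, 3, 4)}   # moments S_Ak of Eq. (41)

s = np.linspace(0.0, 400.0, 8001)
ds = s[1] - s[0]
Ys = np.exp(np.array([np.sum(np.expm1(-si * w) * rho) for si in s]) * dx)   # Eqs. (15)-(16)

for w0 in (0.02, 0.05, 0.08):                      # sample values of the normalized weight
    exact = np.sum(np.exp(-w0 * s) * Ys) * ds      # (1 - P_A) C_A(w0); here P_A ~ 0
    series = (1.0 / (1.0 + w0) + S[2] / (1.0 + w0)**3 - S[3] / (1.0 + w0)**4
              + (S[4] + 3.0 * S[2]**2) / (1.0 + w0)**5)
    print(w0, exact, series)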

   
3.2 Two-point correlation function

The generalization of the properties of the covariance terms $T_\sigma $ and  $T_{\rm P}$ is, in most cases, trivial and closely follows the generalization carried out in Sect. 3.1. Hence, here we will skip most of the proofs and just outline the main results.

   
3.2.1 Normalization

It can be shown that the Poisson noise satisfies a simple normalization property: if $f(\vec\theta) = 1$ is constant, then $T_{P1} + T_{P2} = T_{P3} = 1$, and thus the Poisson noise vanishes. A proof of this property can be carried out either with the technique described in Paper II, or, more easily, using the following argument, taken from Lombardi (2002). If $f(\vec\theta) = 1$ is constant on the field, we will on average measure $\langle \hat f_n \rangle = 1$ for each point. Let us now assume for a moment that we do not have any measurement error, so that $\hat f_n = 1$ for every n. Then, we will always measure $\protect\tilde{f}(\vec\theta) = 1$, and thus $\bigl\langle \tilde{f}(\vec\theta_A) \bigr\rangle = \bigl\langle \tilde{f}(\vec\theta_B) \bigr\rangle = \bigl\langle \tilde{f}(\vec\theta_A) \tilde{f}(\vec\theta_B) \bigr\rangle = 1$. In this case, thus, we find $T_{P1} + T_{P2} = T_{P3} = 1$. The situation is actually the same even if the measurements are affected by errors: these, in fact, appear only in the evaluation of $T_\sigma $, and thus the Poisson noise is left unaffected.

   
3.2.2 Small and large separations

In the limit $\vec\theta_A \equiv \vec\theta_B$, the expressions for $T_\sigma $ and $T_{\rm P}$ take a particularly simple form. Indeed, since $w_A \equiv w_B$, to evaluate $T_\sigma $, $T_{P1}$, and $T_{P2}$ we just need  $C(w_A, w_A)$; this quantity, in turn, can be easily shown to be $C(w_A, w_A) = -C'_A(w_A)$, where $C_A(w_A)$ is given by Eq. (17).

If instead $\left\vert\vec\theta_A - \vec\theta_B \right\vert$ is large compared to the scale lengths of the weight functions wA and wB, then

   
                                $\displaystyle Q(s_A, s_B) \simeq Q_A(s_A) + Q_B(s_B) \; ,$ (46)
    $\displaystyle Y(s_A, s_B) \simeq Y_A(s_A) Y_B(s_B) \; ,$ (47)
    $\displaystyle C(w_A, w_B) \simeq C_A(w_A) C_B(w_B) \; .$ (48)

The following argument shows that in general $C(w_A, w_B) \geq C_A(w_A)
C_B(w_B)$. First, observe that, since $P_{AB} \geq P_A P_B$, one has $\nu
\geq (1 - P_A)^{-1} (1 - P_B)^{-1}$. Moreover, it can be shown that  $Q(s_A, s_B) \geq Q_A(s_A) + Q_B(s_B)$: indeed

 \begin{displaymath}
Q(s_A, s_B) - Q_A(s_A) - Q_B(s_B) = \int_\Omega \bigl[ {\rm e}^{-s_A w_A(\vec\theta)} - 1 \bigr] \bigl[ {\rm e}^{-s_B w_B(\vec\theta)} - 1 \bigr] \rho(\vec\theta) ~ {\rm d}^2 \theta \geq 0 \; .
\end{displaymath} (49)

Finally then

 \begin{displaymath}
C(w_A, w_B) - C_A(w_A) C_B(w_B) \geq \frac{1}{1 - P_A} \frac{1}{1 - P_B} \int_0^\infty \! {\rm d}s_A \int_0^\infty \! {\rm d}s_B ~ {\rm e}^{-s_A w_A - s_B w_B} \left[ {\rm e}^{Q(s_A, s_B)} - {\rm e}^{Q_A(s_A) + Q_B(s_B)} \right] \geq 0 \; ,
\end{displaymath} (50)

where the last inequality is a consequence of the monotonicity of the exponential function and of Eq. (49).

   
3.2.3 Behaviour of $T_\sigma $

The normalization of the Poisson noise terms derived in Sect. 3.2.1 can be used to derive an upper limit for $T_\sigma $. Indeed, from the expression of $T_\sigma $ one sees that this quantity is very similar to $T_{P1}$, provided we replace  $\bigl[ f(\vec\theta) \bigr]^2$ with  $\sigma^2(\vec\theta)$. On the other hand, from Sect. 3.2.1 we know that, if $f(\vec\theta) = 1$, then $T_{P1} < 1$, because in this case $T_{P1} + T_{P2} = 1$ and $T_{P2}$ is positive. Hence we find $T_\sigma < \sigma^2_{\rm max}$, where $\sigma^2_{\rm max}$ is an upper limit for  $\sigma^2(\vec\theta)$.

A lower limit for $T_\sigma $ can be obtained from the inequality (50):

 
$\displaystyle T_\sigma \geq \int_\Omega \sigma^2(\vec\theta) w_A(\vec\theta) w_B(\vec\theta) C_A\bigl( w_A(\vec\theta) \bigr) C_B\bigl( w_B(\vec\theta) \bigr) \rho(\vec\theta) ~ {\rm d}^2 \theta$
    $\displaystyle \phantom{T_\sigma} = \int_\Omega \sigma^2(\vec\theta) w_{{\rm eff}A}(\vec\theta) w_{{\rm eff}B}(\vec\theta) \rho(\vec\theta) ~ {\rm d}^2 \theta \; .$ (51)

Note that if $\vec\theta_A \equiv \vec\theta_B$, we have $w_{{\rm eff}A}(\vec\theta) \equiv
w_{{\rm eff}B}(\vec\theta)$ and the last integral in Eq. (51) is closely related to  $\mathcal{N}_{{\rm eff}A} \equiv
\mathcal{N}_{{\rm eff}B}$. The two limits on $T_\sigma $ discussed in this section are exemplified in Fig. 3.
Figure 3: The variance $T_\sigma $ evaluated at different positions $x_A \equiv x_B$. For this plot, we used a Gaussian weight function with variance 1/2; the measurement error $\sigma $ was kept constant on the field. Note that, in agreement with Eq. (51), the quantity $T_\sigma /\sigma ^2$ is always larger than  $1/\mathcal{N}_{\rm eff}$; curiously, the quantity $1/\mathcal{N}$ gives a good first-order approximation for $T_\sigma $ even in the complex situation shown here.
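
A simple Monte Carlo experiment in the spirit of Fig. 3 (one dimension, constant field f = 1, $x_A \equiv x_B$, so that the Poisson noise vanishes and the variance of $\protect\tilde{f}$ equals $T_\sigma$) can be sketched as follows; all numerical values are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
a, b, rho, sigma = -5.0, 5.0, 1.5, 0.2
width, x_A = 1.0 / np.sqrt(2.0), 0.0              # Gaussian weight with variance 1/2
estimates = []
for _ in range(20000):
    N = rng.poisson(rho * (b - a))
    if N == 0:
        continue                                  # condition on at least one point in the field
    xs = rng.uniform(a, b, N)
    f_hat = 1.0 + rng.normal(0.0, sigma, N)       # unbiased measurements of f = 1
    w = np.exp(-0.5 * (xs - x_A)**2 / width**2)
    estimates.append(np.sum(f_hat * w) / np.sum(w))

T_sigma = np.var(estimates)
# The ratio should lie below the upper bound of Sect. 3.2.3 (here sigma^2_max = sigma^2)
# and above 1/N_eff, as illustrated in Fig. 3.
print("T_sigma / sigma^2 =", T_sigma / sigma**2)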

   
3.2.4 Limit of high and low densities

At high densities we can expand Q(sA, sB) as done in Sect. 3.1.4 for QA:

 \begin{displaymath}
Q(s_A, s_B) = \int_\Omega \biggl[ \sum_{i,j} \frac{1}{i! j!} \bigl( -s_A w_A(\vec\theta) \bigr)^i \bigl( -s_B w_B(\vec\theta) \bigr)^j - 1 \biggr] \rho(\vec\theta) ~ {\rm d}^2 \theta \simeq - s_A - s_B \; ,
\end{displaymath} (52)

where in the last step we have retained only the first-order terms of the sum, and used the normalization of $w_A$ and $w_B$ (cf. Eq. (20)). Hence, in this limit

 \begin{displaymath}
C(w_A, w_B) \simeq \nu \frac{1}{1 + w_A} \frac{1}{1 + w_B} \cdot
\end{displaymath} (53)

Note that, because of the normalization (20), both weight functions wA and wB behave like $\rho^{-1}$ at high densities, and thus $C(w_A, w_B) \simeq 1$ to first order. We then find

 \begin{displaymath}
T_\sigma \simeq \int_\Omega \sigma^2(\vec\theta) w_A(\vec\theta)
w_B(\vec\theta) \rho(\vec\theta) ~ {\rm d}^2 \theta \; .
\end{displaymath} (54)

This expression should be compared with Eq. (36).

If instead $\rho \to 0^+$, then $Q(s_A, s_B) \to 0^-$, $Y(s_A, s_B) \to 1$ and thus $C(w_A, w_B) \to \nu / (w_A w_B)$. Note that here we are assuming $w_A(\vec\theta) \neq 0$ and $w_B(\vec\theta) \neq 0$. In the same limit, we have (see Eqs. (19) and (30))

 
$\displaystyle \nu^{-1} \simeq \biggl( \int_{\pi_{w_A}} + \int_{\pi_{w_B}} - \int_{\pi_{w_A} \cup \pi_{w_B}} \biggr) \rho(\vec\theta) ~ {\rm d}^2 \theta = \int_{\pi_{w_A} \cap \pi_{w_B}} \rho(\vec\theta) ~ {\rm d}^2 \theta \; .$ (55)

Hence, we finally find

 \begin{displaymath}
T_\sigma \simeq \biggl[ \int_{\pi_{w_A} \cap \pi_{w_B}} \sigma^2(\vec\theta) \rho(\vec\theta) ~ {\rm d}^2 \theta \biggr] \bigg/ \biggl[ \int_{\pi_{w_A} \cap \pi_{w_B}} \rho(\vec\theta) ~ {\rm d}^2 \theta \biggr] \; .
\end{displaymath} (56)

This result is consistent with the upper limit for $T_\sigma $ obtained in Sect. 3.2.3.

   
4 Conclusion

In this paper we have studied the statistical properties of a smoothing technique widely used in Astronomy and in other physical sciences. In particular, we have provided simple analytical expressions to calculate the average and the two-point correlation function of the smoothed field  $\protect\tilde{f}(\vec\theta)$ defined in Eq. (2). The results generalize what was already obtained in Papers I and II to the case where observations are carried out in a finite field, with a non-uniform spatial density for the measurements, and with non-uniform measurement errors  $\sigma^2(\vec\theta)$. These generalizations together greatly widen the range of applicability of our results in the astronomical context. Finally, we have shown several interesting properties of the average map and of the two-point correlation function, and we have considered the behavior of these quantities in some relevant limiting cases.

Acknowledgements

This work was partially supported by a grant from the Deutsche Forschungsgemeinschaft, and the TMR Network "Gravitational Lensing: New constraints on Cosmology and the Distribution of Dark Matter.''

  
5 Online Material

   
Appendix A: Alternative derivation

In this appendix, we derive the same results obtained in Sect. 2 using a more direct method. Although not necessary, this alternative derivation is helpful in order to fully understand the whole problem and also clarifies some of the peculiarities of the equations derived in Paper I (cf., in particular, the case of vanishing weights).

The derivation will follow quite closely the one adopted in Paper I, with the modifications needed for the finite field and the non-constant density. The only significant exception will be the use of a different strategy in performing the so-called "continuous limit'' (because of the finite field, we cannot perform the limit  $N \to \infty$, but must rather take N as a random variable). Note that throughout this appendix we will drop the index A everywhere, so that, e.g., $w_A(\vec\theta)$ will be written just as  $w(\vec\theta)$. This simplification should not create ambiguities, since here we are concerned only with the value of  $\protect\tilde{f}$ at  $\vec\theta_A$.

Let us consider a field $\Omega$ and locations randomly distributed on this field with density  $\rho(\vec\theta)$. Let us assume, for simplicity, that  $w(\vec\theta)$ is strictly positive in $\Omega$; if this is not the case, we can always redefine $\Omega$ to include only points inside the support of w (cf. the discussion in Sect. 2.2). The expected number of locations in $\Omega$ is given by

 \begin{displaymath}
\bar N = \int_\Omega \rho(\vec\theta) ~ {\rm d}^2 \theta \; .
\end{displaymath} (A.1)

The actual number of points will be a random variable following a Poisson distribution with average $\bar N$. In reality, since we are accepting only cases where there is at least one point inside the support of w (we could not define  $\protect\tilde{f}$ otherwise), N will follow the probability

 \begin{displaymath}
p_N(N) = \frac{{\rm e}^{-\bar N}}{1 - {\rm e}^{-\bar N}} \frac{\bar N^N}{N!} \;
,
\end{displaymath} (A.2)

where N is assumed to be a positive integer. Note in particular that the normalization factor takes into account the "missing'' N=0 probability.

Since the locations are distributed inside $\Omega$ according to the density  $\rho(\vec\theta)$, a single location follows the probability distribution $\rho(\vec\theta) / \bar N$; note that the factor $1 /
\bar N$ is needed here in order to satisfy the normalization of probabilities (the integral on $\Omega$ must be unity). Hence, the probability of having exactly N locations inside $\Omega$ at the positions $\{ \vec\theta_n\}$ (with $n \in \{ 1, \ldots, N\}$) is given by

 \begin{displaymath}
P\bigl( \{ \vec\theta_n \} \bigr) = p_N(N) \prod_{n=1}^N \frac{\rho(\vec\theta_n)}{\bar N} = \frac{1}{{\rm e}^{\bar N} - 1} \frac{1}{N!} \prod_{n=1}^N \rho(\vec\theta_n) \; .
\end{displaymath} (A.3)
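
The probability distribution defined by Eqs. (A.2) and (A.3) is easy to sample, for instance by thinning a homogeneous Poisson process and rejecting empty realizations; the one-dimensional density used in the sketch below is an arbitrary example.

import numpy as np

rng = np.random.default_rng(3)

def sample_locations(rho, a, b, rho_max):
    """One realization of the point process on [a, b] (1D sketch), conditioned on N >= 1."""
    while True:
        N = rng.poisson(rho_max * (b - a))               # homogeneous process to be thinned
        t = rng.uniform(a, b, N)
        t = t[rng.uniform(0.0, rho_max, N) < rho(t)]     # thinning gives density rho(t)
        if len(t) >= 1:                                  # reject the "missing" N = 0 case
            return t

rho = lambda t: 1.0 + 0.5 * np.cos(t)
pts = sample_locations(rho, 0.0, 10.0, 1.5)
print(len(pts), "locations; N_bar =", round(10.0 + 0.5 * np.sin(10.0), 2))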

We can now use this probability distribution to evaluate the expectation value of $\protect\tilde{f}$ defined in Eq. (4):
 
$\displaystyle \langle \tilde{f} \rangle = \sum_{N=1}^\infty \int_\Omega {\rm d}^2 \theta_1 \cdots \int_\Omega {\rm d}^2 \theta_N ~ P\bigl( \{ \vec\theta_n \} \bigr) \frac{\sum_{n=1}^N f(\vec\theta_n) w(\vec\theta_n)}{\sum_{n=1}^N w(\vec\theta_n)}$
    $\displaystyle \phantom{\langle \tilde{f} \rangle} = \frac{1}{{\rm e}^{\bar N} - 1} \sum_{N=1}^\infty \frac{1}{N!} \int_\Omega {\rm d}^2 \theta_1 ~ \rho(\vec\theta_1) \cdots \int_\Omega {\rm d}^2 \theta_N ~ \rho(\vec\theta_N) \frac{N f(\vec\theta_1) w(\vec\theta_1)}{\sum_{n=1}^N w(\vec\theta_n)} \cdot$ (A.4)

Similarly to Paper I, we now define, for each N, the random variables

 \begin{displaymath}
y_N \equiv \sum_{n=2}^N w(\vec\theta_n) \; .
\end{displaymath} (A.5)

In order to evaluate the associated probability distribution, we consider the probability distribution for the values of the weight w (i.e., we use Markov's method; see, e.g., Deguchi & Watson 1987; Chandrasekhar 1943). The probability density for a location to have a weight w, in particular, is given by

 \begin{displaymath}
p_w(w) = \frac{1}{\bar N} \int_\Omega {\rm d}^2 \theta
\rho(\vec\theta) \delta \bigl( w - w(\vec\theta) \bigr) \; .
\end{displaymath} (A.6)

This allows us to evaluate the probability distribution for yN as
 
$\displaystyle p_{y_N}(y_N) = \frac{1}{\bar N^{N - 1}} \int_\Omega {\rm d}^2 \theta_2 ~ \rho(\vec\theta_2) \cdots \int_\Omega {\rm d}^2 \theta_N ~ \rho(\vec\theta_N) ~ \delta\bigl( y_N - w(\vec\theta_2) - \cdots - w(\vec\theta_N) \bigr)$
    $\displaystyle \phantom{p_{y_N}(y_N)} = \int_0^\infty {\rm d}w_2 ~ p_w(w_2) \cdots \int_0^\infty {\rm d}w_N ~ p_w(w_N) ~ \delta(y_N - w_2 - \cdots - w_N) \; .$ (A.7)

Using this probability, we can rewrite Eq. (A.4) as

 \begin{displaymath}
\langle \tilde{f} \rangle = \frac{1}{{\rm e}^{\bar N} - 1} \sum_{N=1}^\infty \frac{\bar N^{N-1}}{N!} \int_\Omega {\rm d}^2 \theta_1 ~ \rho(\vec\theta_1) f(\vec\theta_1) w(\vec\theta_1) \int_0^\infty \frac{N p_{y_N}(y_N) ~ {\rm d} y_N}{w(\vec\theta_1) + y_N} \cdot
\end{displaymath} (A.8)

The form of this expression justifies the definition

 \begin{displaymath}
C(w) \equiv \frac{1}{{\rm e}^{\bar N} - 1} \sum_{N=1}^\infty \frac{\bar N^{N-1}}{N!} \int_0^\infty \frac{N p_{y_N}(y_N) ~ {\rm d} y_N}{w + y_N} \; ,
\end{displaymath} (A.9)

so that we have

 \begin{displaymath}
\langle \tilde{f} \rangle = \int_\Omega {\rm d}^2 \theta_1 ~ \rho(\vec\theta_1) w(\vec\theta_1) f(\vec\theta_1) C\bigl( w(\vec\theta_1) \bigr) \; .
\end{displaymath} (A.10)

In order to further simplify the definition of C, we use a technique similar to the one adopted in Paper I. Namely, we define W, the Laplace transform of pw
 
$\displaystyle W(s) \equiv \mathcal L[p_w](s) = \int_0^\infty {\rm e}^{-s w} p_w(w) ~ {\rm d}w = \frac{1}{\bar N} \int_\Omega {\rm e}^{-s w(\vec\theta)} \rho(\vec\theta) ~ {\rm d}^2 \theta \; .$ (A.11)

We similarly define, for each N, YN, the Laplace transform of pyN, as
 
$\displaystyle Y_N(s) = \mathcal L[p_{y_N}](s) = \int_0^\infty p_{y_N}(y_N) {\rm e}^{-s y_N} ~ {\rm d}y_N = \bigl[ W(s) \bigr]^{N-1} \; .$ (A.12)

We now use the following property of Laplace transforms: if f is any function, and x0 any positive real number, we have (see Eqs. (14), (20), and (25) of Paper I for a proof)

 \begin{displaymath}
\mathcal L\bigl[ \mathcal L[f] \bigr](x_0) = \int_0^\infty \frac{f(x)}{x_0 + x} ~
{\rm d}x \; .
\end{displaymath} (A.13)
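
This identity is easily verified numerically; the sketch below uses the test function $f(x) = {\rm e}^{-x}$, whose Laplace transform is $\mathcal L[f](s) = 1/(1+s)$, with an arbitrary value of x0.

import numpy as np
from scipy.integrate import quad

x0 = 0.7
lhs, _ = quad(lambda s: np.exp(-x0 * s) / (1.0 + s), 0.0, np.inf)   # L[L[f]](x0)
rhs, _ = quad(lambda x: np.exp(-x) / (x0 + x), 0.0, np.inf)         # int_0^inf f(x)/(x0 + x) dx
print(lhs, rhs)   # the two integrals agree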

Using this in Eq. (A.9) we find
 
$\displaystyle C(w) = \frac{1}{{\rm e}^{\bar N} - 1} \sum_{N=1}^\infty \frac{\bar N^{N - 1}}{N!} N \mathcal L[Y_N](w)$
    $\displaystyle \phantom{C(w)} = \frac{1}{{\rm e}^{\bar N} - 1} \mathcal L\biggl[ \sum_{N=1}^\infty \frac{\bar N^{N - 1}}{(N-1)!} W^{N-1} \biggr](w)$
    $\displaystyle \phantom{C(w)} = \frac{1}{{\rm e}^{\bar N} - 1} \mathcal L\biggl[ \sum_{\nu=0}^\infty \frac{\bar N^\nu}{\nu!} W^\nu \biggr](w)$
    $\displaystyle \phantom{C(w)} = \frac{1}{{\rm e}^{\bar N} - 1} \mathcal L\bigl[ {\rm e}^{\bar N W} \bigr](w)$
    $\displaystyle \phantom{C(w)} = \frac{1}{1 - {\rm e}^{-\bar N}} \mathcal L\bigl[ {\rm e}^{\bar N W - \bar N} \bigr](w) \; .$ (A.14)

Finally, we define Q as

 \begin{displaymath}
Q(s) \equiv \bar N W(s) - \bar N = \int_\Omega \bigl[ {\rm e}^{-s w(\vec\theta)} - 1 \bigr] \rho(\vec\theta) ~ {\rm d}^2 \theta \; ,
\end{displaymath} (A.15)

so that we finally have

 \begin{displaymath}
C(w) = \frac{1}{1 - {\rm e}^{-\bar N}} \mathcal L\bigl[{\rm e}^{Q(s)}
\bigr](w) \; .
\end{displaymath} (A.16)

This completes our proof.

Copyright ESO 2003