A&A
Volume 566, June 2014
Article Number A8
Number of page(s) 16
Section Numerical methods and codes
DOI https://doi.org/10.1051/0004-6361/201220021
Published online 29 May 2014

© ESO, 2014

1. Introduction

The most basic method of cross-identifying two catalogs K and K′ with known circular positional uncertainties is to consider that a K′-source M′ is the same as an object M of K if it falls within a disk centered on M and having a radius equal to a few times their combined positional uncertainty; if the disk is empty, M has no counterpart, and if it contains several K′-sources, the nearest one is identified with M. This solution is defective for several reasons: it does not take the density of sources into account; positional uncertainty ellipses are not properly treated; the radius of the disk is arbitrary; positional uncertainties are not always known; K and K′ do not play symmetrical roles; the identification is ambiguous if a K′-source may be associated with several objects of K. Worst of all, it does not provide a probability of association.
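For concreteness, the naive disk method can be sketched as follows. This snippet is purely illustrative (it is not code from the paper) and assumes small-field tangent-plane coordinates, each source given as (x, y, σ) in arcseconds:

```python
import math

def naive_match(cat_k, cat_kp, nsigma=3.0):
    """Naive cross-identification: for each source M of K, pick the nearest
    K'-source within nsigma times the combined positional uncertainty.
    cat_k and cat_kp are lists of (x, y, sigma) tuples in arcseconds on a
    small tangent plane.  Returns, for each M, an index into cat_kp or None."""
    matches = []
    for (x, y, s) in cat_k:
        best, best_d = None, float("inf")
        for j, (xp, yp, sp) in enumerate(cat_kp):
            d = math.hypot(xp - x, yp - y)
            # disk of radius nsigma * combined (quadratic-sum) uncertainty
            if d <= nsigma * math.hypot(s, sp) and d < best_d:
                best, best_d = j, d
        matches.append(best)
    return matches

K  = [(0.0, 0.0, 1.0), (50.0, 50.0, 1.0)]
Kp = [(0.5, -0.2, 1.0), (2.0, 2.0, 1.0), (80.0, 80.0, 1.0)]
result = naive_match(K, Kp)  # -> [0, None]
```

Note that this sketch exhibits exactly the defects listed above: the choice nsigma=3 is arbitrary, the source density never enters, and no probability of association is produced.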

Beyond this naïve method, the cross-identification problem has been studied by Condon et al. (1975), de Ruiter et al. (1977), Prestage & Peacock (1983), Sutherland & Saunders (1992), Bauer et al. (2000), and Rutledge et al. (2000), among others. As shown by the recent papers of Budavári & Szalay (2008), Brand et al. (2006), Rohde et al. (2006), Roseboom et al. (2009), and Pineau et al. (2011), this field is still very active and will be more so with the wealth of forthcoming multiwavelength data and the virtual observatory (Vignali et al. 2009). In these papers, the identification is performed using a “likelihood ratio”. For two objects (M, M′) ∈ K × K′ with known coordinates and positional uncertainties, and given the local surface density of K′-sources, this ratio is typically computed as
\begin{equation}
\lambda := \frac{P(\text{position} \mid \text{counterpart})}{P(\text{position} \mid \text{chance})},
\tag{1}
\end{equation}
where P(position | counterpart) is the probability of finding M′ at some position relative to M if M′ is a counterpart of M, and P(position | chance) is the probability that M′ is there by chance. As noticed by Sutherland & Saunders (1992), there has been some confusion when defining and interpreting λ and, more importantly, in deriving the probability¹ that M and M′ are the same.
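As a purely illustrative evaluation of Eq. (1) in the simplest isotropic case, one may take a circular Gaussian positional error against a locally uniform background density; the numbers below are hypothetical:

```python
import math

def likelihood_ratio(d, sigma, rho):
    """Likelihood ratio lambda of Eq. (1) in the common isotropic case:
    P(position | counterpart) is a circular Gaussian of dispersion sigma
    (arcsec) evaluated at separation d, and P(position | chance) is the
    uniform surface density rho of unrelated K'-sources (per arcsec^2).
    Both are densities per unit solid angle, so the units cancel."""
    p_counterpart = math.exp(-0.5 * (d / sigma) ** 2) / (2 * math.pi * sigma ** 2)
    p_chance = rho
    return p_counterpart / p_chance

# A 2" separation with sigma = 1.5" against a background of
# 1e-3 unrelated sources per arcsec^2 (hypothetical values):
lam = likelihood_ratio(2.0, 1.5, 1e-3)
```

A value λ ≫ 1 favors a genuine counterpart over a chance projection, but, as the text stresses, λ alone is not yet a probability of association.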

To associate sources from catalogs at different wavelengths, some authors include some a priori information on the spectral energy distribution (SED) of the objects in this likelihood ratio. When this work started, our primary goal was to build template observational SEDs from the optical to the far-infrared for different types of galaxies. We initially intended to cross-identify the Iras Faint Source Survey (Moshir et al. 1992, 1993) with the Leda database (Paturel et al. 1995). Because of the high positional inaccuracy of Iras data, special care was needed to identify optical sources with infrared ones. While Iras data are by now quite outdated and have been superseded by Spitzer and Herschel observations, we think that the procedure we began to develop at that time may be valuable for other studies. Because we aimed to fit synthetic SEDs to the template observational ones, we could not and did not want to make assumptions on the SED of objects based on their type, since this would have biased the procedure. We therefore rely only on positions in what follows.

The method we use is in essence similar to that of Sutherland & Saunders (1992). However, because thinking in terms of probabilities rather than of likelihood ratios highlights some implicit assumptions, we found it useful, for the sake of clarity, to detail our calculations hereafter. This moreover allows us to propose a systematic way to estimate the unknown parameters required to compute the probabilities of association, and to extend our work to a case not covered by the papers cited above (see Sect. 4).

After some preliminaries (Sect. 2), we compute in Sect. 3 the probability of association under the hypothesis that a K-source has at most one counterpart in K′ but that several objects of K may share the same one (“several-to-one” associations). We also compute the likelihood of observing all the sources at their effective positions and use it to estimate the fraction of objects with a counterpart and, if unknown, the positional uncertainty in one or both catalogs. In Sect. 4, we do the same calculations under the assumption that a K-source has at most one counterpart in K′ and that no other object of K has the same counterpart (“one-to-one” associations). In Sect. 5, we present a code, Aspects, implementing the results of Sects. 3 and 4, with which we compute the likelihoods and probabilities of association under the aforementioned assumptions. We test it on simulations in Sect. 6. The probability distribution of the relative positions of associated sources is modeled in Appendix A.

2. Preliminaries

2.1. Notations

We consider two catalogs K and K′ defined on a common surface of the sky, of area S, and containing respectively n sources (M_i)_{i∈[[1,n]]} and n′ sources (M′_j)_{j∈[[1,n′]]}. We define the following events:

  • c_i: M_i is in the infinitesimal surface element d²r_i located at r_i;

  • c′_j: M′_j is in the infinitesimal surface element d²r′_j located at r′_j;

  • C := ⋂_{i=1}^{n} c_i: the coordinates of all K-sources are known;

  • C′ := ⋂_{j=1}^{n′} c′_j: the coordinates of all K′-sources are known;

  • A_{i,j}, with i ≠ 0 and j ≠ 0: M′_j is a counterpart of M_i;

  • A_{i,0}: M_i has no counterpart in K′, i.e. $A_{i,0} = \overline{\bigcup_{j\neq 0} A_{i,j}}$, where $\overline{\omega}$ is the negation of an event ω;

  • A_{0,j}: M′_j has no counterpart in K.

We denote by f (resp. f′) the unknown a priori (i.e., not knowing the coordinates) probability that any element of K (resp. K′) has a counterpart in K′ (resp. K). In terms of the events (A_{i,j}), for any (M_i, M′_j) ∈ K × K′,
\begin{equation}
\left.
\begin{aligned}
&P\biggl(\,\bigcup_{k\neq 0} A_{i,k}\biggr) = f; \qquad P(A_{i,0}) = 1 - f;\\
&P\biggl(\,\bigcup_{k\neq 0} A_{k,j}\biggr) = f'; \qquad P(A_{0,j}) = 1 - f'.
\end{aligned}
\right\}
\tag{2}
\end{equation}
We see in Sects. 3.2 and 4.2 how to estimate f and f′.

The angular distance between two points Y and Z is written ψ(Y, Z). More specifically, we put ψ_{i,j} = ψ(M_i, M′_j).

2.2. Assumptions

Calculations are carried out under one of three exclusive assumptions:

  • Several-to-one hypothesis:
\begin{equation*}
\left\{
\begin{aligned}
&\text{for all } M_i, \text{ the events } (A_{i,j})_{j\in[\![1,n']\!]} \text{ are exclusive};\\
&\text{for all } M'_j, \text{ the events } (A_{i,j})_{i\in[\![1,n]\!]} \text{ are independent}.
\end{aligned}
\right.
\qquad (H_{\mathrm{s:o}})
\end{equation*}
Therefore, a K-source has at most one counterpart in K′, but a K′-source may have several counterparts in K. Since more K-sources have a counterpart in K′ than the converse, f n ≥ f′ n′. This assumption is reasonable if the angular resolution in K (e.g. Iras) is much poorer than in K′ (e.g. Leda), since several distinct objects of K′ may then be confused in K.

  • One-to-several hypothesis: the symmetric of assumption (H_s:o), i.e.,
\begin{equation*}
\left\{
\begin{aligned}
&\text{for all } M_i, \text{ the events } (A_{i,j})_{j\in[\![1,n']\!]} \text{ are independent};\\
&\text{for all } M'_j, \text{ the events } (A_{i,j})_{i\in[\![1,n]\!]} \text{ are exclusive}.
\end{aligned}
\right.
\qquad (H_{\mathrm{o:s}})
\end{equation*}
In that case, f n ≤ f′ n′. This assumption is appropriate for catalogs of extended sources that, although observed as single at the wavelength of K, may look broken up at the wavelength of K′.

  • One-to-one hypothesis: any K-source has at most one counterpart in K′ and reciprocally, i.e.
\begin{equation*}
\text{all the events } (A_{i,j})_{i\in[\![1,n]\!],\, j\in[\![1,n']\!]} \text{ are exclusive}.
\qquad (H_{\mathrm{o:o}})
\end{equation*}
Then, f n = f′ n′. This assumption is the most relevant one for high-resolution catalogs of point sources or of well-defined extended sources.

Probabilities, likelihoods, and estimators specifically derived under assumption (H_s:o), (H_o:s), or (H_o:o) are written with the subscript “s:o”, “o:s”, or “o:o”, respectively; the subscript “:o” is used for results valid under both (H_s:o) and (H_o:o). The “several-to-several” hypothesis, where all the events (A_{i,j})_{i∈[[1,n]], j∈[[1,n′]]} are independent, is not considered here.

We make two other assumptions: all the associations Ai,j with i ≠ 0 and j ≠ 0 are considered a priori as equally likely, and the effect of clustering is negligible.

2.3. Approach

Our approach is the following. For each of the assumptions (Hs:o), (Ho:o), and (Ho:s), we

  • find an expression for the probabilities of association,

  • build estimators of the unknown parameters needed to compute these probabilities, and

  • compute the likelihood of the assumption from the data.

Then, we compute the probabilities of association for the best estimators of unknown parameters and the most likely assumption.

Although (Hs:o) is less symmetrical and neutral than (Ho:o), we begin our study with this assumption: first, because computations are much simpler under (Hs:o) than under (Ho:o) and serve as a guide for the latter; second, because they provide initial values for the iterative procedure (Sect. 5.4.3) used to effectively compute probabilities under (Ho:o).

3. Several-to-one associations

In this section, we assume that hypothesis (Hs:o) holds. As shown in Sect. 3.3, this is also the assumption implicitly made by the authors cited in the introduction.

3.1. Probability of association: global computation

We want to compute² the probability P(A_{i,j} | C ∩ C′) of association between sources M_i and M′_j (j ≠ 0), or the probability that M_i has no counterpart (j = 0), knowing the coordinates of all the objects in K and K′. Remembering that, for any events ω₁, ω₂, and ω₃, P(ω₁ | ω₂) = P(ω₁ ∩ ω₂)/P(ω₂) and thus
\begin{align}
P(\omega_1 \cap \omega_2 \mid \omega_3)
&= \frac{P(\omega_1 \cap \omega_2 \cap \omega_3)}{P(\omega_3)}
= \frac{P(\omega_1 \mid \omega_2 \cap \omega_3)\, P(\omega_2 \cap \omega_3)}{P(\omega_3)} \notag\\
&= P(\omega_1 \mid \omega_2 \cap \omega_3)\, P(\omega_2 \mid \omega_3),
\tag{3}
\end{align}
we have, with ω₁ = A_{i,j}, ω₂ = C, and ω₃ = C′,
\begin{equation}
P(A_{i,j} \mid C \cap C') = \frac{P(A_{i,j} \cap C \mid C')}{P(C \mid C')}.
\tag{4}
\end{equation}

3.1.1. Computation of Ps:o(C | C′)

We first compute the denominator of Eq. (4)³. The event
\begin{equation}
\bigcap_{k=1}^{n}\,\bigcup_{j_k=0}^{n'} A_{k,j_k}
= \bigcup_{j_1=0}^{n'}\bigcup_{j_2=0}^{n'}\cdots\bigcup_{j_n=0}^{n'}\, \bigcap_{k=1}^{n} A_{k,j_k}
\tag{5}
\end{equation}
is certain by definition of the A_{k,j_k} and, under either assumption (H_s:o) or (H_o:o), A_{k,ℓ} ∩ A_{k,m} = ∅ for all M_k if ℓ ≠ m. Consequently, using the symbol ⊎ for unions of mutually exclusive events instead of ∪, we obtain
\begin{align}
P_{\mathrm{s:o}}(C \mid C')
&= P_{\mathrm{s:o}}\biggl(C \cap \bigcap_{k=1}^{n}\,\bigcup_{j_k=0}^{n'} A_{k,j_k} \biggm| C'\biggr)
= P_{\mathrm{s:o}}\biggl(C \cap \biguplus_{j_1=0}^{n'}\biguplus_{j_2=0}^{n'}\cdots\biguplus_{j_n=0}^{n'}\, \bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= \sum_{j_1=0}^{n'}\sum_{j_2=0}^{n'}\cdots\sum_{j_n=0}^{n'} P_{\mathrm{s:o}}\biggl(C \cap \bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= \sum_{j_1=0}^{n'}\sum_{j_2=0}^{n'}\cdots\sum_{j_n=0}^{n'} P_{\mathrm{s:o}}\biggl(C \biggm| \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)\, P_{\mathrm{s:o}}\biggl(\,\bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr),
\tag{6}
\end{align}
with ω₁ = C, ω₂ = ⋂_{k=1}^{n} A_{k,j_k}, and ω₃ = C′ in Eq. (3).

Since C = ⋂_{k=1}^{n} c_k, the first factor in the product of Eq. (6) is
\begin{equation}
P_{\mathrm{:o}}\biggl(C \biggm| \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)
= P_{\mathrm{:o}}\biggl(c_1 \biggm| \bigcap_{k=2}^{n} c_k \cap \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)\,
P_{\mathrm{:o}}\biggl(\,\bigcap_{k=2}^{n} c_k \biggm| \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr),
\tag{7}
\end{equation}
with ω₁ = c₁, ω₂ = ⋂_{k=2}^{n} c_k, and ω₃ = ⋂_{k=1}^{n} A_{k,j_k} ∩ C′ in Eq. (3). Doing the same with ⋂_{k=2}^{n} c_k instead of C, we obtain
\begin{equation}
P_{\mathrm{:o}}\biggl(C \biggm| \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)
= \prod_{\ell=1}^{n} P_{\mathrm{:o}}\biggl(c_\ell \biggm| \bigcap_{k=\ell+1}^{n} c_k \cap \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)
\tag{8}
\end{equation}
by iteration.

If j_ℓ ≠ 0, M_ℓ is only associated with M′_{j_ℓ}. Consequently,
\begin{equation}
P_{\mathrm{:o}}\biggl(c_\ell \biggm| \bigcap_{k=\ell+1}^{n} c_k \cap \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)
= P_{\mathrm{:o}}(c_\ell \mid A_{\ell,j_\ell} \cap c'_{j_\ell})
= \xi_{\ell,j_\ell}\, \mathrm{d}^2\vec r_\ell,
\tag{9}
\end{equation}
where, denoting by r_{ℓ,j_ℓ} := r′_{j_ℓ} − r_ℓ the position vector of M′_{j_ℓ} relative to M_ℓ and by Γ_{ℓ,j_ℓ} the covariance matrix of r_{ℓ,j_ℓ} (cf. Appendix A.2),
\begin{equation}
\xi_{\ell,j_\ell} = \frac{\exp\bigl(-\tfrac{1}{2}\, \vec r^{\,\mathrm t}_{\ell,j_\ell} \cdot \Gamma^{-1}_{\ell,j_\ell} \cdot \vec r_{\ell,j_\ell}\bigr)}{2\pi \sqrt{\det \Gamma_{\ell,j_\ell}}}.
\tag{10}
\end{equation}
If j_ℓ = 0, M_ℓ is not associated with any source in K′. Since clustering is neglected,
\begin{equation}
P_{\mathrm{:o}}\biggl(c_\ell \biggm| \bigcap_{k=\ell+1}^{n} c_k \cap \bigcap_{k=1}^{n'} c'_k \cap \bigcap_{k=1}^{n} A_{k,j_k}\biggr)
= P_{\mathrm{:o}}(c_\ell \mid A_{\ell,0}) = \xi_{\ell,0}\, \mathrm{d}^2\vec r_\ell,
\tag{11}
\end{equation}
where the last equality defines the spatial probability density ξ_{ℓ,0}; for the uninformative prior of a uniform a priori probability distribution of K-sources without counterpart, ξ_{ℓ,0} = 1/S. From Eqs. (8), (9), and (11), it follows that
\begin{equation}
P_{\mathrm{:o}}\biggl(C \biggm| \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr) = \Xi\, \prod_{k=1}^{n} \xi_{k,j_k},
\tag{12}
\end{equation}
where
\begin{equation}
\Xi := \prod_{k=1}^{n} \mathrm{d}^2\vec r_k.
\tag{13}
\end{equation}
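Equation (10) is just a centered bivariate Gaussian density in the relative position r. As an illustrative sketch (not from the paper), with the 2×2 covariance matrix Γ passed as nested tuples:

```python
import math

def xi(r, gamma):
    """Bivariate Gaussian density of Eq. (10): r = (rx, ry) is the position
    of the K'-candidate relative to the K-source (arcsec), gamma the 2x2
    covariance matrix of r (sum of the two sources' positional-error
    covariances, cf. Appendix A.2), given as ((a, b), (c, d))."""
    (a, b), (c, d) = gamma
    det = a * d - b * c
    # Inverse of a 2x2 matrix, then the quadratic form r^T Gamma^{-1} r:
    inv = ((d / det, -b / det), (-c / det, a / det))
    q = (r[0] * (inv[0][0] * r[0] + inv[0][1] * r[1])
         + r[1] * (inv[1][0] * r[0] + inv[1][1] * r[1]))
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

# Circular 1" errors on both sources -> Gamma = diag(2, 2) arcsec^2:
val = xi((1.0, 0.0), ((2.0, 0.0), (0.0, 2.0)))
```

At zero separation this reduces to 1/(2π √det Γ), as expected from Eq. (10).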

We now compute the second factor in the product of Eq. (6). Knowing the coordinates of K′-sources alone, without those of any source in K, does not change the likelihood of the associations (A_{k,j_k}); in other words, C′ and ⋂_{k=1}^{n} A_{k,j_k} are mutually unconditionally independent (but conditionally dependent on C). Therefore,
\begin{equation}
P_{\mathrm{s:o}}\biggl(\,\bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) = P_{\mathrm{s:o}}\biggl(\,\bigcap_{k=1}^{n} A_{k,j_k}\biggr).
\tag{14}
\end{equation}
Let q := #{k ∈ [[1,n]] | j_k ≠ 0}, where #E denotes the number of elements of any set E. Since the events (A_{k,j_k})_{k∈[[1,n]]} are independent by assumption (H_s:o),
\begin{equation}
P_{\mathrm{s:o}}\biggl(\,\bigcap_{k=1}^{n} A_{k,j_k}\biggr) = \prod_{k=1}^{n} P_{\mathrm{s:o}}(A_{k,j_k}).
\tag{15}
\end{equation}
Using definition (2), and on the hypothesis that all associations (A_{k,ℓ})_{ℓ∈[[1,n′]]} are a priori equally likely if k ≠ 0 (Sect. 2.2), we get
\begin{equation}
P_{\mathrm{s:o}}(A_{k,j_k}) = \frac{P_{\mathrm{s:o}}\bigl(\bigcup_{\ell\neq 0} A_{k,\ell}\bigr)}{\# K'} = \frac{f}{n'} \quad\text{for } j_k \neq 0.
\tag{16}
\end{equation}
Since P_s:o(A_{k,0}) = 1 − f, we have
\begin{equation}
P_{\mathrm{s:o}}\biggl(\,\bigcap_{k=1}^{n} A_{k,j_k}\biggr) = \Bigl(\frac{f}{n'}\Bigr)^{q}\, (1-f)^{n-q}.
\tag{17}
\end{equation}
Hence, from Eqs. (6), (12), and (17),
\begin{equation}
P_{\mathrm{s:o}}(C \mid C') = \Xi \sum_{j_1=0}^{n'}\sum_{j_2=0}^{n'}\cdots\sum_{j_n=0}^{n'} \Bigl(\frac{f}{n'}\Bigr)^{q}\, (1-f)^{n-q}\, \prod_{k=1}^{n} \xi_{k,j_k}.
\tag{18}
\end{equation}
By the definition of q, there are q strictly positive indices j_k (as many as the factors “f/n′” in Eq. (18)) and n − q null ones (as many as the factors “(1 − f)”). Therefore, with
\begin{equation}
\zeta_{k,0} := (1-f)\, \xi_{k,0}
\qquad\text{and}\qquad
\zeta_{k,j_k} := \frac{f\, \xi_{k,j_k}}{n'} \quad\text{for } j_k \neq 0,
\tag{19}
\end{equation}
Eq. (18) reduces to
\begin{equation}
P_{\mathrm{s:o}}(C \mid C') = \Xi \sum_{j_1=0}^{n'}\sum_{j_2=0}^{n'}\cdots\sum_{j_n=0}^{n'}\, \prod_{k=1}^{n} \zeta_{k,j_k}
= \Xi\, \prod_{k=1}^{n}\, \sum_{j_k=0}^{n'} \zeta_{k,j_k},
\tag{20}
\end{equation}
where the last equality is derived by induction from the distributivity of multiplication over addition.

3.1.2. Computation of P_s:o(A_{i,j} ∩ C | C′)

The computation of the numerator of Eq. (4) is similar to that of P_s:o(C | C′):
\begin{align}
P_{\mathrm{s:o}}(A_{i,j} \cap C \mid C')
&= P_{\mathrm{s:o}}\biggl(C \cap A_{i,j} \cap \biguplus_{j_1=0}^{n'}\cdots\biguplus_{j_{i-1}=0}^{n'}\,\biguplus_{j_{i+1}=0}^{n'}\cdots\biguplus_{j_n=0}^{n'}\, \bigcap_{\substack{k=1\\ k\neq i}}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= P_{\mathrm{s:o}}\biggl(C \cap \biguplus_{j_1=0}^{n'}\cdots\biguplus_{j_{i-1}=0}^{n'}\,\biguplus_{j_{i+1}=0}^{n'}\cdots\biguplus_{j_n=0}^{n'}\, \bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= \sum_{j_1=0}^{n'}\cdots\sum_{j_{i-1}=0}^{n'}\,\sum_{j_{i+1}=0}^{n'}\cdots\sum_{j_n=0}^{n'} P_{\mathrm{s:o}}\biggl(C \biggm| \bigcap_{k=1}^{n} A_{k,j_k} \cap C'\biggr)\, P_{\mathrm{s:o}}\biggl(\,\bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr),
\tag{21}
\end{align}
where we put j_i := j.

Let q⋆ := #{k ∈ [[1,n]] | j_k ≠ 0} (the indices j_k are now those of Eq. (21)). As for P_s:o(C | C′),
\begin{align}
P_{\mathrm{s:o}}(A_{i,j} \cap C \mid C')
&= \Xi \sum_{j_1=0}^{n'}\cdots\sum_{j_{i-1}=0}^{n'}\,\sum_{j_{i+1}=0}^{n'}\cdots\sum_{j_n=0}^{n'} \Bigl(\frac{f}{n'}\Bigr)^{q^\star} (1-f)^{n-q^\star}\, \prod_{k=1}^{n} \xi_{k,j_k} \notag\\
&= \Xi\, \zeta_{i,j}\, \sum_{j_1=0}^{n'}\cdots\sum_{j_{i-1}=0}^{n'}\,\sum_{j_{i+1}=0}^{n'}\cdots\sum_{j_n=0}^{n'}\, \prod_{\substack{k=1\\ k\neq i}}^{n} \zeta_{k,j_k}
= \Xi\, \zeta_{i,j}\, \prod_{\substack{k=1\\ k\neq i}}^{n}\, \sum_{j_k=0}^{n'} \zeta_{k,j_k}.
\tag{22}
\end{align}

3.1.3. Final results

Finally, from Eqs. (4), (20), and (22), for i ≠ 0,
\begin{equation}
P_{\mathrm{s:o}}(A_{i,j} \mid C \cap C')
= \frac{\zeta_{i,j}\, \prod_{\substack{k=1\\ k\neq i}}^{n} \sum_{j_k=0}^{n'} \zeta_{k,j_k}}{\prod_{k=1}^{n} \sum_{j_k=0}^{n'} \zeta_{k,j_k}}
= \frac{\zeta_{i,j}}{\sum_{k=0}^{n'} \zeta_{i,k}}
\tag{23}
\end{equation}
\begin{equation}
= \left\{
\begin{aligned}
&\frac{f\, \xi_{i,j}}{(1-f)\, n'\, \xi_{i,0} + f \sum_{k=1}^{n'} \xi_{i,k}} &&\text{for } j \neq 0,\\
&\frac{(1-f)\, n'\, \xi_{i,0}}{(1-f)\, n'\, \xi_{i,0} + f \sum_{k=1}^{n'} \xi_{i,k}} &&\text{for } j = 0.
\end{aligned}
\right.
\tag{24}
\end{equation}
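Equation (24) can be evaluated directly once the densities ξ_{i,j} and ξ_{i,0} are known. A minimal sketch (illustrative only, with hypothetical input values) for a single K-source:

```python
def assoc_probabilities(xi_i, xi_i0, f, n_prime):
    """Probabilities of Eq. (24) for one K-source M_i.
    xi_i   : list of densities (xi_{i,1}, ..., xi_{i,n'}) of Eq. (10);
    xi_i0  : no-counterpart density (1/S for a uniform prior);
    f      : a priori fraction of K-sources with a counterpart;
    n_prime: number of K'-sources.
    Returns (P[A_{i,0} | C, C'], [P[A_{i,j} | C, C'] for j = 1..n'])."""
    denom = (1 - f) * n_prime * xi_i0 + f * sum(xi_i)
    p0 = (1 - f) * n_prime * xi_i0 / denom
    pj = [f * x / denom for x in xi_i]
    return p0, pj

# Hypothetical values: two candidates over a 1e4 arcsec^2 field (xi_i0 = 1e-4):
p0, pj = assoc_probabilities([0.05, 0.001], 1e-4, f=0.6, n_prime=2)
```

By construction the probabilities sum to one over j = 0, …, n′, which is the normalization expressed by Eq. (36) below.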

As to the probability P_s:o(A_{0,j} | C ∩ C′) that M′_j has no counterpart in K, it can be computed in this way:
\begin{align}
P_{\mathrm{s:o}}(A_{0,j} \cap C \mid C')
&= P_{\mathrm{s:o}}\biggl(C \cap A_{0,j} \cap \biguplus_{j_1=0}^{n'}\biguplus_{j_2=0}^{n'}\cdots\biguplus_{j_n=0}^{n'}\, \bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= P_{\mathrm{s:o}}\biggl(C \cap \biguplus_{\substack{j_1=0\\ j_1\neq j}}^{n'}\biguplus_{\substack{j_2=0\\ j_2\neq j}}^{n'}\cdots\biguplus_{\substack{j_n=0\\ j_n\neq j}}^{n'}\, \bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= \sum_{\substack{j_1=0\\ j_1\neq j}}^{n'}\sum_{\substack{j_2=0\\ j_2\neq j}}^{n'}\cdots\sum_{\substack{j_n=0\\ j_n\neq j}}^{n'} P_{\mathrm{s:o}}\biggl(C \cap \bigcap_{k=1}^{n} A_{k,j_k} \biggm| C'\biggr) \notag\\
&= \Xi \sum_{\substack{j_1=0\\ j_1\neq j}}^{n'}\sum_{\substack{j_2=0\\ j_2\neq j}}^{n'}\cdots\sum_{\substack{j_n=0\\ j_n\neq j}}^{n'}\, \prod_{k=1}^{n} \zeta_{k,j_k}
= \Xi\, \prod_{k=1}^{n}\, \sum_{\substack{j_k=0\\ j_k\neq j}}^{n'} \zeta_{k,j_k}
\tag{25}
\end{align}
and, using Eqs. (20), (23), and (3),
\begin{align}
P_{\mathrm{s:o}}(A_{0,j} \mid C \cap C')
&= \frac{P_{\mathrm{s:o}}(A_{0,j} \cap C \mid C')}{P_{\mathrm{s:o}}(C \mid C')}
= \frac{\Xi\, \prod_{k=1}^{n} \sum_{j_k=0,\, j_k\neq j}^{n'} \zeta_{k,j_k}}{\Xi\, \prod_{k=1}^{n} \sum_{j_k=0}^{n'} \zeta_{k,j_k}} \notag\\
&= \prod_{k=1}^{n} \frac{\sum_{j_k=0}^{n'} \zeta_{k,j_k} - \zeta_{k,j}}{\sum_{j_k=0}^{n'} \zeta_{k,j_k}}
= \prod_{k=1}^{n} \Biggl(1 - \frac{\zeta_{k,j}}{\sum_{j_k=0}^{n'} \zeta_{k,j_k}}\Biggr) \notag\\
&= \prod_{k=1}^{n} \bigl(1 - P_{\mathrm{s:o}}[A_{k,j} \mid C \cap C']\bigr) \quad\text{for } j \neq 0.
\tag{26}
\end{align}

3.2. Likelihood and estimation of unknown parameters

3.2.1. General results

Various methods have been proposed for estimating the fraction of sources with a counterpart (Kim et al. 2012; Fleuren et al. 2012; McAlpine et al. 2012; Haakonsen & Rutledge 2009). Pineau et al. (2011), for instance, fit f to the overall distribution of the likelihood ratios. We propose a more convenient and systematic method in this section.

Besides f, the probabilities P(A_{i,j} | C ∩ C′) may depend on other unknowns, such as the parameters σ̊ and ν̊ modeling the positional uncertainties (cf. Appendices A.2.2 and A.2.3). We write here x₁, x₂, etc., for all these parameters, and put x := (x₁, x₂, ...). An estimate x̂ of x may be obtained by maximizing with respect to x (and with the constraint f̂ ∈ [0, 1]) the overall likelihood
\begin{equation}
\mathcal{L} := \frac{P(C \cap C')}{\bigl(\prod_{i=1}^{n} \mathrm{d}^2\vec r_i\bigr)\, \prod_{j=1}^{n'} \mathrm{d}^2\vec r'_j}
\tag{27}
\end{equation}
of observing all the K- and K′-sources at their effective positions. Unless the result is outside the possible domain for x (i.e., if L reaches its maximum on the boundary of this domain), the maximum likelihood estimator x̂ is a solution to
\begin{equation}
\biggl(\frac{\partial \ln\mathcal{L}}{\partial \vec x}\biggr)_{\vec x = \hat{\vec x}} = 0.
\tag{28}
\end{equation}
From now on, all quantities calculated at x = x̂ bear a circumflex.

We have
\begin{equation}
P(C \cap C') = P(C \mid C')\, P(C'),
\tag{29}
\end{equation}
and, since clustering is neglected,
\begin{equation}
P(C') = \prod_{j=1}^{n'} P(c'_j) = \prod_{j=1}^{n'} \xi_{0,j}\, \mathrm{d}^2\vec r'_j,
\tag{30}
\end{equation}
where ξ_{0,j} is the spatial probability density defined by P(c′_j) = ξ_{0,j} d²r′_j; for the uninformative prior of a uniform a priori probability distribution of K′-sources, ξ_{0,j} = 1/S. From Eqs. (27), (29), (30), and (13), we obtain
\begin{equation}
\mathcal{L} = \frac{P(C \mid C')}{\Xi}\, \prod_{j=1}^{n'} \xi_{0,j}.
\tag{31}
\end{equation}

In particular, under assumption (H_s:o), Eqs. (31), (20), and (13) give
\begin{equation}
\mathcal{L}_{\mathrm{s:o}} = \Biggl(\,\prod_{i=1}^{n}\, \sum_{k=0}^{n'} \zeta_{i,k}\Biggr)\, \prod_{j=1}^{n'} \xi_{0,j}.
\tag{32}
\end{equation}
Therefore, for any parameter x_p, and because the ξ_{0,j} are independent of x,
\begin{align}
\frac{\partial \ln\mathcal{L}_{\mathrm{s:o}}}{\partial x_p}
&= \sum_{i=1}^{n} \frac{\partial \ln \sum_{k=0}^{n'} \zeta_{i,k}}{\partial x_p}
= \sum_{i=1}^{n} \sum_{j=0}^{n'} \frac{\partial \zeta_{i,j}/\partial x_p}{\sum_{k=0}^{n'} \zeta_{i,k}}
= \sum_{i=1}^{n} \sum_{j=0}^{n'} \frac{\partial \ln \zeta_{i,j}}{\partial x_p}\, \frac{\zeta_{i,j}}{\sum_{k=0}^{n'} \zeta_{i,k}} \notag\\
&= \sum_{i=1}^{n} \sum_{j=0}^{n'} \frac{\partial \ln \zeta_{i,j}}{\partial x_p}\, P_{\mathrm{s:o}}(A_{i,j} \mid C \cap C').
\tag{33}
\end{align}
(For reasons highlighted just after Eq. (73), it is convenient to express most results as functions of the probabilities P(A_{i,j} | C ∩ C′).)
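Up to the infinitesimal factors absorbed in Ξ, Eq. (32) gives the log-likelihood as a sum over K-sources. A minimal sketch (illustrative, assuming the uniform priors ξ_{i,0} = ξ_{0,j} = 1/S):

```python
import math

def ln_likelihood_sto(xi, xi0, f, n_prime, S):
    """ln L_s:o of Eq. (32).  xi[i] lists the densities xi_{i,j} (Eq. (10))
    for the candidate counterparts of the i-th K-source; xi0 = 1/S is the
    uniform no-counterpart density; the K'-sources contribute the constant
    n' * ln(1/S) through the product of the xi_{0,j}."""
    ln_l = n_prime * math.log(1.0 / S)           # product of the xi_{0,j}
    for xi_i in xi:
        zeta0 = (1 - f) * xi0                    # zeta_{i,0}, Eq. (19)
        zetas = [f * x / n_prime for x in xi_i]  # zeta_{i,j}, j != 0
        ln_l += math.log(zeta0 + sum(zetas))     # one factor of Eq. (32)
    return ln_l
```

Maximizing this expression over f (and over any positional-uncertainty parameters entering the ξ_{i,j}) yields the estimators discussed in Sect. 3.2.2.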

Uncertainties on the unknown parameters may be computed from the covariance matrix V of x̂. For large numbers of sources, V is asymptotically given (Kendall & Stuart 1979) by
\begin{equation}
\bigl(V^{-1}\bigr)_{p,q} = -\biggl(\frac{\partial^2 \ln\mathcal{L}}{\partial x_p\, \partial x_q}\biggr)_{\vec x = \hat{\vec x}}.
\tag{34}
\end{equation}

3.2.2. Fraction of sources with a counterpart

Consider, in particular, the case x_p = f. We note that
\begin{equation}
\frac{\partial \ln \zeta_{i,0}}{\partial f} = -\frac{1}{1-f}
\qquad\text{and}\qquad
\frac{\partial \ln \zeta_{i,j}}{\partial f} = \frac{1}{f} \quad\text{for } j \neq 0.
\tag{35}
\end{equation}
Under assumption (H_s:o) or (H_o:o) (but not under (H_o:s)),
\begin{equation}
\sum_{j=0}^{n'} P_{\mathrm{:o}}(A_{i,j} \mid C \cap C') = P_{\mathrm{:o}}\biggl(\,\biguplus_{j=0}^{n'} A_{i,j} \biggm| C \cap C'\biggr) = 1,
\tag{36}
\end{equation}
so, using Eq. (35),
\begin{align}
\sum_{j=0}^{n'} \frac{\partial \ln \zeta_{i,j}}{\partial f}\, P_{\mathrm{:o}}(A_{i,j} \mid C \cap C')
&= -\frac{P_{\mathrm{:o}}(A_{i,0} \mid C \cap C')}{1-f} + \sum_{j=1}^{n'} \frac{P_{\mathrm{:o}}(A_{i,j} \mid C \cap C')}{f} \notag\\
&= -\frac{P_{\mathrm{:o}}(A_{i,0} \mid C \cap C')}{1-f} + \frac{1 - P_{\mathrm{:o}}(A_{i,0} \mid C \cap C')}{f} \notag\\
&= \frac{(1-f) - P_{\mathrm{:o}}(A_{i,0} \mid C \cap C')}{f\,(1-f)}.
\tag{37}
\end{align}
Summing Eq. (37) on i, we obtain from Eq. (33) that
\begin{equation}
\frac{\partial \ln\mathcal{L}_{\mathrm{s:o}}}{\partial f} = \frac{n\,(1-f) - \sum_{i=1}^{n} P_{\mathrm{s:o}}(A_{i,0} \mid C \cap C')}{f\,(1-f)}.
\tag{38}
\end{equation}
Consequently, the maximum likelihood estimator of the fraction f of K-sources with a counterpart in K′ is
\begin{align}
\hat f_{\mathrm{s:o}} &= 1 - \frac{1}{n}\, \sum_{i=1}^{n} \hat P_{\mathrm{s:o}}(A_{i,0} \mid C \cap C')
\tag{39}\\
&= \frac{1}{n}\, \sum_{i=1}^{n} \sum_{j=1}^{n'} \hat P_{\mathrm{s:o}}(A_{i,j} \mid C \cap C').
\tag{40}
\end{align}
After some tedious calculations, it can be shown that
\begin{equation}
\frac{\partial^2 \ln\mathcal{L}_{\mathrm{s:o}}}{\partial f^2}
= -\frac{\sum_{i=1}^{n} \bigl([1-f] - P_{\mathrm{s:o}}[A_{i,0} \mid C \cap C']\bigr)^2}{f^2\,(1-f)^2} < 0
\tag{41}
\end{equation}
for all f, so ∂ln L_s:o/∂f has at most one zero in [0, 1]: f̂_s:o is unique.

Since f̂_s:o appears on both sides of Eq. (39) (remember that P̂_s:o is the value of P_s:o at f = f̂_s:o), we may try to determine it through an iterative back-and-forth computation between the left- and right-hand sides of this equation. (A similar idea was also proposed by Benn 1983.) We prove in Sect. 5.3 that this procedure converges for any starting value f ∈ ]0, 1[.
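The iteration just described can be sketched as a fixed-point loop on Eqs. (24) and (40). This is an illustrative toy implementation, not the paper's Aspects code:

```python
def estimate_f(xi, xi0, n_prime, f0=0.5, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for the estimator f_hat of Eq. (39):
    f <- (1/n) sum_i sum_j P_s:o(A_{i,j} | C, C')   (Eq. (40)),
    with the probabilities recomputed from Eq. (24) at the current f.
    xi[i] lists the densities xi_{i,j} of the candidates of the i-th
    K-source (possibly empty); xi0 = 1/S is the no-counterpart density."""
    n = len(xi)
    f = f0
    for _ in range(max_iter):
        total = 0.0
        for xi_i in xi:
            denom = (1 - f) * n_prime * xi0 + f * sum(xi_i)
            total += f * sum(xi_i) / denom  # = 1 - P(A_{i,0} | C, C'), Eq. (24)
        f_new = total / n
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# Toy data (hypothetical): three K-sources with a strong candidate, a weak
# candidate, and no candidate at all, over a field with xi0 = 1e-4:
f_hat = estimate_f([[0.05], [0.001], []], 1e-4, n_prime=1)
```

With these toy inputs the loop settles on an interior value of f̂ in ]0, 1[, consistent with the uniqueness result of Eq. (41).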

An estimate f̂′_s:o of the fraction f′ of K′-sources with a counterpart is given by
\begin{equation}
\hat f'_{\mathrm{s:o}} = 1 - \frac{1}{n'}\, \sum_{j=1}^{n'} \hat P_{\mathrm{s:o}}(A_{0,j} \mid C \cap C').
\tag{42}
\end{equation}
It can be checked from Eqs. (40), (42), and (26) that, as expected if assumption (H_s:o) is valid (cf. Sect. 2.2), f̂_s:o n ≥ f̂′_s:o n′. (Just notice that, for any numbers y_i ∈ [0, 1], ∏_{i=1}^{n} (1 − y_i) ≥ 1 − Σ_{i=1}^{n} y_i, which is obvious by induction; apply this to y_i = P̂_s:o(A_{i,j} | C ∩ C′) and then sum on j.)
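Equations (26) and (42) combine into a short computation of f̂′_s:o; the probabilities below are hypothetical, purely for illustration:

```python
def estimate_f_prime(p, n_prime):
    """Estimator f'_hat of Eq. (42).  p[i][j] = P_s:o(A_{i,j+1} | C, C')
    for the i-th K-source and the (j+1)-th K'-source; Eq. (26) gives
    P(A_{0,j} | C, C') as a product over the K-sources."""
    f_prime = 0.0
    for j in range(n_prime):
        p_none = 1.0
        for p_i in p:
            p_none *= 1.0 - p_i[j]  # Eq. (26): P(A_{0,j}) = prod_i (1 - P(A_{i,j}))
        f_prime += 1.0 - p_none     # contribution to Eq. (42)
    return f_prime / n_prime

# Two K-sources claiming the single K'-source at 60% and 30%
# (hypothetical probabilities):
fp = estimate_f_prime([[0.6], [0.3]], 1)
```

On this toy input, f̂_s:o n = 0.6 + 0.3 = 0.9 exceeds f̂′_s:o n′ = 1 − 0.4 × 0.7 = 0.72, illustrating the inequality f̂_s:o n ≥ f̂′_s:o n′ noted above.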

3.3. Probability of association: local computation

Under assumption (H_s:o), a purely local computation (subscript “loc” hereafter) of the probabilities of association is also possible. Consider a region U_i of area S_i containing the position of M_i, and such that we can safely hypothesize that the counterpart in K′ of M_i, if any, is inside it. We assume that the local surface density ρ′_i of K′-sources unrelated to M_i is uniform on U_i. To avoid biasing the estimate if M_i has a counterpart, ρ′_i may be evaluated from the number of K′-sources in a region surrounding U_i but not overlapping it (an annulus around a disk U_i centered on M_i, for instance).

Besides the Ai,j, we consider the following events:

  • N′_i: U_i contains n′_i sources;

  • Ci:=􏽔jJicj\hbox{$\COORDpi \coloneqq \bigcap_{j \in J_i} \coordpj$}, where Ji:={j|MjUi}\hbox{$J_i \coloneqq \{j \mid \Mp_j \in U_i\}$}.

We want to compute the probability that a source \hbox{$\Mp_j$} in Ui is the counterpart of Mi, given the positions relative to Mi of all its possible counterparts \hbox{$(\Mp_k)^{}_{\smash[t]{k\in J_i}}$}, i.e. \hbox{$\Ploc(A_{i\comma j} \mid \COORDpi\cap \Npi)$}. Using Eq. (3) with ω1 = Ai,j, \hbox{$\omega_2 = \COORDpi$}, and \hbox{$\omega_3 = \Npi$} in the first equality below, and then with \hbox{$\omega_1 = \COORDpi$}, ω2 = Ai,k, and ω3 unchanged in the last one, we obtain \begin{eqnarray} \Ploc(A_{i\comma j} \mid \COORDpi\cap \Npi) = \frac{\Ploc(A_{i\comma j} \cap \COORDpi \mid \Npi)}{ \Ploc(\COORDpi \mid \Npi)} \nonumber\\[1mm] = \frac{\Ploc(\COORDpi \cap A_{i\comma j} \mid \Npi)}{ \Ploc(\COORDpi \cap \biguplus_{k\in J_i\cup\{0\}} A_{i\comma k} \mid \Npi)} = \frac{\Ploc(\COORDpi \cap A_{i\comma j} \mid \Npi)}{ \sum_{k\in J_i\cup\{0\}} \Ploc(\COORDpi \cap A_{i\comma k} \mid \Npi)} \nonumber\\[1mm]= \frac{\Ploc(\COORDpi\mid A_{i\comma j} \cap \Npi)\multspace \Ploc(A_{i\comma j} \mid \Npi)}{ \sum_{k\in J_i\cup\{0\}} \Ploc(\COORDpi\mid A_{i\comma k} \cap \Npi) \multspace \Ploc(A_{i\comma k} \mid \Npi)}\cdot \label{P_loc(Aij|C,N)} \end{eqnarray}(43) Now, \begin{eqnarray} \label{P_loc(Ai0|N)} &&\Ploc(A_{i\comma 0} \mid \Npi) = \frac{\Ploc( \Npi \cap A_{i\comma 0})}{\Ploc(\Npi)} \nonumber\\&&= \frac{\Ploc(\Npi\mid A_{i\comma 0})\multspace \Ploc(A_{i\comma 0})}{ \Ploc(\Npi\mid A_{i\comma 0})\multspace \Ploc(A_{i\comma 0}) + \Ploc(\Npi\mid \overline{A_{i\comma 0}})\multspace \Ploc(\overline{A_{i\comma 0}})} \end{eqnarray}(44) and \begin{equation} \label{P_loc(Aij|N)} \Ploc(A_{i\comma j} \mid \Npi) = \frac{\Ploc(\overline{A_{i\comma 0}} \mid \Npi)}{\npi} = \frac{1-\Ploc(A_{i\comma 0} \mid \Npi)}{\npi} \quad \text{for } j \neq 0. \end{equation}(45) (The probability Ploc(Ai,j) itself could not have been computed as \hbox{$\Ploc(\overline{A_{i\comma 0}})/\npi$} because \hbox{$\npi$} would be undefined, which is why event \hbox{$\Npi$} was introduced.) If clustering is negligible, the number of K′-sources randomly distributed with a mean surface density \hbox{$\rhopi$} in an area \hbox{$\Si$} follows a Poissonian distribution, so \begin{equation} \label{P_loc(N|nonAi0)} \Ploc(\Npi \mid \overline{A_{i\comma 0}}) = \frac{(\rhopi\multspace \Si)^{\npi-1}\multspace \exp(-\rhopi\multspace \Si)}{ (\npi-1)!} \end{equation}(46) (one counterpart and \hbox{$\npi-1$} sources by chance in \hbox{$\Si$}) and \begin{equation} \label{P_loc(N|Ai0)} \Ploc(\Npi\mid A_{i\comma 0}) = \frac{(\rhopi\multspace \Si)^{\npi}\multspace \exp(-\rhopi\multspace \Si)}{ \npi!} \end{equation}(47) (no counterpart and \hbox{$\npi$} sources by chance in \hbox{$\Si$}). Thus, from Eqs. (45)–(47), and (2), \begin{equation} \label{P_loc(Aij|N)_res} \Ploc(A_{i\comma j} \mid \Npi) = \Left\{ \begin{aligned} \frac{f}{\npi\multspace f+(1-f)\multspace \rhopi\multspace \Si} & \quad \text{for } j \neq 0, \\ \frac{(1-f)\multspace \rhopi\multspace \Si}{ \npi\multspace f+(1-f)\multspace \rhopi\multspace \Si} & \quad \text{for } j = 0. \end{aligned} \Right. \end{equation}(48) We have \begin{eqnarray} \label{P_loc(C|Aij,N)} \Ploc(\COORDpi\mid A_{i\comma 0}\cap \Npi) = \prod_{k\in J_i} \frac{\df^2\vec r'_{\smash[t]{k}}}{\Si} \qquad\text{and}\qquad\nonumber\\ \Ploc(\COORDpi\mid A_{i\comma j} \cap \Npi) = \xi_{i\comma j}\multspace \df^2\vrpj\multspace \prod_{\substack{k\in J_i\\ k\neq j}} \frac{\df^2\vec r'_{\smash[t]{k}}}{\Si} \quad\text{for } j \neq 0 \end{eqnarray}(49) (rigorously, ξi,j should be replaced by \hbox{$\xi_{i\comma j}/\Ploc(\Mp_j\in U_i\mid A_{i\comma j})$}, but \hbox{$\Ploc(\Mp_j\not\in U_i \mid A_{i\comma j})$} is negligible by definition of Ui), so, using Eqs. (43), (48), and (49), we obtain \begin{equation} \label{P_loc} \Ploc(A_{i\comma j}\mid \COORDpi\cap \Npi) = \Left\{ \begin{aligned} \frac{f\multspace \lambda_{i\comma j}}{ (1-f)+f\multspace \sum_{k\in J_i}\lambda_{i\comma k}} & \quad \text{for } j \neq 0, \\ \frac{(1-f)}{(1-f)+f\multspace \sum_{k\in J_i}\lambda_{i\comma k}} & \quad \text{for } j = 0, \end{aligned} \Right. \end{equation}(50) where \hbox{$\lambda_{i\comma k} \coloneqq \xi_{i\comma k}/\rhopi$} is the likelihood ratio (cf. Eq. (1)). Mutatis mutandis, we obtain the same result as Eq. (14) of Pineau et al. (2011) and the aforementioned authors. When the computation is extended from Ui to the whole surface covered by K′, \hbox{$\rhopi$} is replaced by \hbox{$\np\!/\Stot$} in Eq. (50), $\sum_{k\in J_i}$ by \hbox{$\sum_{k=1}^\np$}, and we recover Eq. (24) since ξi,0 = 1/S for a uniform distribution.

The index jMLC(i) of the most likely counterpart \hbox{$\Mp_{\jMLC(i)}$} of Mi is the value of j ≠ 0 maximizing λi,j. Very often, \hbox{$\lambda_{i\comma \jMLC(i)} \gg \sum_{k\in J_i,\, k\neq\jMLC(i)}\lambda_{i\comma k}$}, so \begin{equation} \Psto(A_{i\comma \jMLC(i)}\mid \COORDpi\cap \Npi) \approx \frac{f\multspace \lambda_{i\comma \jMLC(i)}}{ (1-f)+f\multspace \lambda_{i\comma \jMLC(i)}}\cdot \end{equation}(51) As a “poor man’s” recipe, if the value of f is unknown and not too close to either 0 or 1, an association may be considered as true if λi,jMLC(i) ≫ 1 and as false if λi,jMLC(i) ≪ 1. Where to set the boundary between true and false associations is somewhat arbitrary (Wolstencroft et al. 1986). For a large sample, however, f can be estimated from the distribution of the positions of all sources, as shown in Sect. 3.2.
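Once the likelihood ratios are known, Eq. (50) is a one-line computation. The following sketch (plain Python, not the Fortran of the Aspects code; the numerical values are hypothetical) evaluates the local probabilities of association of a source Mi from f and the λi,k of its candidate counterparts:

```python
def p_loc(f, lambdas):
    """Local probabilities of association (Eq. (50)).

    f       : prior fraction of K-sources with a counterpart in K'
    lambdas : likelihood ratios lambda_{i,k} of the K'-sources in U_i
    Returns (p0, pk): p0 = P(no counterpart), pk[k] = P(candidate k is it).
    """
    denom = (1 - f) + f * sum(lambdas)
    return (1 - f) / denom, [f * lam / denom for lam in lambdas]

# Hypothetical example: two candidates with likelihood ratios 8 and 0.5.
p0, (p1, p2) = p_loc(0.6, [8.0, 0.5])
assert abs(p0 + p1 + p2 - 1.0) < 1e-12  # the probabilities sum to 1
```

When one λi,k dominates the sum, the j ≠ 0 branch reduces to the approximation of Eq. (51).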

4. One-to-one associations

Under (Hs:o) (Sect. 3), a given \hbox{$\Mp_j$} can be associated with several Mi: there is no symmetry between K and K′ under this assumption and, while \hbox{$\sum_{\smash[t]{j=0}}^{\smash[t]{\np}} \Psto(A_{i\comma j}\mid C \cap C') = 1$} for all Mi, \hbox{$\sum_{\smash[t]{i=1}}^n \Psto(A_{i\comma j}\mid C \cap C')$} could be strictly larger than 1 for some sources \hbox{$\Mp_j$}. We assume here that the much more constraining assumption (Ho:o) holds. As far as we know, and despite an attempt by Rutledge et al. (2000), this problem has not been solved previously (see also Bartlett & Egret 1998 for a simple statement of the question).

Since a potential K′-counterpart \hbox{$\Mp_j$} of Mi within some neighborhood Ui of Mi might in fact be the true counterpart of another source Mk outside of Ui, there is no obvious way to adapt the exact local several-to-one computation of Sect. 3.3 to the case of the one-to-one assumption. We therefore have to consider all the K- and K′-sources, as in Sect. 3.1.

Under assumption (Ho:o), catalogs K and K′ play symmetrical roles; in particular, \begin{equation} \label{f_f_oto} \Poto(A_{i\comma j}) = \frac{f}\np = \frac{f'}n \quad \text{if }i \neq 0\text{ and }j \neq 0. \end{equation}(52) For practical reasons (cf. Eq. (61)), we nonetheless name K the catalog with the fewer objects and K′ the other one, so \hbox{$n \leqslant \np$} in the following.

4.1. Probability of association

4.1.1. Computation of Po:o(C | C′)

The denominator of Eq. (4) is \begin{equation} \Poto(C \mid C') = \Poto\left(C \cap \biguplus_{j_1=0}^\np\biguplus_{j_2=0}^\np\cdots\biguplus_{j_n=0}^\np \bigcap_{k=1}^n A_{k\comma j_k} \Bigm| C' \right) \end{equation}(53) (same reasons as for Eq. (6)). Because $A_{k\comma m} \cap A_{\ell\comma m} = \varnothing$ if $k \neq \ell$ and $m \neq 0$ by assumption (Ho:o), this reduces to \begin{equation} \label{P_oto(C|C)_union} \Poto(C \mid C') = \Poto\left(C \cap \biguplus_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \biguplus_{\substack{j_2=0\\ j_2\not\in X_1}}^\np \cdots \biguplus_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \bigcap_{k=1}^n A_{k\comma j_k} \Bigm| C' \right), \end{equation}(54) where, to ensure that each K′-source is associated with at most one source of K, the sets Xk of excluded counterparts are defined iteratively by \begin{equation} \label{def_J} X_0 \coloneqq \varnothing \qquad \text{and}\qquad X_k \coloneqq (X_{k-1} \cup \{j_k\}) \setminus \{0\} \quad\text{for all } k \in \integinterv{1}{n}. \end{equation}(55) As a result, \begin{eqnarray} \Poto(C \mid C') = \sum_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\substack{j_2=0\\ j_2\not\in X_1}}^\np \cdots \sum_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \Poto\left(C \cap \bigcap_{k=1}^n A_{k\comma j_k} \Bigm| C' \right) \nonumber \\= \sum_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\substack{j_2=0\\ j_2\not\in X_1}}^\np \!\! \cdots\!\! \sum_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \!\! \Poto\left(C \Bigm| \bigcap_{k=1}^n A_{k\comma j_k}\! \cap\! C' \right) \multspace \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k} \!\Bigm|\! C' \right). \label{P_oto(C|C)_gen} \end{eqnarray}(56) The first factor in the product of Eq. (56) is still given by Eq. (12), so we just have to compute the second factor, \begin{equation} \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k} \Bigm| C'\right) = \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k}\right). \end{equation}(57) Let \hbox{$q \coloneqq \card X_n$} and Q be a random variable describing the number of associations between K and K′: \begin{eqnarray} \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k}\right) = \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k} \Bigm| Q = q\right) \multspace \Poto(Q = q) \nonumber\\+ \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k} \Bigm| Q \neq q\right) \multspace \Poto(Q \neq q). \end{eqnarray}(58) Since \hbox{$\Poto(\bigcap_{k=1}^n A_{k\comma j_k} \mid Q \neq q) = 0$} by definition of q, we only have to compute \hbox{$\Poto(\bigcap_{k=1}^n A_{k\comma j_k} \mid Q = q)$} and Po:o(Q = q).

There are n!/(q! [n − q]!) choices of q elements among n in K, and \hbox{$\np!/(q!\multspace [\np-q]!)$} choices of q elements among \hbox{$\np$} in K′. The number of permutations of q elements is q!, so the total number of one-to-one associations of q elements of K with q elements of K′ is \begin{equation} q!\multspace \frac{n!}{q!\multspace (n-q)!}\multspace \frac{\np!}{q!\multspace (\np-q)!}\cdot \end{equation}(59) The inverse of this number is \begin{equation} \label{P_oto(A|m)} \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k} \Bigm| Q = q\right) = \frac{q!\multspace (n-q)!\multspace (\np-q)!}{n!\multspace \np!}\cdot \end{equation}(60) With our definition of K and K′, \hbox{$n \leqslant \np$}, so all the elements of K may jointly have a counterpart in K′. Therefore, Po:o(Q = q) is given by the binomial law: \begin{equation} \label{binom} \Poto(Q = q) = \frac{n!}{q!\multspace (n-q)!}\multspace f^q\multspace (1-f)^{n-q}. \end{equation}(61) From Eqs. (56), (12), (60), and (61), we obtain \begin{eqnarray} &&\Poto(C \mid C') \notag\\ &&= \Xi\multspace \sum_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\substack{j_2=0\\ j_2\not\in X_1}}^\np \cdots \sum_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np {\frac{(\np-q)!}{\np!} \multspace f^q \multspace (1-f)^{n-q}\multspace \prod_{k=1}^n \xi_{k\comma j_k}} ~~~~~~~~~~~~~~~~~ \\ &&= \Xi\multspace \sum_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\substack{j_2=0\\ j_2\not\in X_1}}^\np \!\! \cdots\!\! \sum_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \!\! {\Left(\prod_{\ell=1}^q\frac{f}{\np-\ell+1}\Right)\multspace \Left(\prod_{\ell=1}^{n-q}[1-f]\Right)\multspace \prod_{k=1}^n \xi_{k\comma j_k}}.~~~~~~~~~~~~~~~~~ \label{P_oto(C|C)_eta} \end{eqnarray} There are q factors “\hbox{$f/(\np-\ell+1)$}” in the above equation, one for each index jk ≠ 0. There are also n − q factors “(1 − f)”, one for each null jk. For every jk ≠ 0, \hbox{$\card X_k = \card X_{k-1} + 1$}; and, since \hbox{$q = \card X_n$}, a different jk corresponds to each ℓ ∈ [[1,q]], so \hbox{$\ell = \card X_k$}. With \begin{equation} \label{def_eta} \eta_{k\comma 0} \coloneqq \zeta_{k\comma 0} \qquad \text{and} \qquad \eta_{k\comma j_k} \coloneqq \frac{f\multspace \xi_{k\comma j_k}}{\np-\card X_{k-1}} \quad\text{for } j_k \neq 0, \end{equation}(64) Eq. (63) therefore simplifies to \begin{equation} \Poto(C \mid C') = \Xi\multspace \sum_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\substack{j_2=0\\ j_2\not\in X_1}}^\np\cdots \sum_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \prod_{k=1}^n \eta_{k\comma j_k}. \label{P_oto(C|C)_res} \end{equation}(65)
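The counting argument of Eqs. (59)–(61) is easy to check numerically. Here is a minimal sketch (plain Python; the function names are ours, not part of Aspects):

```python
from math import comb, factorial

def n_assignments(n, npp, q):
    """Number of one-to-one associations of q K-sources with q K'-sources (Eq. (59))."""
    return factorial(q) * comb(n, q) * comb(npp, q)

def p_config_given_q(n, npp, q):
    """Probability of one given configuration, uniform among all of them (Eq. (60))."""
    return (factorial(q) * factorial(n - q) * factorial(npp - q)
            / (factorial(n) * factorial(npp)))

def p_q(n, f, q):
    """Binomial prior on the number Q of associations (Eq. (61))."""
    return comb(n, q) * f**q * (1 - f)**(n - q)

# Consistency checks on a toy case (n = 3 K-sources, n' = 5 K'-sources).
assert p_config_given_q(3, 5, 2) == 1 / n_assignments(3, 5, 2)
assert abs(sum(p_q(3, 0.7, q) for q in range(4)) - 1.0) < 1e-12
```

The first assertion is exactly the statement that Eq. (60) is the inverse of Eq. (59); the second checks the normalization of the binomial prior.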

4.1.2. Computation of Po:o(Ai,  j C | C′)

The numerator of Eq. (4) is computed in the same way as Po:o(C | C′): \begin{eqnarray} && \Poto(A_{i\comma j} \cap C \mid C') \nonumber\\&&= \Poto\left(C \cap A_{i\comma j} \cap \biguplus_{\substack{j_1=0\\ j_1\not\in X^\star_0}}^\np \cdots \biguplus_{\substack{j_{i-1}=0\\ j_{i-1}\not\in X^\star_{i-2}}}^\np \biguplus_{\substack{j_{i+1}=0\\ j_{i+1}\not\in X^\star_{i}}}^\np \cdots \biguplus_{\substack{j_n=0\\ j_n\not\in X^\star_{n-1}}}^\np \bigcap_{\substack{k=1\\ k\neq i}}^n A_{k\comma j_k} \Bigm| C' \right) \nonumber \\&&~~~~~= \Poto\left(C \cap \biguplus_{\substack{j_1=0\\ j_1\not\in X^\star_0}}^\np \cdots \biguplus_{\substack{j_{i-1}=0\\ j_{i-1}\not\in X^\star_{i-2}}}^\np \biguplus_{\substack{j_{i+1}=0\\ j_{i+1}\not\in X^\star_{i}}}^\np \cdots \biguplus_{\substack{j_n=0\\ j_n\not\in X^\star_{n-1}}}^\np \bigcap_{k=1}^n A_{k\comma j_k} \Bigm| C' \right), \end{eqnarray}(66) where \begin{eqnarray} \label{def_J*} X^\star_0 \coloneqq \{j\} \setminus \{0\}, \qquad j_i \coloneqq j \qquad\text{and}\qquad \nonumber\\ X^\star_{k} \coloneqq (X^\star_{k-1} \cup \{j_k\}) \setminus \{0\} \quad\text{for all } k\in \integinterv{1}{n}, \end{eqnarray}(67) so \begin{eqnarray} &&\Poto(A_{i\comma j} \cap C \mid C') \nonumber \\&&= \sum_{\substack{j_1=0\\ j_1\not\in X^\star_0}}^\np \cdots\sum_{\substack{j_{i-1}=0\\ j_{i-1}\not\in X^\star_{i-2}}}^\np \sum_{\substack{j_{i+1}=0\\ j_{i+1}\not\in X^\star_{i}}}^\np \cdots \sum_{\substack{j_n=0\\ j_n\not\in X^\star_{n-1}}}^\np \Poto\left(C \Bigm| \bigcap_{k=1}^n A_{k\comma j_k} \cap C' \right) \nonumber\\&&~~~~~~~~~~\times \Poto\left(\bigcap_{k=1}^n A_{k\comma j_k} \Bigm| C' \right). \end{eqnarray}(68) Let \hbox{$q^\star \coloneqq \card X^\star_n$}. As for Po:o(C | C′), \begin{eqnarray} && \Poto(A_{i\comma j} \cap C \mid C') \nonumber\\&&= \Xi\multspace \sum_{\substack{j_1=0\\ j_1\not\in X^\star_0}}^\np \!\!\cdots\!\! \sum_{\substack{j_{i-1}=0\\ j_{i-1}\not\in X^\star_{i-2}}}^\np \sum_{\substack{j_{i+1}=0\\ j_{i+1}\not\in X^\star_{i}}}^\np \! \!\cdots\! \!\sum_{\substack{j_n=0\\ j_n\not\in X^\star_{n-1}}}^\np \!\!\! {\frac{(\np-q^\star)!}{\np!} \multspace f^{q^\star} \multspace (1-f)^{n-q^\star}\multspace \!\prod_{k=1}^n \xi_{k\comma j_k}} \! \nonumber \\&&= \Xi\multspace \zeta_{i\comma j}\multspace \sum_{\substack{j_1=0\\ j_1\not\in X^\star_0}}^\np \cdots \sum_{\substack{j_{i-1}=0\\ j_{i-1}\not\in X^\star_{i-2}}}^\np \sum_{\substack{j_{i+1}=0\\ j_{i+1}\not\in X^\star_{i}}}^\np \cdots \sum_{\substack{j_n=0\\ j_n\not\in X^\star_{n-1}}}^\np \prod_{\substack{k=1\\ k\neq i}}^n \eta^\star_{k\comma j_k}, \label{P_oto(Aij,C|C)_res} \end{eqnarray}(69) where \begin{equation} \label{def_eta*} \eta^\star_{k\comma 0} \coloneqq \zeta_{k\comma 0} \qquad\text{and}\qquad \eta^\star_{k\comma j_k} \coloneqq \frac{f\multspace \xi_{k\comma j_k}}{\np-\card X^\star_{k-1}} \quad \text{for } j_k \neq 0. \end{equation}(70)

4.1.3. Final results

Finally, from Eqs. (4), (65), and (69), for i ≠ 0, \begin{eqnarray} \label{P_oto(Aij|C,C)_res} \Poto(A_{i\comma j} \mid C \cap C') = \frac{ \zeta_{i\comma j}\multspace \sum_{\leftsubstack{j_1=0\\ j_1\not\in X^\star_0}}^\np \cdots \sum_{\leftsubstack{j_{i-1}=0\\ j_{i-1}\not\in X^\star_{i-2}}}^\np \sum_{\leftsubstack{j_{i+1}=0\\ j_{i+1}\not\in X^\star_{i}}}^\np \cdots \sum_{\leftsubstack{j_n=0\\ j_n\not\in X^\star_{n-1}}}^\np \prod_{\leftsubstack{k=1\\ k\neq i}}^n \eta^\star_{k\comma j_k} }{ \sum_{\leftsubstack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\leftsubstack{j_2=0\\ j_2\not\in X_1}}^\np \cdots \sum_{\leftsubstack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \prod_{k=1}^n \eta_{k\comma j_k} }\cdot \end{eqnarray}(71)

The probability that a source \hbox{$\Mp_j$} has no counterpart in K is simply given by \begin{equation} \Poto(A_{0\comma j} \mid C \cap C') = 1-\sum_{k=1}^n \Poto(A_{k\comma j}\mid C \cap C'). \end{equation}(72)
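For very small catalogs, Eq. (71) can be evaluated by brute force, which is useful as a reference to test any approximation scheme. In the sketch below (plain Python; the ξ values are hypothetical), the weight of a full assignment (j1, ..., jn) is taken from Eq. (63), the common factor Ξ cancelling between numerator and denominator:

```python
from itertools import product
from math import factorial

def p_oto(f, xi):
    """Brute-force one-to-one posteriors P(A_ij | C ∩ C') for a tiny catalog.

    xi[i][j] is the positional likelihood xi_{i+1,j} (j = 0 means "no
    counterpart"); the weight of a full assignment follows Eq. (63):
    (n'-q)!/n'! * f^q * (1-f)^(n-q) * prod_k xi_{k,j_k}.
    """
    n, npp = len(xi), len(xi[0]) - 1
    post = [[0.0] * (npp + 1) for _ in range(n)]
    total = 0.0
    for js in product(range(npp + 1), repeat=n):
        nonzero = [j for j in js if j != 0]
        if len(set(nonzero)) != len(nonzero):  # one-to-one: distinct counterparts
            continue
        q = len(nonzero)
        w = factorial(npp - q) / factorial(npp) * f**q * (1 - f)**(n - q)
        for i, j in enumerate(js):
            w *= xi[i][j]
        total += w
        for i, j in enumerate(js):
            post[i][j] += w
    return [[p / total for p in row] for row in post]

# Hypothetical xi values: 2 K-sources, 2 K'-sources; M1 close to M'_1.
post = p_oto(0.5, [[1.0, 9.0, 0.1],
                   [1.0, 0.2, 0.3]])
assert all(abs(sum(row) - 1.0) < 1e-12 for row in post)
```

The enumeration grows as (n′ + 1)^n, which is precisely the combinatorial explosion that motivates the approximations of Sect. 5.4.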

4.2. Likelihood and estimation of unknown parameters

As in Sect. 3.2, an estimate \hbox{$\hat{\vec x}_\oto$} of the set x of unknown parameters may be obtained by solving Eq. (28). Under assumption (Ho:o), we obtain from Eqs. (65), (31), and (13) that \begin{equation} \label{Lh_oto_brut} \Lhoto = \left(\sum_{\substack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\substack{j_2=0\\ j_2\not\in X_1}}^\np\cdots \sum_{\substack{j_n=0\\ j_n\not\in X_{n-1}}}^\np \prod_{k=1}^n \eta_{k\comma j_k}\right)\multspace \prod_{k=1}^\np\xi_{0\comma k}. \end{equation}(73) Because the number of terms in Eq. (73) grows exponentially with n and \hbox{$\np$}, this equation seems useless. In fact, the prior computation of Lo:o is not necessary if the probabilities Po:o(Ai,j | C ∩ C′) are calculable (we see how to evaluate these in Sect. 5.4).

Indeed, for any parameter xp, we get the same result (Eq. (33)) as under assumption (Hs:o). First, we note that, since the ξ0,j are independent of x, we obtain from Eq. (31) that \begin{equation} \label{der(Lh)} \frac{\partial\ln\Lh}{\partial x_p} = \frac{1}{\Prob(C \mid C')}\multspace \frac{\partial\Prob(C \mid C')}{\partial x_p}\cdot \end{equation}(74) Now, for any set Υ of indices and any product of strictly positive functions hk of some variable y, \begin{equation} \label{der(prod_g)} \frac{\partial\prod_{k\in \Upsilon} h_k}{\partial y} = \sum_{\ell\in \Upsilon}{\frac{\partial h_\ell}{\partial y}\multspace \prod_{\substack{k\in \Upsilon\\ k\neq\ell}} h_k} = \sum_{\ell\in \Upsilon}{\frac{\partial\ln h_\ell}{\partial y}\multspace \prod_{k\in \Upsilon} h_k}. \end{equation}(75) With hk = ηk,jk, y = xp, and Υ = [[1,n]], we therefore obtain from Eq. (65) that \begin{equation} \label{der(P_oto)_gauche} \frac{\partial \Poto(C \mid C')}{\partial x_p} = \Xi \multspace \sum_{\substack{j_1=0\\j_1\notin X_0}}^\np\sum_{\substack{j_2=0\\j_2\notin X_1}}^\np \cdots\sum_{\substack{j_n=0\\j_n\notin X_{n-1}}}^\np\sum_{i=1}^n{ \frac{\partial\ln\eta_{i\comma j_i}}{\partial x_p}\multspace \prod_{k=1}^n\eta_{k\comma j_k}}. \end{equation}(76) The expression of Po:o(Ai,j ∩ C | C′) (Eq. (69)) may also be written \begin{equation} \Poto(A_{i\comma j} \cap C \mid C') = \Xi \multspace \sum_{\substack{j_1=0\\j_1\notin X_0}}^\np\sum_{\substack{j_2=0\\j_2\notin X_1}}^\np \cdots\sum_{\substack{j_n=0\\j_n\notin X_{n-1}}}^\np {\chi(j_i = j)\multspace \prod_{k=1}^n \eta_{k\comma j_k}}, \end{equation}(77) where χ is the indicator function (i.e. χ(ji = j) = 1 if proposition “ji = j” is true and χ(ji = j) = 0 otherwise), so \begin{eqnarray} \label{der(P_oto)_droite} \sum_{i=1}^n\sum_{j=0}^\np{\frac{\partial\ln\zeta_{i\comma j}}{\partial x_p} \multspace \Poto(A_{i\comma j} \cap C \mid C')} \nonumber \\= \Xi\multspace \sum_{i=1}^n\sum_{\substack{j_1=0\\j_1\notin X_0}}^\np \sum_{\substack{j_2=0\\j_2\notin X_1}}^\np \cdots\sum_{\substack{j_n=0\\j_n\notin X_{n-1}}}^\np\sum_{j=0}^\np {\chi(j_i = j)\multspace \frac{\partial\ln\zeta_{i\comma j}}{ \partial x_p}\multspace \prod_{k=1}^n\eta_{k\comma j_k}} \nonumber \\= \Xi\multspace \sum_{i=1}^n\sum_{\substack{j_1=0\\j_1\notin X_0}}^\np \sum_{\substack{j_2=0\\j_2\notin X_1}}^\np \cdots\sum_{\substack{j_n=0\\j_n\notin X_{n-1}}}^\np {\frac{\partial\ln\zeta_{i\comma j_i}}{\partial x_p}\multspace \prod_{k=1}^n\eta_{k\comma j_k}}. \end{eqnarray}(78) If ji = 0, then ηi,ji = ζi,ji; and if ji ≠ 0, the numerators of ηi,ji and ζi,ji are the same and their denominators do not depend on xp: in all cases, $\partial\ln\eta_{i\comma j_i}/\partial x_p = \partial\ln\zeta_{i\comma j_i}/\partial x_p$. The right-hand sides of Eqs. (76) and (78) are therefore identical. Dividing their left-hand sides by Po:o(C | C′) and using Eqs. (74) and (4), we obtain, as announced, \begin{equation} \label{der(Lh_oto)/x} \frac{\partial\ln\Lhoto}{\partial x_p} = \sum_{i=1}^n\sum_{j=0}^\np{\frac{\partial\ln\zeta_{i\comma j}}{ \partial x_p}\multspace \Poto(A_{i\comma j} \mid C \cap C')}. \end{equation}(79)

For xp = f in particular, because of Eq. (37), and as under assumption (Hs:o), Eq. (79) reduces to \begin{equation} \label{der(Lh_oto)/f} \frac{\partial\ln\Lhoto}{\partial f} = \frac{ n\multspace (1-f) - \sum_{i=1}^n \Poto(A_{i\comma 0} \mid C \cap C') }{ f\multspace (1-f) }\cdot \end{equation}(80) From Eq. (28), a maximum likelihood estimator of f is thus \begin{equation} \hat f_\oto = 1 - \frac{1}{n}\multspace \sum_{i=1}^n\expandafter\hat\Poto(A_{i\comma 0} \mid C \cap C'), \label{f_est_oto} \end{equation}(81) where \hbox{$\expandafter\hat\Poto$} is the value of Po:o at \hbox{$f = \hat f_\oto$}.

To compare assumptions (Hs:o), (Ho:o), and (Ho:s) and to select the most appropriate one to compute P(Ai,j | C ∩ C′), an expression is needed for Lo:o. If the probabilities Po:o(Ai,0 | C ∩ C′) are calculable, Lo:o may be obtained for any f by integrating Eq. (80) with respect to f. Since all K- and K′-sources are unrelated and randomly distributed for f = 0, the integration constant is (cf. Eq. (73)) \begin{equation} \label{Lh_oto(0)} {\left(\ln\Lhoto\right)}^{}_{f=0} = \sum_{i=1}^n \ln\xi_{i\comma0} + \sum_{j=1}^\np \ln\xi_{0\comma j}. \end{equation}(82)

5. Practical implementation: the Aspects code

5.1. Overview

To implement the results established in Sects. 3.1, 3.2, 4.1, and 4.2, we have built a Fortran 95 code, Aspects – a French acronym (pronounced [aspɛ] in the International Phonetic Alphabet, not [æspekts]) for “Association positionnelle/probabiliste de catalogues de sources”, or “probabilistic positional association of source catalogs” in English. The source files are freely available at www2.iap.fr/users/fioc/Aspects/ . The code compiles with IFort and GFortran.

Given two catalogs of sources with their positions and the uncertainties on these, Aspects computes, under assumptions (Hs:o), (Ho:o), and (Ho:s), the overall likelihood L, estimates of f and f′, and the probabilities P(Ai,j | C ∩ C′). It may also simulate all-sky catalogs for various association models (cf. Sect. 6.1).

We provide hereafter explanations of general interest for the practical implementation in Aspects of Eqs. (23), (39), (32), (71), (81), and (73). Some more technical points (such as the procedures used to search for nearby objects, simulate the positions of associated sources, and integrate Eq. (80)) are only addressed in appendices to the documentation of the code (Fioc 2014). The latter also contains the following complements: another (but equivalent) expression for Lo:o, formulae derived under Ho:s, computations under Ho:o for \hbox{$n > \np$}, a calculation of the uncertainties on unknown parameters under Hs:o, and a proof of Eq. (41).

5.2. Elimination of unlikely counterparts

Under assumption (Hs:o), computing the probability of association Ps:o(Ai,j | C ∩ C′) between Mi and \hbox{$\Mp_j$} from Eq. (23) is straightforward if f and the positional uncertainties are known. However, the number of calculations for the whole sample or for determining \hbox{$\hat{\vec x}$} is on the order of \hbox{$n\multspace \np$}, a huge number for the catalogs available nowadays. We must therefore try to eliminate all unnecessary computations.

Since ξi,k is given by a normal law if i ≠ 0 and k ≠ 0, it rapidly drops to almost 0 when we consider sources \hbox{$\Mp_k$} at increasing angular distance ψi,k from Mi. Therefore, there is no need to compute Ps:o(Ai,j | C ∩ C′) for all couples \hbox{$(M_i, \Mp_j)$} or to sum on all k from 1 to \hbox{$\np$} in Eq. (24). More explicitly, let R′ be some angular distance such that, for all \hbox{$(M_i, \Mp_k)$}, if \hbox{$\psi_{i\comma k} \geqslant R'$} then ξi,k ≈ 0, say \begin{equation} \label{def_R} R' \ga 5\multspace \!\sqrt{ \smash[b]{ \max_{\ell\in\integinterv{1}{n}} a_\ell^2 + \max_{\ell\in\integinterv{1}{\np}} a_{\ell}'^2 } }, \vphantom{ \max_{\ell\in\integinterv{1}{n}} a_\ell^2 + \max_{\ell\in\integinterv{1}{\np}} a_{\ell}'^2 } \end{equation}(83) where the $a_\ell$ and \hbox{$a'_{\smash[t]{\ell}}$} are the semi-major axes of the positional uncertainty ellipses of K- and K′-sources (cf. Appendix A.2.1; the square root in Eq. (83) is thus the maximal possible uncertainty on the relative position of associated sources). We may set Ps:o(Ai,j | C ∩ C′) to 0 if ψi,j > R′, and replace the sums \hbox{$\smash[t]{\sum_{\smash[t]{k=1}}^\np}$} by \hbox{$\smash[t]{\sum_{\smash[t]{k=1{;}\, \psi_{i\comma k}\leqslant R'}}^\np}$} in Eq. (24): only nearby K′-sources matter.
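A minimal sketch of this preselection follows (plain Python; the positions are hypothetical tangent-plane coordinates in radians, so the Euclidean distance stands in for the angular distance ψi,k, and a real implementation would use great-circle distances and a spatial index rather than this O(n n′) double loop):

```python
import math

def max_search_radius(a, ap, kappa=5.0):
    """Angular search radius R' of Eq. (83): kappa times the maximal combined
    positional uncertainty; a, ap are the semi-major axes (radians) of the
    uncertainty ellipses of the K- and K'-sources."""
    return kappa * math.sqrt(max(a)**2 + max(ap)**2)

def nearby_pairs(pos, posp, a, ap):
    """Keep only the (i, k) pairs closer than R'; all other xi_{i,k} are ~0."""
    rmax = max_search_radius(a, ap)
    return [(i, k)
            for i, (x, y) in enumerate(pos)
            for k, (xp, yp) in enumerate(posp)
            if math.hypot(x - xp, y - yp) <= rmax]

# Hypothetical example: one K-source, two K'-sources, only the near one kept.
pairs = nearby_pairs([(0.0, 0.0)], [(0.0, 5e-5), (0.0, 1.0)], [1e-5], [1e-5])
assert pairs == [(0, 0)]
```

The factor 5 in Eq. (83) guarantees that the discarded ξi,k are many standard deviations out in the tail of the normal law.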

5.3. Fraction of sources with a counterpart

All the probabilities depend on f and, possibly, on other unknown parameters like \hbox{$\sigmatot$} and \hbox{$\nutot$} (cf. Appendices A.2.2 and A.2.3). Under assumption (Hs:o), estimates of these parameters may be found by solving Eq. (28) using Eq. (33).

If the fraction of sources with a counterpart is the only unknown, the ξi,j need to be computed only once and \hbox{$\hat f_\sto$} may easily be determined from Eq. (39) by an iterative procedure. Denoting by g the function \begin{equation} \label{fonction_g} g\colon f \in [0, 1] \longmapsto 1-\frac{1}{n}\multspace \sum_{i=1}^n\Psto(A_{i\comma 0} \mid C \cap C'), \end{equation}(84) we now prove that, for any f0 ∈ ]0,1[, the sequence $(f_k)_{k\in\mathbb{N}}$ defined by fk+1 := g(fk) tends to \hbox{$\hat f_\sto$}.

As is obvious from Eq. (24b), Ps:o(Ai,0 | C ∩ C′) decreases for all i when f increases: g is consequently an increasing function. Note also that, from Eqs. (38) and (84), \begin{equation} \label{expression_g} g(f) = f + \frac{f\multspace (1-f)}n \multspace \frac{\partial\ln\Lhsto}{\partial f}\cdot \end{equation}(85) The only fixed points of g are thus 0, 1, and the unique solution \hbox{$\hat f_\sto$} to $\partial\ln\Lhsto/\partial f = 0$. Because $\partial^2\ln\Lhsto/\partial f^2 < 0$ (cf. Eq. (41)) and $(\partial\ln\Lhsto/\partial f)_{f = \hat f_\sto} = 0$, we have \hbox{$\partial\ln\Lhsto/\partial f \geqslant 0$} if \hbox{$f \in [0, \hat f_\sto]$}, so \hbox{$g(f) \geqslant f$} in this interval by Eq. (85). Similarly, if \hbox{$f \in [\hat f_\sto, 1]$}, then \hbox{$\partial\ln\Lhsto/\partial f \leqslant 0$} and thus \hbox{$g(f) \leqslant f$}.

Consider the case \hbox{$f_0 \in \mathopen{]}0, \hat f_\sto]$}. If \hbox{$f_k \leqslant \hat f_\sto$}, then, as just shown, \hbox{$g(f_k) \geqslant f_k$}; we also have \hbox{$g(f_k) \leqslant g(\hat f_\sto) = \hat f_\sto$}, because g is an increasing function and \hbox{$\hat f_\sto$} is a fixed point of it. Since g(fk) = fk+1, the sequence $(f_k)_{k\in\mathbb{N}}$ is increasing and bounded from above by \hbox{$\hat f_\sto$}: it therefore converges in \hbox{$[f_0, \hat f_\sto]$}. Because g is continuous and \hbox{$\hat f_\sto$} is the only fixed point in this interval, $(f_k)_{k\in\mathbb{N}}$ tends to \hbox{$\hat f_\sto$}. Similarly, if \hbox{$f_0 \in [\hat f_\sto, 1\mathclose{[}$}, then $(f_k)_{k\in\mathbb{N}}$ is a decreasing sequence converging to \hbox{$\hat f_\sto$}.

Because of Eq. (81), this procedure also works in practice under assumption (Ho:o) (with Ps:o replaced by Po:o in Eq. (84)), although it is not obvious that Po:o(Ai,0 | C ∩ C′) decreases for all i when f increases, nor that $\partial^2\ln\Lhoto/\partial f^2 < 0$. A good starting value f0 may be \hbox{$\hat f_\sto$}.
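The iterative scheme of Sect. 5.3 takes only a few lines of code. In the sketch below (plain Python, not the Fortran of Aspects), the callback returns the probabilities P(Ai,0 | C ∩ C′) at the current f; for the toy demonstration we use the local form of Eq. (50) with hypothetical likelihood ratios, whereas the code itself uses Eq. (24):

```python
def estimate_f(p_no_counterpart, n, f0=0.5, tol=1e-12, itmax=10_000):
    """Fixed-point iteration f_{k+1} = g(f_k), with g from Eq. (84).

    p_no_counterpart(f) must return the n probabilities P(A_{i,0} | C ∩ C')
    of each K-source having no counterpart, evaluated at f.
    """
    f = f0
    for _ in range(itmax):
        f_new = 1 - sum(p_no_counterpart(f)) / n
        if abs(f_new - f) < tol:
            break
        f = f_new
    return f

# Hypothetical likelihood ratios lambda_{i,k} of the candidates of each M_i;
# here P(A_{i,0} | ...) is taken from the local Eq. (50) for illustration.
lam = [[12.0], [0.2], [7.0, 0.1], []]
def p0(f):
    return [(1 - f) / ((1 - f) + f * sum(l)) for l in lam]

f_hat = estimate_f(p0, len(lam))
assert abs(f_hat - (1 - sum(p0(f_hat)) / len(lam))) < 1e-9  # fixed point of g
```

The convergence from any interior starting value is exactly the monotonicity argument proved above.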

5.4. Computation of one-to-one probabilities of association

What was said in Sect. 5.2 about eliminating unlikely counterparts in the calculation of probabilities under Hs:o still holds under Ho:o. However, because of the combinatorial explosion of the number of terms in Eq. (71), computing Po:o(Ai,j | C ∩ C′) exactly is clearly hopeless. Yet, after some wandering (Sects. 5.4.1 and 5.4.2), we found a working solution (Sect. 5.4.3).

5.4.1. A first try

Our first try was inspired by the (partially wrong) idea that, although all K-sources are involved in the numerator and denominator of Eq. (71), only those close to Mi should matter in their ratio. A sequence of approximations converging to the true value of Po:o(Ai,j | CC′) might then be built as follows (all quantities defined or produced in this first try are written with the superscript “w” for “wrong”).

To make things clear, consider M1 and some possible counterpart \hbox{$\Mp_j$} within its neighborhood (\hbox{$\psi_{1\comma j} \leqslant R'$}) and assume that M2 is the first nearest neighbor of M1 in K, M3 its second nearest neighbor, etc. For any d ∈ [[1,n]], define \begin{equation} p^\wrong_{\smash[t]{d}}(1, j) \coloneqq \frac{ \zeta_{1\comma j}\multspace \sum_{\leftsubstack{j_2=0\\ j_2\not\in X^{\star}_1}}^\np \cdots \sum_{\leftsubstack{j_d=0\\ j_d\not\in X^{\star}_{d-1}}}^\np \prod_{k=2}^d \eta^{\star}_{k\comma j_k} }{ \sum_{\leftsubstack{j_1=0\\ j_1\not\in X_0}}^\np \sum_{\leftsubstack{j_2=0\\ j_2\not\in X_1}}^\np \cdots \sum_{\leftsubstack{j_d=0\\ j_d\not\in X_{d-1}}}^\np \prod_{k=1}^d \eta_{k\comma j_k} }\cdot \end{equation}(86) The quantity \hbox{$p^\wrong_{\smash[t]{d}}(1, j)$} thus depends only on M1 and its d − 1 nearest neighbors in K. As \hbox{$p^\wrong_n(1, j)$} is the one-to-one probability of association between M1 and \hbox{$\Mp_j$} (cf. Eq. (71)), the sequence \hbox{$(p^\wrong_{\smash[t]{d}}[1, j])$} tends to Po:o(A1,j | C ∩ C′) when the depth d of the recursive sums tends to n. After some initial fluctuations, \hbox{$p^\wrong_{\smash[t]{d}}(1, j)$} enters a steady state. This occurs when ψ(M1, Md+1) exceeds a distance R equal to a few times R′ (at least 2 R′). We may therefore think that the convergence is then achieved and stop the recursion at this d. It is all the more tempting that \hbox{$p^\wrong_1(1, j) = \Psto(A_{1\comma j} \mid C \cap C')$} and that the several-to-one probability looks like a first-order approximation to Po:o...

Fig. 1. One-to-one simulations for $f = 1/2$, $n' = 10^5$, and circular positional uncertainty ellipses with $\mathring\sigma = 10^{-3}$ rad (see Sects. 6.1 and 6.2 for details). a) Mean value of different estimators $\hat f$ of $f$ as a function of $n$; the dotted line indicates the input value of $f$. b) Normalized average maximum value $\hat L$ of different likelihoods as a function of $n$, compared to $\hat L^{\mathrm w}_{\mathrm{o:o}}$.

More formally and generally, for any $M_i$, let $\phi$ be a permutation on $K$ ordering the elements $M_{\phi(1)}$, $M_{\phi(2)}$, ..., $M_{\phi(n)}$ by increasing angular distance to $M_i$ (in particular, $M_{\phi(1)} = M_i$). For $j = 0$ or $M'_j$ within a distance $R$ (cf. Sect. 5.2) from $M_i$, and for any $d \in [[1,n]]$, define
\begin{equation}
p^{\mathrm w}_d(i, j) := \frac{\zeta_{i,j}\,\sum_{\substack{j_2=0\\ j_2\notin \widetilde X^{\star}_1}}^{n'} \cdots \sum_{\substack{j_d=0\\ j_d\notin \widetilde X^{\star}_{d-1}}}^{n'}\,\prod_{k=2}^{d} \widetilde\eta^{\,\star\,\mathrm w}_{k,j_k}}{\sum_{\substack{j_1=0\\ j_1\notin \widetilde X_0}}^{n'}\,\sum_{\substack{j_2=0\\ j_2\notin \widetilde X_1}}^{n'} \cdots \sum_{\substack{j_d=0\\ j_d\notin \widetilde X_{d-1}}}^{n'}\,\prod_{k=1}^{d} \widetilde\eta^{\,\mathrm w}_{k,j_k}}, \tag{87}
\end{equation}
where, as in Eqs. (55), (67), (64), and (70),

\begin{equation}
\left.
\begin{aligned}
&\widetilde X_k := X_k \quad \text{for all } k \in [[0,n]]; \qquad \widetilde X^{\star}_1 := \{j\} \setminus \{0\};\\
&\widetilde X^{\star}_k := (\widetilde X^{\star}_{k-1} \cup \{j_k\}) \setminus \{0\} \quad \text{for all } k \in [[2,n]];
\end{aligned}
\right\} \tag{88}
\end{equation}
\begin{equation}
\left.
\begin{aligned}
&\widetilde\eta^{\,\mathrm w}_{k,0} := \widetilde\eta^{\,\star\,\mathrm w}_{k,0} := \zeta_{\phi(k),0}; \qquad \widetilde\eta^{\,\mathrm w}_{k,j_k} := \frac{f\,\xi_{\phi(k),j_k}}{n' - \#\widetilde X_{k-1}}\\
&\text{and}\quad \widetilde\eta^{\,\star\,\mathrm w}_{k,j_k} := \frac{f\,\xi_{\phi(k),j_k}}{n' - \#\widetilde X^{\star}_{k-1}} \quad \text{for } j_k \neq 0.
\end{aligned}
\right\} \tag{89}
\end{equation}
Let
\begin{equation}
d_{\min}(i) := \min\left(d \in [[1,n]] \bigm| \psi[M_i, M_{\phi(d+1)}] > R\right). \tag{90}
\end{equation}
Given the above considerations, $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ can be evaluated as $p^{\mathrm w}_{\mathrm{o:o}}(i, j) := p^{\mathrm w}_{d_{\min}(i)}(i, j)$.
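The depth cut of Eq. (90) is simple to implement. The helper below is our own illustrative sketch (Aspects itself is written in Fortran 90): it takes the angular distances $\psi(M_i, M_{\phi(k)})$, already sorted increasingly, and returns $d_{\min}(i)$.

```python
def d_min(psi_sorted, R):
    """Depth cut of Eq. (90) (sketch): psi_sorted[k-1] is the angular distance
    psi(M_i, M_phi(k)), sorted increasingly, with psi_sorted[0] = 0 since
    M_phi(1) = M_i. Returns the smallest d whose (d+1)-th ranked source lies
    beyond R."""
    n = len(psi_sorted)
    for d in range(1, n):
        if psi_sorted[d] > R:  # psi(M_i, M_phi(d+1)) > R
            return d
    return n                   # all sources lie within R
```

With distances `[0.0, 0.1, 0.2, 5.0, 6.0]` and `R = 1.0`, the recursion would be stopped at depth 3.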

The computation of $p^{\mathrm w}_d(i,j)$ may be further restricted (and, because of the recursive sums in Eq. (87), in practice must be) to sources $M'_{j_k}$ in the neighborhood of the objects $(M_{\phi(k)})_{k \in [[1,d]]}$, as explained in Sect. 5.2.

5.4.2. Failure of the first try

To test the reliability of the evaluation of $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ by $p^{\mathrm w}_{\mathrm{o:o}}(i, j)$, we simulated all-sky mock catalogs for one-to-one associations and analyzed them with a first version of Aspects. Simulations were run for $f = 1/2$, $n' = 10^5$, $n \in [[10^3, 10^5]]$, and known circular positional uncertainties with $\mathring\sigma = 10^{-3}$ rad (see Sects. 6.1 and 6.2 for a detailed description).

Three estimators of f were compared to the input value:

  • $\hat f_{\mathrm{s:o}}$, the value maximizing $L_{\mathrm{s:o}}$ (Eq. (39));

  • $\hat f^{\mathrm w}_{\mathrm{o:o}}$, the value maximizing the one-to-one likelihood $L^{\mathrm w}_{\mathrm{o:o}}$ derived from the $p^{\mathrm w}_{\mathrm{o:o}}$. This estimator is computed from Eq. (81) with $p^{\mathrm w}_{\mathrm{o:o}}(i, 0)$ instead of $P_{\mathrm{o:o}}(A_{i,0} \mid C \cap C')$;

  • $\hat f_{\mathrm{o:s}}$, an estimator built from the one-to-several assumption in the following way: because $(H_{\mathrm{o:s}})$ is fully symmetric to $(H_{\mathrm{s:o}})$, we just need to swap $K$ and $K'$ (i.e., swap $f$ and $f'$, $n$ and $n'$, etc.) in Eqs. (24), (26), and (39) to obtain $\hat f'_{\mathrm{o:s}}$ instead of $\hat f_{\mathrm{s:o}}$, and then, from Eq. (42), $\hat f_{\mathrm{o:s}}$ instead of $\hat f'_{\mathrm{s:o}}$. The one-to-several likelihood $L_{\mathrm{o:s}}$ is computed from Eq. (32) in the same way.

The mean values of these estimators are plotted as a function of $n$ in Fig. 1a (error bars are smaller than the size of the points). Clearly, the ad hoc estimator $\hat f^{\mathrm w}_{\mathrm{o:o}}$ diverges from $f$ when $n$ increases. This statistical inconsistency^5 seems surprising for a maximum-likelihood estimator, since the model on which it is based is correct by construction. However, all the proofs of consistency of maximum-likelihood estimators we found in the literature (e.g., in Kendall & Stuart 1979) rest on the assumption that the overall likelihood is the product of the probabilities of each datum, which is not the case for $L_{\mathrm{o:o}}$ (cf. Eq. (73)). Since $\hat f_{\mathrm{s:o}}$ is a good estimator of $f$, it might be used to compute $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ from $p^{\mathrm w}_{\mathrm{o:o}}(i, j)$, provided that the latter correctly approximates the former. By itself, the inconsistency of $\hat f^{\mathrm w}_{\mathrm{o:o}}$ is therefore not a problem.

More embarrassing is that $(H_{\mathrm{o:o}})$ is not the most likely assumption (see Fig. 1b): the mean value of $\hat L^{\mathrm w}_{\mathrm{o:o}}$ is less than that of $\hat L_{\mathrm{s:o}}$ over the full range of $n$! These two failures hint that the sequence $(p^{\mathrm w}_d[i, j])$ has not yet converged to $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ at $d = d_{\min}(i)$.

To check this, we ran simulations with small numbers of sources ($n$ and $n'$ less than 10), so that we could compute $p^{\mathrm w}_n(i, j)$ exactly and study how $(p^{\mathrm w}_d[i, j])$ tends to it. To test whether source confusion might be the cause of the problem, we created mock catalogs with very large positional uncertainties^6 $\mathring\sigma$, comparable to the distance between unrelated sources. Because the expressions given in Appendix A for $\xi_{i,j}$ hold for planar normal laws and become wrong, owing to the curvature, when the distance between $M_i$ and $M'_j$ exceeds a few degrees, we ran simulations on a whole circle instead of a sphere; nevertheless, we took $\mathring\sigma \lesssim 30°$ because, due to the finite extent of a circle, the linear normal law is inappropriate for higher values. What we found is that, after the transient phase where it oscillates, $(p^{\mathrm w}_d[i, j])$ slowly drifts toward $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ and only converges at $d = n$! This drift was imperceptible for the high values of $n$ and $n'$ used in Sect. 5.4.1.

5.4.3. Reconsideration and solution

To understand where the problem comes from, we consider the simplest case of interest: $n = n' = 2$. We assume moreover that $\xi_{1,2} \approx \xi_{2,1} \approx 0$. We then have
\begin{equation}
P_{\mathrm{o:o}}(C \mid C') \approx \left([1-f]^2\,\xi_{1,0}\,\xi_{2,0} + \frac{[1-f]\,f}{2}\,[\xi_{1,0}\,\xi_{2,2} + \xi_{1,1}\,\xi_{2,0}] + \frac{f^2}{2}\,\xi_{1,1}\,\xi_{2,2}\right)\mathrm d^2\vec r_1\,\mathrm d^2\vec r_2, \tag{91}
\end{equation}
\begin{equation}
P_{\mathrm{o:o}}(A_{1,0} \cap C \mid C') \approx (1-f)\,\xi_{1,0}\left([1-f]\,\xi_{2,0} + \frac{f}{2}\,\xi_{2,2}\right)\mathrm d^2\vec r_1\,\mathrm d^2\vec r_2, \tag{92}
\end{equation}
\begin{equation}
P_{\mathrm{o:o}}(A_{1,1} \cap C \mid C') \approx \frac{f}{2}\,\xi_{1,1}\left([1-f]\,\xi_{2,0} + f\,\xi_{2,2}\right)\mathrm d^2\vec r_1\,\mathrm d^2\vec r_2. \tag{93}
\end{equation}
The probabilities $P_{\mathrm{o:o}}(A_{1,j} \mid C \cap C') = P_{\mathrm{o:o}}(A_{1,j} \cap C \mid C')/P_{\mathrm{o:o}}(C \mid C')$ obviously depend on $\xi_{2,2}$. In particular, if $\xi_{2,2} \ll \xi_{2,0}$,
\begin{equation}
\left.
\begin{aligned}
P_{\mathrm{o:o}}(A_{1,0} \mid C \cap C') &\approx \frac{(1-f)\,\xi_{1,0}}{(1-f)\,\xi_{1,0} + f\,\xi_{1,1}/2},\\
P_{\mathrm{o:o}}(A_{1,1} \mid C \cap C') &\approx \frac{f\,\xi_{1,1}/2}{(1-f)\,\xi_{1,0} + f\,\xi_{1,1}/2};
\end{aligned}
\right\} \tag{94}
\end{equation}
in that case, $P_{\mathrm{o:o}}(A_{2,2} \mid C \cap C') \approx 0$, and both $M'_1$ and $M'_2$ are free for $M_1$.
On the other hand, if $\xi_{2,2} \gg \xi_{2,0}$,
\begin{equation}
\left.
\begin{aligned}
P_{\mathrm{o:o}}(A_{1,0} \mid C \cap C') &\approx \frac{(1-f)\,\xi_{1,0}}{(1-f)\,\xi_{1,0} + f\,\xi_{1,1}/1},\\
P_{\mathrm{o:o}}(A_{1,1} \mid C \cap C') &\approx \frac{f\,\xi_{1,1}/1}{(1-f)\,\xi_{1,0} + f\,\xi_{1,1}/1};
\end{aligned}
\right\} \tag{95}
\end{equation}
in that case, $P_{\mathrm{o:o}}(A_{2,2} \mid C \cap C') \approx 1$: $M_2$ and $M'_2$ are almost certainly bound, so $M'_2$ may not be associated to $M_1$, and $M'_1$ is the only possible counterpart of $M_1$.
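These two limits are easy to verify numerically. The sketch below (function name and inputs ours) evaluates the exact $n = n' = 2$ expressions of Eqs. (91)-(93), under the stated assumption $\xi_{1,2} \approx \xi_{2,1} \approx 0$, and reduces to Eqs. (94) and (95) in the two regimes:

```python
def p_oto_n2(f, xi10, xi11, xi20, xi22):
    """P_oto(A_{1,0} | C ∩ C') and P_oto(A_{1,1} | C ∩ C') for n = n' = 2,
    from Eqs. (91)-(93), assuming xi_{1,2} ≈ xi_{2,1} ≈ 0."""
    # Eq. (91): normalization P_oto(C | C') (the common d²r1 d²r2 factor cancels)
    pC = ((1 - f)**2 * xi10 * xi20
          + (1 - f) * f / 2 * (xi10 * xi22 + xi11 * xi20)
          + f**2 / 2 * xi11 * xi22)
    pA10 = (1 - f) * xi10 * ((1 - f) * xi20 + f / 2 * xi22)  # Eq. (92)
    pA11 = f / 2 * xi11 * ((1 - f) * xi20 + f * xi22)        # Eq. (93)
    return pA10 / pC, pA11 / pC
```

For $f = 1/2$ and $\xi_{1,0} = \xi_{1,1} = \xi_{2,0} = 1$, taking $\xi_{2,2} \to 0$ gives $P_{\mathrm{o:o}}(A_{1,0}) \to 2/3$ (the "/2" regime of Eq. (94)), while $\xi_{2,2} \to \infty$ gives $1/2$ (the "/1" regime of Eq. (95)).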

Fig. 2. Mean value of different estimators $\hat f$ of $f$ as a function of $n$ for $f = 1/2$ (dotted line), $n' = 10^5$, and circular positional uncertainty ellipses with $\mathring\sigma = 10^{-3}$ rad (see Sects. 6.1 and 6.2 for details). a) Several-to-one simulations. b) One-to-one simulations ($\hat f_{\mathrm{s:o}}$ and $\hat f_{\mathrm{o:o}}$ overlap).

The difference between the results obtained for $\xi_{2,2} \ll \xi_{2,0}$ and $\xi_{2,2} \gg \xi_{2,0}$ shows that the probabilities $P_{\mathrm{o:o}}(A_{1,j} \mid C \cap C')$ depend on the relative positions of $M_2$ and $M'_2$, even when both are distant from $M_1$ and $M'_1$: contrary to the idea stated in Sect. 5.4.1, distant $K$-sources do matter for $P_{\mathrm{o:o}}$ probabilities! However, as highlighted by the "/2" and "/1" factors in Eqs. (94) and (95), the distant $K$-source $M_2$ only changes the number of $K'$-sources (two for $\xi_{2,2} \ll \xi_{2,0}$, one for $\xi_{2,2} \gg \xi_{2,0}$) that may be identified to $M_1$: its exact position is unimportant.

This suggests the following solution: replace $n'$ in Eq. (89) by the number $n'_{\mathrm{eff}}(i,d)$ of $K'$-sources that may effectively be associated with $M_i$ and its $d-1$ nearest neighbors in $K$; i.e., dropping the superscript "w", define
\begin{equation}
p_d(i, j) := \frac{\zeta_{i,j}\,\sum_{\substack{j_2=0\\ j_2\notin \widetilde X^{\star}_1}}^{n'} \cdots \sum_{\substack{j_d=0\\ j_d\notin \widetilde X^{\star}_{d-1}}}^{n'}\,\prod_{k=2}^{d} \widetilde\eta^{\,\star}_{k,j_k}}{\sum_{\substack{j_1=0\\ j_1\notin \widetilde X_0}}^{n'}\,\sum_{\substack{j_2=0\\ j_2\notin \widetilde X_1}}^{n'} \cdots \sum_{\substack{j_d=0\\ j_d\notin \widetilde X_{d-1}}}^{n'}\,\prod_{k=1}^{d} \widetilde\eta_{k,j_k}}, \tag{96}
\end{equation}
where
\begin{equation}
\left.
\begin{aligned}
&\widetilde\eta_{k,0} := \widetilde\eta^{\,\star}_{k,0} := \zeta_{\phi(k),0}; \qquad \widetilde\eta^{\,\star}_{k,j_k} := \frac{f\,\xi_{\phi(k),j_k}}{n'_{\mathrm{eff}}(i,d) - \#\widetilde X^{\star}_{k-1}}\\
&\text{and}\quad \widetilde\eta_{k,j_k} := \frac{f\,\xi_{\phi(k),j_k}}{n'_{\mathrm{eff}}(i,d) - \#\widetilde X_{k-1}} \quad \text{for } j_k \neq 0,
\end{aligned}
\right\} \tag{97}
\end{equation}
and use $p_{\mathrm{o:o}}(i,j) := p_{d_{\min}(i)}(i,j)$, where $d_{\min}(i)$ is defined by Eq. (90), to evaluate $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$.

An estimate of $n'_{\mathrm{eff}}$ is given by^7
\begin{equation}
n'_{\mathrm{eff}}(i,d) = n' - \sum_{k=d+1}^{n}\left(1 - P_{\mathrm{o:o}}[A_{\phi(k),0} \mid C \cap C']\right). \tag{98}
\end{equation}
The sum in Eq. (98) is nothing but the typical number of counterparts in $K'$ associated with distant $K$-sources. Note that $n'_{\mathrm{eff}}(i, d = n) = n'$, so we recover the theoretical result for $P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ when all sources are considered. As $P_{\mathrm{o:o}}$ depends on $n'_{\mathrm{eff}}$, which in turn depends on $P_{\mathrm{o:o}}$, both may be computed with a back-and-forth iteration; this procedure converges in a few steps if, instead of $P_{\mathrm{o:o}}$, the value of $P_{\mathrm{s:o}}$ is taken to initiate the sequence.
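The role of Eq. (98) can be illustrated with a toy computation (the helper below is ours): when each distant $K$-source has a counterpart with probability $f$, i.e., $P_{\mathrm{o:o}}[A_{\phi(k),0}] = 1 - f$, the sum equals $f\,(n-d)$ and Eq. (98) reduces to the approximation $n'_{\mathrm{eff}}(i,d) \approx n' - f\,(n-d)$ of footnote 7.

```python
def n_eff(nprime, p_no_counterpart):
    """Eq. (98) (sketch): subtract from n' the expected number of K'-sources
    tied to the distant K-sources M_phi(d+1), ..., M_phi(n);
    p_no_counterpart[k] is P_oto(A_{phi(k),0} | C ∩ C') for those sources."""
    return nprime - sum(1.0 - p for p in p_no_counterpart)

n, nprime, d, f = 100, 120, 10, 0.3
p_distant = [1.0 - f] * (n - d)          # each distant K-source: counterpart prob. f
approx = nprime - f * (n - d)            # footnote 7 approximation
assert abs(n_eff(nprime, p_distant) - approx) < 1e-9
```

For $d = n$ the sum is empty and `n_eff` returns $n'$, as stated in the text.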

5.5. Tests of Aspects

As computations made under assumption $(H_{\mathrm{o:o}})$ are complex (they involve recursive sums, for instance), we made several consistency checks of the code. In particular, we swapped $K$ and $K'$ for $n \neq n'$ and compared the quantities resulting from this swap (written with the superscript "$\leftrightarrow$") to the original ones: within numerical errors, $\hat f'^{\leftrightarrow}_{\mathrm{o:o}} = \hat f_{\mathrm{o:o}}$ and, for $f'^{\leftrightarrow} = f$, we get $L^{\leftrightarrow}_{\mathrm{o:o}} = L_{\mathrm{o:o}}$ and $P^{\leftrightarrow}_{\mathrm{o:o}}(A_{j,i} \mid C' \cap C) = P_{\mathrm{o:o}}(A_{i,j} \mid C \cap C')$ for all $(M_i, M'_j)$.

We moreover checked numerically for small $n$ and $n'$ ($\leqslant 5$) that Eq. (73) and the integral of Eq. (80) with respect to $f$ are consistent, and that Aspects returns the same value as Mathematica (Wolfram 1996). For even smaller $n$ and $n'$ ($\leqslant 3$), we confirmed that analytical expressions obtained by hand from the enumeration of all possible associations between $K$ and $K'$ are identical to Mathematica's symbolic calculations. For the large $n$ and $n'$ of practical interest, although we did not give a formal proof of the solution of Sect. 5.4.3, the analysis of simulations (Sect. 6) makes us confident in the code.

6. Simulations

In this section, we analyze the behavior of various estimators of the unknown parameters. Because of the complexity of the expressions we obtained, we did not attempt an analytical study but used simulations. We also compare the likelihoods of the assumptions $(H_{\mathrm{s:o}})$, $(H_{\mathrm{o:o}})$, and $(H_{\mathrm{o:s}})$, given the data.

6.1. Creation of mock catalogs

We built all-sky mock catalogs with Aspects in the cases of several-to-one and one-to-one associations. To do this, we first selected the indices of $f\,n$ objects in $K$ and randomly associated the index of a counterpart in $K'$ to each of them; for one-to-one simulations, a given $K'$-source was associated at most once. We then drew the true positions of $K$-sources uniformly on the sky. The true positions of $K'$-sources without a counterpart were drawn in the same way; for sources with a counterpart, we took the true position of their counterpart. The observed positions of $K$- and $K'$-sources were finally computed from the true positions for given parameters $(a_i, b_i, \beta_i)$ and $(a'_j, b'_j, \beta'_j)$ of the positional uncertainty ellipses (see Appendix A.2.1).
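A minimal version of this recipe may be sketched as follows for one-to-one associations. This is our own illustration, not the Aspects code (which is Fortran 90 and works on the sphere): it uses a flat unit square instead of the celestial sphere and circular Gaussian errors of widths `sigma` and `sigma_p` instead of general ellipses.

```python
import numpy as np

def mock_catalogs_oto(n, nprime, f, sigma, sigma_p, rng):
    """One-to-one mock catalogs on a flat unit square (illustrative sketch)."""
    true_K = rng.uniform(0.0, 1.0, size=(n, 2))        # true K positions
    true_Kp = rng.uniform(0.0, 1.0, size=(nprime, 2))  # K'-sources without counterpart
    n_assoc = int(round(f * n))
    # each of the first n_assoc K-sources gets a distinct K' counterpart,
    # which inherits its true position (one-to-one: no K' index repeats)
    idx = rng.choice(nprime, size=n_assoc, replace=False)
    true_Kp[idx] = true_K[:n_assoc]
    # observed positions = true positions + circular Gaussian errors
    obs_K = true_K + rng.normal(0.0, sigma, size=(n, 2))
    obs_Kp = true_Kp + rng.normal(0.0, sigma_p, size=(nprime, 2))
    return obs_K, obs_Kp, idx
```

With small errors, the observed positions of associated pairs remain close, while unassociated sources are unrelated.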

6.2. Estimation of f if positional uncertainty ellipses are known and circular

Mock catalogs were created with $a_i = b_i = \sigma$ (see notations in Appendix A.2.1) for all $M_i \in K$ and with $a'_j = b'_j = \sigma'$ for all $M'_j \in K'$. Positional uncertainty ellipses are therefore circular here. Only two parameters matter in that case: $f$ and
\begin{equation}
\mathring\sigma := \sqrt{\sigma^2 + \sigma'^2}. \tag{99}
\end{equation}
Hundreds of simulations were run for $f = 1/2$, $n' = 10^5$, $\mathring\sigma = 10^{-3}$ rad, and $n \in [[10^3, 10^5]]$. We analyzed them with Aspects, knowing the positional uncertainties, and plot the mean value of the estimators of $f$ listed in Sect. 5.4.2 as a function of $n$ in Fig. 2. This time, however, we replaced $\hat f^{\mathrm w}_{\mathrm{o:o}}$ by the estimator $\hat f_{\mathrm{o:o}}$ computed from the $p_{\mathrm{o:o}}$.

For several-to-one simulations, $\hat f_{\mathrm{s:o}}$ is by far the best estimator of $f$ and does not show any significant bias, whatever the value of $n$. The estimators $\hat f_{\mathrm{o:o}}$ and $\hat f_{\mathrm{o:s}}$ do not recover the input value of $f$, which is not surprising since they are not built from the right assumption here; moreover, while $\hat f_{\mathrm{s:o}}$, $\hat f'_{\mathrm{o:s}}$, and $\hat f_{\mathrm{o:o}}$ are obtained by maximizing $L_{\mathrm{s:o}}$, $L_{\mathrm{o:s}}$, and $L_{\mathrm{o:o}}$, respectively, $\hat f_{\mathrm{o:s}}$ is not directly fitted to the data.

For one-to-one simulations, and unlike $\hat f^{\mathrm w}_{\mathrm{o:o}}$, $\hat f_{\mathrm{o:o}}$ is a consistent estimator of $f$, as expected. Puzzlingly, $\hat f_{\mathrm{s:o}}$ also works very well, maybe because $(H_{\mathrm{s:o}})$ is a more relaxed assumption than $(H_{\mathrm{o:o}})$; whatever the reason, this is not a problem.

6.3. Simultaneous estimation of f and $\mathring\sigma$

6.3.1. Circular positional uncertainty ellipses

How do the estimators of $f$ and $\mathring\sigma$ behave when the true values of the positional uncertainties are also unknown? We show in Fig. 3 the result of simulations with the same input as in Sect. 6.2, except that $n = n' = 2\times10^4$. The likelihood $L_{\mathrm{s:o}}$ peaks very close to the input value of $\vec x := (f, \mathring\sigma)$ for both types of simulations: $\hat{\vec x}_{\mathrm{s:o}}$ is still an unbiased estimator of $\vec x$. For one-to-one simulations, $L_{\mathrm{o:o}}$ is also maximal near the input value of $\vec x$, so $\hat{\vec x}_{\mathrm{o:o}}$ is unbiased too.

Fig. 3. Contour lines of $L_{\mathrm{s:o}}$ (solid) and $L_{\mathrm{o:o}}$ (dashed) in the $(f, \mathring\sigma)$ plane. Input parameters are the same as in Fig. 2, except that $n = n' = 2\times10^4$; the input values of $f$ and $\mathring\sigma$ are indicated by dotted lines (see Sect. 6.3.1 for details). a) Several-to-one simulations. b) One-to-one simulations.

Fig. 4. Contour lines of $L_{\mathrm{s:o}}$ (solid) and $L_{\mathrm{o:o}}$ (dashed) in the $(f, \mathring\sigma)$ plane. Input parameters are the same as in Fig. 2, except that positional uncertainty ellipses are elongated and randomly oriented (see Sect. 6.3.2 for details); the input value of $f$ is indicated by a dotted line. a) Several-to-one simulations. b) One-to-one simulations.

6.3.2. Elongated positional uncertainty ellipses

To test the robustness of the estimators of $f$, we ran simulations with the same parameters, but with elongated positional uncertainty ellipses: we took $a_i = a'_j = 1.5\times10^{-3}$ rad and $b_i = b'_j = a_i/3$ for all $(M_i, M'_j) \in K \times K'$. These ellipses were randomly oriented; i.e., the position angles (cf. Appendix A.2.1) $\beta_i$ and $\beta'_j$ have uniform random values in $[0, \pi[$. We then estimated $f$ without knowledge of these positional uncertainties (see Fig. 4).

Although the model from which the parameters are fitted is inaccurate here (the $\xi_{i,j}$ are computed assuming circular positional uncertainties instead of the unknown elliptical ones), the input value of $f$ is still recovered by $\hat f_{\mathrm{s:o}}$ for both types of simulations, and by $\hat f_{\mathrm{o:o}}$ for one-to-one simulations. The fit also provides the typical positional uncertainty $\mathring\sigma$ on the relative positions of associated sources.

6.4. Choice of association model

Now, given the two catalogs, which assumption should we adopt to compute the probabilities $P(A_{i,j} \mid C \cap C')$: several-to-one, one-to-one, or one-to-several? As shown in Fig. 5, for known positional uncertainties and a given $n'$, source confusion is rare at low values of $n$ (there is typically at most one possible counterpart) and all assumptions are equally likely. At larger $n$, $\hat L_{\mathrm{s:o}} > \hat L_{\mathrm{o:o}} > \hat L_{\mathrm{o:s}}$ for several-to-one simulations; as expected, for one-to-one simulations, $\hat L_{\mathrm{o:o}} > \hat L_{\mathrm{s:o}}$ and $\hat L_{\mathrm{o:o}} > \hat L_{\mathrm{o:s}}$, with $\hat L_{\mathrm{s:o}} \approx \hat L_{\mathrm{o:s}}$ for $n = n'$. In all cases, on average, the right assumption is the most likely. This also holds when positional uncertainties are unknown (Sect. 6.3).

The calculation of $L_{\mathrm{o:o}}$ is lengthy; as a substitute for the comparison of the likelihoods, the following procedure may be applied to select the most appropriate assumption to compute the probabilities of association: if $\hat f_{\mathrm{s:o}}\,n \approx \hat f'_{\mathrm{o:s}}\,n'$, use $(H_{\mathrm{o:o}})$; otherwise, use $(H_{\mathrm{s:o}})$ if $\hat f_{\mathrm{s:o}}\,n > \hat f'_{\mathrm{o:s}}\,n'$, and $(H_{\mathrm{o:s}})$ if $\hat f_{\mathrm{s:o}}\,n < \hat f'_{\mathrm{o:s}}\,n'$.
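This selection rule translates into a few lines. The sketch below is ours; in particular, the tolerance `rtol` used to decide whether $\hat f_{\mathrm{s:o}}\,n \approx \hat f'_{\mathrm{o:s}}\,n'$ is an assumption, since the paper does not specify a numerical criterion.

```python
def choose_assumption(f_sto, n, fp_ots, nprime, rtol=0.05):
    """Model-selection shortcut of Sect. 6.4 (sketch).
    f_sto * n  : expected number of K-sources with a counterpart under (H_s:o);
    fp_ots * n': expected number of K'-sources with a counterpart under (H_o:s).
    rtol is a hypothetical tolerance for the 'approximately equal' test."""
    lhs, rhs = f_sto * n, fp_ots * nprime
    if abs(lhs - rhs) <= rtol * max(lhs, rhs):
        return "o:o"
    return "s:o" if lhs > rhs else "o:s"
```

For instance, balanced counts select the one-to-one model, whereas an excess of associated $K$-sources selects the several-to-one model.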

Fig. 5. Normalized average maximum value $\hat L$ of different likelihoods as a function of $n$, compared to $\hat L_{\mathrm{o:o}}$. Simulations are the same as in Fig. 2. a) Several-to-one simulations. b) One-to-one simulations.

7. Conclusion

In this paper, we computed the probabilities of positional association of sources between two catalogs $K$ and $K'$ under two different assumptions: first, the easy case where several $K$-objects may share the same counterpart in $K'$; then, the more natural but numerically intensive case of one-to-one associations only between $K$ and $K'$.

These probabilities depend on at least one unknown parameter: the fraction of sources with a counterpart. If the positional uncertainties are unknown, other parameters are required to compute the probabilities. We calculated the likelihood of observing all the $K$- and $K'$-sources at their effective positions under each of the two assumptions described above, and estimated the unknown parameters by maximizing these likelihoods. The latter are also used to select the best association model.

These relations were implemented in a code, Aspects, which we make public and with which we analyzed all-sky several-to-one and one-to-one simulations. In all cases, the assumption with the highest likelihood is the right one, and the estimators of the unknown parameters obtained under it do not show any bias.

In the simulations, we assumed that the density of $K$- and $K'$-sources was uniform on the sky area $S$: the quantities $\xi_{i,0}$ and $\xi_{0,j}$ used to compute the probabilities are then equal to $1/S$. If the density of objects is not uniform, we might take $\xi_{i,0} = \rho(M_i)/n$ and $\xi_{0,j} = \rho'(M'_j)/n'$, where $\rho$ and $\rho'$ are, respectively, the local surface densities of $K$- and $K'$-sources; but if the $\rho'\!/\rho$ ratio varies on the sky, so will the fraction of sources with a counterpart, something we did not try to model. Considering clustering or the side effects^8 due to a small $S$, as well as taking priors on the SED of objects into account, was also beyond the scope of this paper.

In spite of these limitations, Aspects is a robust tool that should help astronomers cross-identify astrophysical sources automatically, efficiently and reliably.


1. For instance, de Ruiter et al. (1977) wrongly state that, if there is a counterpart, the closest object is always the right one.

2. For the sake of clarity, we mention that we adopt the same decreasing order of precedence for operators as in Mathematica (Wolfram 1996): $\times$ and $/$; $\prod$; $\sum$; $+$ and $-$.

3. Computing $P_{\mathrm{s:o}}(C \mid C')$ is easier than computing $P_{\mathrm{s:o}}(C' \mid C)$: the latter would require calculating $P_{\mathrm{s:o}}(c'_\ell \mid \bigcap_{k=1;\,j_k=\ell}^{n}[c_k \cap A_{k,j_k}])$ (cf. Eq. (9)) because several $M_k$ might be associated with the same $M'_\ell$. This does not matter for computations made under assumption $(H_{\mathrm{o:o}})$.

4. Fortran 90 routines from Numerical Recipes (Press et al. 1992) are used to sort arrays and locate a value in an ordered table. Because of license constraints, we cannot provide them, but they may easily be replaced by free equivalents.

5. A consistent estimator is a statistic converging to the true value of a parameter when the size of the sample from which it is derived increases. The concept of consistency is not very clear in the context of this paper, since there are two sample sizes, $n$ and $n'$.

6. Small positional uncertainties could also be used if sources were distributed on a small fraction of the sky, but there might be side effects.

7. Equation (98) is valid for any $f \in [0,1]$. When $f \approx \hat f_{\mathrm{o:o}}$, it is more efficient to make the approximation $n'_{\mathrm{eff}}(i,d) \approx n' - f\,(n-d)$: this expression accelerates the convergence to $\hat f_{\mathrm{o:o}}$ of the sequence $(f_k)$ defined in Sect. 5.3.

8. The impact of clustering or of side effects on the estimators of unknown parameters might, however, easily be tested through simulations.

9. None of the results established outside of Appendix A depends on this assumption.

10. If it were not the case, the probability distributions of $\vec r_i - \vec r_{0,i}$ and $\vec r'_j - \vec r'_{0,j}$ might be modeled using Kent (1982) distributions (an adaptation of the planar normal law to the sphere), but no result like Eq. (A.8) would then hold: unlike Gaussians, Kent distributions are not stable.

11. We seize this opportunity to correct Eqs. (A.8) to (A.11) of Pineau et al. (2011): and should be replaced by their squares in these formulae.

12. However, as noticed by de Vaucouleurs & Head (1978) in a different context, if three samples with unknown uncertainties $\sigma_i$ ($i \in [[1,3]]$) are available, and if the combined uncertainties $\sigma_{i,j} := (\sigma_i^2 + \sigma_j^2)^{1/2}$ may be estimated for all the pairs $(i,j)$, $j \neq i$, as in our case, then $\sigma_i$ may be determined for each sample. Paturel & Petit (1999) used this technique to compute the accuracy of galaxy coordinates.

Acknowledgments

The initial phase of this work took place at the NASA/ Goddard Space Flight Center, under the supervision of Eli Dwek, and was supported by the National Research Council through the Resident Research Associateship Program. We acknowledge them sincerely. We also thank Stéphane Colombi for the discussions we had on the properties of maximum likelihood estimators.

References

  1. Bartlett, J. G., & Egret, D. 1998, in New Horizons from Multi-Wavelength Sky Surveys, eds. B. J. McLean, D. A. Golombek, J. J. E. Hayes, & H. E. Payne, IAU Symp., 179, 437
  2. Bauer, F. E., Condon, J. J., Thuan, T. X., & Broderick, J. J. 2000, ApJS, 129, 547
  3. Benn, C. R. 1983, The Observatory, 103, 150
  4. Brand, K., Brown, M. J. I., Dey, A., et al. 2006, ApJ, 641, 140
  5. Budavári, T., & Szalay, A. S. 2008, ApJ, 679, 301
  6. Condon, J. J., Balonek, T. J., & Jauncey, D. L. 1975, AJ, 80, 887
  7. Condon, J. J., Anderson, E., & Broderick, J. J. 1995, AJ, 109, 2318
  8. de Ruiter, H. R., Arp, H. C., & Willis, A. G. 1977, A&AS, 28, 211
  9. de Vaucouleurs, G., & Head, C. 1978, ApJS, 36, 439
  10. de Vaucouleurs, G., de Vaucouleurs, A., Corwin, Jr., H. G., et al. 1991, Third Reference Catalogue of Bright Galaxies (New York: Springer)
  11. Fioc, M. 2014, Aspects: code documentation and complements [arXiv:1404.4224]
  12. Fleuren, S., Sutherland, W., Dunne, L., et al. 2012, MNRAS, 423, 2407
  13. Haakonsen, C. B., & Rutledge, R. E. 2009, ApJS, 184, 138
  14. Kendall, M., & Stuart, A. 1979, The Advanced Theory of Statistics. Vol. 2: Inference and Relationship (London: Griffin)
  15. Kent, J. T. 1982, J. Roy. Stat. Soc. Ser. B, 44, 71
  16. Kim, S., Wardlow, J. L., Cooray, A., et al. 2012, ApJ, 756, 28
  17. Kuchinski, L. E., Freedman, W. L., Madore, B. F., et al. 2000, ApJS, 131, 441
  18. McAlpine, K., Smith, D. J. B., Jarvis, M. J., Bonfield, D. G., & Fleuren, S. 2012, MNRAS, 423, 132
  19. Moshir, M., Kopman, G., & Conrow, T. A. O. 1992, IRAS Faint Source Survey, Explanatory Supplement Version 2 (IPAC)
  20. Moshir, M., Copan, G., Conrow, T., et al. 1993, VizieR Online Data Catalog: II/156
  21. Paturel, G., & Petit, C. 1999, A&A, 352, 431
  22. Paturel, G., Bottinelli, L., & Gouguenheim, L. 1995, Astrophys. Lett. Commun., 31, 13
  23. Paturel, G., Petit, C., Prugniel, P., et al. 2003, VizieR Online Data Catalog: VII/237
  24. Pineau, F.-X., Motch, C., Carrera, F., et al. 2011, A&A, 527, A126
  25. Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in Fortran. The Art of Scientific Computing (Cambridge: Cambridge University Press)
  26. Prestage, R. M., & Peacock, J. A. 1983, MNRAS, 204, 355
  27. Rohde, D. J., Gallagher, M. R., Drinkwater, M. J., & Pimbblet, K. A. 2006, MNRAS, 369, 2
  28. Roseboom, I. G., Oliver, S., Parkinson, D., & Vaccari, M. 2009, MNRAS, 400, 1062
  29. Rutledge, R. E., Brunner, R. J., Prince, T. A., & Lonsdale, C. 2000, ApJS, 131, 335
  30. Sutherland, W., & Saunders, W. 1992, MNRAS, 259, 413
  31. Vignali, C., Fiore, F., Comastri, A., et al. 2009, in Multi-wavelength Astronomy and Virtual Observatory (European Space Agency), eds. D. Baines, & P. Osuna, 53
  32. Wolfram, S. 1996, The Mathematica Book (Cambridge: Cambridge University Press)
  33. Wolstencroft, R. D., Savage, A., Clowes, R. G., et al. 1986, MNRAS, 223, 279

Appendix A: Probability distribution of the observed relative positions of associated sources

Appendix A.1: Properties of normal laws

We first recall a few standard results. The probability that an $m$-dimensional normally distributed random vector $\vec W$ of mean $\vec\mu$ and variance $\Gamma$ falls in some domain $\Omega$ is
\begin{equation}
P(\vec W \in \Omega) = \int_{\vec w \in \Omega} \frac{\exp\left(-\frac{1}{2}\,{}^{\mathrm t}[\vec w - \vec\mu]_B \cdot \Gamma_B^{-1} \cdot [\vec w - \vec\mu]_B\right)}{(2\pi)^{m/2}\,\sqrt{\det \Gamma_B}}\,\mathrm d^m \vec w_B, \tag{A.1}
\end{equation}
where $B := (\vec u_1, \ldots, \vec u_m)$ is a basis, $\vec w$ is a vector, $\vec w_B = {}^{\mathrm t}(w_1, \ldots, w_m)$ (resp. $\vec\mu_B$) is the column-vector expression of $\vec w$ (resp. $\vec\mu$) in $B$, $\mathrm d^m\vec w_B := \prod_{i=1}^m \mathrm d w_i$, and $\Gamma_B$ is the covariance matrix of $\vec W$ (i.e., the matrix representation of $\Gamma$) in $B$. We denote this by $\vec W \sim G_m(\vec\mu, \Gamma)$.

In another basis $B' := (\vec u'_1, \ldots, \vec u'_m)$, we have $\vec w_B = T_{B\rightarrow B'} \cdot \vec w_{B'}$, where $T_{B\rightarrow B'}$ is the transformation matrix from $B$ to $B'$ (i.e., $\vec u'_j = \sum_{i=1}^m (T_{B\rightarrow B'})_{i,j}\,\vec u_i$). Since $\mathrm d^m\vec w_B = |\det T_{B\rightarrow B'}|\,\mathrm d^m\vec w_{B'}$ and
\begin{equation}
{}^{\mathrm t}(\vec w-\vec\mu)_B \cdot \Gamma_B^{-1} \cdot (\vec w-\vec\mu)_B = {}^{\mathrm t}(\vec w-\vec\mu)_{B'} \cdot \left(T_{B\rightarrow B'}^{-1} \cdot \Gamma_B \cdot {}^{\mathrm t}[T_{B\rightarrow B'}^{-1}]\right)^{-1} \cdot (\vec w-\vec\mu)_{B'}, \tag{A.2}
\end{equation}
we still obtain
\begin{equation}
P(\vec W \in \Omega) = \int_{\vec w \in \Omega} \frac{\exp\left(-\frac{1}{2}\,{}^{\mathrm t}[\vec w-\vec\mu]_{B'} \cdot \Gamma_{B'}^{-1} \cdot [\vec w-\vec\mu]_{B'}\right)}{(2\pi)^{m/2}\,\sqrt{\det \Gamma_{B'}}}\,\mathrm d^m\vec w_{B'}, \tag{A.3}
\end{equation}
where $\Gamma_{B'} := T_{B\rightarrow B'}^{-1} \cdot \Gamma_B \cdot {}^{\mathrm t}(T_{B\rightarrow B'}^{-1})$ is the covariance matrix of $\vec W$ in $B'$. In the following, $B$ and $B'$ are orthonormal bases, so $T_{B\rightarrow B'}$ is a rotation matrix. From ${}^{\mathrm t}T_{B\rightarrow B'} = T_{B\rightarrow B'}^{-1}$, we get
\begin{equation}
\Gamma_{B'} = {}^{\mathrm t}T_{B\rightarrow B'} \cdot \Gamma_B \cdot T_{B\rightarrow B'}. \tag{A.4}
\end{equation}
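Equation (A.4) is easy to check numerically for a rotation; the matrices below are arbitrary examples of ours.

```python
import numpy as np

theta = 0.3  # arbitrary rotation angle
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # T_{B->B'}; rotation, so t(T) = T^{-1}
Gamma_B = np.array([[2.0, 0.4],
                    [0.4, 0.5]])                 # covariance matrix in basis B
Gamma_Bp = T.T @ Gamma_B @ T                     # Eq. (A.4)
# agrees with the general change of basis Gamma_B' = T^{-1} . Gamma_B . t(T^{-1})
assert np.allclose(Gamma_Bp, np.linalg.inv(T) @ Gamma_B @ np.linalg.inv(T).T)
# the determinant (hence the normalization of Eq. (A.3)) is invariant under rotation
assert np.isclose(np.linalg.det(Gamma_Bp), np.linalg.det(Gamma_B))
```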

For independent random vectors W_1 ~ G_m(μ_1, Γ_1) and W_2 ~ G_m(μ_2, Γ_2), we have
\begin{equation}
\vec W_1 \pm \vec W_2 \sim G_m(\vec\mu_1 \pm \vec\mu_2, \Gamma_1 + \Gamma_2). \tag{A.5}
\end{equation}
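Equation (A.5) is easily checked by Monte Carlo (a numpy sketch with arbitrary parameters, not part of the paper): the difference of two independent Gaussian samples has mean μ_1 − μ_2 and covariance Γ_1 + Γ_2.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, cov1 = np.array([1.0, 0.0]),  np.array([[1.0, 0.2], [0.2, 0.5]])
mu2, cov2 = np.array([-0.5, 2.0]), np.array([[0.8, -0.1], [-0.1, 1.5]])

w1 = rng.multivariate_normal(mu1, cov1, size=200_000)
w2 = rng.multivariate_normal(mu2, cov2, size=200_000)
diff = w1 - w2

# Sample mean and covariance should match G_2(mu1 - mu2, cov1 + cov2), Eq. (A.5).
assert np.allclose(diff.mean(axis=0), mu1 - mu2, atol=0.03)
assert np.allclose(np.cov(diff.T), cov1 + cov2, atol=0.03)
```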

Appendix A.2: Covariance matrix of the probability distribution of relative positions

We now use these results to derive the probability distribution of the vector r_{i,j} := r′_j − r_i, where r_i and r′_j are, respectively, the observed positions of source M_i of K and of its counterpart M′_j in K′. Introducing the true positions r_{0,i} and r′_{0,j} of M_i and M′_j, we have
\begin{equation}
\vec r_{i,j} = (\vec r'_j - \vec r'_{0,j}) + (\vec r'_{0,j} - \vec r_{0,i}) + (\vec r_{0,i} - \vec r_i). \tag{A.6}
\end{equation}

Appendix A.2.1: Covariance matrix for identical true positions and known positional uncertainties

Assume, as is usual, that
\begin{equation}
\vec r_i - \vec r_{0,i} \sim G_2(\vec 0, \Gamma_i) \qquad\text{and}\qquad \vec r'_j - \vec r'_{0,j} \sim G_2(\vec 0, \Gamma'_j). \tag{A.7}
\end{equation}
If the true positions of M_i and M′_j are identical (case of point sources), then, from Eqs. (A.5)–(A.7),
\begin{equation}
\vec r_{i,j} \sim G_2(\vec 0, \Gamma_{i,j}), \quad\text{where } \Gamma_{i,j} := \Gamma_i + \Gamma'_j. \tag{A.8}
\end{equation}
(See also Condon et al. 1995.) In Eqs. (A.7), r_i − r_{0,i} and r′_j − r′_{0,j} must be considered as the projections (gnomonic ones, for instance) of these vectors on the planes tangent to the sphere at M_i and M′_j, respectively; Eqs. (A.7) are approximations, valid only because positional uncertainties are small. Equation (A.8) is also an approximation: it is appropriate because the observed positions of associated sources M_i and M′_j are close, so the tangent planes to the sphere at both points nearly coincide.

To use Eq. (A.8), we now compute the column vector expression of r_{i,j} and the covariance matrices associated with Γ_i, Γ′_j, and Γ_{i,j} in some common basis. For convenience, in the following, we drop the subscript and the prime symbol whenever an expression depends only on either M_i or M′_j.

Let (u_x, u_y, u_z) be a direct orthonormal basis, with u_z oriented from the Earth's center O to the North Celestial Pole and u_x from O to the Vernal Point. At a point M of right ascension α and declination δ, a direct orthonormal basis (u_r, u_α, u_δ) is defined by
\begin{align}
\vec u_r &:= \frac{\vec{OM}}{\lVert\vec{OM}\rVert} = \cos\delta\,\cos\alpha\,\vec u_x + \cos\delta\,\sin\alpha\,\vec u_y + \sin\delta\,\vec u_z, \tag{A.9}\\
\vec u_\alpha &:= \frac{\partial\vec u_r/\partial\alpha}{\lVert\partial\vec u_r/\partial\alpha\rVert} = -\sin\alpha\,\vec u_x + \cos\alpha\,\vec u_y, \tag{A.10}\\
\vec u_\delta &:= \frac{\partial\vec u_r/\partial\delta}{\lVert\partial\vec u_r/\partial\delta\rVert} = -\sin\delta\,\cos\alpha\,\vec u_x - \sin\delta\,\sin\alpha\,\vec u_y + \cos\delta\,\vec u_z. \tag{A.11}
\end{align}
The uncertainty ellipse on the position of M is characterized by the lengths a and b of its semi-major and semi-minor axes, and by the position angle β between the north and the semi-major axis. Let u_a and u_b be unit vectors directed along the major and the minor axes, respectively, such that (u_r, u_a, u_b) is a direct orthonormal basis and that β := ∠(u_δ, u_a) is in [0, π[ when counted eastward. Since (u_α, u_δ) is obtained from (u_a, u_b) by a (β − π/2)-counterclockwise rotation in the plane oriented by +u_r, we have T_{(u_a, u_b)→(u_α, u_δ)} = Rot(β − π/2), where, for any angle τ,
\begin{equation}
\operatorname{Rot}\tau := \begin{pmatrix} \cos\tau & -\sin\tau \\ \sin\tau & \cos\tau \end{pmatrix}. \tag{A.12}
\end{equation}
Using the notation
\begin{equation}
\operatorname{Diag}(d_1, d_2) := \begin{pmatrix} d_1 & 0 \\ 0 & d_2 \end{pmatrix} \tag{A.13}
\end{equation}
for diagonal matrices, we have Γ_{(u_a, u_b)} = Diag(a², b²) and
\begin{equation}
\Gamma_{(\vec u_\alpha,\,\vec u_\delta)} = \operatorname{Rot}^{\mathrm t}(\beta-\pi/2) \cdot \operatorname{Diag}\left(a^2, b^2\right) \cdot \operatorname{Rot}(\beta-\pi/2). \tag{A.14}
\end{equation}
As noticed by Pineau et al. (2011), near the Poles, even for sources M_i and M′_j close to each other, we may have (u_{α,i}, u_{δ,i}) ≉ (u_{α′,j}, u_{δ′,j}): the covariance matrices (Γ_i)_{(u_{α,i}, u_{δ,i})} and (Γ′_j)_{(u_{α′,j}, u_{δ′,j})} must therefore first be converted to a common basis before their summation in Eq. (A.8). We use the same basis as Pineau et al. (2011), denoted by (t, n) below. While the results we get are intrinsically the same, some readers may find our expressions more convenient.
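The construction of Eq. (A.14), building a covariance matrix from the error-ellipse parameters (a, b, β), can be sketched as follows (a numpy illustration with arbitrary values, not part of the paper):

```python
import numpy as np

def rot(tau):
    """Rotation matrix of Eq. (A.12)."""
    return np.array([[np.cos(tau), -np.sin(tau)],
                     [np.sin(tau),  np.cos(tau)]])

def cov_alpha_delta(a, b, beta):
    """Positional covariance in the (u_alpha, u_delta) basis from an error
    ellipse with semi-axes a, b and position angle beta (from north,
    eastward), following Eq. (A.14)."""
    R = rot(beta - np.pi / 2)
    return R.T @ np.diag([a**2, b**2]) @ R

# beta = 0: the major axis points north, so the variance along u_delta is a^2.
assert np.allclose(cov_alpha_delta(2.0, 1.0, 0.0), np.diag([1.0, 4.0]))
# A circular error ellipse is isotropic whatever beta.
assert np.allclose(cov_alpha_delta(1e-3, 1e-3, 0.4), 1e-6 * np.eye(2))
```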

Denote by n := u_{r,i} × u_{r′,j} / ∥u_{r,i} × u_{r′,j}∥ a unit vector perpendicular to the plane (O, M_i, M′_j). Because ψ_{i,j} := ∠(u_{r,i}, u_{r′,j}) ∈ [0, π], we have u_{r,i} · u_{r′,j} = cos ψ_{i,j} and ∥u_{r,i} × u_{r′,j}∥ = sin ψ_{i,j}, so
\begin{equation}
\psi_{i,j} = \arccos\left(\cos\delta_i\,\cos\delta'_j\,\cos[\alpha'_j-\alpha_i] + \sin\delta_i\,\sin\delta'_j\right), \tag{A.15}
\end{equation}
and
\begin{equation}
\vec n = \frac{\vec u_{r,i} \times \vec u_{r',j}}{\sin\psi_{i,j}}\cdot \tag{A.16}
\end{equation}
Let γ_i := ∠(n, u_{δ,i}) and γ′_j := ∠(n, u_{δ′,j}) be angles oriented clockwise around +u_{r,i} and +u_{r′,j}, respectively. Angle γ_i is fully determined by the following expressions (cf. Eqs. (A.16) and (A.9)–(A.11)):
\begin{align}
\cos\gamma_i &= \vec n \cdot \vec u_{\delta,i} = \frac{(\vec u_{r,i} \times \vec u_{r',j}) \cdot \vec u_{\delta,i}}{\sin\psi_{i,j}} = \frac{(\vec u_{\delta,i} \times \vec u_{r,i}) \cdot \vec u_{r',j}}{\sin\psi_{i,j}} \notag\\
&= \frac{\vec u_{\alpha,i} \cdot \vec u_{r',j}}{\sin\psi_{i,j}} = \frac{\cos\delta'_j\,\sin(\alpha'_j-\alpha_i)}{\sin\psi_{i,j}}; \tag{A.17}\\
\sin\gamma_i &= -\vec n \cdot \vec u_{\alpha,i} = -\frac{(\vec u_{r,i} \times \vec u_{r',j}) \cdot \vec u_{\alpha,i}}{\sin\psi_{i,j}} = -\frac{(\vec u_{\alpha,i} \times \vec u_{r,i}) \cdot \vec u_{r',j}}{\sin\psi_{i,j}} \notag\\
&= \frac{\vec u_{\delta,i} \cdot \vec u_{r',j}}{\sin\psi_{i,j}} = \frac{\cos\delta_i\,\sin\delta'_j - \sin\delta_i\,\cos\delta'_j\,\cos(\alpha'_j-\alpha_i)}{\sin\psi_{i,j}}\cdot \tag{A.18}
\end{align}
Similarly,
\begin{align}
\cos\gamma'_j &= \frac{\cos\delta_i\,\sin(\alpha'_j-\alpha_i)}{\sin\psi_{i,j}} \quad\text{and} \notag\\
\sin\gamma'_j &= \frac{\cos\delta_i\,\sin\delta'_j\,\cos(\alpha'_j-\alpha_i) - \sin\delta_i\,\cos\delta'_j}{\sin\psi_{i,j}}\cdot \tag{A.19}
\end{align}
Let
\begin{equation}
\vec t := \vec n \times \vec u_{r,i} \tag{A.20}
\end{equation}
(≈ n × u_{r′,j}, since M_i and M′_j are close): vector t is a unit vector tangent at M_i to the minor arc of the great circle going from M_i to M′_j. Project the sphere on the plane (M_i, t, n) tangent to the sphere at M_i (the specific projection does not matter, since we consider only K′-sources in the neighborhood of M_i). We have
\begin{equation}
\vec r_{i,j} \approx \psi_{i,j}\,\vec t, \tag{A.21}
\end{equation}
and the basis (t, n) is obtained from (u_a, u_b) by a (β + γ − π/2)-counterclockwise rotation around +u_r, so
\begin{align}
(\Gamma_i)_{(\vec t,\,\vec n)} &= \operatorname{Rot}^{\mathrm t}(\beta_i+\gamma_i-\pi/2) \cdot \operatorname{Diag}\left(a_i^2, b_i^2\right) \cdot \operatorname{Rot}(\beta_i+\gamma_i-\pi/2), \tag{A.22}\\
(\Gamma'_j)_{(\vec t,\,\vec n)} &= \operatorname{Rot}^{\mathrm t}(\beta'_j+\gamma'_j-\pi/2) \cdot \operatorname{Diag}\left(a_j'^2, b_j'^2\right) \cdot \operatorname{Rot}(\beta'_j+\gamma'_j-\pi/2). \tag{A.23}
\end{align}
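The full pipeline of Eqs. (A.15)–(A.23), from equatorial coordinates to the covariance matrices in the common (t, n) basis, can be sketched in numpy (an illustration with arbitrary test values, not the paper's code):

```python
import numpy as np

def sep_and_angles(alpha_i, delta_i, alpha_j, delta_j):
    """Angular separation psi_ij (Eq. A.15) and the orientation angles
    gamma_i and gamma'_j (Eqs. A.17-A.19); all angles in radians."""
    dalpha = alpha_j - alpha_i
    cospsi = (np.cos(delta_i) * np.cos(delta_j) * np.cos(dalpha)
              + np.sin(delta_i) * np.sin(delta_j))
    psi = np.arccos(np.clip(cospsi, -1.0, 1.0))
    # arctan2(sin(psi) sin(gamma), sin(psi) cos(gamma)) is safe since sin(psi) >= 0.
    gamma_i = np.arctan2(
        np.cos(delta_i) * np.sin(delta_j)
        - np.sin(delta_i) * np.cos(delta_j) * np.cos(dalpha),
        np.cos(delta_j) * np.sin(dalpha))
    gamma_j = np.arctan2(
        np.cos(delta_i) * np.sin(delta_j) * np.cos(dalpha)
        - np.sin(delta_i) * np.cos(delta_j),
        np.cos(delta_i) * np.sin(dalpha))
    return psi, gamma_i, gamma_j

def cov_tn(a, b, angle):
    """Covariance in the (t, n) basis of an error ellipse with semi-axes a, b
    whose major axis makes the given angle (= beta + gamma) with the north
    direction, following Eqs. (A.22)-(A.23)."""
    tau = angle - np.pi / 2
    R = np.array([[np.cos(tau), -np.sin(tau)],
                  [np.sin(tau),  np.cos(tau)]])
    return R.T @ np.diag([a**2, b**2]) @ R

# Two sources on the equator, separated by 0.01 rad in right ascension:
psi, g_i, g_j = sep_and_angles(0.0, 0.0, 0.01, 0.0)
assert np.isclose(psi, 0.01) and np.isclose(g_i, 0.0) and np.isclose(g_j, 0.0)

# For circular uncertainties, Gamma_ij (Eq. A.8) is isotropic whatever the angles:
Gamma_ij = cov_tn(1e-3, 1e-3, 0.2 + g_i) + cov_tn(2e-3, 2e-3, 1.1 + g_j)
assert np.allclose(Gamma_ij, 5e-6 * np.eye(2))
```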

Appendix A.2.2: Case of unknown positional uncertainties

If the positional uncertainty on M_i is unknown, we may model it with (Γ_i)_{(t, n)} = σ² Diag(1, 1), using the same σ for all K-sources, and derive an estimate of σ̊ := σ by maximizing the likelihood to observe the distribution of K- and K′-sources (see Sects. 3.2 and 4.2). For a galaxy, however, the positional uncertainty on its center is likely to increase with its size. If the position angle θ_i (counted eastward from the north) and the major and minor diameters D_i and d_i of the best-fitting ellipse of some isophote are known for M_i (for instance, parameters PA, D25, and d25 := D25/R25 taken from the RC3 catalog (de Vaucouleurs et al. 1991) or HyperLeda (Paturel et al. 2003)), we may model the positional uncertainty with
\begin{align}
(\Gamma_i)_{(\vec t,\,\vec n)} &= \operatorname{Rot}^{\mathrm t}(\theta_i+\gamma_i-\pi/2) \cdot \operatorname{Diag}\left(\sigma^2 + [\nu\,D_i]^2, \sigma^2 + [\nu\,d_i]^2\right) \cdot \operatorname{Rot}(\theta_i+\gamma_i-\pi/2) \notag\\
&= \sigma^2\,\operatorname{Diag}(1, 1) + \nu^2\,\operatorname{Rot}^{\mathrm t}(\theta_i+\gamma_i-\pi/2) \cdot \operatorname{Diag}\left(D_i^2, d_i^2\right) \cdot \operatorname{Rot}(\theta_i+\gamma_i-\pi/2), \tag{A.24}
\end{align}
and derive estimates of σ̊ := σ and ν̊ := ν from the likelihood. Such a technique might indeed be used to estimate the accuracy of coordinates in some catalog (see Paturel & Petit 1999 for another method).
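The two forms of Eq. (A.24) are equal because Rot is orthogonal, so the σ² term commutes with the rotation. A short numpy sketch (arbitrary hypothetical values, not from the paper):

```python
import numpy as np

def rot(tau):
    """Rotation matrix of Eq. (A.12)."""
    return np.array([[np.cos(tau), -np.sin(tau)],
                     [np.sin(tau),  np.cos(tau)]])

def cov_galaxy(sigma, nu, D, d, theta, gamma):
    """Positional covariance of a galaxy center in (t, n): first form of Eq. (A.24)."""
    R = rot(theta + gamma - np.pi / 2)
    return R.T @ np.diag([sigma**2 + (nu * D)**2,
                          sigma**2 + (nu * d)**2]) @ R

# Hypothetical values (radians); the decomposed second form of Eq. (A.24)
# yields the same matrix since R.T @ R = I:
sigma, nu, D, d, theta, gamma = 1e-4, 0.05, 2e-3, 1e-3, 0.8, 0.3
R = rot(theta + gamma - np.pi / 2)
alt = sigma**2 * np.eye(2) + nu**2 * (R.T @ np.diag([D**2, d**2]) @ R)
assert np.allclose(cov_galaxy(sigma, nu, D, d, theta, gamma), alt)
```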

If the positional uncertainty on M′_j is unknown too, we can also put
\begin{align}
(\Gamma'_j)_{(\vec t,\,\vec n)} &= \sigma'^2\,\operatorname{Diag}(1, 1) \notag\\
&\quad + \nu'^2\,\operatorname{Rot}^{\mathrm t}(\theta'_j+\gamma'_j-\pi/2) \cdot \operatorname{Diag}\left(D_i^2, d_i^2\right) \cdot \operatorname{Rot}(\theta'_j+\gamma'_j-\pi/2), \tag{A.25}
\end{align}
with the same σ′ and ν′ for all K′-sources. As γ′_j + θ′_j = γ_i + θ_i, only estimates of σ̊ := (σ² + σ′²)^{1/2} and ν̊ := (ν² + ν′²)^{1/2} may be obtained by maximizing the likelihood, not the values of σ, σ′, ν, or ν′ themselves.

Appendix A.2.3: Possibly different true positions

A similar technique can be applied if the true centers of K-sources and of their counterparts in K′ sometimes differ. This might be useful in particular when associating galaxies from an optical catalog with those from an ultraviolet or far-infrared one, because, while the optical is dominated by smoothly distributed evolved stellar populations, the ultraviolet and the far-infrared mainly trace star-forming regions. Observations (e.g., Kuchinski et al. 2000) have indeed shown that galaxies are very patchy in the ultraviolet, and the same has been observed in the far-infrared.

Since the angular distance between the true centers should increase with the size of the galaxy, we might model this as
\begin{align}
&\vec r'_{0,j} - \vec r_{0,i} \sim G_2(\vec 0, \Gamma_{0,i}), \quad\text{where} \notag\\
&(\Gamma_{0,i})_{(\vec t,\,\vec n)} = \nu_0^2\,\operatorname{Rot}^{\mathrm t}(\theta_i+\gamma_i-\pi/2) \cdot \operatorname{Diag}\left(D_i^2, d_i^2\right) \cdot \operatorname{Rot}(\theta_i+\gamma_i-\pi/2). \tag{A.26}
\end{align}
We then have
\begin{equation}
\vec r_{i,j} \sim G_2(\vec 0, \Gamma_{i,j}), \quad\text{with } \Gamma_{i,j} := \Gamma_i + \Gamma'_j + \Gamma_{0,i}. \tag{A.27}
\end{equation}
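Assembling the total covariance of Eq. (A.27) then amounts to summing three ellipse-derived matrices; the result is still a valid (symmetric positive definite) covariance. A numpy sketch with hypothetical instrumental and size parameters (all values arbitrary, not from the paper):

```python
import numpy as np

def rot(tau):
    """Rotation matrix of Eq. (A.12)."""
    return np.array([[np.cos(tau), -np.sin(tau)],
                     [np.sin(tau),  np.cos(tau)]])

def ellipse_cov(major2, minor2, tau):
    """Covariance with principal variances major2, minor2, rotated by tau."""
    return rot(tau).T @ np.diag([major2, minor2]) @ rot(tau)

# Hypothetical instrumental uncertainties (Eqs. A.22-A.23) plus a true-center
# scatter growing with the galaxy diameters D_i, d_i (Eq. A.26):
beta_i, gamma_i, theta_i = 0.3, 0.1, 1.2
Gamma_i  = ellipse_cov(2e-7, 1e-7, beta_i + gamma_i - np.pi / 2)
Gamma_j  = ellipse_cov(3e-7, 2e-7, 0.9 + 0.1 - np.pi / 2)
nu0, D_i, d_i = 0.1, 2e-3, 1e-3
Gamma_0i = nu0**2 * ellipse_cov(D_i**2, d_i**2, theta_i + gamma_i - np.pi / 2)

Gamma_ij = Gamma_i + Gamma_j + Gamma_0i        # Eq. (A.27)
# Gamma_ij is a valid covariance matrix: symmetric positive definite.
assert np.allclose(Gamma_ij, Gamma_ij.T)
assert np.all(np.linalg.eigvalsh(Gamma_ij) > 0)
```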

Once again, if σ, σ′, ν, ν′, and ν_0 are unknown, only σ̊ := (σ² + σ′²)^{1/2} and ν̊ := (ν² + ν′² + ν_0²)^{1/2} may be estimated through likelihood maximization.

All Figures

Fig. 1. One-to-one simulations for f = 1/2, n′ = 10⁵, and circular positional uncertainty ellipses with σ̊ = 10⁻³ rad (see Sects. 6.1 and 6.2 for details). a) Mean value of different estimators f̂ of f as a function of n. The dotted line indicates the input value of f. b) Normalized average maximum value L̂ of different likelihoods as a function of n, compared to L̂_{o:o}^{wrong}.

Fig. 2. Mean value of different estimators f̂ of f as a function of n for f = 1/2 (dotted line), n′ = 10⁵, and circular positional uncertainty ellipses with σ̊ = 10⁻³ rad (see Sects. 6.1 and 6.2 for details). a) Several-to-one simulations. b) One-to-one simulations (f̂_{s:o} and f̂_{o:o} overlap).

Fig. 3. Contour lines of L_{s:o} (solid) and L_{o:o} (dashed) in the (f, σ̊) plane. Input parameters are the same as in Fig. 2, except that n = n′ = 2 × 10⁴; the input values of f and σ̊ are indicated by dotted lines (see Sect. 6.3.1 for details). a) Several-to-one simulations. b) One-to-one simulations.

Fig. 4. Contour lines of L_{s:o} (solid) and L_{o:o} (dashed) in the (f, σ̊) plane. Input parameters are the same as in Fig. 2, except that positional uncertainty ellipses are elongated and randomly oriented (see Sect. 6.3.2 for details); the input value of f is indicated by a dotted line. a) Several-to-one simulations. b) One-to-one simulations.

Fig. 5. Normalized average maximum value L̂ of different likelihoods as a function of n, compared to L̂_{o:o}. Simulations are the same as in Fig. 2. a) Several-to-one simulations. b) One-to-one simulations.
