Probabilistic positional association of catalogs of astrophysical sources: the Aspects code^{⋆,⋆⋆}
Institut d’Astrophysique de Paris, UPMC – Univ. Paris 6, CNRS, UMR 7095, 98bis boulevard Arago, 75014 Paris, France
email: Michel.Fioc@iap.fr
Received: 16 July 2012
Accepted: 1 November 2013
We describe a probabilistic method of cross-identifying astrophysical sources in two catalogs from their positions and positional uncertainties. The probability that an object is associated with a source from the other catalog, or that it has no counterpart, is derived under two exclusive assumptions: first, the classical case of several-to-one associations, and then the more realistic but more difficult problem of one-to-one associations. In either case, the likelihood of observing the objects in the two catalogs at their effective positions is computed and a maximum likelihood estimator of the fraction of sources with a counterpart – a quantity needed to compute the probabilities of association – is built. When the positional uncertainty in one or both catalogs is unknown, this method may be used to estimate its typical value and even to study its dependence on the size of objects. It may also be applied when the true centers of a source and of its counterpart at another wavelength do not coincide. To compute the likelihood and association probabilities under the different assumptions, we developed a Fortran 95 code called Aspects ([aspɛ], “Association positionnelle/probabiliste de catalogues de sources” in French); its source files are made freely available. To test Aspects, all-sky mock catalogs containing up to 10^{5} objects were created, forcing either several-to-one or one-to-one associations. The analysis of these simulations confirms that, in both cases, the assumption with the highest likelihood is the right one and that estimators of unknown parameters built for the appropriate association model are reliable.
Key words: methods: statistical / catalogs / galaxies: statistics / stars: statistics / astrometry
Available at www2.iap.fr/users/fioc/Aspects/
The Aspects code is available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/566/A8
© ESO, 2014
1. Introduction
The most basic method of cross-identifying two catalogs K and K′ with known circular positional uncertainties is to consider that a K′ source M′ is the same as an object M of K if it falls within a disk centered on M and having a radius equal to a few times their combined positional uncertainty; if the disk is empty, M has no counterpart, and if it contains several K′ sources, the nearest one is identified with M. This solution is defective for several reasons: it does not take the density of sources into account; positional uncertainty ellipses are not properly treated; the radius of the disk is arbitrary; positional uncertainties are not always known; K and K′ do not play symmetrical roles; the identification is ambiguous if a K′ source may be associated with several objects of K. Worst of all, it does not provide a probability of association.
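For concreteness, this naïve recipe can be sketched in a few lines of Python (an illustrative sketch only: the function name, the flat-sky Euclidean distances, and the factor of 3 are our assumptions, not part of any published code):

```python
import math

def naive_match(cat, cat_prime, sigma, sigma_prime, k=3.0):
    """For each source of K, return the index of the nearest K' source within
    k times the combined positional uncertainty, or None if the disk is empty.
    Positions are (x, y) pairs in a flat-sky approximation."""
    r_max = k * math.hypot(sigma, sigma_prime)  # disk radius (arbitrary factor k)
    matches = []
    for x, y in cat:
        best, best_d = None, r_max
        for j, (xp, yp) in enumerate(cat_prime):
            d = math.hypot(xp - x, yp - y)
            if d <= best_d:
                best, best_d = j, d
        matches.append(best)
    return matches
```

All the defects listed above are visible here: the cutoff k is arbitrary, densities and uncertainty ellipses are ignored, and two K sources may silently share the same K′ counterpart.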
Beyond this naïve method, the cross-identification problem has been studied by Condon et al. (1975), de Ruiter et al. (1977), Prestage & Peacock (1983), Sutherland & Saunders (1992), Bauer et al. (2000), and Rutledge et al. (2000), among others. As shown by the recent papers of Budavári & Szalay (2008), Brand et al. (2006), Rohde et al. (2006), Roseboom et al. (2009), and Pineau et al. (2011), this field is still very active and will be more so with the wealth of forthcoming multiwavelength data and the virtual observatory (Vignali et al. 2009). In these papers, the identification is performed using a “likelihood ratio”. For two objects (M, M′) ∈ K × K′ with known coordinates and positional uncertainties, and given the local surface density of K′ sources, this ratio is typically computed as

λ = P(position | counterpart)/P(position | chance), (1)

where P(position | counterpart) is the probability of finding M′ at some position relative to M if M′ is a counterpart of M, and P(position | chance) is the probability that M′ is there by chance. As noticed by Sutherland & Saunders (1992), there has been some confusion when defining and interpreting λ and, more importantly, in deriving the probability that M and M′ are the same.
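With a circular Gaussian positional uncertainty σ and a uniform surface density ρ′ of unrelated K′ sources, a common special case of Eq. (1) is λ = exp(−ψ²/2σ²)/(2πσ²ρ′). A minimal sketch (our simplification, not the general form used in the papers cited above):

```python
import math

def likelihood_ratio(psi, sigma, rho_prime):
    """Likelihood ratio of Eq. (1) in the simplest case: a circular Gaussian
    of width sigma for P(position | counterpart), divided by the uniform
    density rho_prime of unrelated K' sources for P(position | chance)."""
    p_counterpart = math.exp(-psi**2 / (2 * sigma**2)) / (2 * math.pi * sigma**2)
    p_chance = rho_prime
    return p_counterpart / p_chance
```

The ratio decreases with the angular distance ψ and increases as the field gets emptier (smaller ρ′).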
To associate sources from catalogs at different wavelengths, some authors include some a priori information on the spectral energy distribution (SED) of the objects in this likelihood ratio. When this work started, our primary goal was to build template observational SEDs from the optical to the far-infrared for different types of galaxies. We initially intended to cross-identify the Iras Faint Source Survey (Moshir et al. 1992, 1993) with the Leda database (Paturel et al. 1995). Because of the high positional inaccuracy of Iras data, special care was needed to identify optical sources with infrared ones. While Iras data are by now quite outdated and have been superseded by Spitzer and Herschel observations, we think that the procedure we began to develop at that time may be valuable for other studies. Because we aimed to fit synthetic SEDs to the template observational ones, we could not and did not want to make assumptions on the SED of objects based on their type, since this would have biased the procedure. We therefore rely only on positions in what follows.
The method we use is in essence similar to that of Sutherland & Saunders (1992). Because thinking in terms of probabilities rather than of likelihood ratios highlights some implicit assumptions, we nevertheless found it useful, for the sake of clarity, to detail our calculations hereafter. This moreover allows us to propose a systematic way to estimate the unknown parameters required to compute the probabilities of association and to extend our work to a case not covered by the papers cited above (see Sect. 4).
After some preliminaries (Sect. 2), we compute in Sect. 3 the probability of association under the hypothesis that a K source has at most one counterpart in K′ but that several objects of K may share the same one (“several-to-one” associations). We also compute the likelihood of observing all the sources at their effective positions and use it to estimate the fraction of objects with a counterpart and, if unknown, the positional uncertainty in one or both catalogs. In Sect. 4, we do the same calculations under the assumption that a K source has at most one counterpart in K′ and that no other object of K has the same counterpart (“one-to-one” associations). In Sect. 5, we present a code, Aspects, implementing the results of Sects. 3 and 4, and with which we compute the likelihoods and probabilities of association under the aforementioned assumptions. We test it on simulations in Sect. 6. The probability distribution of the relative positions of associated sources is modeled in Appendix A.
2. Preliminaries
2.1. Notations
We consider two catalogs K and K′ defined on a common surface of the sky, of area S, and containing respectively n sources (M_{i})_{i ∈ [[1,n]]} and n′ sources (M′_{j})_{j ∈ [[1,n′]]}. We define the following events:

– c_{i}: M_{i} is in the infinitesimal surface element d^{2}r_{i} located at r_{i};

– c′_{j}: M′_{j} is in the infinitesimal surface element d^{2}r′_{j} located at r′_{j};

– C := c_{1} ∩ ··· ∩ c_{n}: the coordinates of all K sources are known;

– C′ := c′_{1} ∩ ··· ∩ c′_{n′}: the coordinates of all K′ sources are known;

– A_{i,j}, with i ≠ 0 and j ≠ 0: M′_{j} is a counterpart of M_{i};

– A_{i, 0}: M_{i} has no counterpart in K′, i.e. A_{i,0} = Ā_{i,1} ∩ ··· ∩ Ā_{i,n′}, where ω̄ is the negation of an event ω;

– A_{0,j}: M′_{j} has no counterpart in K.
We denote by f (resp. f′) the unknown a priori (i.e., not knowing the coordinates) probability that any element of K (resp. K′) has a counterpart in K′ (resp. K). In terms of the events (A_{i,j}), for any i ∈ [[1,n]],

f = Σ_{j=1}^{n′} P(A_{i,j}) = 1 − P(A_{i,0}). (2)

We see in Sects. 3.2 and 4.2 how to estimate f and f′.
The angular distance between two points Y and Z is written ψ(Y,Z). More specifically, we put ψ_{i,j} := ψ(M_{i}, M′_{j}).
2.2. Assumptions
Calculations are carried out under one of three exclusive assumptions:

Several-to-one hypothesis (H_{s:o}): a K source has at most one counterpart in K′, but a K′ source may have several counterparts in K. Since more K sources have a counterpart in K′ than the converse, f′ n′ ⩽ f n. This assumption is reasonable if the angular resolution in K′ (e.g. Iras) is much poorer than in K (e.g. Leda), since several distinct objects of K may then be confused in K′.

One-to-several hypothesis (H_{o:s}): the symmetric of assumption (H_{s:o}), i.e., a K′ source has at most one counterpart in K, but a K source may have several counterparts in K′. In that case, f n ⩽ f′ n′. This assumption is appropriate for catalogs of extended sources that, although observed as single at the wavelength of K, may look broken up at the wavelength of K′.

One-to-one hypothesis (H_{o:o}): any K source has at most one counterpart in K′ and reciprocally. Then, f n = f′ n′. This assumption is the most relevant one for high-resolution catalogs of point sources or of well-defined extended sources.
Probabilities, likelihoods, and estimators specifically derived under either assumption (H_{s:o}), (H_{o:s}), or (H_{o:o}) are written with the subscript “s:o”, “o:s”, or “o:o”, respectively; the subscript “:o” is used for results valid under both (H_{s:o}) and (H_{o:o}). The “several-to-several” hypothesis, where all the events A_{i,j} are independent, is not considered here.
We make two other assumptions: all the associations A_{i,j} with i ≠ 0 and j ≠ 0 are considered a priori as equally likely, and the effect of clustering is negligible.
2.3. Approach
Our approach is the following. For each of the assumptions (H_{s:o}), (H_{o:o}), and (H_{o:s}), we

find an expression for the probabilities of association,

build estimators of the unknown parameters needed to compute these probabilities, and

compute the likelihood of the assumption from the data.
Then, we compute the probabilities of association for the best estimators of unknown parameters and the most likely assumption.
Although (H_{s:o}) is less symmetrical and neutral than (H_{o:o}), we begin our study with this assumption: first, because computations are much simpler under (H_{s:o}) than under (H_{o:o}) and serve as a guide for the latter; second, because they provide initial values for the iterative procedure (Sect. 5.4.3) used to effectively compute probabilities under (H_{o:o}).
3. Severaltoone associations
In this section, we assume that hypothesis (H_{s:o}) holds. As shown in Sect. 3.3, this is also the assumption implicitly made by the authors cited in the introduction.
3.1. Probability of association: global computation
We want to compute the probability P(A_{i,j} | C ∩ C′) of association between sources M_{i} and M′_{j} (j ≠ 0), or the probability that M_{i} has no counterpart (j = 0), knowing the coordinates of all the objects in K and K′. Remembering that, for any events ω_{1}, ω_{2}, and ω_{3}, P(ω_{1} | ω_{2}) = P(ω_{1} ∩ ω_{2})/P(ω_{2}) and thus

P(ω_{1} ∩ ω_{2} | ω_{3}) = P(ω_{1} | ω_{2} ∩ ω_{3}) P(ω_{2} | ω_{3}), (3)

we have, with ω_{1} = A_{i,j}, ω_{2} = C, and ω_{3} = C′,

P(A_{i,j} | C ∩ C′) = P(A_{i,j} ∩ C | C′)/P(C | C′). (4)
3.1.1. Computation of P_{s:o}(C | C′)
We first compute the denominator of Eq. (4). The event

⋂_{k=1}^{n} ⋃_{j_{k}=0}^{n′} A_{k,j_{k}} (5)

is certain by definition of the A_{k,j_{k}} and, under either assumption (H_{s:o}) or (H_{o:o}), A_{k,ℓ} ∩ A_{k,m} = ∅ for all M_{k} if ℓ ≠ m. Consequently, since the events in each union are mutually exclusive, we obtain

P_{s:o}(C | C′) = Σ_{j_{1}=0}^{n′} ··· Σ_{j_{n}=0}^{n′} P_{s:o}(C | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) P_{s:o}(⋂_{k=1}^{n} A_{k,j_{k}} | C′) (6)

with ω_{1} = C, ω_{2} = ⋂_{k=1}^{n} A_{k,j_{k}}, and ω_{3} = C′ in Eq. (3).
Since C = c_{1} ∩ ··· ∩ c_{n}, the first factor in the product of Eq. (6) is

P_{s:o}(C | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) = P_{s:o}(c_{1} | [c_{2} ∩ ··· ∩ c_{n}] ∩ [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) P_{s:o}(c_{2} ∩ ··· ∩ c_{n} | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) (7)

with ω_{1} = c_{1}, ω_{2} = c_{2} ∩ ··· ∩ c_{n}, and ω_{3} = [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′ in Eq. (3). Doing the same with c_{2} ∩ ··· ∩ c_{n} instead of C, we obtain

P_{s:o}(C | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) = ∏_{ℓ=1}^{n} P_{s:o}(c_{ℓ} | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) (8)

by iteration.
If j_{ℓ} ≠ 0, M_{ℓ} is only associated with M′_{j_{ℓ}}. Consequently,

P_{s:o}(c_{ℓ} | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) = ξ_{ℓ,j_{ℓ}} d^{2}r_{ℓ}, (9)

where, denoting by r_{ℓ,j_{ℓ}} the position vector of M′_{j_{ℓ}} relative to M_{ℓ} and by Γ_{ℓ,j_{ℓ}} the covariance matrix of r_{ℓ,j_{ℓ}} (cf. Appendix A.2),

ξ_{ℓ,j_{ℓ}} = exp(−r_{ℓ,j_{ℓ}}^{T} Γ_{ℓ,j_{ℓ}}^{−1} r_{ℓ,j_{ℓ}}/2)/(2π √(det Γ_{ℓ,j_{ℓ}})). (10)

If j_{ℓ} = 0, M_{ℓ} is not associated with any source in K′. Since clustering is neglected,

P_{s:o}(c_{ℓ} | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) = ξ_{ℓ,0} d^{2}r_{ℓ}, (11)

where the last equality defines the spatial probability density ξ_{ℓ,0}; for the uninformative prior of a uniform a priori probability distribution of K sources without counterpart, ξ_{ℓ,0} = 1/S. From Eqs. (8), (9), and (11), it follows that

P_{s:o}(C | [⋂_{k=1}^{n} A_{k,j_{k}}] ∩ C′) = [∏_{ℓ=1}^{n} ξ_{ℓ,j_{ℓ}}] d^{2n}r, (12)

where

d^{2n}r := ∏_{ℓ=1}^{n} d^{2}r_{ℓ}. (13)
We now compute the second factor in the product of Eq. (6). Knowing the coordinates of K′ sources alone, without those of any in K, does not change the likelihood of the associations (A_{k,j_{k}}); in other words, C′ and ⋂_{k=1}^{n} A_{k,j_{k}} are mutually unconditionally independent (but conditionally dependent on C). Therefore,

P_{s:o}(⋂_{k=1}^{n} A_{k,j_{k}} | C′) = P_{s:o}(⋂_{k=1}^{n} A_{k,j_{k}}). (14)

Let q := #{k ∈ [[1,n]] | j_{k} ≠ 0}, where #E denotes the number of elements of any set E. Since the events (A_{k,j_{k}})_{k ∈ [[1,n]]} are independent by assumption (H_{s:o}),

P_{s:o}(⋂_{k=1}^{n} A_{k,j_{k}}) = ∏_{k=1}^{n} P_{s:o}(A_{k,j_{k}}). (15)

Using definition (2), and on the hypothesis that all associations are a priori equally likely if j_{k} ≠ 0 (Sect. 2.2), we get

P_{s:o}(A_{k,j_{k}}) = f/n′ if j_{k} ≠ 0. (16)

Since P_{s:o}(A_{k,0}) = 1 − f, we have

P_{s:o}(⋂_{k=1}^{n} A_{k,j_{k}}) = (1 − f)^{n−q} (f/n′)^{q}. (17)

Hence, from Eqs. (6), (12), and (17), (18). By the definition of q, there are q strictly positive indices j_{k} (as many as the factors “f ξ_{k,j_{k}}/n′” in Eq. (18)) and n − q null ones (as many as the factors “(1 − f)”). Therefore, with (19), Eq. (18) reduces to (20), where the last equality is derived by induction from the distributivity of multiplication over addition.
3.1.2. Computation of P_{s:o}(A_{i,j} ∩ C | C′)
The computation of the numerator of Eq. (4) is similar to that of P_{s:o}(C | C′): (21), where we put j_{i} := j.
Let q^{⋆} := #{k ∈ [[1,n]] | j_{k} ≠ 0} (the indices j_{k} are now those of Eq. (21)). As for P_{s:o}(C | C′), (22).
3.1.3. Final results
Finally, from Eqs. (4), (20), and (22), for i ≠ 0, we obtain Eq. (23), which reduces to

P_{s:o}(A_{i,j} | C ∩ C′) = (f ξ_{i,j}/n′)/((1 − f) ξ_{i,0} + (f/n′) Σ_{k=1}^{n′} ξ_{i,k}) if j ≠ 0, (24a)

P_{s:o}(A_{i,0} | C ∩ C′) = (1 − f) ξ_{i,0}/((1 − f) ξ_{i,0} + (f/n′) Σ_{k=1}^{n′} ξ_{i,k}). (24b)
As to the probability P_{s:o}(A_{0,j} | C ∩ C′) that M′_{j} has no counterpart in K, it can be computed in this way:

A_{0,j} = Ā_{1,j} ∩ ··· ∩ Ā_{n,j}, (25)

and, using Eqs. (20), (23), and (3),

P_{s:o}(A_{0,j} | C ∩ C′) = ∏_{i=1}^{n} [1 − P_{s:o}(A_{i,j} | C ∩ C′)]. (26)
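Numerically, these probabilities are cheap to evaluate once the densities ξ_{i,j} of the candidates of a given K source are known. A sketch, assuming the structure of Eq. (24) with the uniform prior ξ_{i,0} = 1/S (the function name and interface are ours, not Aspects'):

```python
def several_to_one_probs(xi_i, f, nprime, S):
    """Several-to-one probabilities for one K source, following the structure
    of Eq. (24): xi_i[j] is the positional probability density xi_{i,j} of
    K' candidate j; the no-counterpart term uses the prior xi_{i,0} = 1/S."""
    xi_i0 = 1.0 / S
    denom = (1 - f) * xi_i0 + (f / nprime) * sum(xi_i)
    p_none = (1 - f) * xi_i0 / denom                  # P(A_{i,0} | C, C')
    p_assoc = [(f / nprime) * xi / denom for xi in xi_i]  # P(A_{i,j} | C, C')
    return p_none, p_assoc
```

By construction, the no-counterpart probability and the association probabilities sum to 1, and f = 0 forces the no-counterpart probability to 1.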
3.2. Likelihood and estimation of unknown parameters
3.2.1. General results
Various methods have been proposed for estimating the fraction of sources with a counterpart (Kim et al. 2012; Fleuren et al. 2012; McAlpine et al. 2012; Haakonsen & Rutledge 2009). Pineau et al. (2011), for instance, fit f to the overall distribution of the likelihood ratios. We propose a more convenient and systematic method in this section.
Besides f, the probabilities P(A_{i,j} | C ∩ C′) may depend on other unknowns, such as the parameters modeling the positional uncertainties (cf. Appendices A.2.2 and A.2.3). We write here x_{1}, x_{2}, etc., for all these parameters, and put x := (x_{1}, x_{2}, ...). An estimate of x may be obtained by maximizing with respect to x (and with the constraint f ∈ [0,1]) the overall likelihood (27) of observing all the K and K′ sources at their effective positions. Unless the result is outside the possible domain for x (i.e., if L reaches its maximum on the boundary of this domain), the maximum likelihood estimator x̂ is a solution to

∂ ln L/∂x_{p} = 0 for all p. (28)

From now on, all quantities calculated at x = x̂ bear a circumflex.
We have (29) and, since clustering is neglected, (30), where ξ_{0,j} is the spatial probability density of a K′ source without counterpart; for the uninformative prior of a uniform a priori probability distribution of K′ sources, ξ_{0,j} = 1/S. From Eqs. (27), (29), (30), and (13), we obtain (31).
In particular, under assumption (H_{s:o}), Eqs. (31), (20), and (13) give (32). Therefore, for any parameter x_{p}, and because the ξ_{0,j} are independent of x, (33). (For reasons highlighted just after Eq. (73), it is convenient to express most results as a function of the probabilities P(A_{i,j} | C ∩ C′).)
Uncertainties on the unknown parameters may be computed from the covariance matrix V of x̂. For large numbers of sources, V is asymptotically given (Kendall & Stuart 1979) by

V ≈ [−∂^{2} ln L/∂x ∂x^{T}]^{−1} evaluated at x = x̂. (34)
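In the one-parameter case, Eq. (34) amounts to inverting the negated second derivative of ln L at the maximum. A sketch with a central finite difference (the helper is illustrative; Aspects' actual implementation is not shown here):

```python
def parameter_variance(lnL, x_hat, h=1e-4):
    """Asymptotic variance of a single maximum-likelihood parameter,
    estimated from a central-difference second derivative of the
    log-likelihood lnL at the maximum x_hat (one-parameter case of Eq. (34))."""
    d2 = (lnL(x_hat + h) - 2 * lnL(x_hat) + lnL(x_hat - h)) / h**2
    return -1.0 / d2
```

For a Gaussian log-likelihood −(x − μ)²/(2σ²), this returns σ², as expected.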
3.2.2. Fraction of sources with a counterpart
Consider, in particular, the case x_{p} = f. We note that (35). Under assumption (H_{s:o}) or (H_{o:o}) (but not under (H_{o:s})), (36), so, using Eq. (35), (37). Summing Eq. (37) on i, we obtain from Eq. (33) that (38). Consequently, the maximum likelihood estimator of the fraction f of K sources with a counterpart in K′ is

f̂_{s:o} = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{n′} P̂_{s:o}(A_{i,j} | C ∩ C′) (39)
= 1 − (1/n) Σ_{i=1}^{n} P̂_{s:o}(A_{i,0} | C ∩ C′). (40)

After some tedious calculations, it can be shown that

∂^{2} ln L_{s:o}/∂f^{2} < 0 (41)

for all f, so ∂ ln L_{s:o}/∂f has at most one zero in [0,1]: f̂_{s:o} is unique.
Since f̂_{s:o} appears on both sides of Eq. (39) (remember that P̂_{s:o} is the value of P_{s:o} at x = x̂), we may try to determine it through an iterative back-and-forth computation between the left-hand and right-hand sides of this equation. (A similar idea was also proposed by Benn 1983.) We prove in Sect. 5.3 that this procedure converges for any starting value f ∈ ]0,1[.
An estimate of the fraction f′ of K′ sources with a counterpart is given by

f̂′_{s:o} = 1 − (1/n′) Σ_{j=1}^{n′} P̂_{s:o}(A_{0,j} | C ∩ C′). (42)

It can be checked from Eqs. (40), (42), and (26) that, as expected if assumption (H_{s:o}) is valid (cf. Sect. 2.2), f̂′_{s:o} n′ ⩽ f̂_{s:o} n. (Just notice that, for any numbers y_{i} ∈ [0,1], 1 − ∏_{i}(1 − y_{i}) ⩽ Σ_{i} y_{i}, which is obvious by induction; apply this to y_{i} = P̂_{s:o}(A_{i,j} | C ∩ C′) and then sum on j.)
3.3. Probability of association: local computation
Under assumption (H_{s:o}), a purely local computation (subscript “loc” hereafter) of the probabilities of association is also possible. Consider a region U_{i} of known area containing the position of M_{i}, and such that we can safely hypothesize that the counterpart in K′ of M_{i}, if any, is inside it. We assume that the local surface density ρ′_{i} of K′ sources unrelated to M_{i} is uniform on U_{i}. To avoid biasing the estimate if M_{i} has a counterpart, ρ′_{i} may be evaluated from the number of K′ sources in a region surrounding U_{i} but not overlapping it (an annulus around a disk U_{i} centered on M_{i}, for instance).
Besides the A_{i,j}, we consider the following events:

– U_{i} contains a given number of K′ sources;

– the positions relative to M_{i} of the K′ sources inside U_{i} are known, where J_{i} denotes the set of their indices.
We want to compute the probability that a source M′_{j} in U_{i} is the counterpart of M_{i}, given the positions relative to M_{i} of all its possible counterparts. Using Eq. (3) with ω_{1} = A_{i,j} in the first equality below, and then with ω_{2} = A_{i,k} in the last one, we obtain (43). Now, (44) and (45). (The probability P_{loc}(A_{i,j}) itself could not have been computed directly because the total number of K′ sources is unknown in a purely local analysis, which is why the events above were introduced.) If clustering is negligible, the number of K′ sources randomly distributed with a mean surface density ρ′_{i} in a given area follows a Poissonian distribution, so (46) (one counterpart and the other sources there by chance) and (47) (no counterpart and all sources there by chance). Thus, from Eqs. (45)–(47), and (2), (48). We have (49) (rigorously, ξ_{i,j} should be replaced by a slightly different density, but the difference is negligible by definition of U_{i}), so, using Eqs. (43), (48), and (49), we obtain

P_{loc}(A_{i,j} | ···) = f λ_{i,j}/(1 − f + f Σ_{k ∈ J_{i}} λ_{i,k}), (50)

where λ_{i,j} := ξ_{i,j}/ρ′_{i} is the likelihood ratio (cf. Eq. (1)). Mutatis mutandis, we obtain the same result as Eq. (14) of Pineau et al. (2011) and the aforementioned authors. When the computation is extended from U_{i} to the whole surface covered by K′, ρ′_{i} is replaced by n′/S in Eq. (50) and Σ_{k ∈ J_{i}} by Σ_{k=1}^{n′}, and we recover Eq. (24) since ξ_{i,0} = 1/S for a uniform distribution.
The index j_{MLC}(i) of the most likely counterpart of M_{i} is the value of j ≠ 0 maximizing λ_{i,j}. Very often, λ_{i,j_{MLC}(i)} ≫ Σ_{k ∈ J_{i}; k ≠ j_{MLC}(i)} λ_{i,k}, so

P_{loc}(A_{i,j_{MLC}(i)} | ···) ≈ f λ_{i,j_{MLC}(i)}/(1 − f + f λ_{i,j_{MLC}(i)}). (51)

As a “poor man’s” recipe, if the value of f is unknown and not too close to either 0 or 1, an association may be considered as true if λ_{i,j_{MLC}(i)} ≫ 1 and as false if λ_{i,j_{MLC}(i)} ≪ 1. Where to set the boundary between true associations and false ones is somewhat arbitrary (Wolstencroft et al. 1986). For a large sample, however, f can be estimated from the distribution of the positions of all sources, as shown in Sect. 3.2.
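Given the likelihood ratios λ_{i,k} of all the candidates in U_{i} and a value of f, the local probabilities of Eq. (50) follow directly. A sketch (our interface; j = None stands for the no-counterpart case):

```python
def local_assoc_prob(lambdas, j, f):
    """Local several-to-one probability, following the structure of Eq. (50):
    lambdas[k] is the likelihood ratio lambda_{i,k} of candidate k in U_i.
    j is the index of the candidate considered, or None for no counterpart."""
    denom = 1 - f + f * sum(lambdas)
    if j is None:
        return (1 - f) / denom
    return f * lambdas[j] / denom
```

The no-counterpart probability and the candidate probabilities sum to 1, and a single dominant ratio reproduces the approximation of Eq. (51).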
4. Onetoone associations
Under (H_{s:o}) (Sect. 3), a given M′_{j} can be associated with several M_{i}: there is no symmetry between K and K′ under this assumption and, while Σ_{j=1}^{n′} P_{s:o}(A_{i,j} | C ∩ C′) ⩽ 1 for all M_{i}, Σ_{i=1}^{n} P_{s:o}(A_{i,j} | C ∩ C′) could be strictly larger than 1 for some sources M′_{j}. We assume here that the much more constraining assumption (H_{o:o}) holds. As far as we know, and despite an attempt by Rutledge et al. (2000), this problem has not been solved previously (see also Bartlett & Egret 1998 for a simple statement of the question).
Since a potential K′ counterpart of M_{i} within some neighborhood U_{i} of M_{i} might in fact be the true counterpart of another source M_{k} outside of U_{i}, there is no obvious way to adapt the exact local several-to-one computation of Sect. 3.3 to the case of the one-to-one assumption. We therefore have to consider all the K and K′ sources, as in Sect. 3.1.
Under assumption (H_{o:o}), catalogs K and K′ play symmetrical roles; in particular, (52). For practical reasons (cf. Eq. (61)), we nonetheless name K the catalog with the fewer objects and K′ the other one, so n ⩽ n′ in the following.
4.1. Probability of association
4.1.1. Computation of P_{o:o}(C | C′)
The denominator of Eq. (4) is (53) (same reasons as for Eq. (6)). Because A_{k,m} ∩ A_{ℓ,m} = ∅ if k ≠ ℓ and m ≠ 0 by assumption (H_{o:o}), this reduces to (54), where, to ensure that each K′ source is associated with at most one source of K, the sets X_{k} of excluded counterparts are defined iteratively by (55). As a result, (56). The first factor in the product of Eq. (56) is still given by Eq. (12), so we just have to compute the second factor, (57). Let q := #{k ∈ [[1,n]] | j_{k} ≠ 0} and Q be a random variable describing the number of associations between K and K′: (58). Since Q = q by definition of q, we only have to compute the conditional probability of the associations given Q = q and P_{o:o}(Q = q).
There are n!/(q! [n − q]!) choices of q elements among n in K, and n′!/(q! [n′ − q]!) choices of q elements among n′ in K′. The number of permutations of q elements is q!, so the total number of one-to-one associations of q elements of K with q elements of K′ is

N_{q} = (n! n′!)/(q! [n − q]! [n′ − q]!). (59)

The inverse of this number is (60). With our definition of K and K′, n ⩽ n′, so all the elements of K may jointly have a counterpart in K′. Therefore, P_{o:o}(Q = q) is given by the binomial law:

P_{o:o}(Q = q) = (n!/(q! [n − q]!)) f^{q} (1 − f)^{n−q}. (61)

From Eqs. (56), (12), (60), and (61), we obtain Eqs. (62) and (63). There are q factors “f ξ_{k,j_{k}}” in the latter, one for each index j_{k} ≠ 0, and n − q factors “(1 − f)”, one for each null j_{k}; moreover, a different j_{k} corresponds to each ℓ ∈ [[1,q]]. With (64), Eq. (63) therefore simplifies to (65).
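The two counting results above, Eqs. (59) and (61), are easy to check numerically (function names are ours):

```python
from math import comb, factorial

def n_pairings(n, nprime, q):
    """Number of one-to-one pairings of q of the n K sources with
    q of the n' K' sources (Eq. (59))."""
    return comb(n, q) * comb(nprime, q) * factorial(q)

def prob_Q(n, f, q):
    """Binomial prior on the number Q of associations (Eq. (61)),
    valid when n <= n'."""
    return comb(n, q) * f**q * (1 - f)**(n - q)
```

As a sanity check, the binomial probabilities sum to 1 over q ∈ [[0, n]].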
4.1.2. Computation of P_{o:o}(A_{i,j} ∩ C | C′)
The numerator of Eq. (4) is computed in the same way as P_{o:o}(C | C′): (66), where (67), so (68). Let q^{⋆} := #{k ∈ [[1,n]] | j_{k} ≠ 0}. As for P_{o:o}(C | C′), (69), where (70).
4.1.3. Final results
Finally, from Eqs. (4), (65), and (69), for i ≠ 0, (71).
The probability that a source M′_{j} has no counterpart in K is simply given by

P_{o:o}(A_{0,j} | C ∩ C′) = 1 − Σ_{i=1}^{n} P_{o:o}(A_{i,j} | C ∩ C′). (72)
4.2. Likelihood and estimation of unknown parameters
As in Sect. 3.2, an estimate of the set x of unknown parameters may be obtained by solving Eq. (28). Under assumption (H_{o:o}), we obtain from Eqs. (65), (31), and (13) that (73). Because the number of terms in Eq. (73) grows exponentially with n and n′, this equation seems useless. In fact, the prior computation of L_{o:o} is not necessary if the probabilities P_{o:o}(A_{i,j} | C ∩ C′) are calculable (we see how to evaluate these in Sect. 5.4).
Indeed, for any parameter x_{p}, we get the same result (Eq. (33)) as under assumption (H_{s:o}). First, we note that, since the ξ_{0,j} are independent of x, we obtain from Eq. (31) that (74). Now, for any set Υ of indices and any product of strictly positive functions h_{k} of some variable y, (75). With h_{k} = η_{k,j_{k}}, y = x_{p}, and Υ = [[1,n]], we therefore obtain from Eq. (65) that (76). The expression of P_{o:o}(A_{i,j} ∩ C | C′) (Eq. (69)) may also be written as (77), where χ is the indicator function (i.e. χ(j_{i} = j) = 1 if proposition “j_{i} = j” is true and χ(j_{i} = j) = 0 otherwise), so (78). If j_{i} = 0, then η_{i,j_{i}} = ζ_{i,j_{i}}; and if j_{i} ≠ 0, the numerators of η_{i,j_{i}} and ζ_{i,j_{i}} are the same and their denominators do not depend on x_{p}: in all cases, ∂ ln η_{i,j_{i}}/∂x_{p} = ∂ ln ζ_{i,j_{i}}/∂x_{p}. The right-hand sides of Eqs. (76) and (78) are therefore identical. Dividing their left-hand sides by P_{o:o}(C | C′) and using Eqs. (74) and (4), we obtain, as announced, (79).
For x_{p} = f in particular, because of Eq. (37), and as under assumption (H_{s:o}), Eq. (79) reduces to (80). From Eq. (28), a maximum likelihood estimator of f is thus

f̂_{o:o} = 1 − (1/n) Σ_{i=1}^{n} P̂_{o:o}(A_{i,0} | C ∩ C′), (81)

where P̂_{o:o} is the value of P_{o:o} at x = x̂.
To compare assumptions (H_{s:o}), (H_{o:o}), and (H_{o:s}), and to select the most appropriate one to compute P(A_{i,j} | C ∩ C′), an expression is needed for L_{o:o}. If the probabilities P_{o:o}(A_{i,0} | C ∩ C′) are calculable, L_{o:o} may be obtained for any f by integrating Eq. (80) with respect to f. Since all K and K′ sources are unrelated and randomly distributed for f = 0, the integration constant is given by (cf. Eq. (73)) (82).
5. Practical implementation: the Aspects code
5.1. Overview
To implement the results established in Sects. 3.1, 3.2, 4.1, and 4.2, we have built a Fortran 95 code, Aspects – a French acronym (pronounced [aspɛ] in the International Phonetic Alphabet, not [æspekts]) for “Association positionnelle/probabiliste de catalogues de sources”, or “probabilistic positional association of source catalogs” in English. The source files are freely available at www2.iap.fr/users/fioc/Aspects/ . The code compiles with IFort and GFortran.
Given two catalogs of sources with their positions and the uncertainties on these positions, Aspects computes, under assumptions (H_{s:o}), (H_{o:o}), and (H_{o:s}), the overall likelihood L, estimates of f and f′, and the probabilities P(A_{i,j} | C ∩ C′). It may also simulate all-sky catalogs for various association models (cf. Sect. 6.1).
We provide hereafter explanations of general interest for the practical implementation in Aspects of Eqs. (23), (39), (32), (71), (81), and (73). Some more technical points (such as the procedures used to search for nearby objects, simulate the positions of associated sources, and integrate Eq. (80)) are only addressed in appendices to the documentation of the code (Fioc 2014). The latter also contains the following complements: another (but equivalent) expression for L_{o:o}, formulae derived under (H_{o:s}), computations under (H_{o:o}), a calculation of the uncertainties on unknown parameters under (H_{s:o}), and a proof of Eq. (41).
5.2. Elimination of unlikely counterparts
Under assumption (H_{s:o}), computing the probability of association P_{s:o}(A_{i,j} | C ∩ C′) between M_{i} and M′_{j} from Eq. (23) is straightforward if f and the positional uncertainties are known. However, the number of calculations for the whole sample or for determining f̂_{s:o} is on the order of n n′, a huge number for the catalogs available nowadays. We must therefore try to eliminate all unnecessary computations.
Since ξ_{i,k} is given by a normal law if i ≠ 0 and k ≠ 0, it rapidly drops to almost 0 for sources at increasing angular distance ψ_{i,k} from M_{i}. Therefore, there is no need to compute P_{s:o}(A_{i,j} | C ∩ C′) for all couples (M_{i}, M′_{j}) or to sum on all k from 1 to n′ in Eq. (24). More explicitly, let R′ be some angular distance such that, for all (M_{i}, M′_{k}), if ψ_{i,k} > R′ then ξ_{i,k} ≈ 0, say (83), where the a_{ℓ} and a′_{ℓ} are the semi-major axes of the positional uncertainty ellipses of K and K′ sources (cf. Appendix A.2.1; the square root in Eq. (83) is thus the maximal possible uncertainty on the relative position of associated sources). We may set P_{s:o}(A_{i,j} | C ∩ C′) to 0 if ψ_{i,j} > R′, and restrict the sums in Eq. (24) to the sources with ψ_{i,k} ⩽ R′: only nearby K′ sources matter.
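The pruning can be sketched as follows (a brute-force illustration under a flat-sky approximation; the cutoff factor k and the function name are ours, and Aspects uses a faster neighbor search, cf. Fioc 2014):

```python
import math

def nearby_candidates(cat, cat_prime, a_max, ap_max, k=5.0):
    """For each K source, keep only the K' sources within the cutoff radius
    R' = k * sqrt(a_max^2 + ap_max^2), where a_max and ap_max are the largest
    semi-major axes of the uncertainty ellipses in K and K' (the spirit of
    Eq. (83); the factor k is illustrative)."""
    r_cut = k * math.sqrt(a_max**2 + ap_max**2)
    neighbors = []
    for x, y in cat:
        neighbors.append([j for j, (xp, yp) in enumerate(cat_prime)
                          if math.hypot(xp - x, yp - y) <= r_cut])
    return neighbors
```

The candidate lists returned this way shrink the sums of Eq. (24) from n′ terms to the handful of nearby sources.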
5.3. Fraction of sources with a counterpart
All the probabilities depend on f and, possibly, on other unknown parameters modeling the positional uncertainties (cf. Appendices A.2.2 and A.2.3). Under assumption (H_{s:o}), estimates of these parameters may be found by solving Eq. (28) using Eq. (33).
If the fraction of sources with a counterpart is the only unknown, the ξ_{i,j} need to be computed only once and f̂_{s:o} may easily be determined from Eq. (39) by an iterative procedure. Denoting by g the function

g(f) = 1 − (1/n) Σ_{i=1}^{n} P_{s:o}(A_{i,0} | C ∩ C′), (84)

where the probabilities are computed at the value f, we now prove that, for any f_{0} ∈ ]0,1[, the sequence (f_{k})_{k ∈ ℕ} defined by f_{k+1} := g(f_{k}) tends to f̂_{s:o}.
As is obvious from Eq. (24b), P_{s:o}(A_{i,0} | C ∩ C′) decreases for all i when f increases: g is consequently an increasing function. Note also that, from Eqs. (38) and (84),

g(f) − f = (f [1 − f]/n) ∂ ln L_{s:o}/∂f. (85)

The only fixed points of g are thus 0, 1, and the unique solution f̂_{s:o} to ∂ ln L_{s:o}/∂f = 0. Because ∂^{2} ln L_{s:o}/∂f^{2} < 0 (cf. Eq. (41)) and ∂ ln L_{s:o}/∂f vanishes at f̂_{s:o}, we have ∂ ln L_{s:o}/∂f > 0 if f < f̂_{s:o}, so g(f) > f in this interval by Eq. (85). Similarly, if f > f̂_{s:o}, then ∂ ln L_{s:o}/∂f < 0 and thus g(f) < f.
Consider the case f_{0} < f̂_{s:o}. If f_{k} < f̂_{s:o}, then, as just shown, g(f_{k}) > f_{k}; we also have g(f_{k}) < f̂_{s:o}, because g is an increasing function and f̂_{s:o} is a fixed point of it. Since g(f_{k}) = f_{k+1}, the sequence (f_{k})_{k ∈ ℕ} is increasing and bounded from above by f̂_{s:o}: it therefore converges in ]f_{0}, f̂_{s:o}]. Because g is continuous and f̂_{s:o} is the only fixed point in this interval, (f_{k})_{k ∈ ℕ} tends to f̂_{s:o}. Similarly, if f_{0} > f̂_{s:o}, then (f_{k})_{k ∈ ℕ} is a decreasing sequence converging to f̂_{s:o}.
Because of Eq. (81), this procedure also works in practice under assumption (H_{o:o}) (with P_{s:o} replaced by P_{o:o} in Eq. (84)), although it is not obvious that P_{o:o}(A_{i,0} | C ∩ C′) decreases for all i when f increases, nor that ∂^{2} ln L_{o:o}/∂f^{2} < 0. A good starting value f_{0} may be f̂_{s:o}.
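The whole iteration of Eq. (84) can be sketched for the several-to-one case, assuming probabilities of the form of Eq. (24) with ξ_{i,0} = 1/S (interface and names are ours): xi[i] lists the densities ξ_{i,j} of the K′ candidates kept for K source i.

```python
def estimate_f(xi, nprime, S, f0=0.5, tol=1e-10, itmax=1000):
    """Fixed-point iteration f_{k+1} = g(f_k) of Eq. (84): g(f) is one minus
    the mean no-counterpart probability, the latter following the structure
    of Eq. (24) with the uniform prior xi_{i,0} = 1/S."""
    n, f = len(xi), f0
    for _ in range(itmax):
        p_none_sum = 0.0
        for xi_i in xi:
            denom = (1 - f) / S + (f / nprime) * sum(xi_i)
            p_none_sum += ((1 - f) / S) / denom
        f_new = 1 - p_none_sum / n
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f
```

With one K source having a strong candidate (ξ = 100 on S = 1) and one having none, the interior fixed point solves f = 25f/(1 + 49f), i.e. f̂ = 24/49, and the iteration reaches it in a few steps.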
5.4. Computation of onetoone probabilities of association
What was said in Sect. 5.2 about eliminating unlikely counterparts in the calculation of probabilities under H_{s:o} still holds under H_{o:o}. However, because of the combinatorial explosion of the number of terms in Eq. (71), computing P_{o:o}(A_{i,j}  C ∩ C′) exactly is still clearly hopeless. Yet, after some wandering (Sects. 5.4.1 and 5.4.2), we found a working solution (Sect. 5.4.3).
5.4.1. A first try
Our first try was inspired by the (partially wrong) idea that, although all Ksources are involved in the numerator and denominator of Eq. (71), only those close to M_{i} should matter in their ratio. A sequence of approximations converging to the true value of P_{o:o}(A_{i,j}  C ∩ C′) might then be built as follows (all quantities defined or produced in this first try are written with the superscript “w” for “wrong”).
To make things clear, consider M_{1} and some possible counterpart M′_{j} within its neighborhood (ψ_{1,j} ⩽ R′), and assume that M_{2} is the first nearest neighbor of M_{1} in K, M_{3} its second nearest neighbor, etc. For any d ∈ [[1,n]], define (86). The quantity P^{w}_{d} thus depends only on M_{1} and its d − 1 nearest neighbors in K. As P^{w}_{n} is the one-to-one probability of association between M_{1} and M′_{j} (cf. Eq. (71)), the sequence (P^{w}_{d}) tends to P_{o:o}(A_{1,j} | C ∩ C′) when the depth d of the recursive sums tends to n. After some initial fluctuations, P^{w}_{d} enters a steady state. This occurs when ψ(M_{1}, M_{d+1}) exceeds a distance R equal to a few times R′ (at least 2 R′). We may therefore think that the convergence is then achieved and stop the recursion at this d. It is all the more tempting that P^{w}_{1} equals the several-to-one probability, and that the latter looks like a first-order approximation to P_{o:o}...
Fig. 1. One-to-one simulations for f = 1/2 and circular positional uncertainty ellipses (see Sects. 6.1 and 6.2 for details). a) Mean value of different estimators of f as a function of n; the dotted line indicates the input value of f. b) Normalized average maximum value of different likelihoods as a function of n.
More formally and generally, for any M_{i}, let φ be a permutation on K ordering the elements M_{φ(1)}, M_{φ(2)}, ..., M_{φ(n)} by increasing angular distance to M_{i} (in particular, M_{φ(1)} = M_{i}). For j = 0 or M′_{j} within a distance R′ (cf. Sect. 5.2) from M_{i}, and for any d ∈ [[1,n]], define (87), where, as in Eqs. (55), (67), (64), and (70),
(88) and (89). Let d_{min}(i) be the depth at which the steady state is reached (Eq. (90)). Given the above considerations, P_{o:o}(A_{i,j} | C ∩ C′) can be evaluated as P^{w}_{d_{min}(i)}.
The computation of P^{w}_{d} may be further restricted (and in practice, because of the recursive sums in Eq. (87), must be) to sources in the neighborhood of the objects (M_{φ(k)})_{k ∈ [[1,d]]}, as explained in Sect. 5.2.
5.4.2. Failure of the first try
To test the reliability of the evaluation of P_{o:o}(A_{i,j} | C ∩ C′) by P^{w}_{d_{min}(i)}, we simulated all-sky mock catalogs for one-to-one associations and analyzed them with a first version of Aspects. Simulations were run for f = 1/2 and known circular positional uncertainties (see Sects. 6.1 and 6.2 for a detailed description).
Three estimators of f were compared to the input value:

f̂_{s:o}, the value maximizing L_{s:o} (Eq. (39));

f̂^{w}_{o:o}, the value maximizing the one-to-one likelihood derived from the P^{w}_{d_{min}(i)}. This estimator is computed from Eq. (81) with P^{w}_{d_{min}(i)} instead of P_{o:o}(A_{i,0} | C ∩ C′);

f̂_{o:s}, an estimator built from the one-to-several assumption in the following way: because (H_{o:s}) is fully symmetric to (H_{s:o}), we just need to swap K and K′ (i.e., swap f and f′, n and n′, etc.) in Eqs. (24), (26), and (39) to obtain f̂′_{o:s} instead of f̂_{s:o}, and then, from Eq. (42), f̂_{o:s} instead of f̂′_{s:o}. The one-to-several likelihood L_{o:s} is computed from Eq. (32) in the same way.
The mean values of these estimators are plotted as a function of n in Fig. 1a (error bars are smaller than the size of the points). As is obvious, the ad hoc estimator diverges from f when n increases. This statistical inconsistency^{5} seems surprising for a maximum likelihood estimator, since the model on which it is based is correct by construction. However, all the proofs of consistency of maximum likelihood estimators we found in the literature (e.g., in Kendall & Stuart 1979) rest on the assumption that the overall likelihood is the product of the probabilities of each datum, which is not the case for L_{o:o} (cf. Eq. (73)). Since is a good estimator of f, it might be used to compute P_{o:o}(A_{i,j} | C ∩ C′) from – if the latter correctly approximates the former. By itself, the inconsistency of is therefore not a problem.
More embarrassing is that (H_{o:o}) is not the most likely assumption (see Fig. 1b): the mean value of is less than that of over the full interval of n! These two failures hint that the sequence has not yet converged to P_{o:o}(A_{i,j} | C ∩ C′) at d = d_{min}(i).
To check this, we ran simulations with small numbers of sources (n and less than 10), so that we could compute exactly and study how tends to it. To test whether source confusion might be the cause of the problem, we created mock catalogs with very large positional uncertainties^{6}, comparable to the distance between unrelated sources. Because the expressions given in Appendix A for ξ_{i,j} hold for planar normal laws and become wrong when the distance between M_{i} and exceeds a few degrees because of the curvature, we ran simulations on a whole circle instead of a sphere; nevertheless, we took because the linear normal law is inappropriate on a circle for higher values, due to its finite extent. What we found is that, after the transient phase where it oscillates, slowly drifts toward P_{o:o}(A_{i,j} | C ∩ C′) and only converges at d = n! This drift was imperceptible for the high values of n and used in Sect. 5.4.1.
5.4.3. Reconsideration and solution
To understand where the problem comes from, we consider the simplest case of interest: . We assume moreover that ξ_{1, 2} ≈ ξ_{2, 1} ≈ 0. We then have (91). The probabilities P_{o:o}(A_{1,j} | C ∩ C′) = P_{o:o}(A_{1,j} ∩ C | C′) / P_{o:o}(C | C′) obviously depend on ξ_{2, 2}. In particular, if ξ_{2, 2} ≪ ξ_{2, 0}, (94); in that case, P_{o:o}(A_{2, 2} | C ∩ C′) ≈ 0, and both and are free for M_{1}. On the other hand, if ξ_{2, 2} ≫ ξ_{2, 0}, (95); in that case, P_{o:o}(A_{2, 2} | C ∩ C′) ≈ 1: M_{2} and are almost certainly bound, so may not be associated with M_{1}, and is the only possible counterpart of M_{1}.
Fig. 2 Mean value of different estimators of f as a function of n for f = 1/2 (dotted line), , and circular positional uncertainty ellipses with (see Sects. 6.1 and 6.2 for details). a) Several-to-one simulations. b) One-to-one simulations ( and overlap).

The difference between the results obtained for ξ_{2, 2} ≪ ξ_{2, 0} and ξ_{2, 2} ≫ ξ_{2, 0} shows that the probabilities P_{o:o}(A_{1,j} | C ∩ C′) depend on the relative positions of M_{2} and , even when both M_{2} and are distant from M_{1} and : contrary to the idea stated in Sect. 5.4.1, distant K-sources do matter for P_{o:o} probabilities! However, as highlighted by the “/ 2” and “/ 1” factors in Eqs. (94) and (95), the distant K-source M_{2} only changes the number of K′-sources (two for ξ_{2, 2} ≪ ξ_{2, 0}, one for ξ_{2, 2} ≫ ξ_{2, 0}) that may be identified with M_{1}: its exact position is unimportant.
This suggests the following solution: replace in Eq. (89) by the number of K′-sources that may effectively be associated with M_{i} and its d − 1 nearest neighbors in K; i.e., dropping the superscript “w”, define (96), where (97), and use p_{o:o}(i,j) := p_{d_{min}(i)}(i,j), where d_{min}(i) is defined by Eq. (90), to evaluate P_{o:o}(A_{i,j} | C ∩ C′).
An estimate of is given by^{7} (98). The sum in Eq. (98) is nothing but the typical number of counterparts in K′ associated with distant K-sources. Note that , so we recover the theoretical result for P_{o:o}(A_{i,j} | C ∩ C′) when all sources are considered. As P_{o:o} depends on , which in turn depends on P_{o:o}, both may be computed with a back-and-forth iteration; this procedure converges in a few steps if, instead of P_{o:o}, the value of P_{s:o} is taken to initiate the sequence.
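The back-and-forth iteration can be sketched generically as a fixed-point loop. The `update` callback below is a hypothetical stand-in for the coupled computation of the P_{o:o} probabilities and of the effective number of available counterparts; the toy contraction only illustrates the convergence behavior, not the real update of Aspects.

```python
import numpy as np

def iterate_probabilities(update, p_init, tol=1e-8, max_iter=100):
    """Generic back-and-forth iteration: `update` maps the current
    association probabilities to new ones (a hypothetical callback
    standing in for the coupled P_o:o computation of Aspects)."""
    p = np.asarray(p_init, dtype=float)
    for _ in range(max_iter):
        p_new = update(p)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Toy contraction standing in for the real update (illustration only):
target = np.array([0.2, 0.7, 0.1])
update = lambda p: 0.5 * (p + target)
p = iterate_probabilities(update, p_init=np.array([0.3, 0.5, 0.2]))
```

Starting from a good first guess (here, the analog of the several-to-one probabilities) shortens the loop, which mirrors the initialization with P_{s:o} described above.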
5.5. Tests of Aspects
As computations made under assumption (H_{o:o}) are complex (they involve recursive sums, for instance), we made several consistency checks of the code. In particular, we swapped K and K′ for and compared the quantities resulting from this swap (written with the superscript “↔”) to the original ones: within numerical errors, and, for f^{′ ↔} = f, we get and for all .
We moreover checked numerically, for small n and (≲5), that Eq. (73) and the integral of Eq. (80) with respect to f are consistent, and that Aspects returns the same value as Mathematica (Wolfram 1996). For even smaller n and (3), we confirmed that analytical expressions obtained by hand from the enumeration of all possible associations between K and K′ are identical to Mathematica’s symbolic calculations. For the large n and of practical interest, although we did not give a formal proof of the solution of Sect. 5.4.3, the analysis of simulations (Sect. 6) makes us confident in the code.
6. Simulations
In this section, we analyze various estimators of the unknown parameters. Because of the complexity of the expressions obtained, we did not attempt to do this analytically but resorted to simulations. We also compare the likelihoods of the assumptions (H_{s:o}), (H_{o:o}), and (H_{o:s}), given the data.
6.1. Creation of mock catalogs
We built all-sky mock catalogs with Aspects in the cases of several-to-one and one-to-one associations. To do this, we first selected the indices of f n objects in K and randomly associated the index of a counterpart in K′ to each of them; for one-to-one simulations, a given K′-source was associated at most once. We then drew the true positions of K′-sources uniformly on the sky. The true positions of K-sources without a counterpart were drawn in the same way; for sources with a counterpart, we took the true position of their counterpart. The observed positions of K- and K′-sources were finally computed from the true positions for given parameters (a_{i},b_{i},β_{i}) and of the positional uncertainty ellipses (see Appendix A.2.1).
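The simulation scheme above can be sketched as follows. This is a simplified Python illustration, not the Fortran 95 code of Aspects: all names are ours, and the Gaussian positional errors are added directly to the coordinates, neglecting the cos δ metric factor, which is harmless only for the small uncertainties assumed here.

```python
import numpy as np

def uniform_sphere(n, rng):
    """True positions drawn uniformly on the sphere (RA, Dec in radians)."""
    ra = rng.uniform(0.0, 2.0 * np.pi, n)
    dec = np.arcsin(rng.uniform(-1.0, 1.0, n))
    return ra, dec

def mock_catalogs(n, n_prime, f, sigma, sigma_prime, rng, one_to_one=True):
    """Draw true positions, assign counterparts, and add Gaussian errors.
    Simplifying assumption: circular errors added directly to (RA, Dec)."""
    ra_p, dec_p = uniform_sphere(n_prime, rng)       # true K' positions
    ra, dec = uniform_sphere(n, rng)                 # true K positions (default)
    with_cp = rng.choice(n, size=int(f * n), replace=False)
    # one-to-one: each K' source used at most once; several-to-one: reuse allowed
    cp = rng.choice(n_prime, size=with_cp.size, replace=not one_to_one)
    ra[with_cp], dec[with_cp] = ra_p[cp], dec_p[cp]  # counterpart shares true position
    obs = (ra + rng.normal(0.0, sigma, n), dec + rng.normal(0.0, sigma, n))
    obs_p = (ra_p + rng.normal(0.0, sigma_prime, n_prime),
             dec_p + rng.normal(0.0, sigma_prime, n_prime))
    counterpart = np.full(n, -1)
    counterpart[with_cp] = cp
    return obs, obs_p, counterpart

rng = np.random.default_rng(0)
obs, obs_p, cp = mock_catalogs(1000, 1200, 0.5, 1e-4, 1e-4, rng)
```

The `replace` flag of `rng.choice` is what distinguishes the two association models: drawing counterpart indices without replacement enforces one-to-one associations.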
6.2. Estimation of f if positional uncertainty ellipses are known and circular
Mock catalogs were created with a_{i} = b_{i} = σ (see notations in Appendix A.2.1) for all M_{i} ∈ K and with for all . Positional uncertainty ellipses are therefore circular here. Only two parameters matter in that case: f and (99). Hundreds of simulations were run for f = 1/2, , , and n ∈ [[10^{3}, 10^{5}]]. We analyzed them with Aspects, knowing the positional uncertainties, and plot the mean value of the estimators of f listed in Sect. 5.4.2 as a function of n in Fig. 2. This time, however, we replaced by the estimator computed from the p_{o:o}.
For several-to-one simulations, is by far the best estimator of f and does not show any significant bias, whatever the value of n. Estimators and do not recover the input value of f, which is not surprising since they are not built from the right assumption here; moreover, while , , and are obtained by maximizing L_{s:o}, L_{o:s}, and L_{o:o}, respectively, is not directly fitted to the data.
For one-to-one simulations, and unlike , is a consistent estimator of f, as expected. Puzzlingly, also works very well, maybe because (H_{s:o}) is a more relaxed assumption than (H_{o:o}); whatever the reason, this is not a problem.
6.3. Simultaneous estimation of f and
6.3.1. Circular positional uncertainty ellipses
How do the different estimators of f and behave when the true values of the positional uncertainties are also unknown? We show in Fig. 3 the result of simulations with the same input as in Sect. 6.2, except that . The likelihood L_{s:o} peaks very close to the input value of for both types of simulations: is still an unbiased estimator of x. For one-to-one simulations, L_{o:o} is also maximal near the input value of x, so is unbiased, too.
Fig. 3 Contour lines of L_{s:o} (solid) and L_{o:o} (dashed) in the plane. Input parameters are the same as in Fig. 2, except that ; the input values of f and are indicated by dotted lines (see Sect. 6.3.1 for details). a) Several-to-one simulations. b) One-to-one simulations.

Fig. 4 Contour lines of L_{s:o} (solid) and L_{o:o} (dashed) in the plane. Input parameters are the same as in Fig. 2, except that positional uncertainty ellipses are elongated and randomly oriented (see Sect. 6.3.2 for details); the input value of f is indicated by a dotted line. a) Several-to-one simulations. b) One-to-one simulations.

6.3.2. Elongated positional uncertainty ellipses
To test the robustness of the estimators of f, we ran simulations with the same parameters, but with elongated positional uncertainty ellipses: we took and for all . These ellipses were randomly oriented; i.e., the position angles (cf. Appendix A.2.1) β_{i} and have uniform random values in [0, π[. We then estimated f while ignoring these positional uncertainties (see Fig. 4).
Although the model from which the parameters are fitted is inaccurate here (the ξ_{i,j} are computed assuming circular positional uncertainties instead of the unknown elliptical ones), the input value of f is still recovered by for both types of simulations and by for one-to-one simulations. The fitting also provides the typical positional uncertainty on the relative positions of associated sources.
6.4. Choice of association model
Now, given the two catalogs, which assumption should we adopt to compute the probabilities P(A_{i,j} | C ∩ C′): several-to-one, one-to-one, or one-to-several? As shown in Fig. 5, for known positional uncertainties and a given , source confusion is rare at low values of n (there is typically at most one possible counterpart) and all assumptions are equally likely. At larger n, for several-to-one simulations; as expected, for one-to-one simulations, and , with for . In all cases, on average, the right assumption is the most likely one. This is also true when positional uncertainties are unknown (Sect. 6.3).
The calculation of L_{o:o} is lengthy, so, as a substitute for the comparison of the likelihoods, the following procedure may be applied to select the most appropriate assumption for computing the probabilities of association: if , use (H_{o:o}); if , then use (H_{s:o}) if , and (H_{o:s}) otherwise.
Fig. 5 Normalized average maximum value of different likelihoods as a function of n, compared to . Simulations are the same as in Fig. 2. a) Several-to-one simulations. b) One-to-one simulations.

7. Conclusion
In this paper, we computed the probabilities of positional association of sources between two catalogs K and K′ under two different assumptions: first, the easy case where several K-objects may share the same counterpart in K′, and then the more natural but numerically intensive case of one-to-one associations only between K and K′.
These probabilities depend on at least one unknown parameter: the fraction of sources with a counterpart. If the positional uncertainties are unknown, other parameters are required to compute the probabilities. We calculated the likelihood of observing all the K- and K′-sources at their effective positions under each of the two assumptions described above, and estimated the unknown parameters by maximizing these likelihoods. The latter are also used to select the best association model.
These relations were implemented in a code, Aspects, which we make public and with which we analyzed all-sky several-to-one and one-to-one simulations. In all cases, the assumption with the highest likelihood is the right one, and the estimators of unknown parameters obtained for it do not show any bias.
In the simulations, we assumed that the density of K- and K′-sources was uniform on the sky area S: the quantities ξ_{i, 0} and ξ_{0,j} used to compute the probabilities are then equal to 1/S. If the density of objects is not uniform, we might take ξ_{i, 0} = ρ(M_{i})/n and , where ρ and ρ′ are, respectively, the local surface densities of K- and K′-sources; but if the ρ′/ρ ratio varies on the sky, so will the fraction of sources with a counterpart – something we did not try to model. Considering clustering or the side effects^{8} due to a small S, as well as taking priors on the SEDs of objects into account, was also beyond the scope of this paper.
In spite of these limitations, Aspects is a robust tool that should help astronomers cross-identify astrophysical sources automatically, efficiently, and reliably.
Footnotes
For instance, de Ruiter et al. (1977) wrongly state that, if there is a counterpart, the closest object is always the right one.
For the sake of clarity, we mention that we adopt the same decreasing order of precedence for operators as in Mathematica (Wolfram 1996): × and /; Π; ∑; + and −.
Computing P_{s:o}(C | C′) is easier than computing P_{s:o}(C′ | C): the latter would require calculating (cf. Eq. (9)) because several M_{k} might be associated with the same . This does not matter for computations made under assumption (H_{o:o}).
Fortran 90 routines from Numerical Recipes (Press et al. 1992) are used to sort arrays and locate a value in an ordered table. Because of license constraints, we cannot provide them, but they may easily be replaced by free equivalents.
None of the results established outside of Appendix A depends on this assumption.
If it were not the case, the probability of and might be modeled using Kent (1982) distributions (an adaptation to the sphere of the planar normal law), but no result like Eq. (A.8)would then hold: unlike Gaussians, Kent distributions are not stable.
We seize this opportunity to correct Eqs. (A.8) to (A.11) of Pineau et al. (2011): and should be replaced by their squares in these formulae.
However, as noticed by de Vaucouleurs & Head (1978) in a different context, if three samples with unknown uncertainties σ_{i} (i ∈ [[1, 3]]) are available and if the combined uncertainties may be estimated for all the pairs (i,j)_{j ≠ i} ∈ [[1, 3]]^{2}, as in our case, then σ_{i} may be determined for each sample. Paturel & Petit (1999) used this technique to compute the accuracy of galaxy coordinates.
Acknowledgments
The initial phase of this work took place at the NASA/Goddard Space Flight Center, under the supervision of Eli Dwek, and was supported by the National Research Council through the Resident Research Associateship Program. We sincerely thank them. We also thank Stéphane Colombi for the discussions we had on the properties of maximum likelihood estimators.
References
Bartlett, J. G., & Egret, D. 1998, in New Horizons from Multi-Wavelength Sky Surveys, eds. B. J. McLean, D. A. Golombek, J. J. E. Hayes, & H. E. Payne, IAU Symp., 179, 437
Bauer, F. E., Condon, J. J., Thuan, T. X., & Broderick, J. J. 2000, ApJS, 129, 547
Benn, C. R. 1983, The Observatory, 103, 150
Brand, K., Brown, M. J. I., Dey, A., et al. 2006, ApJ, 641, 140
Budavári, T., & Szalay, A. S. 2008, ApJ, 679, 301
Condon, J. J., Balonek, T. J., & Jauncey, D. L. 1975, AJ, 80, 887
Condon, J. J., Anderson, E., & Broderick, J. J. 1995, AJ, 109, 2318
de Ruiter, H. R., Arp, H. C., & Willis, A. G. 1977, A&AS, 28, 211
de Vaucouleurs, G., & Head, C. 1978, ApJS, 36, 439
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, Jr., H. G., et al. 1991, Third Reference Catalogue of Bright Galaxies (New York: Springer)
Fioc, M. 2014, Aspects: code documentation and complements [arXiv:1404.4224]
Fleuren, S., Sutherland, W., Dunne, L., et al. 2012, MNRAS, 423, 2407
Haakonsen, C. B., & Rutledge, R. E. 2009, ApJS, 184, 138
Kendall, M., & Stuart, A. 1979, The Advanced Theory of Statistics. Vol. 2: Inference and Relationship (London: Griffin)
Kent, J. T. 1982, J. Roy. Stat. Soc. Ser. B, Stat. Methodol., 44, 71
Kim, S., Wardlow, J. L., Cooray, A., et al. 2012, ApJ, 756, 28
Kuchinski, L. E., Freedman, W. L., Madore, B. F., et al. 2000, ApJS, 131, 441
McAlpine, K., Smith, D. J. B., Jarvis, M. J., Bonfield, D. G., & Fleuren, S. 2012, MNRAS, 423, 132
Moshir, M., Kopman, G., & Conrow, T. A. O. 1992, IRAS Faint Source Survey, Explanatory Supplement Version 2 (IPAC)
Moshir, M., Copan, G., Conrow, T., et al. 1993, VizieR Online Data Catalog: II/156
Paturel, G., & Petit, C. 1999, A&A, 352, 431
Paturel, G., Bottinelli, L., & Gouguenheim, L. 1995, Astrophys. Lett. Commun., 31, 13
Paturel, G., Petit, C., Prugniel, P., et al. 2003, VizieR Online Data Catalog: VII/237
Pineau, F.-X., Motch, C., Carrera, F., et al. 2011, A&A, 527, A126
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in Fortran. The Art of Scientific Computing (Cambridge: Cambridge University Press)
Prestage, R. M., & Peacock, J. A. 1983, MNRAS, 204, 355
Rohde, D. J., Gallagher, M. R., Drinkwater, M. J., & Pimbblet, K. A. 2006, MNRAS, 369, 2
Roseboom, I. G., Oliver, S., Parkinson, D., & Vaccari, M. 2009, MNRAS, 400, 1062
Rutledge, R. E., Brunner, R. J., Prince, T. A., & Lonsdale, C. 2000, ApJS, 131, 335
Sutherland, W., & Saunders, W. 1992, MNRAS, 259, 413
Vignali, C., Fiore, F., Comastri, A., et al. 2009, in Multiwavelength Astronomy and Virtual Observatory (European Space Agency), eds. D. Baines, & P. Osuna, 53
Wolfram, S. 1996, The Mathematica Book (Cambridge: Cambridge University Press)
Wolstencroft, R. D., Savage, A., Clowes, R. G., et al. 1986, MNRAS, 223, 279
Appendix A: Probability distribution of the observed relative positions of associated sources
Appendix A.1: Properties of normal laws
We first recall a few standard results. The probability that an m-dimensional normally distributed random vector W of mean μ and variance Γ falls in some domain Ω is (A.1), where B := (u_{1},...,u_{m}) is a basis, w is a vector, w_{B} = (w_{1},...,w_{m})^{t} (resp. μ_{B}) is the column vector expression of w (resp. μ) in B, , and Γ_{B} is the covariance matrix of W (i.e., the matrix representation of Γ) in B. We denote this by W ~ G_{m}(μ,Γ).
In another basis , we have w_{B} = T_{B → B′}·w_{B′}, where T_{B → B′} is the transformation matrix from B to B′ (i.e., ). Since d^{m}w_{B} = |det T_{B → B′}| d^{m}w_{B′} and (A.2), we still obtain (A.3), where is the covariance matrix of W in B′. In the following, B and B′ are orthonormal bases, so T_{B → B′} is a rotation matrix. From , we get (A.4).
For independent random vectors W_{1} ~ G_{m}(μ_{1},Γ_{1}) and W_{2} ~ G_{m}(μ_{2},Γ_{2}), we have (A.5)
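Both standard results recalled above are easy to check numerically. The sketch below assumes the usual sign conventions (w_{B} = T·w_{B′}, so the covariance transforms as Γ_{B′} = T^{t}·Γ_{B}·T for a rotation T, which should correspond to Eq. (A.4)); the sum property of Eq. (A.5) is verified on samples.

```python
import numpy as np

# Change of basis: for a rotation T = T_{B->B'}, the covariance matrix
# transforms as Gamma_{B'} = T^t . Gamma_B . T (since w_B = T . w_{B'}).
tau = 0.7
T = np.array([[np.cos(tau), -np.sin(tau)],
              [np.sin(tau),  np.cos(tau)]])
Gamma_B = np.array([[4.0, 1.0],
                    [1.0, 2.0]])
Gamma_Bp = T.T @ Gamma_B @ T        # same eigenvalues, rotated axes

# Sum of independent normal vectors: means and covariances add.
rng = np.random.default_rng(1)
Gamma_2 = np.diag([1.0, 3.0])
w1 = rng.multivariate_normal([0.0, 0.0], Gamma_B, size=200_000)
w2 = rng.multivariate_normal([1.0, -1.0], Gamma_2, size=200_000)
cov_sum = np.cov((w1 + w2).T)       # sample estimate of Gamma_B + Gamma_2
```

The eigenvalues of the covariance matrix are invariant under the basis change, as expected for a rotation, and the sample covariance of w1 + w2 approaches Γ_{1} + Γ_{2}.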
Appendix A.2: Covariance matrix of the probability distribution of relative positions
We now use these results to derive the probability distribution of vector , where r_{i} and are, respectively, the observed positions of source M_{i} of K and of its counterpart in K′. Introducing the true positions r_{0,i} and of M_{i} and , we have (A.6)
Appendix A.2.1: Covariance matrix for identical true positions and known positional uncertainties
Assume^{9}, as is usual, that (A.7). If the true positions of M_{i} and are identical (case of point sources), then, from Eqs. (A.5)–(A.7), (A.8) (see also Condon et al. 1995). In Eqs. (A.7), r_{i} − r_{0,i} and must be considered as the projections (gnomonic ones, for instance) of these vectors on the planes tangent to the sphere at M_{i} and , respectively; Eqs. (A.7) are approximations, valid only because positional uncertainties are small^{10}. Equation (A.8) is also an approximation: it is appropriate because the observed positions of associated sources M_{i} and are close, so the tangent planes to the sphere at both points nearly coincide.
To use Eq. (A.8), we now compute the column vector expression of r_{i,j} and the covariance matrices associated to Γ_{i}, , and Γ_{i,j} in some common basis. For convenience, we drop the subscript and the “prime” symbol in the following whenever an expression only depends on either M_{i} or .
Let (u_{x},u_{y},u_{z}) be a direct orthonormal basis, with u_{z} oriented from the Earth’s center O to the north celestial pole and u_{x} from O to the vernal point. At a point M of right ascension α and declination δ, a direct orthonormal basis (u_{r},u_{α},u_{δ}) is defined by Eqs. (A.9)–(A.11). The uncertainty ellipse on the position of M is characterized by the lengths a and b of its semi-major and semi-minor axes, and by the position angle β between the north and the semi-major axis. Let u_{a} and u_{b} be unit vectors directed along the major and the minor axes, respectively, such that (u_{r},u_{a},u_{b}) is a direct orthonormal basis and that β := ∠(u_{δ},u_{a}) is in [0, π[ when counted eastward. Since (u_{α},u_{δ}) is obtained from (u_{a},u_{b}) by a (β − π/2) counterclockwise rotation in the plane oriented by + u_{r}, we have T_{(ua,ub) → (uα,uδ)} = Rot(β − π/2), where, for any angle τ, (A.12). Using notation (A.13) for diagonal matrices, we have^{11} and (A.14). As noticed by Pineau et al. (2011), around the poles, even for sources M_{i} and close to each other, we may have (u_{α,i},u_{δ,i}) ≉ (u_{α′,j},u_{δ′,j}): the covariance matrices (Γ_{i})_{(uα,i,uδ,i)} and must therefore first be converted to a common basis before their summation in Eq. (A.8). We use the same basis as Pineau et al. (2011), denoted by (t,n) below. While the results we get are intrinsically the same, some readers may find our expressions more convenient.
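The conversion from ellipse parameters (a, b, β) to a covariance matrix can be sketched as follows. We assume here the standard form Γ = R·Diag(a², b²)·R^{t} with R = Rot(β − π/2), which should be consistent with Eq. (A.14) up to basis conventions; the function names are ours.

```python
import numpy as np

def rot(tau):
    """Rotation matrix Rot(tau) of Eq. (A.12)."""
    return np.array([[np.cos(tau), -np.sin(tau)],
                     [np.sin(tau),  np.cos(tau)]])

def covariance_from_ellipse(a, b, beta):
    """Covariance matrix in the (u_alpha, u_delta) basis for an uncertainty
    ellipse of semi-axes a >= b and position angle beta (from north,
    eastward). Assumed standard form Gamma = R . Diag(a^2, b^2) . R^t
    with R = Rot(beta - pi/2); conventions may differ from Eq. (A.14)."""
    R = rot(beta - np.pi / 2.0)
    return R @ np.diag([a**2, b**2]) @ R.T

Gamma = covariance_from_ellipse(3.0, 1.0, np.deg2rad(30.0))
```

Whatever the sign convention, the eigenvalues of the resulting matrix are a² and b², and for β = π/2 (major axis pointing east) the variance along u_α dominates.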
Denote by n := u_{r,i} × u_{r′,j}/∥u_{r,i} × u_{r′,j}∥ a unit vector perpendicular to the plane . Because ψ_{i,j} := ∠(u_{r,i},u_{r′,j}) ∈ [0, π], we have u_{r,i}·u_{r′,j} = cos ψ_{i,j} and ∥u_{r,i} × u_{r′,j}∥ = sin ψ_{i,j}, so (A.15) and (A.16). Let γ_{i} := ∠(n,u_{δ,i}) and be angles oriented clockwise around + u_{r,i} and + u_{r′,j}, respectively. The angle γ_{i} is fully determined by the following expressions (cf. Eqs. (A.16) and (A.9)–(A.11)): . Similarly, (A.19). Let (A.20) (≈ n × u_{r′,j}, since M_{i} and are close): vector t is a unit vector tangent at M_{i} to the minor arc of the great circle going from M_{i} to . Project the sphere on the plane (M_{i},t,n) tangent to the sphere at M_{i} (the specific projection does not matter, since we consider only K′-sources in the neighborhood of M_{i}). We have (A.21), and the basis (t,n) is obtained from (u_{a},u_{b}) by a (β + γ − π/2) counterclockwise rotation around + u_{r}, so,
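The geometric quantities ψ_{i,j}, n, and t can be computed robustly from the unit radial vectors. In this sketch, the closed form used for t is the standard tangent-vector formula along a great circle, assumed equivalent to Eq. (A.20); using arctan2 of the cross- and dot-products keeps the angular distance accurate even for very small separations.

```python
import numpy as np

def radial(ra, dec):
    """Unit vector u_r in the equatorial basis (u_x, u_y, u_z)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def separation_and_normal(u1, u2):
    """Angular distance psi = angle(u1, u2) in [0, pi] and the unit
    normal n = u1 x u2 / ||u1 x u2|| (cf. Eqs. (A.15)-(A.16))."""
    c = np.cross(u1, u2)
    s = np.linalg.norm(c)
    psi = np.arctan2(s, np.dot(u1, u2))   # robust even for small psi
    return psi, c / s

u1 = radial(np.deg2rad(10.0), 0.0)
u2 = radial(np.deg2rad(13.0), 0.0)
psi, n = separation_and_normal(u1, u2)
# Tangent vector at M_i toward M'_j (standard great-circle formula,
# assumed equivalent to Eq. (A.20)): t = (u2 - cos(psi) u1) / sin(psi).
t = (u2 - np.cos(psi) * u1) / np.sin(psi)
```

For two points on the celestial equator, n coincides with u_z and t points east, as expected.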
Appendix A.2.2: Case of unknown positional uncertainties
If the positional uncertainty on M_{i} is unknown, we may model it with (Γ_{i})_{(t,n)} = σ^{2} Diag(1,1), using the same σ for all K-sources, and derive an estimate of by maximizing the likelihood of observing the distribution of K- and K′-sources (see Sects. 3.2 and 4.2). For a galaxy, however, the positional uncertainty on its center is likely to increase with its size. If the position angle θ_{i} (counted eastward from the north) and the major and minor diameters D_{i} and d_{i} of the best-fitting ellipse of some isophote are known for M_{i} (for instance, the parameters PA, D_{25}, and d_{25} := D_{25}/R_{25} taken from the RC3 catalog (de Vaucouleurs et al. 1991) or HyperLeda (Paturel et al. 2003)), we may model the positional uncertainty with (A.24) and derive estimates of and from the likelihood. Such a technique might indeed be used to estimate the accuracy of coordinates in some catalog (see Paturel & Petit 1999 for another method).
If the positional uncertainty on is unknown too, we can also put (A.25), with the same σ′ and ν′ for all K′-sources. As , only estimates of and may be obtained^{12} by maximizing the likelihood, not the values of σ, σ′, ν, or ν′ themselves.
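The footnoted remark of de Vaucouleurs & Head (1978) — that individual uncertainties become recoverable once three samples are available — reduces to simple arithmetic: if the combined variances s_{ij}² = σ_{i}² + σ_{j}² are known for the three pairs, each σ_{i} follows in closed form. A minimal sketch (function name ours):

```python
import numpy as np

def individual_sigmas(s12, s13, s23):
    """Recover the three individual uncertainties from the pairwise
    combined ones, using s_ij^2 = sigma_i^2 + sigma_j^2 (the remark of
    de Vaucouleurs & Head 1978 in closed form; function name ours)."""
    v1 = (s12**2 + s13**2 - s23**2) / 2.0
    v2 = (s12**2 + s23**2 - s13**2) / 2.0
    v3 = (s13**2 + s23**2 - s12**2) / 2.0
    return np.sqrt([v1, v2, v3])

# Example: sigma = (3, 4, 5) gives s12 = 5, s13 = sqrt(34), s23 = sqrt(41).
sigmas = individual_sigmas(5.0, np.sqrt(34.0), np.sqrt(41.0))
```

This is the technique Paturel & Petit (1999) used to compute the accuracy of galaxy coordinates.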
Appendix A.2.3: Possibly different true positions
A similar technique can be applied if the true centers of K-sources and of their counterparts in K′ sometimes differ. This might be useful in particular when associating galaxies from an optical catalog with those from an ultraviolet or far-infrared one because, while the optical emission is dominated by smoothly distributed evolved stellar populations, the ultraviolet and the far-infrared mainly trace star-forming regions. Observations (e.g., Kuchinski et al. 2000) have indeed shown that galaxies are very patchy in the ultraviolet, and the same has been observed in the far-infrared.
Since the angular distance between the true centers should increase with the size of the galaxy, we might model this as (A.26). We then have (A.27).
Once again, if σ, σ′, ν, ν′ and ν_{0} are unknown, only and may be estimated through likelihood maximization.