Appendix B: Statistical tests

In this appendix, we describe in detail the statistical tests used throughout this paper.

B.1 Correlation coefficient

Pearson's correlation coefficient r evaluates the association between two continuous variables x and y (such as the orbital semi-major axis a and the V-R color). It is given by

 \begin{displaymath}r = \frac{ \sum_i (x_i - \bar{x})(y_i - \bar{y})}
{\sqrt{ \sum_i(x_i-\bar{x})^2}\sqrt{ \sum_i(y_i-\bar{y})^2}}
\end{displaymath} (B.1)

where $\bar{x}$ and $\bar{y}$ are the means of x and y. r lies in the range [-1, 1]. Large values (positive or negative) indicate a strong correlation between the two variables, while a value close to 0 indicates that they are uncorrelated. Unfortunately, there is no reliable way to quantify the significance of this correlation for small samples (fewer than 500 elements).
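
As an illustration, a minimal Python sketch of Eq. (B.1) (using numpy; the function name and the synthetic input values are ours for illustration only, not MBOSS measurements):

\begin{verbatim}
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient r (Eq. B.1)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x - x.mean()
    dy = y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

# Illustrative use on synthetic samples (not actual MBOSS data):
rng = np.random.default_rng(0)
a = rng.uniform(30.0, 50.0, size=20)                    # e.g. semi-major axes
vmr = 0.5 + 0.01 * a + rng.normal(0.0, 0.05, size=20)   # e.g. V-R colors
print(pearson_r(a, vmr))
\end{verbatim}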

B.2 Comparing two distributions

The tests described in this section aim to compare two continuous, one-dimensional distributions (such as the V-R colors of two MBOSS families). The three tests estimate the validity of the null hypothesis "the two samples are extracted from the same population''. This is done by computing an estimator (t, f and d respectively, defined below), whose direct interest is limited. From the estimator, a much more interesting quantity is derived: Prob, the probability of obtaining an estimator as large as, or larger than, the measured value by chance, i.e. while the two samples are actually random sub-samples of the same distribution. Large values of Prob indicate that it is quite probable to obtain the measured estimator by chance, or in other words, that we have no reason to claim (on statistical grounds) that the two samples come from different distributions. Remember, however, that this does not allow us to say that the samples are identical, only that they are not statistically incompatible. Conversely, small values of Prob indicate that the chances of obtaining the observed estimator by chance, while extracting the two samples from the same distribution, are small, or in other words, that the two samples are not statistically compatible. The size of the samples is taken into account in the computation of Prob. While it is definitely safer to work on "large'' samples, the advantage of these methods is that they start to give fairly reliable results with fairly small samples; in this study, we set the threshold at $\geq$7 objects. The value of Prob below which one can conclude that the samples are different depends on the certainty level required. Traditional values are 0.05 and 0.003, corresponding to the usual 2 and $3\sigma$ levels. For this study, we start raising warning flags at $Prob \leq 0.1$. Of course, if we raise 10 such flags, we can expect one of them to be a random effect.

The statistical tests, their original references, and the algorithms we used are described in more detail in Press et al. (1992).

  
B.2.1 Student's $\mathsfsl{t}$ test

This test checks whether the means of two distributions are significantly different. The basic implementation of this test assumes that the variances of both distributions are equal. For the MBOSS colors, this cannot be guaranteed (we address that question in the next section). We therefore used a modified version of the t test that deals with unequal variances:

 \begin{displaymath}t = \frac{ \overline{x_A} - \overline{x_B} }
{ ({\rm Var}(x_A)/N_A + {\rm Var}(x_B)/N_B)^{1/2} },
\end{displaymath} (B.2)

where $x_A$ and $x_B$ are the two color distributions considered, $\overline{x} $ and Var(x) their means and variances, and $N_A$, $N_B$ the number of objects in each sample. The statistic t is distributed approximately as the original Student's t, and Prob is given by the Student's distribution probability function A, which is related to the incomplete beta function (see Press et al. 1992, for details). Small values of Prob indicate that the distributions are different.
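
A short sketch of this test in Python, using scipy.stats rather than the Press et al. (1992) routines used for the paper (scipy's ttest_ind with equal_var=False implements the unequal-variance form of Eq. (B.2) and returns the corresponding Prob; the function name below is ours):

\begin{verbatim}
import numpy as np
from scipy import stats

def t_test_unequal_var(x_a, x_b):
    """Unequal-variance (Welch) t test: returns t (Eq. B.2) and Prob."""
    t, prob = stats.ttest_ind(np.asarray(x_a, dtype=float),
                              np.asarray(x_b, dtype=float),
                              equal_var=False)
    return t, prob
\end{verbatim}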

  
B.2.2 $\mathsfsl{f}$ test

The f-test evaluates whether two distributions have significantly different variances. The statistic f is simply the ratio of the larger variance to the smaller one:

 \begin{displaymath}f = \frac{ {\rm Var}(x_A) }{ {\rm Var}(x_B) }\cdot
\end{displaymath} (B.3)

Very large values of f indicate that the difference is significant. Prob, the statistic associated with f, is obtained from the F-distribution probability function, which is related to the incomplete beta function.
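
A sketch of the corresponding computation, again via scipy.stats (the two-sided Prob below follows the convention of the Numerical Recipes ftest routine; the function name is ours):

\begin{verbatim}
import numpy as np
from scipy import stats

def f_test(x_a, x_b):
    """F test on the variance ratio (Eq. B.3): returns f and Prob."""
    x_a = np.asarray(x_a, dtype=float)
    x_b = np.asarray(x_b, dtype=float)
    var_a, var_b = x_a.var(ddof=1), x_b.var(ddof=1)
    # Put the larger variance in the numerator so that f >= 1.
    if var_a >= var_b:
        f, dfn, dfd = var_a / var_b, len(x_a) - 1, len(x_b) - 1
    else:
        f, dfn, dfd = var_b / var_a, len(x_b) - 1, len(x_a) - 1
    # Two-sided probability of a variance ratio this large by chance.
    prob = min(2.0 * stats.f.sf(f, dfn, dfd), 1.0)
    return f, prob
\end{verbatim}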

  
B.2.3 Kollmogorov-Smirnov test

Obviously, not all the information in a distribution is contained in its first two moments (mean and variance). A more complete comparison of the color distributions is therefore of interest. The ideal statistical tool for this purpose is the Kolmogorov-Smirnov (KS) test. The distributions are compared through their Cumulative Probability Function (CPF) S(x), defined as the fraction of the sample whose value is smaller than or equal to x. S(x) starts at 0 and increases until it reaches 1 at the x corresponding to the largest element of the distribution. The KS statistic d is the maximum (vertical) distance between the CPFs S1 and S2 of the samples being compared, i.e.

 \begin{displaymath}d = {\max_{\tiny -\infty < x < \infty}} \vert S_1(x) - S_2(x)\vert.
\end{displaymath} (B.4)

The distribution of the statistic d can be calculated: the probability of getting a d larger than the observed one when the two data sets are drawn from the same distribution is given by
 
 \begin{displaymath}{Prob}(d > {\rm observed}) = Q_{{\rm KS}}\left( \left(\sqrt{N_{\rm e}} + 0.12 + 0.11/\sqrt{N_{\rm e}}\right)\; d \right),
\end{displaymath} (B.5)
where $N_{\rm e}$ is the effective number of data points,

 \begin{displaymath}N_{\rm e} = \frac{N_1 N_2}{N_1+N_2},
\end{displaymath} (B.6)

and the function $Q_{{\rm KS}}$ is defined as

 \begin{displaymath}Q_{{\rm KS}}(\lambda) = 2 \sum_{j=1}^{\infty}(-1)^{j-1} e^{-2j^2\lambda^2}.
\end{displaymath} (B.7)
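
The following sketch implements Eqs. (B.4)-(B.7) directly in Python (numpy only, with the infinite sum of Eq. (B.7) truncated at 100 terms; the function names are ours). In practice, scipy.stats.ks_2samp provides an equivalent two-sample test:

\begin{verbatim}
import numpy as np

def q_ks(lam, terms=100):
    """KS probability function Q_KS (Eq. B.7), truncated sum."""
    j = np.arange(1, terms + 1)
    return 2.0 * np.sum((-1.0) ** (j - 1) * np.exp(-2.0 * j ** 2 * lam ** 2))

def ks_test(x1, x2):
    """Two-sample KS test: returns d (Eq. B.4) and Prob (Eqs. B.5, B.6)."""
    x1 = np.sort(np.asarray(x1, dtype=float))
    x2 = np.sort(np.asarray(x2, dtype=float))
    n1, n2 = len(x1), len(x2)
    pooled = np.concatenate([x1, x2])
    # Cumulative probability functions S1, S2 evaluated on the pooled sample.
    s1 = np.searchsorted(x1, pooled, side='right') / n1
    s2 = np.searchsorted(x2, pooled, side='right') / n2
    d = np.max(np.abs(s1 - s2))                                   # Eq. (B.4)
    n_e = n1 * n2 / (n1 + n2)                                     # Eq. (B.6)
    prob = q_ks((np.sqrt(n_e) + 0.12 + 0.11 / np.sqrt(n_e)) * d)  # Eq. (B.5)
    return d, float(np.clip(prob, 0.0, 1.0))
\end{verbatim}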

