A&A
Volume 519, September 2010
Article Number A85
Number of page(s) 12
Section Astronomical instrumentation
DOI https://doi.org/10.1051/0004-6361/201014079
Published online 16 September 2010

Unevenly-sampled signals: a general formalism for the Lomb-Scargle periodogram

R. Vio1 - P. Andreani2,3 - A. Biggs4

1 - Chip Computers Consulting srl, Viale Don L. Sturzo 82, S. Liberale di Marcon, 30020 Venice, Italy
2 - ESO, Karl Schwarzschild strasse 2, 85748 Garching, Germany
3 - INAF - Osservatorio Astronomico di Trieste, via Tiepolo 11, 34143 Trieste, Italy
4 - ESO, Karl Schwarzschild strasse 2, 85748 Garching, Germany

Received 15 January 2010 / Accepted 12 May 2010

Abstract
The periodogram is a popular tool that tests whether a signal consists only of noise or if it also includes other components. The main issue of this method is to define a critical detection threshold that allows identification of a component other than noise, when a peak in the periodogram exceeds it. In the case of signals sampled on a regular time grid, determination of such a threshold is relatively simple. When the sampling is uneven, however, things are more complicated. The most popular solution in this case is to use the Lomb-Scargle periodogram, but this method can be used only when the noise is the realization of a zero-mean, white (i.e. flat-spectrum) random process. In this paper, we present a general formalism based on matrix algebra, which permits analysis of the statistical properties of a periodogram independently of the characteristics of noise (e.g. colored and/or non-stationary), as well as the characteristics of sampling.

Key words: methods: data analysis - methods: statistical

1 Introduction

Spectral analysis is a popular tool for testing whether a given experimental time series $\{ x(t_0), x(t_1), \ldots, x(t_{M-1}) \}$ contains only noise, i.e. $x(t_j) = n(t_j)$, or whether some other component $s(t)$ is present, i.e. $x(t) = s(t) + n(t)$. The classic approach is to fit the time series with the model function

\begin{displaymath}%
x(t_j) = \sum_{k=0}^{N-1} \left[ a_k \cos{(2 \pi f_k t_j)} + b_k \sin{(2 \pi f_k t_j)} \right] + n(t_j),
\end{displaymath} (1)

$j=0, 1, \ldots, M-1$. If, for example, a periodic component $s(t)$ is present with a frequency $f_l$ close to one in the set $\{ f_k \}$, then the periodogram $\{ \widehat p_k \}$,

\begin{displaymath}%
\widehat p_k = a_k^2 + b_k^2, \qquad k=0, 1, \ldots, N-1,
\end{displaymath} (2)

will show a prominent peak close to $k=l$. If s(t) is semi-periodic or even non-periodic, the situation is more complicated since more peaks are expected. The main problem with this technique is the definition of a detection threshold that fixes the contribution of the noise in such a way that, when a peak exceeds it, the presence of a component s(t) can be claimed. In the case of signals sampled on a regular time grid, the determination of such a threshold is a relatively simple procedure, but this is not the case when the condition of regularity does not apply. In this respect, several solutions have been proposed (see Lomb 1976; Ferraz-Mello 1981; Scargle 1982; Gilliland & Baliunas 1987; Reegen 2007; Zechmeister & Kürster 2009, and references therein) that, however, work only under rather restrictive conditions (e.g. white and/or stationary noise) and are difficult to extend to more general situations.

In this paper, a general formalism is presented that allows analysis of the statistical properties of periodograms independently of the specific characteristics of the noise and of the sampling of the signal. In Sect. 2 the formalism is presented for the case of even sampling, and its extension to arbitrary sampling is given in Sect. 3. The usefulness of the proposed formalism is illustrated in Sect. 4, where the cases of white noise with a nonzero mean and of colored noise are worked out. In Sect. 5 the relationship between the periodogram and the least-squares method is considered. Finally, on the basis of simulated signals and a real time series, we discuss in Sect. 6 whether the use of algorithms specifically developed for computing the periodogram of unevenly-sampled time series is really advantageous.

2 Periodogram analysis in the case of even sampling

If a continuous signal x(t) is sampled at a set of N equispaced time instants $t_0, t_1, \ldots, t_{N-1}$, a time series $x_j$, $j=0, 1, \ldots, N-1$, is obtained. Its discrete Fourier transform (DFT) is given by

\begin{displaymath}%
\widehat x_k = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} x_j {\rm e}^{ - {\rm i} 2 \pi k j / N}, \qquad k=0, 1, \ldots, N-1,
\end{displaymath} (3)

with ${\rm i} = \sqrt{-1}$. The sequence $\{ x_j \}$ can be recovered from $\{ \widehat x_k \}$ by means of

\begin{displaymath}%
x_j = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} \widehat x_k {\rm e}^{ {\rm i} 2 \pi k j/ N}, \qquad j=0, 1, \ldots, N-1.
\end{displaymath} (4)

The set $\{ k/N \}_{k=0}^{N-1}$ provides the so-called Fourier frequencies. Implicit in the use of DFT is the assumption that $\{ x_j \}$ is a periodic sequence with period $N \Delta t$ where $\Delta t = t_{j+1} - t_j$.
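These relations are easy to check numerically. The following minimal NumPy sketch (ours, not from the paper) builds the Fourier matrix of Eq. (7), applies the forward and inverse transforms of Eqs. (5) and (6), and verifies the unitarity property of Eq. (8):

```python
import numpy as np

def fourier_matrix(N):
    """Unitary Fourier matrix of Eq. (7): F_kj = exp(-i 2 pi k j / N) / sqrt(N)."""
    k = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

N = 8
F = fourier_matrix(N)
x = np.random.default_rng(0).normal(size=N)

x_hat = F @ x                 # forward transform, Eq. (5)
x_rec = F.conj().T @ x_hat    # inverse transform, Eq. (6)

assert np.allclose(x_rec.real, x)               # the sequence is recovered
assert np.allclose(F @ F.conj().T, np.eye(N))   # unitarity, Eq. (8)
assert np.allclose((F @ F).imag, 0)             # H = FF is a real matrix, Eq. (9)
```

The last assertion illustrates the property of Eq. (9): the product ${\bf F}{\bf F}$ is the real matrix ${\bf H}$.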

In matrix notation, Eqs. (3) and (4) can be written in the form

\begin{displaymath}%
{\vec{\widehat x}}= {\bf F}{\vec x}
\end{displaymath} (5)

and

\begin{displaymath}%
{\vec x}= {\bf F}^* {\vec{\widehat x}}.
\end{displaymath} (6)

Here, ${\vec x}$ and ${\vec{\widehat x}}$ are column arrays that contain, respectively, the sequences $\{ x_j \}$ and $\{ \widehat x_k \}$, and ${\bf F}$ is the so-called ``Fourier matrix'', an N $\times$ N square, symmetric matrix whose (k,j)-entry is given by

\begin{displaymath}%
F_{k j} = \frac{1}{\sqrt{N}} {\rm e}^{ - {\rm i} 2 \pi k j /N}.
\end{displaymath} (7)

The superscript ``*'' denotes the complex conjugate transpose. Matrix ${\bf F}$ is unitary, i.e.,

\begin{displaymath}%
{\bf F}{\bf F}^* = {\bf F}^* {\bf F}= {\bf I},
\end{displaymath} (8)

with ${\bf I}$ the identity matrix. Another useful property is that

\begin{displaymath}%
{\bf F}{\bf F}= {\bf F}^{\rm T} {\bf F}= {\bf F}{\bf F}^{\rm T} = {\bf F}^{\rm T} {\bf F}^{\rm T} = {\bf H}
\end{displaymath} (9)

with

\begin{displaymath}%
{\bf H}=
\left(
\begin{array}{ccccc}
1 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 1 & \cdots & 0 & 0 \\
\end{array}\right).
\end{displaymath} (10)

By means of Eq. (9) it can be shown that

\begin{displaymath}%
{\bf F}_{\Re} {\bf F}_{\mathcal{I}} = {\bf F}^{\rm T}_{\Re} {\bf F}_{\mathcal{I}} = {\bf F}_{\Re} {\bf F}^{\rm T}_{\mathcal{I}} = {\bf F}^{\rm T}_{\Re} {\bf F}^{\rm T}_{\mathcal{I}} = {\bf0},
\end{displaymath} (11)

where ${\bf F}_{\Re} \equiv \Re[{\bf F}]$ and ${\bf F}_{\mathcal{I}} \equiv \mathcal{I}[{\bf F}]$ and where $\Re[.]$ and $\mathcal{I}[.]$ are the real and the imaginary parts of a complex quantity. Indeed,

\begin{displaymath}%
{\bf F}{\bf F}= {\bf F}_{\Re} {\bf F}_{\Re} - {\bf F}_{\mathcal{I}} {\bf F}_{\mathcal{I}} + {\rm i}\, 2\, {\bf F}_{\Re} {\bf F}_{\mathcal{I}} = {\bf H},
\end{displaymath} (12)

and ${\bf H}$ is a real matrix, hence Eq. (11) has to hold. From Eq. (3) it is also possible to see that $\widehat x_0$ is a real quantity. In the case that N is an even number, the same holds for $\widehat x_{N_{\dag }}$, where $N_{\dag } = \lceil N/2 \rceil$ and $\lceil z \rceil$ represents the smallest integer greater than or equal to z. Finally, when dealing with ${\vec{\widehat x}}$, only half of this array needs to be considered, since $\widehat x_{N-k} = \widehat x^*_k$, $k=1, 2, \ldots, N_{\dag }-\delta_{2 N_{\dag }}^N$, where $ \delta_i^j = 1$ if i=j and zero otherwise. With this notation, the periodogram of ${\vec x}$ is defined as

\begin{displaymath}%
\widehat p_k = 2 \left(\Re[\widehat{\underline{x}}_k]^2 + \mathcal{I}[\widehat{\underline{x}}_k]^2\right) = 2 \vert \widehat{\underline{x}}_k \vert^2, \qquad k=0,1,\ldots, N_{\dag }-1,
\end{displaymath} (13)

where $\widehat{\underline{x}}_k$ is the kth entry of the array $\underline{\vec{\widehat{{x}}}}^{\rm T} = [\widehat x_0, \widehat x_1, \ldots, \widehat x_{N_{\dag }-1}]$ and $\vert . \vert$ denotes the Euclidean norm. By means of

\begin{displaymath}%
{\vec{\widehat z}}= \left(
\begin{array}{c}
\underline{\vec{\widehat{{x}}}}_{\Re} \\
\underline{\vec{\widehat{{x}}}}_{\mathcal{I}}
\end{array}\right),
\end{displaymath} (14)

a column array obtained by the column concatenation of $\underline{\vec{\widehat{{x}}}}_{\Re} \equiv \Re[\underline{\vec{\widehat{{x}}}}]$ and $\underline{\vec{\widehat{{x}}}}_{\mathcal{I}} \equiv \mathcal{I}[\underline{\vec{\widehat{{x}}}}]$, Eq. (13) can be rewritten in the form

\begin{displaymath}%
\widehat p_k = 2 \left(\widehat z_k^2 + \widehat z_{N_{\dag }+k}^2\right), \qquad k=0,1,\ldots, N_{\dag }-1.
\end{displaymath} (15)

In Sect. 5 it is shown that this periodogram is equivalent to the one obtainable by means of the least-squares fit of model (1) with $M=N$, $t_j = j$ and $f_k = k/N$.
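As an illustrative sketch (ours, not from the paper), the periodogram of Eqs. (13)-(15) can be computed for an evenly sampled series in a few lines of NumPy; a sinusoid injected at the Fourier frequency k=10 produces the expected prominent peak:

```python
import numpy as np

def periodogram_even(x):
    """Periodogram of Eqs. (13)-(15) for an evenly sampled series x."""
    N = len(x)
    n_dag = int(np.ceil(N / 2))
    x_hat = np.fft.fft(x) / np.sqrt(N)   # DFT with the 1/sqrt(N) norm of Eq. (3)
    x_half = x_hat[:n_dag]               # non-redundant half of the spectrum
    z = np.concatenate([x_half.real, x_half.imag])   # Eq. (14)
    return 2 * (z[:n_dag] ** 2 + z[n_dag:] ** 2)     # Eq. (15)

rng = np.random.default_rng(1)
N = 128
t = np.arange(N)
x = np.sin(2 * np.pi * 10 * t / N) + rng.normal(size=N)  # sinusoid at k = 10
p = periodogram_even(x)
assert p.argmax() == 10   # the peak falls at the injected frequency
```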

An important point to stress is that, if ${\vec x}$ is the realization of a (not necessarily Gaussian) random process, then each $\widehat{\underline{x}}_k$ is given by the sum of N random variables. This is because of the linearity of the Fourier operator ${\bf F}$. Thanks to the central limit theorem, therefore, the entries of ${\vec{\widehat x}}$ can be expected to be Gaussian random quantities. As a consequence, the entries of  ${\vec{\widehat z}}$ can also be expected to be Gaussian random quantities with covariance matrix $\Cb_{{\vec{\widehat z}}} = {\rm E}[{\vec{\widehat z}}{\vec{\widehat z}}^{\rm T}]$ given by

\begin{displaymath}%
\Cb_{{\vec{\widehat z}}} = \left(
\begin{array}{c}
{\underline{\vec{F}}}_{\Re} \\
{\underline{\vec{F}}}_{\mathcal{I}}
\end{array}\right)
\Cb_{{\vec x}}
\left(
\begin{array}{cc}
{\underline{\vec{F}}}^{\rm T}_{\Re} & {\underline{\vec{F}}}^{\rm T}_{\mathcal{I}}
\end{array}\right)\cdot
\end{displaymath} (16)

Here, ${\rm E}[.]$ denotes the expectation operator, $\Cb_{{\vec x}} = {\rm E}[{\vec x}{\vec x}^{\rm T}]$ is the covariance matrix of ${\vec x}$, and ${\underline{\vec{F}}}$ is the matrix formed by the first $N_{\dag }$ rows of the Fourier matrix ${\bf F}$. From Eqs. (16) and (11), it is easy to deduce that, if ${\vec x}$ is the realization of a standard white-noise process, i.e. $\Cb_{{\vec x}} = {\bf I}$, then $\Cb_{{\vec{\widehat z}}}$ is a diagonal matrix with $(\Cb_{{\vec{\widehat z}}})_{11}=1$, $(\Cb_{{\vec{\widehat z}}})_{N_{\dag }+1, N_{\dag }+1}=0$ and all the remaining diagonal entries equal to 0.5. In other words, the entries of ${\vec{\widehat z}}$ are mutually uncorrelated. In turn, this means that $\Re[\widehat{\underline{x}}_k]$ is uncorrelated with $\mathcal{I}[\widehat{\underline{x}}_k]$. If ${\vec x}$ is a colored (not necessarily stationary) noise process, i.e. $\Cb_{{\vec x}}$ is not a diagonal matrix, then $\Cb_{{\vec{\widehat z}}}$ is not diagonal either. However, from ${\vec x}$ it is possible to obtain an array ${\vec y}$ containing mutually uncorrelated entries by means of the transformation

\begin{displaymath}%
{\vec y}= \Cb_{{\vec x}}^{-1/2} {\vec x}.
\end{displaymath} (17)

The matrix $\Cb_{{\vec x}}^{-1/2}$ can be computed via

\begin{displaymath}%
\Cb_{{\vec x}}^{-1/2} = {\bf U}^{\rm T} {\bf\Sigma}^{-1/2} {\bf U}
\end{displaymath} (18)

with ${\bf U}$ the orthogonal matrix whose columns contain the eigenvectors of $\Cb_{{\vec x}}$ and ${\bf\Sigma}$ a diagonal matrix containing the corresponding eigenvalues $\{ \lambda_l \}_{l=0}^{N - 1}$. This decomposition is particularly simple and computationally efficient if $\Cb_{{\vec x}}$ is a circulant matrix, since it can be diagonalized according to

\begin{displaymath}%
\Cb_{{\vec x}} = {\bf F}^* {\rm diag}[{\bf F}{\vec c}] {\bf F},
\end{displaymath} (19)

where `` ${\rm diag}[{\vec q}]$'' denotes a diagonal matrix whose diagonal entries are given by the array ${\vec q}$ and ${\vec c}$ is the first column of $\Cb_{{\vec x}}$. Because of this, the decomposition (18) can be directly computed with ${\bf U}= {\bf F}$ and ${\bf\Sigma}= {\rm diag}[{\bf F}{\vec c}]$; hence, ${\vec y}= {\bf F}^* {\bf\Sigma}^{-1/2} {\bf F}{\vec x}$. This means that the whitening operation can be performed in the harmonic domain through the following procedure: a) computation of ${\vec{\widehat x}}$, i.e. the DFT of ${\vec x}$; b) computation of $\{ \widehat y_k \} = \{\widehat x_k / \lambda^{1/2}_k \}$; and c) the inverse DFT of ${\vec{\widehat y}}$. A potential difficulty in using transformation (17) is that $\Cb_{{\vec x}}$ is required to be of full rank. This condition can be expected to be satisfied in most practical applications. Some problems can arise if the time step of the sampling is much shorter than the decorrelation time of ${\vec n}$. Indeed, some columns (and therefore some rows) of ${\bf C}$ could be almost identical, i.e. the matrix could become numerically ill-conditioned. In this case, the most natural solution consists of averaging the data that are close in time.
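The harmonic-domain procedure a)-c) can be sketched as follows. The circulant covariance below, built from an exponentially decaying autocovariance, is an illustrative choice of ours, not taken from the paper; its eigenvalues are the (unnormalized) DFT of its first column:

```python
import numpy as np

N = 64
j = np.arange(N)
c = 0.8 ** np.minimum(j, N - j)   # first column of a circulant covariance C_x
lam = np.fft.fft(c).real          # its eigenvalues: the DFT of the first column

x = np.random.default_rng(2).normal(size=N)

# steps a)-c): DFT of x, division by sqrt(lambda_k), inverse DFT
y = np.fft.ifft(np.fft.fft(x) / np.sqrt(lam)).real

# check against the explicit C_x^{-1/2} of Eq. (18) with U = F, Sigma = diag(lam)
C = np.array([[c[(i - k) % N] for k in range(N)] for i in range(N)])
F = np.exp(-2j * np.pi * np.outer(j, j) / N) / np.sqrt(N)
W = (F.conj().T @ np.diag(1 / np.sqrt(lam)) @ F).real
assert np.allclose(y, W @ x)
assert np.allclose(W @ C @ W, np.eye(N))   # the whitened data have unit covariance
```

The last assertion confirms that applying $\Cb_{{\vec x}}^{-1/2}$ on both sides of $\Cb_{{\vec x}}$ yields the identity, i.e. that ${\vec y}$ is indeed white with unit variance.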

When the periodogram is used to test ${\vec x}= {\vec n}$ vs. ${\vec x}= {\vec s}+ {\vec n}$, a threshold $L_{\widehat p_k}$ has to be defined such that, with a prefixed probability, a peak in $\widehat p_k$ that exceeds $L_{\widehat p_k}$ can be expected not to arise because of the noise. This requires knowledge of the statistical properties of ${\vec{\widehat p}}$ under the hypothesis that ${\vec x}= {\vec n}$. The simplest situation is when ${\vec n}$ is the realization of a standard white-noise process. In fact, since the entries of ${\vec{\widehat z}}$ are uncorrelated Gaussian random quantities, from Eq. (15) it can be derived that the entries of ${\vec{\widehat p}}$ are (asymptotically) independent quantities distributed according to a $\chi^2_2$ distribution. As a consequence, independently of the frequency k, a threshold $L_{{\rm Fa}}$ can be determined that corresponds to the level that a peak due to the noise would exceed with a prefixed probability $\alpha$ when a number $N_f$ of (statistically independent) frequencies are inspected. More specifically, $L_{{\rm Fa}}$ is the smallest value for which $1-[1-\exp{(-L_{\widehat p_k})}]^{N_f} \le \alpha$ (Scargle 1982); in formula,

\begin{displaymath}%
L_{{\rm Fa}} = \inf_{L_{\widehat p_k}} \left\{ 1-[1-\exp{(-L_{\widehat p_k})}]^{N_f} \le \alpha \right\} = -\ln\left[ 1 - (1-\alpha)^{1/N_f} \right].
\end{displaymath} (20)

If the entire periodogram is inspected, then $N_f = N_\dag $. Commonly, $L_{{\rm Fa}}$ is called the level of false alarm.
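In code, the level of false alarm of Eq. (20) has a simple closed form, obtained by solving $1-[1-\exp{(-L)}]^{N_f} = \alpha$ for L (a direct transcription; the caveat is that $N_f$ must count statistically independent frequencies):

```python
import numpy as np

def false_alarm_level(alpha, n_f):
    """Level of false alarm of Eq. (20) for n_f independent frequencies."""
    return -np.log(1.0 - (1.0 - alpha) ** (1.0 / n_f))

# with a single frequency the threshold reduces to the single-bin tail -ln(alpha)
assert np.isclose(false_alarm_level(0.01, 1), -np.log(0.01))
# inspecting more frequencies raises the threshold
assert false_alarm_level(0.01, 60) > false_alarm_level(0.01, 1)
```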

Threshold (20) is not applicable when the noise is colored. However, the requirement to fix a different level for each frequency can be avoided if the original signal ${\vec x}$ is transformed into

\begin{displaymath}%
{\vec y}= {\bf C}^{-1/2}_{{\vec n}} {\vec x}.
\end{displaymath} (21)

Indeed, under the hypothesis that ${\vec x}= {\vec n}$, the entries of ${\vec y}$ are uncorrelated and unit-variance random quantities. As seen above, this operation should not be difficult. Some problems emerge if the decomposition (18) is carried out by means of the efficient DFT approach described above (a necessary approach in the case of very long sequences of data). This is because, in forming $\Cb_{{\vec n}}$, it is necessary to take into account the periodicity of the sequence ${\vec x}$ forced by the DFT. In other words, it is necessary to impose a spurious correlation between the first and the last entries of ${\vec n}$. For instance, for a stationary noise process, $\Cb_{{\vec n}}$ is a Toeplitz matrix, but it has to be approximated with a circulant one. When dealing with sampled signals, this is an unavoidable problem that no technique can completely solve. A classical solution for relieving this situation is the windowing method, i.e. the substitution of $x_j$ in Eq. (3) with $\eta_j x_j$, where $\{ \eta_j \}$ is some prefixed discrete function (window) that makes the signal gently reduce to zero at the extremes of the sampling interval (e.g. see Oppenheim & Schafer 1989). This method can be expected to work satisfactorily only when the decorrelation time of the noise is shorter than the length of the signal.
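The windowing step amounts to a single elementwise multiplication before the DFT. Here a Hann window, one common choice among many (our choice for illustration, not prescribed by the paper), tapers a white-noise sequence:

```python
import numpy as np

N = 256
x = np.random.default_rng(3).normal(size=N)

eta = np.hanning(N)          # Hann window: zero at both ends of the record
x_win = eta * x              # substitute eta_j * x_j into Eq. (3)
x_hat = np.fft.fft(x_win) / np.sqrt(N)

assert eta[0] == 0.0 and eta[-1] == 0.0   # the signal gently reduces to zero
```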

3 Periodogram analysis in the case of uneven sampling

If a signal x(t) is sampled at an uneven set of time instants, some problems emerge: it is no longer possible to define a set of ``natural'' frequencies such as those provided by the Fourier transform. In turn, this implies some ambiguities in the definition of the Nyquist frequency that, loosely speaking, corresponds to the highest frequency that contains information on the signal of interest (e.g. see Vio et al. 2000; Koen 2006). As a consequence, Eq. (3) has to be modified. In the following, with no loss of generality, it is assumed that ${\vec x}(t)$ is sampled at M arbitrary time instants $t_0, t_1, \ldots, t_{M-1}$ with $t_0 = 0$, $t_{M-1} = M-1$ and the remaining $t_j$ arbitrarily distributed within this interval. Moreover, a set of N frequencies $k =0, 1, \ldots, N-1$ is considered with $N \gtreqless M$. Such a set corresponds to the frequencies that are typically inspected when looking for a periodicity; however, others can be chosen. Under these conditions, a transformation corresponding to the one given by Eq. (5) is

\begin{displaymath}%
{\vec{\widehat x}}= {\vec{\mathcal{F}}}{\vec x},
\end{displaymath} (22)

where

\begin{displaymath}%
\mathcal{F}_{k j} = \frac{1}{\sqrt{M}} {\rm e}^{ - {\rm i} 2 \pi k \tilde{t}_j / N},
\end{displaymath} (23)

$\tilde{t}_j = t_j/ \Delta_m t$, $\Delta_m t = \gamma \min{ [ \{t_{j+1} - t_j\}]}$ and $\gamma$ is a real positive number. Because of the normalization by  $\Delta_m t$, time  $\tilde{t}_j$ is expressed in units of the shortest sampling time interval. Apart from the substitution of the Fourier matrix ${\bf F}$ with ${\vec{\mathcal{F}}}$, the uneven sampling of signals does not modify the formalism introduced in the previous section. In particular, the covariance matrix  $\Cb_{{\vec{\widehat z}}}$ defined in Eq. (16) becomes
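The matrix of Eq. (23) is straightforward to construct for arbitrary sampling instants; the sketch below is our own transcription of the definitions above:

```python
import numpy as np

def uneven_fourier_matrix(t, N, gamma=1.0):
    """N x M matrix of Eq. (23) for sorted sampling instants t and N trial frequencies."""
    t = np.asarray(t, dtype=float)
    M = len(t)
    dt_m = gamma * np.diff(t).min()   # shortest sampling interval, Delta_m t
    t_tilde = t / dt_m                # times in units of that interval
    k = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(k, t_tilde) / N) / np.sqrt(M)

# 40 samples on a regular grid with a large central gap, 80 trial frequencies
t = np.delete(np.arange(60.0), slice(20, 40))
Fc = uneven_fourier_matrix(t, N=80)
assert Fc.shape == (80, 40)
```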

\begin{displaymath}%
\Cb_{{\vec{\widehat z}}} = \left(
\begin{array}{c}
\underline{\vec{{\mathcal{F}}}}_{\Re} \\
\underline{\vec{{\mathcal{F}}}}_{\mathcal{I}}
\end{array}\right)
\Cb_{{\vec x}}
\left(
\begin{array}{cc}
\underline{\vec{{\mathcal{F}}}}^{\rm T}_{\Re} & \underline{\vec{{\mathcal{F}}}}^{\rm T}_{\mathcal{I}}
\end{array}\right)\cdot
\end{displaymath} (24)

The $N \times M$ matrix ${\vec{\mathcal{F}}}$ does not have the properties (8)-(12). As a consequence, even in the case that ${\vec x}$ is the realization of a standard white-noise process, i.e. $\Cb_{{\vec x}} = {\bf I}$, the matrix $\Cb_{{\vec{\widehat z}}}$,

\begin{displaymath}%
\Cb_{{\vec{\widehat z}}} = \left(
\begin{array}{c}
\underline{\vec{{\mathcal{F}}}}_{\Re} \\
\underline{\vec{{\mathcal{F}}}}_{\mathcal{I}}
\end{array}\right)
\left(
\begin{array}{cc}
\underline{\vec{{\mathcal{F}}}}^{\rm T}_{\Re} & \underline{\vec{{\mathcal{F}}}}^{\rm T}_{\mathcal{I}}
\end{array}\right),
\end{displaymath} (25)

is not diagonal. In general, $\Cb_{{\vec{\widehat z}}}$ is not even of full rank. For example, this happens when the periodogram of a time series containing M data is computed on N frequencies with N > M (a typical situation in practical applications). This implies that the entries of ${\vec{\widehat z}}$ cannot be made mutually uncorrelated. Obviously, the same holds for the entries of ${\vec{\widehat p}}$ as given by Eq. (15). As a consequence, although a number N of frequencies are considered in ${\vec{\widehat p}}$, at most only M/2 of them are statistically independent. Particularly troublesome is that, for a given frequency k, $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$ (i.e. $\widehat z_k$ and $\widehat z_{N_{\dag }+k}$) are also mutually correlated. This makes it difficult to fix the statistical characteristics of $\widehat p_k$. In this respect, two choices are possible. The first consists in determining, for each frequency, the PDF of $\widehat p_k$. This is a rather involved approach, because $\widehat z_k$ and $\widehat z_{N_{\dag }+k}$ have variances $(C_{{\vec{\widehat z}}})_{kk}$ and $(C_{{\vec{\widehat z}}})_{N_{\dag } + k, N_{\dag } + k}$, respectively, and covariance $(C_{{\vec{\widehat z}}})_{k, N_{\dag } + k}$. Therefore, once $\widehat z_k$ and $\widehat z_{N_{\dag }+k}$ are normalized to unit variance, each $\widehat p_k$ is given by the sum of two correlated $\chi_1^2$ random quantities. Although available in analytical form (Simon 2006), the resulting PDF is rather complex and hence difficult to handle (for an alternative approach, see Reegen 2007). Moreover, there is the additional problem that $L_{\widehat p_k}$ changes with k. A simpler alternative is the use of two uncorrelated and unit-variance random quantities, $\widehat{\upsilon}_k$ and $\widehat{\upsilon}_{N_{\dag } + k}$, obtained through the transformation

\begin{displaymath}%
{\vec{\widehat{\upsilon}}}= {\bf\Sigma}^{-1/2}_\star {\bf U}_\star {\vec{\widehat z}}_\star.
\end{displaymath} (26)

Here, ${\vec{\widehat{\upsilon}}}^{\rm T} = [\widehat{\upsilon}_k, \widehat{\upsilon}_{N_{\dag } + k}]$, ${\vec{\widehat z}}_\star^{\rm T} = [\widehat z_k, \widehat z_{N_{\dag } + k}]$, and ${\bf\Sigma}_{\star}^{-1/2}$ is a diagonal matrix whose entries are given by the reciprocal of the square root of the non-zero eigenvalues of the covariance matrix

\begin{displaymath}%
\Cb_\star = \left(
\begin{array}{cc}
(C_{{\vec{\widehat z}}})_{k, k} & (C_{{\vec{\widehat z}}})_{k, N_{\dag } + k} \\
(C_{{\vec{\widehat z}}})_{N_{\dag } + k, k} & (C_{{\vec{\widehat z}}})_{N_{\dag } + k, N_{\dag } + k}
\end{array}\right),
\end{displaymath} (27)

zero otherwise, and ${\bf U}_\star$ is an orthogonal matrix that contains the corresponding eigenvectors. Indeed, if the periodogram is defined as

\begin{displaymath}%
\widehat p_k = \widehat{\upsilon}_k^2 + \widehat{\upsilon}^2_{N_{\dag } + k}, \qquad k=0,1, \ldots, N_{\dag } -1,
\end{displaymath} (28)

then each $\widehat p_k$ is given by the sum of the squares of two independent, zero-mean, unit-variance Gaussian random quantities. As a consequence, the corresponding PDF is, independently of k, a $\chi^2_2$ whose complementary cumulative distribution function is the exponential function. This permits determining the statistical significance of $\widehat p_k$ for a specified frequency k. Things become more complex if $N_f$ frequencies are inspected when looking for a peak. Indeed, even after the operation (26), it happens that ${\rm E}[\widehat p_k \widehat p_l] \neq {\rm E}[\widehat p_k] {\rm E}[\widehat p_l]$ for $k \neq l$, i.e., the frequencies of the periodogram remain mutually correlated. This is an unavoidable problem. Because of it, $N_f$ does not correspond to the number of independent frequencies, so the level of false alarm (20) cannot be computed exactly. However, since $L_{\widehat p_k}$ is the same for all the frequencies, an upper limit can be fixed for $L_{{\rm Fa}}$ by setting $N_f = \lceil M/2 \rceil$. The periodogram obtained by means of Eq. (28) corresponds to the original Lomb-Scargle periodogram.
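The whole construction of Eqs. (22)-(28) fits in a short routine. The sketch below is ours, for the standard white-noise case $\Cb_{{\vec x}} = {\bf I}$ in Eq. (24); the eigenvalue guard (needed e.g. at k=0, where the imaginary part vanishes and $\Cb_\star$ is rank deficient) is an implementation detail:

```python
import numpy as np

def ls_periodogram(x, t, N, gamma=1.0):
    """Periodogram of Eqs. (22)-(28), assuming standard white noise (C_x = I)."""
    t = np.asarray(t, dtype=float)
    M = len(t)
    dt_m = gamma * np.diff(t).min()
    k = np.arange(N)
    Fc = np.exp(-2j * np.pi * np.outer(k, t / dt_m) / N) / np.sqrt(M)  # Eq. (23)
    n_dag = int(np.ceil(N / 2))
    FR, FI = Fc[:n_dag].real, Fc[:n_dag].imag
    x_hat = Fc[:n_dag] @ x                                             # Eq. (22)
    z = np.concatenate([x_hat.real, x_hat.imag])
    CRR, CII, CRI = FR @ FR.T, FI @ FI.T, FR @ FI.T    # blocks of Eq. (25)
    p = np.empty(n_dag)
    for i in range(n_dag):
        C_star = np.array([[CRR[i, i], CRI[i, i]],
                           [CRI[i, i], CII[i, i]]])                    # Eq. (27)
        lam, U = np.linalg.eigh(C_star)
        keep = lam > 1e-12 * lam.max()       # discard (near-)zero eigenvalues
        z_star = np.array([z[i], z[n_dag + i]])
        v = (U.T @ z_star)[keep] / np.sqrt(lam[keep])                  # Eq. (26)
        p[i] = (v ** 2).sum()                                          # Eq. (28)
    return p

t = np.delete(np.arange(60.0), slice(20, 40))   # regular grid with a gap
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * np.random.default_rng(4).normal(size=40)
p = ls_periodogram(x, t, N=60)
assert p.shape == (30,)
```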

Since the transformation (17) does not depend on the characteristics of the signal sampling, the strategy to follow in the case where ${\vec x}$ is the realization of (not necessarily stationary) colored noise is simply that of Sect. 2, i.e. the transformation of ${\vec x}$ into an array ${\vec y}$ with uncorrelated entries. After that, the Lomb-Scargle periodogram can be computed. It is worth noticing that this simple result has been possible thanks to the formulation of the problem in the time domain and the use of matrix notation. The same results could have been obtained by following the popular approach of working in the harmonic domain, but at the price of a much more difficult derivation.

4 Two examples

To illustrate the usefulness and the simplicity of the proposed formalism in handling situations different from the classical ones, we present two examples in this section.

The first consists of a periodogram of a mean-subtracted time series. The evaluation of the reliability of a peak in the periodogram of a signal ${\vec x}$ requires that (under the null hypothesis ${\vec x}= {\vec n}$) ${\vec n}$ be the realization of a zero-mean noise process. In most experimental situations, this condition is not fulfilled and one works with a centered (i.e. mean-subtracted) version  ${\vec \chi}$ of ${\vec x}$. This, however, introduces some (often neglected) problems. The case where ${\vec x}$ is the realization of a discrete white noise process with variance  $\sigma^2_{{\vec x}}$ has been considered several times in the literature. An example is the paper by Zechmeister & Kürster (2009) where a rather elaborate solution is presented. With the approach proposed here, a simpler solution can be obtained if one takes into consideration that the subtraction of the mean from ${\vec x}$ forces a spurious correlation among the entries of ${\vec \chi}$ in such a way that the covariance matrix $\Cb_{{\vec \chi}}=E[ {\vec \chi}{\vec \chi}^{\rm T} ]$ is given by

\begin{displaymath}%
\Cb_{{\vec \chi}} = \sigma^2_{{\vec x}} \left( {\bf I}- \frac{\large {{1}}}{M} \right),
\end{displaymath} (29)

where M is the number of entries of ${\vec x}$ and $\large {{1}}$ is an M $\times$ M matrix with every entry equal to unity. Since this matrix is singular, $\Cb_{{\vec \chi}}^{-1/2}$ does not exist and therefore ${\vec \chi}$ cannot be whitened. In any case, if in Eq. (24) the matrix $\Cb_{{\vec x}}$ is replaced by $\Cb_{{\vec \chi}}$ and one sets

\begin{displaymath}%
{\vec{\widehat z}}= \left(
\begin{array}{c}
\underline{\vec{\widehat{{\chi}}}}_{\Re} \\
\underline{\vec{\widehat{{\chi}}}}_{\mathcal{I}}
\end{array}\right),
\end{displaymath} (30)

then it is a trivial matter to decorrelate $\widehat z_k$ and $\widehat z_{N_{\dag }+k}$ by means of Eqs. (26), (27) and to compute the periodogram through Eq. (28). This result can be easily extended to the case where, because of measurement errors, each entry of ${\vec x}$ has its own variance $\sigma^2_{x_j}$ and a weighted mean is subtracted from the data sequence, i.e. $\chi_j = x_j - \sum _l \eta_l x_l / \sum_l \eta_l$, with $\eta_l = 1/ \sigma^2_{x_l}$. Indeed, it is sufficient to substitute $\Cb_{{\vec \chi}}$ as given by Eq. (29) with

\begin{displaymath}%
\Cb_{{\vec \chi}} = {\rm diag}[{\vec \sigma}^2] - \frac{\large {{1}}}{\sum_l \eta_l},
\end{displaymath} (31)

where ${\vec \sigma}^2=[\sigma^2_{x_0}, \sigma^2_{x_1}, \ldots, \sigma^2_{x_{M-1}}]^{\rm T}$. The rest of the procedure remains unmodified.
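Numerically, Eq. (29) is trivial to form, and its singularity (the all-ones vector lies in its null space) is exactly what prevents whitening of the centered series. A quick check of these two facts:

```python
import numpy as np

M, sigma2 = 50, 2.0
ones = np.ones((M, M))
C_chi = sigma2 * (np.eye(M) - ones / M)   # Eq. (29)

# the all-ones vector is annihilated, so C_chi is singular ...
assert np.allclose(C_chi @ np.ones(M), 0.0)
# ... while every zero-mean direction keeps the full variance sigma2
v = np.zeros(M); v[0], v[1] = 1.0, -1.0   # a direction with zero mean
assert np.allclose(C_chi @ v, sigma2 * v)
```

The second assertion shows that $\Cb_{{\vec \chi}}$ acts as $\sigma^2_{{\vec x}} {\bf I}$ on the subspace of zero-mean vectors, which is why the procedure of Eqs. (26)-(28) still applies frequency by frequency.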

The second example consists of zero-mean colored noise. The improvement in the quality of the results obtainable with the approach presented in the previous section is visible in Fig. 1. The top left panel shows a discrete signal $x_j = 0.5 \sin{(2 \pi f j)} + n_j$, f=0.127, simulated on a regular grid of 120 time instants but with missing data in the ranges [31 70] and [76 115]. Here, ${\vec n}$ is the realization of a discrete, zero-mean, colored noise process whose autocovariance function is given in the top right panel. From the bottom left panel, it is evident that the Lomb-Scargle periodogram of the original sequence ${\vec x}$ provides rather ambiguous results concerning the presence of a sinusoidal component. On the other hand, such a component is clearly visible in the bottom right panel, which shows the Lomb-Scargle periodogram of the sequence ${\vec y}= \Cb_{{\vec n}}^{-1/2} {\vec x}$.

Figure 1:

Results concerning the numerical experiment, presented in Sect. 4, on the detection of a sinusoidal component in colored noise. The top left panel shows an observed time series (blue crosses) obtained through the simulation of signal ${\vec x}= \{ x_j \}_{j=0}^{119}$ with $x_j = 0.5 \sin{(2 \pi f j)} + n_j$, f=0.127, on a regular grid of 120 time instants but with missing data in the ranges [31 70] and [76 115]. Here, ${\vec n}$ (red line) is the realization of a discrete, zero-mean, colored noise process whose autocovariance function is given in the top right panel. For comparison, the sinusoidal component is also plotted (green line). The bottom left panel shows the Lomb-Scargle periodogram of the original sequence ${\vec x}$ computed on 120 frequencies $k=0,1/120, \ldots, 119/120$, whereas the bottom right panel shows the Lomb-Scargle periodogram corresponding to its whitened version. In both cases, only the first 60 frequencies are shown and the vertical red line corresponds to the frequency of the sinusoidal component. The horizontal green line in the bottom panels provides the threshold corresponding to a 0.01 level of false alarm (number of independent frequencies Nf=60), i.e. the probability that the periodogram of a pure noise signal exceeds such a threshold by chance is $1\%$.


Figure 2:

Covariance matrix of the real and the imaginary parts, $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$, of the Fourier transform $\{ \widehat x_k \}$ of an unevenly sampled white-noise signal ${\vec x}$ (i.e. matrix $\Cb_{{\vec{\widehat z}}}$ as given by Eq. (25)). In particular, ${\vec x}$ is assumed to be the realization of a zero-mean, unit-variance, white-noise process sampled at 120 regularly spaced time instants but with 80 missing data. Three different cases have been considered, with missing data in the following ranges: a) [31 110] ( top left panel); b) [31 70] and [76 115] ( top right panel); c) [6 25], [36 75] and [96 115] ( bottom left panel). In a fourth case, the missing data are randomly distributed ( bottom right panel). Because of the small size of the figure, it is necessary to stress that, in each panel, none of the prominent diagonal structures visible in the top right as well as in the bottom left quadrant corresponds to the main diagonal of the quadrant itself (i.e. none of them provides the covariance between $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$ that is shown in Fig. 3).


Figure 3:

Variance of $\Re[\widehat{\underline{x}}_k]$ and covariance of $\Re[\widehat{\underline{x}}_k]$ with $\mathcal{I}[\widehat{\underline{x}}_k]$ as a function of the frequency k for the experiments in Fig. 2. The blue line corresponds to the main diagonal of the top left quadrant in each panel of Fig. 2, whereas the red one corresponds to the main diagonal in the top right as well as in the bottom left quadrant.


Figure 4:

Real (blue line) and imaginary (red line) parts of the spectral windows  ${\vec{\widehat w}}$ that have been used to simulate the irregular sampling of the signal in the experiments corresponding to Fig. 2 (i.e.,  ${\vec{\widehat w}}$ is the Fourier transform of the sampling pattern ${\vec w}$ of ${\vec x}$, see Eqs. (37), (38)).


5 Periodogram and least-squares fit of sinusoids

The formalism proposed here is also useful in the context of more theoretical questions (with important practical implications, however). For example, a point often overlooked in the astronomical literature is the relationship between the periodogram and the least-squares fit of sine functions. These two methods are often believed to be equivalent. Actually, this is true only when the sampling is regular and the frequencies of the sinusoids are the Fourier ones. Indeed, if $t_j = \tilde{t}_j$ and $f_k = k / N$, $k =0, 1, \ldots, N-1$, then Eq. (1) can be written in the form

\begin{displaymath}%
{\vec x}- {\vec{{\mathfrak F}}}^{\rm T} {\vec a}= {\vec n},
\end{displaymath} (32)

with

\begin{displaymath}%
{\vec{{\mathfrak F}}}= \frac{2}{\sqrt{N}} \left(
\begin{array}{c}
{\vec{\mathcal{F}}}_{\Re} \\
{\vec{\mathcal{F}}}_{\mathcal{I}}
\end{array}\right),
\end{displaymath} (33)

and ${\vec a}= [a_0, a_1, \ldots, a_{N-1}, b_0, b_1, \ldots, b_{N-1}]^{\rm T}$. The least-squares solution $\bar{{\vec a}}$ of system (32) is given by

\begin{displaymath}%
\bar{{\vec a}} = ({\vec{{\mathfrak F}}}{\vec{{\mathfrak F}}}^{\rm T})^+ {\vec{{\mathfrak F}}}{\vec x},
\end{displaymath} (34)

where ``+'' denotes Moore-Penrose pseudo-inverse (Björck 1996). In the case of even sampling, i.e. when ${\vec{\mathcal{F}}}_{\Re} = {\bf F}_{\Re}$ and ${\vec{\mathcal{F}}}_{\mathcal{I}} = {\bf F}_{\mathcal{I}}$, it happens that

\begin{displaymath}%
\bar{{\vec a}} = {\vec{{\mathfrak F}}}{\vec x}.
\end{displaymath} (35)

In other words, the coefficients $\{ a_k \}$ and $\{ b_k \}$, as given by the least-squares approach, can be obtained through the DFT of ${\vec x}$ because, as can be shown by means of Eq. (11), $({\vec{{\mathfrak F}}}{\vec{{\mathfrak F}}}^{\rm T})^+ {\vec{{\mathfrak F}}}= {\vec{{\mathfrak F}}}$. In the case of uneven sampling, this identity is not fulfilled, and any periodogram computed through Eq. (22) and the least-squares fit of sine functions have to be expected to give different results. Moreover, since only under the two above-mentioned conditions do the sine functions constitute an orthonormal basis, the least-squares fit of one sine function at a time does not in general provide the same result as the simultaneous fit of all the sinusoids as in Eq. (34) (e.g. see Hamming 1973, p. 450). In particular, if an unevenly-sampled signal is given by the contribution of two or more sinusoids, the one-at-a-time fit of a single sine function provides biased results. This also holds for the Lomb-Scargle periodogram, which is equivalent to the least-squares fit of a single sinusoid with a specified frequency, under the constraint that the corresponding coefficients ``a'' and ``b'' are uncorrelated (Zechmeister & Kürster 2009; Scargle 1982).
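The equivalence for even sampling at a Fourier frequency is easy to verify numerically. In this sketch of ours (the noisy sinusoid is arbitrary test data), the least-squares fit of a single sinusoid reproduces the coefficients read off the DFT exactly, thanks to the orthogonality of the sinusoids on the regular grid:

```python
import numpy as np

rng = np.random.default_rng(5)
N, k = 64, 7                      # even sampling, Fourier frequency k/N
t = np.arange(N)
x = 1.3 * np.cos(2 * np.pi * k * t / N) - 0.4 * np.sin(2 * np.pi * k * t / N) \
    + rng.normal(size=N)

# least-squares fit of a_k cos + b_k sin
G = np.column_stack([np.cos(2 * np.pi * k * t / N), np.sin(2 * np.pi * k * t / N)])
(a_ls, b_ls), *_ = np.linalg.lstsq(G, x, rcond=None)

# the same coefficients obtained from the DFT of x
x_hat = np.fft.fft(x)
a_dft = 2 * x_hat[k].real / N
b_dft = -2 * x_hat[k].imag / N
assert np.allclose([a_ls, b_ls], [a_dft, b_dft])
```

Repeating the same comparison on an uneven grid breaks the agreement, which is the point made above.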

6 Discussion

As demonstrated in Sect. 3, when noise has arbitrary statistical characteristics, the computation of the periodogram of an unevenly-sampled signal requires two steps:

  • whitening and standardization of the noise component (in this way a signal ${\vec y}$ is obtained);

  • computation of the Lomb-Scargle periodogram of ${\vec y}$.
The first step, unavoidable even in the case of regular sampling, is computationally expensive. Therefore, for time series containing more than a few thousand data points, dedicated algorithms exploiting the specific structure of  $\Cb_{{\vec n}}$ (e.g. often this matrix is of banded type) have to be developed to implement Eq. (21). However, this problem is beyond the scope of the present paper. The second step is much less time consuming. Indeed, for time series containing some thousands of points, and when the periodogram has to be computed on a similar number of frequencies, the direct implementation of Eqs. (22)-(28) requires only a few seconds of computation time. In other words, in many practical situations no dedicated algorithm is really necessary, although fast algorithms have been proposed for very long time series (Press et al. 1992).
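The two steps can be sketched as follows (Python with NumPy/SciPy; the exponential autocovariance, time grid, and frequency grid are illustrative assumptions, a Cholesky factor is used as one convenient way to whiten, and `scipy.signal.lombscargle` stands in for the periodogram of Eqs. (22)-(28)):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Illustrative uneven time grid and a colored-noise covariance matrix
# (exponential autocovariance; any positive-definite C_n would do here).
t = np.sort(rng.uniform(0.0, 100.0, 200))
C_n = np.exp(-np.abs(t[:, None] - t[None, :]))

# Simulated observation: a sinusoid plus colored noise drawn from C_n.
L = cholesky(C_n, lower=True)
x = 0.5 * np.sin(2 * np.pi * 0.3 * t) + L @ rng.standard_normal(t.size)

# Step 1: whitening and standardization of the noise, y = L^{-1} x.
y = solve_triangular(L, x, lower=True)

# Step 2: Lomb-Scargle periodogram of the whitened signal y.
omega = 2 * np.pi * np.linspace(0.01, 1.0, 200)   # angular frequencies
p = lombscargle(t, y, omega)
```

Because $L L^{\rm T} = \Cb_{{\vec n}}$, the noise component of ${\vec y}$ is white with unit variance, which is the situation in which the Lomb-Scargle statistics apply.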

\begin{figure}
\par\includegraphics[width=17.5cm,clip]{14079fg5.eps}
\end{figure} Figure 5:

Cross-correlation between the real and the imaginary parts of the spectral windows  ${\vec{\widehat w}}$ that are shown in Fig. 4. The red cross in each panel corresponds to the point (0,0).


\begin{figure}
\par\includegraphics[width=15cm,clip]{14079fg6.eps}
\end{figure} Figure 6:

Covariance matrix of the real and the imaginary parts, say $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$ of the Fourier transform $\{ \widehat x_k \}$ of an unevenly sampled white-noise signal ${\vec x}$ with periodic gaps (i.e. matrix  $\Cb_{{\vec{\widehat z}}}$ as given by Eq. (25)). In particular, ${\vec x}$ is the realization of a discrete zero-mean, unit-variance, white-noise process sampled on a grid of 600 time instants randomly distributed (i.e. not rebinned) according to the cyclic sampling pattern shown in the bottom panel of Fig. 7. In total, 1200 frequencies have been considered.


\begin{figure}
\par\includegraphics[width=8cm,clip]{14079fg7.eps}
\end{figure} Figure 7:

Top panel: variance of $\Re[\widehat{\underline{x}}_k]$ and covariance of $\Re[\widehat{\underline{x}}_k]$ with $\mathcal{I}[\widehat{\underline{x}}_k]$ (only the first 100 frequencies are shown) for the experiment concerning the realization of a discrete zero-mean, unit-variance, white-noise process on 600 time instants randomly distributed (i.e. not rebinned) according to a cyclic sampling. In total, 1200 frequencies have been considered (see text in Sect. 6). Bottom panel: cyclic sampling used in the experiment.


The last issue that has to be considered is to what extent the use of the Lomb-Scargle periodogram is really advantageous. Indeed, the action of the algorithms dealing with uneven sampling is essentially directed, for each frequency k, at forcing  $\widehat p_k$ to be the sum of two independent Gaussian random quantities. However, although not clearly emphasized, it has already been pointed out that this operation is not critical (e.g. see Scargle 1982). It can be expected that a periodogram computed simply through

\begin{displaymath}%
\widehat p_k = 2 \vert \widehat{\underline{y}}_k \vert^2,
\end{displaymath} (36)

with $\underline{\vec{\widehat{{y}}}}= \underline{\vec{{\mathcal{F}}}}{\vec y}$, is often very close to the one given by Eq. (28). A rigorous demonstration of this fact is difficult because of its strict dependence on the specific sampling. However, with the help of some numerical experiments and of the formalism introduced here, some insights are possible. In particular, we consider the covariance matrix  $\Cb_{{\vec{\widehat z}}}$ of a white noise signal when sampled on different uneven time grids.
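This closeness can be checked directly: the sketch below (Python with NumPy/SciPy; the signal and the grids are illustrative choices) computes the periodogram of Eq. (36) through an explicit non-uniform DFT matrix and compares its dominant peak with that of a standard Lomb-Scargle implementation:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)

# Whitened signal on an illustrative uneven grid: sinusoid plus white noise.
t = np.sort(rng.uniform(0.0, 20.0, 200))
y = np.sin(2 * np.pi * 1.3 * t) + 0.3 * rng.standard_normal(t.size)

# Non-uniform DFT matrix: one row per trial frequency (unitary-like scaling).
f = np.linspace(0.05, 5.0, 400)
F = np.exp(-2j * np.pi * f[:, None] * t[None, :]) / np.sqrt(t.size)

# "Classic" periodogram of Eq. (36): p_k = 2 |ŷ_k|^2.
p_classic = 2.0 * np.abs(F @ y) ** 2

# Lomb-Scargle periodogram on the same frequencies, for comparison.
p_ls = lombscargle(t, y, 2 * np.pi * f)

# Both periodograms peak at the frequency of the sinusoid, f ≈ 1.3.
print(f[np.argmax(p_classic)], f[np.argmax(p_ls)])
```

The normalizations of the two periodograms differ, but the location and the relative prominence of the peak do not.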

6.1 Numerical simulations

In our simulations, we take the realization of a zero-mean, unit-variance, Gaussian white-noise process ${\vec n}$ sampled on $M_{\rm s}=120$ regularly-spaced time instants, but with 80 missing data (i.e. M=40). The available signal ${\vec x}$ can be written in the form

  $\displaystyle {\vec x}= {\bf W}{\vec n},$ (37)
  $\displaystyle {\bf W}= {\rm diag}[{\vec w}],$ (38)

where ${\vec w}$ is an array whose entries are equal to one where a value of ${\vec n}$ is available and zero otherwise. In this case, Eq. (3) can be written as
    $\displaystyle {\vec{\widehat x}}$ = $\displaystyle {\bf F}{\bf W}{\vec n}$ (39)
  = $\displaystyle {\vec{\mathcal{F}}}{\vec n},$ (40)

with ${\vec{\mathcal{F}}}= {\bf F}{\bf W}$. Three different cases have been considered where the missing data have time indices in the ranges a) [31 110], b) [31 70] and [76 115], c) [6 25], [36 75] and [96 115], whereas they are randomly distributed in a fourth case. The related covariance matrices  $\Cb_{{\vec{\widehat z}}}$, computed through Eq. (25) with $N=M_{\rm s}$, are shown in Fig. 2, whereas Fig. 3 displays the corresponding main diagonal of the blocks $\underline{\vec{{\mathcal{F}}}}_{\Re} \underline{\vec{{\mathcal{F}}}}^{\rm T}_{\Re}$ and $\underline{\vec{{\mathcal{F}}}}_{\Re} \underline{\vec{{\mathcal{F}}}}^{\rm T}_{\mathcal{I}}$. Especially from Fig. 3 it is evident that, for an arbitrary frequency k, the covariance between $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$ is quite close to zero. This means that each entry of  ${\vec{\widehat p}}$, as given by Eq. (36), can be assumed to be distributed according to a $\chi^2_2$. From Fig. 2 it is also evident that, in the case of gaps present in the sampling pattern, there are nearby frequencies k and l for which not only $\Re[\widehat{\underline{x}}_k]$ and $\Re[\widehat{\underline{x}}_l]$ are mutually correlated, but also $\Re[\widehat{\underline{x}}_k]$ with $\mathcal{I}[\widehat{\underline{x}}_l]$ and $\Re[\widehat{\underline{x}}_l]$ with $\mathcal{I}[\widehat{\underline{x}}_k]$. These correlations, especially those between the real and the imaginary components, disappear in the case of random sampling.
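For case a), the blocks of $\Cb_{{\vec{\widehat z}}}$ can be reproduced directly from the sampling pattern, as in the following sketch (Python with NumPy; the unitary normalization of the DFT matrix is an assumption of the sketch):

```python
import numpy as np

# Case a) of the text: M_s = 120 regularly-spaced instants with the data of
# time indices [31, 110] missing, encoded in the sampling pattern w.
Ms = 120
w = np.ones(Ms)
w[31:111] = 0.0                       # w_m = 1 where n_m is available

# DFT matrix F (unitary scaling) and its masked version F W: with
# x̂ = F W n and unit-variance white n, the covariance splits into blocks.
k = np.arange(Ms)
F = np.exp(-2j * np.pi * np.outer(k, k) / Ms) / np.sqrt(Ms)
FW = F * w[None, :]                   # (F W)_{km} = F_{km} w_m

C_rr = FW.real @ FW.real.T            # Cov(Re x̂_k, Re x̂_l)
C_ri = FW.real @ FW.imag.T            # Cov(Re x̂_k, Im x̂_l)

# The main diagonal of C_ri, i.e. the covariance between the real and the
# imaginary part at the same frequency, stays close to zero (cf. Fig. 3),
# while the gap induces sizeable correlations between nearby frequencies.
print(np.max(np.abs(np.diag(C_ri))))
```

The same construction with a randomly chosen mask reproduces the fourth case, in which the off-diagonal structures largely disappear.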

6.2 Interpretation of the results of the simulations

To understand these results, it is necessary to take into account that Eq. (39) can be rewritten in the form

          $\displaystyle {\vec{\widehat x}}$ = $\displaystyle {\bf F}{\bf W}{\bf F}^* {\bf F}{\vec n}$ (41)
  = $\displaystyle {\vec{\widehat W}}{\vec{\widehat n}}.$ (42)

As ${\bf W}$ is a (singular) diagonal matrix, ${\vec{\widehat W}}$ is a (singular) circulant matrix. This implies that  ${\vec{\widehat x}}$ is given by the circular convolution of ${\vec{\widehat n}}$ (the DFT of the original signal, inclusive of the missing data) with the spectral window  ${\vec{\widehat w}}$ (the DFT of the sampling pattern). Since for two arbitrary frequencies, say k and l, $\Re[\widehat{\underline{n}}_k]$, $\mathcal{I}[\widehat{\underline{n}}_k]$, $\Re[\widehat{\underline{n}}_l]$, and $\mathcal{I}[\widehat{\underline{n}}_l]$ are mutually independent, any correlation existing between $\Re[\widehat{\underline{x}}_k]$, $\mathcal{I}[\widehat{\underline{x}}_k]$, $\Re[\widehat{\underline{x}}_l]$, and $\mathcal{I}[\widehat{\underline{x}}_l]$ is induced by the correlation between the entries of $\Re[{\vec{\widehat w}}]$ and $\mathcal{I}[{\vec{\widehat w}}]$. Now, as shown in Fig. 4, in the presence of gaps the real and the imaginary parts of ${\vec{\widehat w}}$ are both significantly different from zero only in a narrow interval of frequencies surrounding the origin, i.e. corresponding to the lowest frequencies (${\vec w}$ can be interpreted as a low-pass filter). This produces the correlations observed among $\Re[\widehat{\underline{x}}_k]$, $\Re[\widehat{\underline{x}}_l]$, $\mathcal{I}[\widehat{\underline{x}}_k]$, and $\mathcal{I}[\widehat{\underline{x}}_l]$ for nearby frequencies k and l. On the other hand, in the case of random sampling ${\vec{\widehat w}}$ mimics the behaviour of white noise, making the correlations less important. Moreover, Fig. 5 shows that, in the case of sampling with gaps, the cross-correlation between $\Re[{\vec{\widehat w}}]$ and $\mathcal{I}[{\vec{\widehat w}}]$ is significant but not centered at zero lag.
This explains why the quantities  $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_l]$, $k \neq l$, can be strongly correlated in spite of the small correlation between $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$. Similar arguments also hold for the case of sampling with periodic gaps (rather common in astronomical experiments). Indeed, in practical applications the period of these gaps is somewhat smaller than the mean sampling time step of an uninterrupted sequence of data. As a consequence, both $\Re[{\vec{\widehat w}}]$ and $\mathcal{I}[{\vec{\widehat w}}]$ present a sharp and narrow peak at a frequency close to the origin. Actually, the spectral window of a sampling with periodic gaps is also characterized by the presence of aliases. However, these aliases too are sharp and narrow, and their importance decreases with increasing frequency. The combination of these facts again leaves the quantities $\Re[\widehat{\underline{x}}_k]$, $\Re[\widehat{\underline{x}}_l]$, $\mathcal{I}[\widehat{\underline{x}}_k]$, and $\mathcal{I}[\widehat{\underline{x}}_l]$ almost uncorrelated if the frequencies k and l are not sufficiently close. Moreover, since with periodic gaps the cross-correlation between $\Re[{\vec{\widehat w}}]$ and $\mathcal{I}[{\vec{\widehat w}}]$ can also be significant but not centered on zero lag, for each frequency k the correlation between $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$ is negligible. These considerations are confirmed by Figs. 6 and 7. Figure 6 displays the covariance matrix  $\Cb_{{\vec{\widehat z}}}$, computed through Eq. (25), of a discrete zero-mean, unit-variance, white-noise process sampled on 600 time instants randomly distributed (i.e. not rebinned) along six cycles (i.e. 100 points per cycle) of the sampling pattern shown in the bottom panel of Fig. 7; 1200 frequencies have been considered. 
The top panel of Fig. 7 displays the main diagonal of the blocks $\underline{\vec{{\mathcal{F}}}}_{\Re} \underline{\vec{{\mathcal{F}}}}^{\rm T}_{\Re}$ and $\underline{\vec{{\mathcal{F}}}}_{\Re} \underline{\vec{{\mathcal{F}}}}^{\rm T}_{\mathcal{I}}$.

The numerical simulations indicate that the consequences of unevenly-sampled data concern the number of independent frequencies in ${\vec{\widehat p}}$ rather than the correlation between $\Re[\widehat{\underline{x}}_k]$ and its imaginary counterpart  $\mathcal{I}[\widehat{\underline{x}}_k]$. At present, no general method has been developed to deal with this problem. However, it has been pointed out in the literature that the number of independent frequencies is not a critical parameter for testing the significance level of a peak in  ${\vec{\widehat p}}$. In particular, empirical arguments indicate that this number can be safely set to M/2 (e.g. see Press et al. 1992). The conclusion is that forcing each entry of  ${\vec{\widehat p}}$ to be the sum of two independent Gaussian quantities has only minor effects. This is shown in Fig. 8, where the top panel shows that the Lomb-Scargle periodogram of the time series with periodic gaps considered above is quite similar to that provided simply by $\widehat p_k = 2 \vert\widehat{\underline{x}}_k\vert^2$ with $\underline{\vec{\widehat{{x}}}}= \underline{\vec{{\mathcal{F}}}}{\vec x}$. This is evident in the bottom panel of the same figure, where the similarity of the two periodograms is demonstrated by their absolute difference.
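Under the $\chi^2_2$ approximation with $N_f = M/2$ independent frequencies, the detection threshold for a prescribed false-alarm level follows immediately, as in this sketch (Python with NumPy; the function name is ours):

```python
import numpy as np

def fa_threshold(alpha, n_freq):
    """Threshold z such that the probability that the maximum of n_freq
    independent chi-square(2) periodogram entries exceeds z equals alpha."""
    return -2.0 * np.log(1.0 - (1.0 - alpha) ** (1.0 / n_freq))

# With M = 40 data, the number of independent frequencies is set to M/2 = 20;
# threshold for a 0.01 false-alarm level (about 15.2 in this normalization).
z = fa_threshold(0.01, 20)
print(z)
```

Since a single $\chi^2_2$ variable exceeds z with probability $\exp(-z/2)$, setting `n_freq = 1` recovers the familiar single-frequency threshold $-2\ln\alpha$.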

A final point to underline, which has important practical implications, is that for long time series the small differences visible in Fig. 8 should decrease. Indeed, as seen above, $\widehat p_k$ will be significantly correlated with $\widehat p_l$ only if the two frequencies k and l are close enough. The only exception is represented by the frequencies at the extremes of the frequency domain, where the assumption of a periodic signal intrinsic to the DFT forces a spurious correlation. For longer and longer time series, this spurious correlation will affect a smaller and smaller fraction of frequencies and, as a consequence, a larger and larger fraction of them will be mutually independent. This is shown in Fig. 9 where, in the context of the previous experiment, the mean absolute difference between the two periodograms is plotted as a function of $N_{\rm s}$, the number of cyclic sampling patterns ( $N_{\rm s} = 6$ in Fig. 8). This argument also explains why in many practical situations the number of independent frequencies can be safely fixed at M/2. In conclusion, only in the case of signals that contain a small number of data can the Lomb-Scargle periodogram be expected to exhibit noticeable differences from the periodogram given by Eq. (36). Often, even then, using it changes little: comparable results can be expected with less sophisticated approaches.

\begin{figure}
\par\includegraphics[width=17.5cm,clip]{14079fg8.eps}
\end{figure} Figure 8:

The top panel shows the Lomb-Scargle periodogram and the classic periodogram given by $\widehat p_k = 2 \vert\widehat{\underline{x}}_k\vert^2$ for the experiment in Fig. 7. The bottom panel demonstrates how small their absolute difference is.


6.3 Application to an astronomical time series

As an example of an unsophisticated method able to produce results similar to those obtainable with the Lomb-Scargle periodogram, we consider the rebinning of the original time series on an arbitrarily dense regular time grid (in this way a signal with regular sampling but some missing data is obtained), followed by the application of any of the fast Fourier transform (FFT) algorithms available nowadays. Figure 10 shows an experimental (mean-subtracted) time series versus its rebinned version. This time series, which is characterized by rather irregular sampling, was obtained with the VLA array (Biggs et al. 1999) and consists of polarisation position angle measurements at an observing frequency of 15 GHz for one of the images of the double gravitational lens system B0218+357. The original sequence contains only M=45 data and is rebinned on a regular grid of Mr = 92 time instants. In spite of this, as visible in Fig. 11, the corresponding periodograms, computed on N=Mr equispaced frequencies by means of the Lomb-Scargle and the FFT approaches, are remarkably similar. Here, the highest frequency approximately corresponds to the Nyquist frequency related to the shortest sampling time step. The main conclusion of this example is that, although in the previous section we stated that use of the Lomb-Scargle periodogram can be expected to be effective only for time series that contain a small number of data, even this condition is not sufficient to guarantee that the method is truly advantageous.
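The rebin-then-FFT procedure can be sketched as follows (Python with NumPy/SciPy on a synthetic series, not the B0218+357 data; the nearest-bin assignment and all grids are illustrative assumptions):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)

# A synthetic unevenly-sampled series with M = 45 points (as in Fig. 10).
t = np.sort(rng.uniform(0.0, 45.0, 45))
x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(t.size)
x = x - x.mean()                        # mean subtraction, as in the text

# Rebin onto a regular grid of Mr = 92 instants; empty bins stay at zero.
Mr = 92
dt = (t[-1] - t[0]) / (Mr - 1)
grid = np.zeros(Mr)
idx = np.round((t - t[0]) / dt).astype(int)
np.add.at(grid, idx, x)                 # points sharing a bin are summed

# FFT periodogram of the rebinned series vs. Lomb-Scargle of the original.
f_fft = np.fft.rfftfreq(Mr, d=dt)
p_fft = np.abs(np.fft.rfft(grid)) ** 2
p_ls = lombscargle(t, x, 2 * np.pi * f_fft[1:])   # skip the zero frequency

# Both periodograms peak near the true frequency of 0.2.
print(f_fft[np.argmax(p_fft)], f_fft[1:][np.argmax(p_ls)])
```

As in Fig. 11, the two periodograms agree on the location of the dominant peak even though roughly half of the bins of the regular grid are empty.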

\begin{figure}
\par\includegraphics[width=9cm,clip]{14079fg9.eps}
\end{figure} Figure 9:

Mean absolute difference $\sum_k \vert \Delta \widehat p_k \vert / N_k$ (Nk = number of frequencies) between the Lomb-Scargle periodogram and the classic periodogram given by $\widehat p_k = 2 \vert\widehat{\underline{x}}_k\vert^2$ as a function of the number $N_{\rm s}$ of cyclic sampling patterns (Fig. 8 shows the case with $N_{\rm s} = 6$).


\begin{figure}
\par\includegraphics[width=9cm,clip]{14079fg10.eps}
\end{figure} Figure 10:

Experimental (mean-subtracted) time series containing 45 unevenly-spaced data versus a rebinned version computed on a regular grid of 92 time instants (data taken from Biggs et al. 1999).


\begin{figure}
\par\includegraphics[width=9cm,clip]{14079fg11.eps}
\end{figure} Figure 11:

Periodograms of the time series shown in Fig. 10. Frequency is given in units of the Nyquist frequency corresponding to the shortest sampling time step. The periodogram of the original time series has been obtained by means of the Lomb-Scargle method and that of the rebinned version by means of a classic FFT algorithm.


7 Conclusions

In this paper we have worked out a general formalism, based on matrix algebra, that is tailored to the analysis of the statistical properties of the Lomb-Scargle periodogram independently of the characteristics of the noise and of the sampling. With this formalism it has become possible to develop a test for the presence of components of interest in a signal in more general situations than those considered in the current literature (e.g. when noise is colored and/or non-stationary). Moreover, we were able to clarify the relationship between the Lomb-Scargle periodogram and other techniques (e.g. the least-squares fit of sinusoids) and to fix the conditions under which the use of such a method can be expected to be effective.

References

  1. Biggs, A. D., Browne, I. W. A., Helbig, P., et al. 1999, MNRAS, 304, 349
  2. Björck, A. 1996, Numerical Methods for Least Squares Problems (Philadelphia: SIAM)
  3. Ferraz-Mello, S. 1981, AJ, 86, 619
  4. Gilliland, R. L., & Baliunas, S. L. 1987, ApJ, 314, 766
  5. Koen, C. 2006, MNRAS, 371, 1390
  6. Hamming, R. W. 1973, Numerical Methods for Scientists and Engineers (New York: Dover)
  7. Lomb, N. R. 1976, Ap&SS, 39, 447
  8. Oppenheim, A. V., & Schafer, R. W. 1989, Discrete-Time Signal Processing (London: Prentice Hall)
  9. Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes (Cambridge: Cambridge University Press)
  10. Reegen, P. 2007, A&A, 467, 1353
  11. Scargle, J. D. 1982, ApJ, 263, 835
  12. Simon, M. K. 2006, Probability Distributions Involving Gaussian Random Variables (Heidelberg: Springer)
  13. Vio, R., Strohmer, T., & Wamsteker, W. 2000, PASP, 112, 74
  14. Zechmeister, M., & Kürster, M. 2009, A&A, 496, 577

Footnotes

  1. In the following, the element in the nth row and mth column of an N $\times$ M matrix ${\bf A}$ will be indicated with $A_{nm}$ or alternatively with $({\bf A})_{nm}$, $n=0, 1, \ldots, N-1$, $m=0, 1, \ldots, M-1$.
  2. From now on, if ${\vec r}$ is an N $\times$ 1 column array, then  $\underline{\vec{{r}}}$ is a column array that contains the first $N_{\dag } = \lceil N/2 \rceil$ entries of ${\vec r}$, i.e.  $\underline{\vec{{r}}}= [r_0, r_1, \ldots, r_{N_{\dag }-1}]^{\rm T}$. Similarly, if ${\bf A}$ is an N $\times$ M matrix, then  $\underline{\vec{{A}}}$ is a matrix that contains the first $N_{\dag }$ rows of ${\bf A}$.
  3. The decorrelation time of a random signal n(t) is the time interval $\Delta t$ such that two values n(t1) and n(t2), with $t_2 - t_1 = \Delta t$, can be considered as uncorrelated.
  4. $\chi^2_2$ denotes the chi-square distribution with two degrees of freedom.
  5. This is because, if N > M, the rank of the N $\times$ N matrix $\Cb_{{\vec{\widehat z}}}$ is smaller than or equal to M. This implies that the array ${\vec{\widehat z}}$ has at most M degrees of freedom. Since each entry of ${\vec{\widehat p}}$ is given by the sum of two entries of  ${\vec{\widehat z}}$, a periodogram has at most M/2 degrees of freedom.
  6. The diagonal elements $\lambda_1$ and $\lambda_2$ of $\Sigma_*$ can be trivially computed through the solution of the quadratic equation $\lambda^2 - {\rm tr}[\Cb_\star] \lambda + {\rm det}[\Cb_\star] = 0$, with ${\rm tr}[.]$ and ${\rm det}[.]$ denoting the trace and determinant operators. The arrays ${\vec u}_1$ and ${\vec u}_2$, which constitute the columns of ${\bf U}_*$, can be obtained by solving the equations $(\Cb_\star - \lambda_l {\bf I}) {\vec u}_l = {0}$, l=1,2.

All Figures

  \begin{figure}
\par\includegraphics[width=17.5cm,clip]{14079fg1.eps}
\end{figure} Figure 1:

Results concerning the numerical experiment, presented in Sect. 4, on the detection of a sinusoidal component in colored noise. The top left panel shows an observed time series (blue crosses) obtained through the simulation of the signal ${\vec x}= \{ x_j \}_{j=0}^{119}$ with $x_j = 0.5 \sin{(2 \pi f j)} + n_j$, f=0.127, on a regular grid of 120 time instants but with the data in the ranges [31 70] and [76 115] missing. Here, ${\vec n}$ (red line) is the realization of a discrete, zero-mean, colored noise process whose autocovariance function is given in the top right panel. For comparison, the sinusoidal component is also plotted (green line). The bottom left panel shows the Lomb-Scargle periodogram of the original sequence ${\vec x}$ computed on 120 frequencies $k=0,1/120, \ldots, 119/120$, whereas the bottom right panel shows the Lomb-Scargle periodogram corresponding to its whitened version. In both cases, only the first 60 frequencies are shown and the vertical red line corresponds to the frequency of the sinusoidal component. The horizontal green line in the bottom panels provides the threshold corresponding to a 0.01 level of false alarm (number of independent frequencies Nf=60), i.e. the probability that the periodogram of a pure noise signal exceeds such a threshold by chance is $1\%$.


  \begin{figure}
\par\includegraphics[width=17.5cm,clip]{14079fg2.eps}
\end{figure} Figure 2:

Covariance matrix of the real and the imaginary parts, say $\Re[\widehat{\underline{x}}_k]$ and $\mathcal{I}[\widehat{\underline{x}}_k]$, of the Fourier transform  $\{ \widehat x_k \}$ of an unevenly sampled white-noise signal ${\vec x}$ (i.e. matrix  $\Cb_{{\vec{\widehat z}}}$ as given by Eq. (25)). In particular, ${\vec x}$ is assumed to be the realization of a zero-mean, unit-variance, white-noise process sampled at 120 regularly-spaced time instants but with 80 missing data. Three different cases have been considered with missing data in the following ranges: a) [31 110] ( top left panel), b) [31 70] and [76 115] ( top right panel), c) [6 25], [36 75] and [96 115] ( bottom left panel). In a fourth case, the missing data are randomly distributed ( bottom right panel). Because of the small size of the figure, it is necessary to stress that for each panel, none of the prominent diagonal structures visible in the top right, as well as in the bottom left quadrant, correspond to the main diagonal of the quadrant itself (i.e. none of them provide the covariance between  $\Re[\widehat{\underline{x}}_k]$ and  $\mathcal{I}[\widehat{\underline{x}}_k]$ that is seen in Fig. 3).


  \begin{figure}
\par\includegraphics[width=17.5cm,clip]{14079fg3.eps}
\end{figure} Figure 3:

Variance of $\Re[\widehat{\underline{x}}_k]$ and covariance of $\Re[\widehat{\underline{x}}_k]$ with $\mathcal{I}[\widehat{\underline{x}}_k]$ as a function of the frequency k for the experiments in Fig. 2. The blue line corresponds to the main diagonal of the top left quadrant in each panel of Fig. 2, whereas the red one corresponds to the main diagonal in the top right, as well as in the bottom left quadrant.


  \begin{figure}
\par\includegraphics[width=17.5cm,clip]{14079fg4.eps}
\end{figure} Figure 4:

Real (blue line) and imaginary (red line) parts of the spectral windows  ${\vec{\widehat w}}$ that have been used to simulate the irregular sampling of the signal in the experiments corresponding to Fig. 2 (i.e.,  ${\vec{\widehat w}}$ is the Fourier transform of the sampling pattern ${\vec w}$ of ${\vec x}$, see Eqs. (37), (38)).




Copyright ESO 2010
