
5 Data merging and weighting

The light curves from the different sites and nights constitute a rather inhomogeneous sample, and combining these data is far from trivial. In order to see how the subjective decisions which must be made at this stage affect the final result, the merging and time-series analysis were carried out independently by two teams and the results compared. It should be noted that the zero-point corrections and detrending we apply to our data at the merging stage suppress all low-frequency ($\sim$2 d$^{-1}$ and lower) signals, both spurious and real, if any are present.

5.1 Aarhus, method 1

In merging the data we try to correct for zero-point offsets due to, e.g., variable extinction, and to adjust the scale in order to match the different instrumental systems used at the different sites.

The merging was done as an iterative process in which a number of free parameters were introduced and then fixed, either because a good value could be determined or because a default value had to be chosen. Sometimes the datasets were too small or too noisy to allow anything but the simplest choice of parameters.

In the first step, the data obtained in the b and v bands were transferred to a common scale (the V filter) using amplitude ratios of $b/y \sim 1.24$ and $v/y \sim 1.44$ derived from the Viskum et al. (1998) observations of FG Vir. Because of their large scatter, the $I_{\rm C}$-filter observations were not used. The y and V amplitudes were assumed to be identical. The data were then divided into separate sets, each consisting of one night of observations from one site.
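As an illustration (this is our sketch, not the original reduction code; the function and constant names are ours), the rescaling amounts to dividing each differential magnitude by the amplitude ratio of its band:

\begin{verbatim}
import numpy as np

# Amplitude ratios relative to y (Viskum et al. 1998, FG Vir);
# y and V amplitudes are assumed identical.
AMPLITUDE_RATIO = {"V": 1.0, "y": 1.0, "b": 1.24, "v": 1.44}

def to_common_scale(dmag, band):
    """Rescale differential magnitudes in `band` to the V scale."""
    return np.asarray(dmag) / AMPLITUDE_RATIO[band]
\end{verbatim}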

For each set, the parameters employed in the merging were: a correction to the zero point of the magnitude scale, a scaling factor (close to 1 and identical for all datasets with the same filter from one site), and a subjective quality factor, which was used to adjust the weights applied in the time-series analysis.

Starting with zero-point corrections equal to zero, scaling factors of 1, and identical quality factors, a first fit of a light curve to the data was derived. Subtracting the current fit from the data points, new zero points and scaling values were derived and used to replace the original parameters. A weight for each data point was also calculated, based on a smoothed value of the rms deviation at each time, and multiplied by the quality factor. Sigma clipping was applied as well, by giving zero weight to outliers.
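A schematic version of this loop, under our own assumptions about the data layout (each dataset a dict with arrays t and dmag and a scalar quality factor; fit_light_curve a user-supplied routine returning a callable model), might look as follows; it sketches the procedure described above and is not the authors' code:

\begin{verbatim}
import numpy as np

def smoothed_rms(t, res, window=0.02):
    """Rolling rms of the residuals within +/- `window` days."""
    out = np.empty(len(res), dtype=float)
    for i, ti in enumerate(t):
        sel = np.abs(t - ti) < window
        out[i] = np.sqrt(np.mean(res[sel] ** 2))
    return out

def merge_iteratively(datasets, fit_light_curve, n_iter=5, clip=4.0):
    for ds in datasets:                       # default starting values
        ds["zero_point"], ds["scale"] = 0.0, 1.0
        ds["weight"] = np.full(len(ds["t"]), float(ds["quality"]))

    for _ in range(n_iter):
        t = np.concatenate([ds["t"] for ds in datasets])
        m = np.concatenate([(ds["dmag"] - ds["zero_point"]) * ds["scale"]
                            for ds in datasets])
        w = np.concatenate([ds["weight"] for ds in datasets])
        model = fit_light_curve(t, m, w)      # current best light curve

        for ds in datasets:
            res = ds["dmag"] - model(ds["t"])
            ds["zero_point"] = np.average(res, weights=ds["weight"])
            rms = smoothed_rms(ds["t"], res - ds["zero_point"])
            ds["weight"] = ds["quality"] / rms ** 2   # rms-based weights
            # sigma clipping: outliers get zero weight
            ds["weight"][np.abs(res - ds["zero_point"]) > clip * rms] = 0.0
            # (the per-site scale-factor update is analogous and omitted)
    return datasets
\end{verbatim}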

The zero-point offsets found were small, and the scaling factors in most cases not significantly different from 1.0. The scaling from b and v to V with factors 1.24 and 1.44 was, however, slightly modified. A certain number of subjective decisions entered this process, such as when to stop the iterations and freeze the parameter set.

Because the main effort was to reduce the noise by improving the weights, several different weighting principles were tried. All of the resulting weights were applied and the results compared throughout the analysis described in the next section.

5.2 Wrocław, method 2

Because in method 1 the scaling factors were found to be close to 1 for the V and y observations, in the second approach no scaling of light curves obtained through different filters was performed. Since most of the data were obtained in the Johnson V and Strömgren y bands, only these data were included. Because of the differing quality and sampling times, the emphasis was placed on a proper weighting of the data. Low-quality nights were rejected from the analysis. Moreover, some datasets were freed from instrumental low-frequency variations; this was done by fitting a sinusoid at the dominant frequency together with a linear time-dependent trend, and then subtracting the trend.
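A minimal sketch of this detrending step (the function name and the least-squares formulation are our assumptions) is:

\begin{verbatim}
import numpy as np

def subtract_instrumental_trend(t, dmag, freq):
    """Fit a sinusoid at the dominant `freq` plus a linear trend,
    then subtract only the trend (the sinusoid is real signal)."""
    omega = 2.0 * np.pi * freq
    # design matrix: sine, cosine, constant, slope
    X = np.column_stack([np.sin(omega * t), np.cos(omega * t),
                         np.ones_like(t), t - t.mean()])
    coeff, *_ = np.linalg.lstsq(X, dmag, rcond=None)
    trend = X[:, 2:] @ coeff[2:]              # constant + slope only
    return dmag - trend
\end{verbatim}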


Figure 5: An example of the calculation of weights in method 2: Observatorio del Teide observations carried out on 1998 February 21/22. a) Observations (open circles), 0.008-day averages (filled circles), and a smooth spline fit (continuous line). b) Residuals from the fit shown in panel a); for comparison, the weighting function (a truncated Gaussian) is shown to scale. c) The weights (in arbitrary units).

An example of the calculation of weights within method 2 is shown in Fig. 5. The weights were assigned to each point individually. Since we wanted the weights to be inversely proportional to the local variance, the procedure for calculating them was the following. First, the real light variations were fitted by calculating averages in 0.008-day intervals and smoothing them with a spline fit (Fig. 5a). The resulting residuals (Fig. 5b) were then used to derive the local variance. This variance was calculated in the usual way, but an additional weighting of the residuals was introduced to ensure that only the points closest to a given one contribute to the local variance. This weighting function, a truncated Gaussian with $\sigma = 0.03$ d, is also shown in Fig. 5b. Finally, the weights were calculated as the inverse of the local variance multiplied by an arbitrary scaling factor. As can be seen in Fig. 5c, the weights indeed track the changing scatter in the residuals. We also point out that this procedure makes no assumption about the frequency content of the real signal.
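Under our own assumptions about the truncation threshold and the spline smoothing (neither is specified beyond what Fig. 5 shows), the recipe can be sketched as follows; the function name is ours:

\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def method2_weights(t, dmag, bin_width=0.008, sigma=0.03, trunc=3.0):
    """Per-point weights following Fig. 5 (t sorted, in days)."""
    # (1) averages in 0.008-day intervals approximate the real variations
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    idx = np.digitize(t, edges)
    tb = np.array([t[idx == k].mean() for k in np.unique(idx)])
    mb = np.array([dmag[idx == k].mean() for k in np.unique(idx)])
    # (2) smooth spline through the averages; residuals from the fit
    res = dmag - UnivariateSpline(tb, mb)(t)
    # (3) local variance, with residuals weighted by a truncated Gaussian
    #     so that only points near a given one contribute
    var = np.empty_like(res)
    for i, ti in enumerate(t):
        g = np.exp(-0.5 * ((t - ti) / sigma) ** 2)
        g[np.abs(t - ti) > trunc * sigma] = 0.0
        var[i] = np.sum(g * res ** 2) / np.sum(g)
    # (4) weights inversely proportional to the local variance
    return 1.0 / var
\end{verbatim}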

