A&A, Volume 544, A90 (August 2012), 11 pages
Section: Stellar structure and evolution
DOI: https://doi.org/10.1051/0004-6361/201219328
Published online: 3 August 2012

© ESO, 2012

1. Introduction

Many breakthrough results for red-giant (G-K) stars have been presented using data obtained by the CoRoT (Baglin et al. 2006) and NASA Kepler (Borucki et al. 2010) missions. These results include statistical ensemble studies of global oscillation parameters, i.e., the frequency of maximum oscillation power, νmax, the mean frequency separation between modes of the same degree and consecutive orders, ⟨Δν⟩, the small frequency separations between modes of different degree, the amplitudes and visibilities of the oscillations, and tests of scaling relations (e.g., De Ridder et al. 2009; Hekker et al. 2009; Bedding et al. 2010; Huber et al. 2010; Hekker et al. 2011d; Huber et al. 2011; Mosser et al. 2012). Additionally, it has been possible to determine stellar parameters such as masses and radii (Kallinger et al. 2010a,b). In addition to these results, asteroseismic investigations into granulation (Mathur et al. 2011), red giants in clusters (Basu et al. 2011; Hekker et al. 2011b; Stello et al. 2011a,b) and red giants in eclipsing binaries (Hekker et al. 2010b) have been performed, as well as detailed investigations into the internal structure of single stars (e.g., Di Mauro et al. 2011; Jiang et al. 2011; Baudin et al. 2012). The Kepler results referred to are based on timeseries with a near-regular cadence of either 29.4 min or 58.85 s and a timespan ranging from ~30 days up to more than 1.5 yr. These are the first datasets from space-based telescopes with such a long timespan, high fill (≳ 90%) and frequency resolution (≈ 0.019 μHz). Underpinning much of this work is the ability to determine global oscillation parameters and the uncertainties in these values. It is reasonable to ask whether there are now enough data available and whether there are any gains to be obtained from observing individual stars for longer periods. In this paper we address the precision and reliability of the determination of some of the global seismic parameters. There are other areas where there is a clear need for data of longer duration, because the features detected in the power spectra are narrow and hence barely resolved even by the current datasets. In particular, we highlight the detection of g-p mixed modes (Beck et al. 2011). The observed mean period spacings appear to have different values for stars that burn only H (in a shell) and those that also burn He in the core (Bedding et al. 2011; Mosser et al. 2011a); hence the period spacing can be used to distinguish, from the characteristics of their frequency spectra, between the different evolutionary states in which red giants are observed. Another method to distinguish between different evolutionary phases is based on the difference in the frequency dependence of the radial modes (Kallinger et al. 2012). Furthermore, the timeseries obtained with Kepler have recently become long enough to study rotational splitting of the oscillation modes, which led to the detection of differential rotation in red giants (Beck et al. 2012).

In this work, we use the 19 months of data available from Q0 to Q7 to investigate how the increased timespan influences the detectability of the oscillation modes, and the absolute values and uncertainties of the global oscillation parameters, νmax and  ⟨ Δν ⟩ . These are important in several ways. Knowing the dependence of the precision on data duration is a guide for observing strategies, and for the determination of those secondary parameters that are derived from the primary global oscillation parameters, such as stellar mass and radius. Furthermore, it is crucial to be able to estimate the proportion of false negatives and false positives for population studies. Also, for detailed modelling of individual oscillation frequencies νmax turned out to be of great diagnostic potential (Gruberbauer et al. 2012). We will include in our considerations the impact of other relevant parameters such as the observed height-to-background ratio of the oscillation excess. This work is a follow-up of Hekker et al. (2011c, hereafter Paper I) on the red giants and Verner et al. (2011) on solar-type stars, in which results obtained with different methods have been compared and validated.

Paper I described the comparison of global oscillation parameters extracted from about four months of Kepler data using different methods. From this comparison, it was concluded that 1) the results from the different methods agree for most stars to within a few percent; 2) at least five methods (out of the seven tested) obtained results for 92% of the stars with νmax within the range 50 μHz to 170 μHz, and this percentage decreased to 69% when all stars with νmax covering the complete frequency range, i.e., 0−283.4 μHz (the Nyquist frequency), were included; 3) the scatter due to realization noise, originating from the stochastic nature of the oscillations, is non-negligible and can be at least as important as the internal uncertainty of the results due to the method used, although this depends on the frequency of maximum oscillation power, νmax, and on the method. Methods that use a model to describe the variation of Δν with frequency are less sensitive to realization noise than others; 4) the obtained value of ⟨Δν⟩ is less dependent on the frequency range over which it is computed than is the case for solar-type stars. A theoretical follow-up study to explain the latter has been performed by Hekker et al. (2011a).

2. Data

For the current study, we use data obtained with the Kepler satellite during its first ~19 months of operation (Q0-7). These data have a ~29.4 min near-regular cadence and have been corrected for possible artefacts in the way described by García et al. (2011). See e.g. Jenkins et al. (2010) for some characteristics of these data. The stars in the sample investigated here have been selected for asteroseismic investigations by the Kepler Asteroseismic Science Consortium (KASC) or for astrometric purposes. We exclude cluster stars from this sample. Additionally, we include only stars for which a power excess characteristic of stochastic oscillations is detected. In some cases the stars episodically fall on the one CCD that has gone inactive, resulting in a loss of data; we exclude these stars from our current investigation. Other causes of data loss are safe modes and momentum dumps of the spacecraft, as well as data downlinks every ~30 days, but these result in relatively small losses of data. We require that the stars have been observed in all available quarters and we accept a fill level down to 94%, allowing for some additional loss of data. This leaves us with 1028 stars.

The  ⟨ Δν ⟩  distribution of stars in the dataset we consider here is shown in Fig. 1. This distribution is similar to the ones seen in other published work on the Kepler red giants (e.g., Hekker et al. 2011d).

Fig. 1

Distribution of the mean large frequency separations of the stars in our sample.

3. Parameter extraction

For the data analysis, all the methods used here are based on a subset of those described in Paper I, i.e. COR (Mosser & Appourchaux 2009; Mosser et al. 2011b), OCT (Hekker et al. 2010a) and CAN (Kallinger et al. 2010a). For νmax values we have used the autocorrelation function from COR:EACF (Mosser & Appourchaux 2009), and the centre of the Gaussian fit to the oscillation power excess from OCT (method II in Hekker et al. 2010a) and CAN (Kallinger et al. 2010a). For  ⟨ Δν ⟩  we use the autocorrelation method (COR:EACF, Mosser & Appourchaux 2009) and the universal pattern (COR:UP, Mosser et al. 2011b) as well as the determination of the peak in the power spectrum of the power spectrum using statistics of grouped data OCT:PS ⊗ PS and with the addition of Bayesian statistics OCT:PS ⊗ PS (Bayesian) (Hekker et al. 2010a), and finally, fitting of the central three radial orders (CAN, Kallinger et al. 2010a).

A homogeneous comparison between the values from the shorter timeseries presented in Paper I and those from the longer timeseries cannot be performed directly, as continuous improvements to the methods have been made. These improvements have been made as a result of our increasing knowledge of the data from earlier runs and in order to deal with the longer timeseries. The changes are of a numerical nature and do not alter the underlying principles of the methods; hence, the references cited above are still valid. To perform a uniform study of the impact of the length of the timeseries, the (Q0-Q7) dataset (~600 days) has been used both as a whole and divided into subsets. These datasets are all analysed with the latest versions of the analysis methods.

4. Likelihood of detecting oscillation power in frequency spectra

Recently, Hekker et al. (2011d) analysed one-month data sets of publicly available data for over 16 000 red giants selected on the basis of effective temperature and surface gravity. They found that in ~70% of the stars, oscillations could be detected. This raises the question of whether this fraction tells us something about the ability of red giants to sustain stochastically-driven oscillations, or whether it is just a reflection of the difficulties of automated detection of oscillations in relatively short data sets. Perhaps some of the stars were so faint that their noise levels prevented the oscillations from being detected. Alternatively, can some other feature of the star suppress the oscillations in the same manner as activity is known to suppress the oscillations in solar-like stars (Mosser et al. 2009; Chaplin et al. 2011a; Huber et al. 2011)? Here we will first consider the importance of the amplitude of the oscillations and the apparent brightness of the stars, and subsequently consider the problems associated with the automated methods.

We consider how we might estimate the likelihood of detecting oscillation power when it is present in the data. We use the same method as given in Chaplin et al. (2011b), adapted for red giants, to show that there is a high expectation that we will be able to detect the modes of oscillation in all the red giants in the Kepler data set. It is important to note that, although the correct identification of the frequency range in which the modes are located is of fundamental importance, most current methods do not use this as their primary consideration when determining whether there is oscillation power in the spectrum. For most of the methods, the determination of Δν is done first; if this fails, then “no detection” is reported. This may not be the best strategy, but before we develop that discussion we should first explore the existing predictions for the amplitudes of the modes of oscillation in red giants.

4.1. Prediction for mode power

Kjeldsen & Bedding (1995) devised scaling relations predicting that the amplitude of solar-like oscillations scales with the luminosity-to-mass ratio, which implies that the amplitude of the oscillations increases with increasing stellar radius. Hence, solar-like oscillations in red giants are expected to have higher amplitudes than oscillations in solar-type stars of equal mass. These scaling relations have recently been revised (Kjeldsen & Bedding 2011), and also tested, both theoretically (e.g. Samadi et al. 2007) and observationally (e.g. Baudin et al. 2011; Huber et al. 2011; Stello et al. 2011a).
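As a purely illustrative sketch of this scaling (not part of the original analysis), consider the luminosity-to-mass ratio of a typical red giant relative to the Sun; the stellar parameters used below are hypothetical:

```python
# Toy illustration (not from the paper) of the Kjeldsen & Bedding (1995)
# luminosity-to-mass amplitude scaling, A proportional to L/M; the revised
# relations add further dependences that are ignored here, and the stellar
# parameters are purely illustrative.
l_over_m_sun = 1.0            # the Sun, by definition of solar units
l_over_m_giant = 50.0 / 1.2   # e.g. a 50 L_sun, 1.2 M_sun red giant

# To first order the oscillation amplitude is larger by this factor:
print(l_over_m_giant / l_over_m_sun)   # ~42
```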

To determine if it is possible to detect the modes, we are interested in the signal-to-noise ratio in the vicinity of the modes. In Mosser et al. (2012) it was shown that, with a small dependence on evolutionary status, the ratio of the height of the smoothed power spectrum to the granulation noise background evaluated at νmax is between 3.7 and 4.0 for clump and red-giant branch stars, respectively. Accordingly, we will use the lower limit of this to work out the signal-to-noise in the integrated spectral power excess. For all the red giants that we consider here, the intrinsic photon shot noise is negligible and we neglect it. This removes a consideration of the stellar luminosity from the calculations.

A commonly accepted model of the envelope of the oscillation power is a Gaussian function whose width, Wenv, scales with the frequency of maximum power νmax (Mosser et al. 2010). We can determine the average power in the oscillations by smoothing the power spectrum over a range of at least one large spacing so that no trace of the individual modes remains. It is recognized that doing this in practice requires considerable care, as is spelt out in Mosser et al. (2012). We will take twice the full-width at half-maximum of the underlying Gaussian as the range over which we integrate to determine the average power. This range contains all but a few per cent of the oscillation power. The granulation background in the vicinity of the modes can be modelled with a power law with an index of −2.1 (Mosser et al. 2012). Integration of these two functions over the same range leads to an integrated height-to-background ratio (H/Bint) of 1.55 (i.e., 0.42 times the height-to-background ratio at νmax).
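The following numerical sketch illustrates this kind of estimate. The envelope-width relation Wenv ≈ 0.66 νmax^0.88 is an assumption borrowed from Mosser et al. (2010) for illustration; the integration range of twice the FWHM and the background index of −2.1 follow the text:

```python
# A numerical sketch of the integrated height-to-background estimate above.
# The envelope-width relation Wenv = 0.66 * numax**0.88 is an assumption
# borrowed from Mosser et al. (2010); the integration range (twice the FWHM,
# centred on numax) and the background index of -2.1 follow the text.
import numpy as np
from scipy.integrate import quad

def integrated_hb(nu_max, hb_at_numax=3.7):
    """Ratio of integrated oscillation power to integrated background power."""
    w_env = 0.66 * nu_max**0.88                      # assumed envelope FWHM
    gauss = lambda nu: np.exp(-(nu - nu_max)**2 * 4.0 * np.log(2.0) / w_env**2)
    background = lambda nu: (nu / nu_max)**(-2.1)    # normalised to 1 at numax
    lo, hi = nu_max - w_env, nu_max + w_env          # range of 2 x FWHM
    p_modes = hb_at_numax * quad(gauss, lo, hi)[0]
    p_background = quad(background, lo, hi)[0]
    return p_modes / p_background

print(integrated_hb(100.0))   # ~1.6, i.e. roughly 0.4 x (H/B at numax)
```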

The averaging of the data during the sampling interval in the time domain causes an attenuation of the amplitudes in the frequency spectrum according to a sinc function, and we can use the ratio of νmax to νNyq (the Nyquist frequency, which is 283.4 μHz for the Kepler long-cadence data) to quantify the size of this reduction. The majority of stars in our sample have νmax ≪ νNyq; for these stars this sinc term is negligible and we do not consider it further.
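A small sketch quantifies this attenuation; the normalised-sinc convention of numpy is assumed:

```python
# Small sketch quantifying the sinc attenuation mentioned above, using
# numpy's normalised sinc convention, sinc(x) = sin(pi x) / (pi x).
import numpy as np

nu_nyq = 283.4                   # muHz, Kepler long-cadence Nyquist frequency
for nu_max in (30.0, 100.0, 250.0):
    amp = np.sinc(nu_max / (2.0 * nu_nyq))   # amplitude attenuation factor
    print(nu_max, round(amp**2, 3))          # power attenuation factor
# For numax well below the Nyquist frequency the power loss is at the
# per-cent level; it only becomes substantial close to the Nyquist frequency.
```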

4.2. Model of detection probability

Here we present a model, based on predicted integrated height-to-background in the vicinity of the oscillations, for how detectable the oscillations are. To do this we adapt the formulation devised by Chaplin et al. (2011b) for solar-type stars to red giants. The principle of the method is to compare the power present in the modes with that present in the background and then to use probability distributions to ascertain the likelihood of the mode power being reliably detected.

The question that we now wish to answer is: given H/Bint, what is the chance of a false detection? We set a probability, pfalse, at which we are prepared to risk a false positive detection. In general, this level should be low; typically for this work we have used pfalse = 0.01 (i.e. 1%). As detailed in Chaplin et al. (2011b), we compute a threshold value, θ, in a χ2 distribution with 2n degrees of freedom (d.o.f.) such that the probability that a random variable exceeds this threshold is equal to pfalse. Here, n is the number of independent frequency bins used to compute H/Bint. We must also take account of the chance that, because of random noise in the data, we will miss a true detection for a star with sufficient signal-to-noise for detection, which leads to a new threshold value θ2 (Eq. (1)). This value θ2 is then used to derive the probability p that a random variable in a χ2 distribution with 2n d.o.f. is less than or equal to the specified cut-off level. Finally, we have the probability we sought, pfinal = 1 − p, the probability that a given H/Bint will exceed the computed threshold θ.
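A sketch of this recipe is given below. The exact rescaling of the threshold by the factor (1 + H/Bint) is our assumption for illustration, not a formula quoted from Chaplin et al. (2011b) or from Eq. (1):

```python
# A sketch of the detection-probability recipe described above (adapted from
# Chaplin et al. 2011b). The exact form of the rescaled threshold theta2 is
# an assumption made for illustration, not taken verbatim from the paper.
from scipy.stats import chi2

def detection_probability(hb_int, n_bins, p_false=0.01):
    """Probability that an excess with integrated height-to-background ratio
    `hb_int`, measured over `n_bins` independent frequency bins, exceeds the
    false-alarm threshold set by `p_false`."""
    dof = 2 * n_bins
    theta = chi2.isf(p_false, dof)          # pure-noise threshold
    theta2 = theta / (1.0 + hb_int)         # rescaled threshold (assumed form)
    p = chi2.cdf(theta2, dof)               # chance of missing a true signal
    return 1.0 - p                          # p_final

# e.g. H/B_int = 1.55 (Sect. 4.1) measured over 200 independent bins
print(detection_probability(1.55, 200))
```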

The recipe as described predicts that for all stars considered here (even for datasets as short as 50 days) we are likely to detect the oscillations. In general, the lowest probabilities are about 93%, for stars which have νmax below 10 μHz. For one particular star the detection probability dropped to 75%. Note that these predictions are not sensitive to the shape of the oscillation power excess, nor to any structure in it such as the large frequency separation, and that we have taken the worst-case scenario for the H/B, i.e., that of the helium-core-burning evolutionary state. These results are based on the integrated power of the oscillations. From this test it therefore appears that a detection rate higher than the 70% quoted by Hekker et al. (2011d) would be expected when using the H/B indications. But how does this compare with observational results from longer timeseries? We consider this issue in the next section.

Fig. 2

Fraction of runs with returned values for each star per Δν interval. Each panel shows the results of a certain method (A: COR – Universal Pattern, B: COR – EACF, C: OCT – PS ⊗ PS, D: OCT – PS ⊗ PS (Bayesian), E: CAN) with run length 50, 200, 400, 600 days in red, blue, cyan and black, respectively. Note that the 50 and 600 day curves in panel E overlap due to the fact that the 600 day results were used to constrain the input for the 50 day runs. No results for 200 and 400 day long runs were obtained by CAN.

5. Observational results

For observed stars we do not know the true values of the seismic parameters; all we can do is estimate them from the observations. In order to obtain such estimates of the seismic parameters νmax and Δν, the COR and OCT methods are used to analyse the full timespan of just under 600 days of the complete set of stellar data. The analysis was “blind” in that no manual checks were made on the outcomes. We therefore expect some errors in the results. We did not use the CAN method because, for computational reasons (the MultiNest procedure is very time consuming), not all available stars were analysed with it. For 974 stars there is close agreement between the results from OCT and COR for νmax and ⟨Δν⟩. In this context, close agreement is taken to mean that the two completely independent methods identify the oscillations in the same region of the spectrum to within half the expected width of the envelope of the oscillation power, with the width of the oscillation envelope as defined by Mosser et al. (2010). Taking this relatively relaxed constraint is justified by the fact that we want to select a statistically significant sample of stars with oscillations detected by different methods in the same frequency range. For the remaining 54 stars, there are disagreements between the values obtained with the different methods. We inspected these stars by eye: for 39 stars the oscillations are at low frequencies (ν < 5 μHz), for four stars the oscillations straddle the Nyquist frequency, and for 11 stars we do not have the standard red-giant oscillation spectrum, due to the presence of artefacts or possibly because they are mis-classified as red giants.
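For illustration, a minimal sketch of this agreement criterion might look as follows; the envelope-width relation 0.66 νmax^0.88 is again an assumed stand-in for the Mosser et al. (2010) definition:

```python
# Sketch of the agreement criterion described above: the two methods must
# place the oscillation power excess within half an envelope width of each
# other. The width relation 0.66 * numax**0.88 is an assumed stand-in for
# the Mosser et al. (2010) envelope width.
def methods_agree(numax_method_a, numax_method_b):
    mean_numax = 0.5 * (numax_method_a + numax_method_b)
    width = 0.66 * mean_numax**0.88            # assumed envelope width (muHz)
    return abs(numax_method_a - numax_method_b) < 0.5 * width

print(methods_agree(100.0, 112.0))   # True: well within half an envelope width
```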

For the 974 stars for which there is agreement, we create reference values which are the mean values of νmax and  ⟨ Δν ⟩ , respectively. These reference values are essentially an arbitrary zeropoint used to select reliable results and to discard outliers.

5.1. Outlier removal in short datasets

When short datasets are considered there will be occasions when the returned values are unreliable. We wish to remove some of these so that we can look at the spread in the reliable results. A very simple outlier rejection algorithm is used whose purpose is to reject patently wrong answers. This is the same as described in Paper I and depends on comparing the reference value with the individual values. The results presented in Verner et al. (2011) suggest that for solar-type stars it is appropriate to use rejection criteria that scale with the νmax of the star. However, it was shown in Paper I that this is not appropriate for red-giant stars. The process adopted here first rejects points that are more than 50% different from the reference value, irrespective of νmax or  ⟨ Δν ⟩ , and then applies an absolute cut. For these absolute cuts a value of 10 μHz has been used for all but the low values of νmax and a cut of 2 μHz has been used for  ⟨ Δν ⟩ . The cross-over position where the absolute cut off is more stringent than the relative one occurs at about νmax = 20   μHz.
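A minimal sketch of this two-stage rejection, with the thresholds quoted above and a hypothetical function name, is:

```python
# Minimal sketch of the two-stage outlier rejection of Sect. 5.1; the function
# name is ours, and the thresholds (50% relative, then 10 muHz for numax or
# 2 muHz for <Delta nu>) are those quoted in the text.
def keep_result(value, reference, absolute_cut):
    """Return True if `value` survives both the relative and the absolute cut."""
    if abs(value - reference) > 0.5 * reference:    # 50% relative cut
        return False
    if abs(value - reference) > absolute_cut:       # absolute cut
        return False
    return True

# Example: a numax estimate of 95 muHz against a reference of 110 muHz
print(keep_result(95.0, 110.0, 10.0))   # False: fails the 10 muHz absolute cut
```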

5.2. Is a data duration of 50 days enough to reliably detect the presence of modes?

The statistical tests considered in Sect. 4.2 suggested that 50 days of data are sufficient to reliably detect the presence of oscillation power based on the height-to-background ratio. We can now see whether that is true with the algorithms used. As we used only stars for which we had firm detections of oscillations in the 600-day dataset, we expected to have results for each of the 50-day runs, i.e. 12 results per star; for runs of 200 days we expect 3 returns, etc. This is not the case, as can be seen from Fig. 2, where for each of the different methods we plot the fraction of returns for the different data durations as a function of Δν on a logarithmic scale. The data have been binned for this graph. In general the bin width used is just under 1 μHz, but bins are combined at high frequencies to improve the statistics where there are few stars in the original sample, as can be seen in Fig. 1. As expected, as the run duration increases the general efficacy of each method improves. The exception is the CAN method, where the data from the long runs are used to constrain the fitted parameter ranges in the short runs and the method is not “blind”. The results are summarised in Table 1. Although all methods have difficulties at low frequency, the methods clearly differ in the spectral regions where their response is reliable. Additionally, COR is less effective for mid-range frequencies. The detection capabilities of the EACF method underlie the two methods COR:UP and COR:EACF employed for the determination of ⟨Δν⟩. For this method the value of the parameter Amax, as given in Mosser & Appourchaux (2009), is important. The threshold value set for a detection is 8, which rejects the H0 hypothesis at the 1% level. They have shown that the value of Amax improves linearly with the duration of the dataset, and so we expect a marked improvement as longer datasets are used. This is indeed the case, as shown in Table 1. The peak detection method underlying CAN depends on a predefined list of stars, and hence its fraction of returns reflects the distribution of the types of stars on the list used for the analysis presented here. The OCT method has issues at the very high frequencies.

It is important to note that all these algorithms rely not on detecting the presence of oscillation power but instead look for patterns in the spectrum that are the consequence of the regular spacing of the modes. In looking at the fractions of stars for which we detect regular mode structure we are really considering a different measure from the H/B ratio-based derivations in Sect. 4.2; hence we are comparing two different strategies. In a dataset of 50 days the modes are barely resolved (Baudin et al. 2011) and so the amplitude of a mode in the spectrum is very variable. In fact the power varies as χ2 with 2 d.o.f., which means that the probability distribution of the power is a negative exponential and it is not unusual for a particular mode to be essentially absent. As the duration of the dataset increases and the modes become resolved, this is less of a problem. From Table 1 we see that for timeseries of 100 days in length we have just about an 85% return, for 200 day timeseries about 95%, increasing to over 95% for 400 day datasets. The OCT:PS ⊗ PS (Bayesian) method is the most sensitive to the timespan of the data and is only as reliable in detecting the oscillations as the other methods for timeseries of 400 days or longer. These tests suggest that in short datasets the height-to-background ratio would be a more reliable means to detect oscillations than the currently developed methods based on the regularity of the frequency pattern.
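The negative-exponential statistics of unresolved modes can be illustrated with a short calculation (illustrative only):

```python
# Quick check of the chi^2-with-2-d.o.f. (negative exponential) statistics
# quoted above for unresolved modes: the observed power of a single mode
# quite often falls far below its mean value.
import numpy as np

for frac in (0.1, 0.5, 1.0):
    prob = 1.0 - np.exp(-frac)     # P(observed power < frac * mean power)
    print(frac, round(prob, 3))
# ~10% of the time a mode shows less than a tenth of its mean power, and
# ~39% of the time less than half of it.
```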

So the simple answer to the question posed at the beginning of this section is “no, 50 days is not enough to be certain to pick up more than 90% of the oscillations with the currently employed methods, but with methods based on height-to-background it is predicted that it would be possible to obtain reliable results in such short data-sets”.

Table 1

Fraction of runs per star for which results have been returned for ⟨Δν⟩, as a function of the timespan of the data, where 12, 6, 3, 1 and 1 runs are available for data of 50, 100, 200, 400 and 600 days length, respectively.

Fig. 3

Normalised distribution of the offset of the individual results from the reference value, i.e., the result of the 600 day run of the same method for the same star (left), and normalised distributions of the uncertainties (centre) for  ⟨ Δν ⟩  for 50 (top), 200 (middle) and 400 day (bottom) datasets. COR:UP, COR:EACF, CAN and OCT results are indicated in black solid lines, green dashed-dotted lines, blue dashed-triple dotted lines and red dashed lines respectively. The right column shows the normalised distribution of the offset of the individual results divided by its stated uncertainty for data of 50 (top), 200 (middle) and 400 days (bottom) length. Colours and linestyles are the same as in the left panels.

5.3. Dependence of νmax,  ⟨ Δν ⟩  and their quoted uncertainties on the length of the timeseries

We have looked at the likelihood of the modes being detected in datasets of differing lengths, but there is another important consideration: the precision of the results, which we assess by comparing them with reference values. Because all methods use slightly different definitions for νmax and ⟨Δν⟩, and because we first aim to investigate the influence of the timespan only, we use the results of the 600 day run of a particular method as the reference against which the results of the shorter runs of that same method are compared. We evaluate both the deviation of the returned values from the reference values and the quoted uncertainty on each value.

We first explore how the deviations from the reference value and the uncertainties compare for the different data durations. The left panels of Figs. 3 and 4 show the distribution of the deviation of the individual results from their respective reference values for each of the global parameters considered here, for data with timespans of 50, 200 and 400 days and for the range of methods employed. The different timespans are shown in different rows and the different methods are plotted in different colours with different line styles. The left hand panels of Figs. 3 and 4 show that, except for the measure of ⟨Δν⟩ by COR:UP, the spread in the difference decreases with increasing timespan of the data. The different behaviour of COR:UP originates from the fact that this method applies the additional constraint of a regular pattern on the spectrum. The decrease of the spread with increasing timespan raises the question of whether we can expect further improvements from even longer datasets. Therefore, we show the spread as a function of timespan in Fig. 5. The spread for COR:UP is 0.000 at 400 days (not shown), and this method is very reliable at determining ⟨Δν⟩ even for short datasets. The decreasing trend of the spread in the global oscillation parameters for longer timeseries of the other methods suggests that longer datasets would still improve the precision of the obtained parameters. To investigate this further we show linear fits in log-scale through the MAD values of each method. Extrapolating these fits to 2000 days (~5.5 yr, the currently predicted length of the mission) would imply a reduction in the MAD of at least a factor of 10 (for ⟨Δν⟩ factors of 23, 11 and 20 for COR:UP, COR:EACF and OCT, respectively, and for νmax factors of 10 and 14 for COR:EACF and OCT). In addition to the spread in the results we also checked for potential biases. It is noticeable that the offsets are not zero even though each method is its own reference. These biases are more clearly visible in the right hand panels, where we show the distribution of the offsets divided by the quoted uncertainty (σ), expressed in dimensionless units.
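The extrapolation can be sketched as follows; the MAD values used here are placeholders, not the measured values plotted in Fig. 5:

```python
# Illustrative sketch (not the authors' code) of the extrapolation shown in
# Fig. 5: fit log(MAD) against log(timespan) and predict the reduction factor
# at 2000 days. The MAD values below are placeholders, not the measured ones.
import numpy as np

timespans = np.array([50.0, 100.0, 200.0, 400.0])   # days
mad = np.array([0.10, 0.07, 0.045, 0.030])          # hypothetical MAD values

slope, intercept = np.polyfit(np.log10(timespans), np.log10(mad), 1)
mad_at = lambda t: 10.0**(intercept + slope * np.log10(t))

# Reduction factor when extrapolating from 50 to 2000 days
print(mad_at(50.0) / mad_at(2000.0))
```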

Fig. 4

Same as Fig. 3, but now for νmax.

Fig. 5

Median absolute deviations (MAD) observed for  ⟨ Δν ⟩  (top) and νmax (bottom) for the different methods as a function of the timespan of the dataset. Colour coding the same as in Fig. 3 with red for OCT, green for COR:EACF, black for COR:UP and blue for CAN. The 400 day results of COR:UP agree with the 600 day results and hence the MAD is 0.000 and not shown. The dashed lines indicate linear fits through the data (with same colour-coding) in log-scale. See text for further details.

We now turn to the uncertainties reported by the different methods. The normalised distributions of these uncertainties are shown in the central columns of Figs. 3 and 4. Again we can see that for some run durations the different methods produce similar uncertainties, while for others they differ. An important consideration is the validity of the uncertainties as a guide to the reliability of the returned results. To this end, in the right hand column we show the distribution of the offsets divided by their individual quoted uncertainties, expressed in dimensionless units. If the quoted uncertainties are statistically reliable, we would expect the distributions to have a width of ±1σ at half maximum; a wider distribution indicates that the uncertainties are underestimated and a narrower distribution indicates that they are overestimated. For ⟨Δν⟩ we see that OCT and CAN provide realistic uncertainties for runs of 50 days in length, although the tails of the OCT distribution are well populated. Both COR methods seem to overestimate the uncertainties. The banded nature of the COR:UP results is a byproduct of the method used to find the peak in the autocorrelation function. For longer datasets all methods seem to overestimate the uncertainties to a certain extent. Similar conclusions can be drawn for the νmax results in the right hand panels of Fig. 4.

The measures described above do however average over the frequency range at which the oscillations occur and the uncertainty might be expected to be a function of frequency. Figure 6 shows the frequency dependence of the mean uncertainty and the median absolute deviation (MAD) for several methods. For a Gaussian distribution (white noise), the typical ratio of root median square deviation to MAD is roughly 0.8. So we multiply the MAD by 0.8 in order to compare it with the typical uncertainty. The left hand column is for  ⟨ Δν ⟩  and the right hand column is for νmax. Each graph in the figure corresponds to a different method and allows us to illustrate how the deviations (MAD) and uncertainties correspond for a given method at the longest and the shortest data duration, i.e., 400 and 50 day long datasets.

Fig. 6

Left: uncertainties (open diamonds: 50 days, dashed line: 400 days) and mean absolute deviations multiplied by 0.8 (see text, filled diamonds: 50 days, solid line: 400 days) in  ⟨ Δν ⟩  as a function of  ⟨ Δν ⟩  for results from the different methods: COR:UP (panel A), COR:EACF (panel B), OCT:PS ⊗ PS (panel C) and CAN (panel D). Right: uncertainties (open diamonds: 50 days, dashed line: 400 days) and mean absolute deviations (filled diamonds: 50 days, solid line: 400 days) in νmax as a function of νmax for results from three different methods: COR (panel E), OCT (panel F) and CAN (panel G).

It is clear that although there is some consistency in the curves for any one method, the frequency dependencies of the uncertainty and of the deviation are not identical. We now discuss each method in turn, starting with ⟨Δν⟩. For COR:UP, we see again that the results for 50 and 400 day long timeseries are remarkably similar; significant improvement can only be seen at low frequencies. The uncertainties seem to be overestimated. For COR:EACF, at 50 days the uncertainties are overestimated. However, at 400 days the uncertainties and MAD have reduced and are more closely in agreement, except for the highest frequencies where there are not many stars. For OCT:PS ⊗ PS, at 50 days the uncertainties are underestimated at low and medium frequencies, with the agreement steadily improving as the frequency increases. At 400 days, the uncertainties are progressively overestimated. The determination of the values themselves, as illustrated by the MAD, improves in the longer datasets. Finally, we consider CAN: although the trends for the 50 day results are very similar, the uncertainties are slightly overestimated.

Just three methods are used for νmax. For OCT and CAN there is general agreement between MAD and uncertainty, with a slight tendency for the uncertainties to be overestimated. The uncertainties for COR:EACF are overestimated by roughly a factor of two to three.

Additionally, for all methods the variation of MAD with frequency is not strong, which supports our earlier choice of a fixed, frequency-independent threshold for the outlier rejection.

5.4. Offsets between different methods

In the previous subsection we saw that, within any one given method, short datasets can give, on average, slightly biased results when compared with longer sets. Here we concentrate on the differences between the methods, using the results obtained with 600 days of data. We know that the different methods involve different assumptions, and no method is without assumptions, as is shown by Kallinger et al. (2012). Two methods can be considered to lie at the extreme ends of the choices for how to measure ⟨Δν⟩. At one extreme is CAN, which uses individual peak bagging to measure two values of Δν close to the peak of the oscillation power and returns their average as ⟨Δν⟩. At the other extreme is COR:UP, which imposes a regular pattern on the whole spectral range and returns a ⟨Δν⟩ based on that. It is known that the variation of the large separation with frequency depends on the evolutionary state of the star (Kallinger et al. 2012), and this is seen very clearly when the values of ⟨Δν⟩ from CAN and COR:UP are compared (see Fig. 7). Indeed, the COR:UP results show a bimodal distribution with respect to the CAN results, in which the left peak consists predominantly of RC stars and the right peak of RGB stars. Following the reasoning of Kallinger et al. (2012), this clear difference between the ⟨Δν⟩ values could even be used to classify whether a star is already in its He-core-burning phase. For the other methods the differences follow the same pattern, but are not as clear because, firstly, they lie in between CAN and COR:UP in terms of their global/local approach and, secondly, CAN and COR:UP are not particularly sensitive to realization noise. For COR:UP this is because of the regularity constraint, and for CAN it is because the frequency determination of a given peak is relatively insensitive to realization noise for such long datasets.

For νmax, the effects are less pronounced. OCT agrees well with CAN, but still with a (small) difference between RGB and RC stars. This could be due to differences in the acoustic cut-off frequency and/or differences in the smoothing applied to fit the power excess. Mosser et al. (2012) investigated this in detail and showed that the smoothing can have a non-negligible effect (as already pointed out by Kallinger et al. 2010a). Furthermore, they show that clump stars have oscillations with lower amplitudes, but larger νmax, than stars ascending the red-giant branch with similar values of ⟨Δν⟩.

This comparison of results of long datasets obtained with different methods shows that the definition of the obtained parameter is of importance and that the differences in the definition are significantly larger than the observational uncertainties. Hence it is important when quoting a parameter value to provide the detailed definition of that particular parameter. Note that all methods also differ in their sensitivity to realization noise as already seen in Paper I.

5.5. Comparison between the predicted and observed mode H/B

For each star analysed, values for the envelope height and the noise background at νmax are returned. We have certain expectations for these values: on average, the ratio of these two parameters should be about 3.7 or 4.0, depending on the evolutionary status of the star (Mosser et al. 2012). From the same work we know that, to within factors of order unity, the values returned by different methods will not be entirely consistent. In this section we explore how closely these expectations are met. We also look at how the ratio varies from run to run, particularly for the short runs, in order to evaluate whether this is a significant factor in the non-detection of the oscillations. For the longest available dataset of 600 days, the median value of the observed H/B is 4.1 with an inter-quartile distance of 1.4, which is roughly consistent with the expectations.

Fig. 7

Normalised distribution of the offset of the individual 600 day results from the reference value, i.e., the CAN 600 day results, for  ⟨ Δν ⟩  (top) and νmax (bottom). COR:UP, COR:EACF and OCT results are indicated in black solid lines, green dashed-dotted lines and red dashed lines respectively.

Next we turn to a consideration of the 50-day data. Here we find that on average the returned envelope height and noise background are consistent with the figures for the longer runs. However, this masks a large amount of variability. The apparent height of the envelope is very variable. We do not know if this is genuine variability or a defect in the algorithms. However, it is clear that even with height-to-background ratios significantly below unity, detection of the modes is possible thanks to the regular pattern of the oscillations. We do not have the values where the algorithms failed to find evidence for oscillations and so cannot comment on the height-to-background ratio in these cases.

6. Prediction of νmax from rms flux

An automated analysis of the red-giant data is made more difficult by the fact that for some of the largest giants the peak in the mode power is at very low frequency (below ~5 μHz). Unless the datasets are very long, the spectra do not have enough resolution to clearly distinguish the oscillations. The automated algorithms may then fasten on features at other frequencies and thus provide a false positive detection. We have therefore sought an independent parameter to guide the software to the appropriate region, and we have found that the mean flux variance in the timeseries data is one such guide. We first provide an analysis which shows why this should be so and then provide the data to illustrate the dependence that we find.

Parseval’s theorem states that the variance of the timeseries is equal to the integrated power in the spectrum. We therefore look at the sources of power in the spectrum. At very low frequencies, instrumentation effects become important; to some extent these have been removed from the data considered here by the data preparation algorithms. At all frequencies there is photon shot noise, but the red giants are usually sufficiently bright that it can be neglected. As a consequence, for red giants the major sources of signal in the data are the granulation and the oscillations. The mode power is modelled as a Gaussian of height H and full width at half power δenv, hence the total power in the modes is approximately 1.06 H δenv (Eq. (2)). Using Mosser et al. (2012), we can express both the height at maximum and the width of the distribution as functions of the frequency of maximum power (Eq. (3)). The frequency distribution of the power in the granulation is modelled according to the Harvey prescription, Pgran(ν) = 4 σgran² τgran / (1 + (2πντgran)²) (Eq. (4)), where σgran² is the variance in the timeseries due to granulation and τgran is the timescale of the granulation. We can use this to estimate the power, B, in the granulation signal at νmax; at νmax the factor of unity in the denominator can be neglected, so that B ≈ σgran² / (π² νmax² τgran) (Eq. (5)). From Mathur et al. (2011) we have a scaling of the granulation timescale with νmax, which gives B as a function of σgran² and νmax (Eq. (6)). Knowing that, we can use the observation that the ratio of height to background is a constant of value 3.7 to 4, depending on the evolutionary state of the star (Eq. (7)). Knowing H/B we can then estimate a value for H. Thus the total variance (V) in the timeseries is the sum of the mode power and the granulation power (Eqs. (8) and (9)); a typical value for H/B is about 4, which gives V as a function of νmax (Eq. (10)). It is clear that although the power-law indices of νmax in the two components are not the same, they are relatively close to each other. The granulation provides just over twice the amount of power as do the modes.
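The Parseval identity that underpins this argument can be checked numerically; the spectral normalisation below is an assumed convention chosen so that the identity holds:

```python
# Numerical check of the Parseval argument used above: the variance of a
# mean-subtracted timeseries equals the integral of its power density
# spectrum. The normalisation convention below is an assumption chosen so
# that the identity holds; conventions differ between analysis codes.
import numpy as np

rng = np.random.default_rng(1)
dt = 1765.5                                  # Kepler long cadence, seconds
n = 4096
flux = rng.normal(0.0, 100.0, n)             # synthetic flux residuals, ppm
flux -= flux.mean()

freq = np.fft.rfftfreq(n, d=dt)              # Hz
power = 2.0 * dt / n * np.abs(np.fft.rfft(flux))**2    # ppm^2 / Hz

variance_time = np.var(flux)
variance_freq = np.sum(power) * (freq[1] - freq[0])    # integrate over frequency
print(variance_time, variance_freq)          # the two should agree closely
```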

Fig. 8

Variance of the flux as a function of νmax, with RGB, RC, second clump and AGB stars indicated by black asterisks, red diamonds, green triangles and blue crosses, respectively. Fits to the values of the four evolutionary states are shown by the yellow solid line, the green dashed line, the red dashed-dotted line and light-blue dashed-triple dotted line. The prediction from Eq. (10) is indicated with the gray line.

Table 2

Coefficients of the power-law fit of the flux variance V (in ppm2) as a function of the frequency of maximum oscillation power νmax (in μHz), for the different evolutionary phases.

Observationally, the power-law fit for RGB stars is shown in Fig. 8, with its coefficients given in Table 2. For the other evolutionary states the fits have different coefficients (Table 2). This indicates that there are differences in the granulation description and/or in the height and width of the oscillation power excess as a function of evolutionary phase. This is consistent with what is shown by Mosser et al. (2012), and needs further investigation, which is beyond the scope of this paper.
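A hypothetical inversion of such a power-law fit, with placeholder coefficients standing in for those of Table 2, could be used to obtain a first guess of νmax from the observed variance:

```python
# Hypothetical inversion of a fitted variance--numax power law to obtain a
# first guess of numax from the flux variance, as proposed above. The
# coefficients a and b are placeholders for the evolutionary-phase dependent
# values of Table 2, which are not reproduced here.
def numax_from_variance(variance_ppm2, a, b):
    """Invert V = a * numax**b for numax (muHz)."""
    return (variance_ppm2 / a) ** (1.0 / b)
```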

7. Summary

In this work we investigated the impact of the length of the timeseries on the precision and accuracy of the determined global oscillation parameters νmax and ⟨Δν⟩ of red giants. We used Kepler light curves spanning about 600 days and divided them into shorter runs of 50, 100, 200 and 400 days. All these runs have been analysed using automated methods. The oscillation detection rate has been compared with predictions, and the resulting values for the global oscillation parameters have been compared as a function of method, run length and ⟨Δν⟩ of the oscillations. From this study we find that:

  • For 95% of the stars, consistent global oscillation parameters are obtained from 600 day timeseries with different methods. For the remaining 5%, there were good reasons for the lack of consistency.

  • Using the observational methods, we find reliable detections of the oscillations for more than 95% of the stars (relative to the consistent 600 day results) in timeseries of 400 days or longer.

  • Current predictions of the detectability of oscillations are based on the amplitudes and predict that in the majority of cases the likelihood of detecting the oscillations is above 90% for both the long and the short runs. However, most observational algorithms use the regularity in the power spectrum to detect the oscillations, and this regularity has reduced sensitivity for shorter runs.

  • The precision of the determined global oscillation parameters increases with increasing timeseries length, and the trends suggest that this continues for even longer timeseries than investigated here. From the extrapolation of fits to the median absolute deviations, a reduction of more than a factor of 10 is expected for an increase in timespan from 50 to 2000 days (the currently foreseen length of the mission). Thus, there are real advantages to be gained from working with even longer timeseries than considered here. We note that the universal pattern is already effective for short datasets.

  • The distributions of the offsets – the difference between the results of the short runs and the result obtained with the same method on the 600-day long timeseries – divided by the quoted uncertainties show that the quoted uncertainties tend to be overestimated, in general more severely so for longer datasets. However, this does depend on the method.

  • We find that 50 day timeseries are not long enough to be certain to pick up more than 90% of the oscillations with the currently employed methods.

  • When comparing different methods it is clear that the differences due to different definitions are non-negligible. This difference is a function of the evolutionary state of the stars and this could be used to determine the evolutionary state.

  • The different strengths, definitions and sensitivities to realization noise of the different methods indicate that the simultaneous use of multiple methods is likely to be profitable.

Additionally, we propose and justify a new method to estimate the frequency of maximum oscillation power from the variance of the timeseries. We show that the dependence of the flux variance on νmax is also a function of evolutionary phase. The effectiveness of this method depends neither on the data duration nor on the location of the peak of the spectrum – always assuming that the necessary data detrending does not attenuate the oscillation signal. We recommend that this method be used in conjunction with the methods described here, as an additional independent constraint to detect the oscillations.

Acknowledgments

Funding for this Discovery Mission is provided by NASA’s Science Mission Directorate. The Kepler Team is recognized for helping to make the mission and these data possible. S.H. acknowledges financial support from the Netherlands Organisation for Scientific Research (NWO). Y.E. and W.J.C. acknowledge support from the Science and Technology Facilities Council (STFC).

References

  1. Baglin, A., Auvergne, M., Barge, P., et al. 2006, in ESA SP, 1306, eds. M. Fridlund, A. Baglin, J. Lochard, & L. Conroy, 33
  2. Basu, S., Grundahl, F., Stello, D., et al. 2011, ApJ, 729, L10
  3. Baudin, F., Barban, C., Belkacem, K., et al. 2011, A&A, 529, A84; 535, C1, Corrigendum
  4. Baudin, F., Barban, C., Goupil, M. J., et al. 2012, A&A, 538, A73
  5. Beck, P. G., Bedding, T. R., Mosser, B., et al. 2011, Science, 332, 205
  6. Beck, P. G., Montalban, J., Kallinger, T., et al. 2012, Nature, 481, 55
  7. Bedding, T. R., Huber, D., Stello, D., et al. 2010, ApJ, 713, L176
  8. Bedding, T. R., Mosser, B., Huber, D., et al. 2011, Nature, 471, 608
  9. Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
  10. Chaplin, W. J., Bedding, T. R., Bonanno, A., et al. 2011a, ApJ, 732, L5
  11. Chaplin, W. J., Kjeldsen, H., Bedding, T. R., et al. 2011b, ApJ, 732, 54
  12. De Ridder, J., Barban, C., Baudin, F., et al. 2009, Nature, 459, 398
  13. Di Mauro, M. P., Cardini, D., Catanzaro, G., et al. 2011, MNRAS, 415, 3783
  14. García, R. A., Hekker, S., Stello, D., et al. 2011, MNRAS, 414, L6
  15. Gruberbauer, M., Guenther, D. B., & Kallinger, T. 2012, ApJ, 749, 109
  16. Hekker, S., Kallinger, T., Baudin, F., et al. 2009, A&A, 506, 465
  17. Hekker, S., Broomhall, A.-M., Chaplin, W. J., et al. 2010a, MNRAS, 402, 2049
  18. Hekker, S., Debosscher, J., Huber, D., et al. 2010b, ApJ, 713, L187
  19. Hekker, S., Basu, S., Elsworth, Y., & Chaplin, W. J. 2011a, MNRAS, 418, L119
  20. Hekker, S., Basu, S., Stello, D., et al. 2011b, A&A, 530, A100
  21. Hekker, S., Elsworth, Y., De Ridder, J., et al. 2011c, A&A, 525, A131
  22. Hekker, S., Gilliland, R. L., Elsworth, Y., et al. 2011d, MNRAS, 414, 2594
  23. Huber, D., Bedding, T. R., Stello, D., et al. 2010, ApJ, 723, 1607
  24. Huber, D., Bedding, T. R., Stello, D., et al. 2011, ApJ, 743, 143
  25. Jenkins, J. M., Caldwell, D. A., Chandrasekaran, H., et al. 2010, ApJ, 713, L120
  26. Jiang, C., Jiang, B. W., Christensen-Dalsgaard, J., et al. 2011, ApJ, 742, 120
  27. Kallinger, T., Mosser, B., Hekker, S., et al. 2010a, A&A, 522, A1
  28. Kallinger, T., Weiss, W. W., Barban, C., et al. 2010b, A&A, 509, A77
  29. Kallinger, T., Hekker, S., Mosser, B., et al. 2012, A&A, 541, A51
  30. Kjeldsen, H., & Bedding, T. R. 1995, A&A, 293, 87
  31. Kjeldsen, H., & Bedding, T. R. 2011, A&A, 529, L8
  32. Mathur, S., Hekker, S., Trampedach, R., et al. 2011, ApJ, 741, 119
  33. Mosser, B., & Appourchaux, T. 2009, A&A, 508, 877
  34. Mosser, B., Michel, E., Appourchaux, T., et al. 2009, A&A, 506, 33
  35. Mosser, B., Belkacem, K., Goupil, M.-J., et al. 2010, A&A, 517, A22
  36. Mosser, B., Barban, C., Montalbán, J., et al. 2011a, A&A, 532, A86
  37. Mosser, B., Belkacem, K., Goupil, M. J., et al. 2011b, A&A, 525, L9
  38. Mosser, B., Elsworth, Y., Hekker, S., et al. 2012, A&A, 537, A30
  39. Samadi, R., Georgobiani, D., Trampedach, R., et al. 2007, A&A, 463, 297
  40. Stello, D., Huber, D., Kallinger, T., et al. 2011a, ApJ, 737, L10
  41. Stello, D., Meibom, S., Gilliland, R. L., et al. 2011b, ApJ, 739, 13
  42. Verner, G. A., Elsworth, Y., Chaplin, W. J., et al. 2011, MNRAS, 415, 3539
