The change in the surface abundance of isotope j at first dredge up, ΔX_j, is interpolated from a table of detailed models with Z = 10⁻⁴ over the mass range covered by the grid. A correction factor, the ratio of the CNO mass fraction at the terminal-age main sequence (TMS) to that at the zero-age main sequence (ZAMS), is then applied to the CNO elements to take into account accretion during the main sequence.
In the Izzard et al. (2006) model first dredge up is treated as an instantaneous event. In terms of time evolution this is a reasonable assumption, because giant-branch evolution is fast, but in terms of luminosity or gravity the approximation is poor, and it proves difficult to compare to, e.g., the data of Lucatello et al. (2006). To resolve this problem the changes in abundances are modulated by a factor f = min{1, max[0, (M_c − M_c,bgb)/(M_c,1DU − M_c,bgb)]}, where M_c is the core mass, M_c,1DU is the core mass at which first dredge up reaches its maximum depth and M_c,bgb is the core mass at the base of the giant branch, before first dredge up starts. M_c is known from the stellar evolution prescription and M_c,1DU is interpolated from a grid of models constructed with the TWIN stellar evolution code (Eggleton & Kiseleva-Eggleton 2002).
In summary, the surface abundance changes at first dredge up are the tabulated changes scaled by the CNO correction factor and the core-mass modulation factor. They agree well with the detailed models as a function of luminosity, gravity and time.
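One reading of the core-mass modulation described above is a clamped linear ramp between the base-of-giant-branch core mass and the core mass at maximum dredge-up depth. The sketch below is illustrative only; the function and variable names (`dredge_up_fraction`, `mc_bgb`, `mc_1dup`, `f_cno`) are ours, not the paper's.

```python
def dredge_up_fraction(mc, mc_bgb, mc_1dup):
    """Fraction of the full first dredge-up abundance change applied
    when the core mass is mc: 0 before the base of the giant branch
    (mc_bgb), 1 once maximum dredge-up depth (mc_1dup) is reached,
    and a linear ramp in between (our assumed interpolation)."""
    if mc_1dup <= mc_bgb:
        return 1.0
    f = (mc - mc_bgb) / (mc_1dup - mc_bgb)
    return min(1.0, max(0.0, f))


def surface_abundance_change(dx_tabulated, f_cno, mc, mc_bgb, mc_1dup):
    """Modulated surface abundance change for one isotope: the
    interpolated table value scaled by the CNO correction factor
    and the core-mass ramp."""
    return f_cno * dx_tabulated * dredge_up_fraction(mc, mc_bgb, mc_1dup)
```

With this form the abundance change turns on smoothly as the core grows up the giant branch, which is the behaviour the text needs to reproduce the luminosity and gravity evolution of the detailed models.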
Abundance changes at third dredge up are treated in a similar way to the prescriptions of Izzard et al. (2004) and Izzard et al. (2006). Intershell abundances are interpolated from tables based on the detailed models of Karakas et al. (2002), whose metallicities extend down to Z = 10⁻⁴.
In low-metallicity TPAGB stars, dredge up of the hydrogen-burning shell enhances the surface abundances of ¹³C and ¹⁴N (at higher metallicity the effect is negligible because the initial abundances of ¹³C and ¹⁴N are relatively large). This is modelled by dredging up an amount of hydrogen-burnt material during each third dredge up, where the abundance mixture in this material is enhanced in ¹³C and ¹⁴N according to
where M(t) is the instantaneous stellar mass, M_env is the instantaneous envelope mass, N_TP is the thermal pulse number and X₁₂ is the envelope abundance of ¹²C. The first term gives the amount of H-burnt material dredged up, the second term is a turn-on effect as the star reaches the asymptotic regime and the third term is a turn-off effect for small envelopes.
We consider three mass-loss prescriptions for TPAGB stars.
- The formalism of Vassiliadis & Wood (1993, VW93) relates the mass-loss rate to the Mira pulsation period P of the star, given by

  log₁₀(P/days) = −2.07 + 1.94 log₁₀(R/R⊙) − 0.9 log₁₀(M/M⊙).
The mass-loss rate is then given by, as in Karakas et al. (2002), i.e. without the mass-correction term for M > 2.5 M⊙ of the original VW93 prescription,

  log₁₀(Ṁ/M⊙ yr⁻¹) = −11.4 + 0.0123 P/days,

unless P > 500 days, in which case a superwind is applied,

  Ṁ = L/(c v_exp),

where c is the speed of light and v_exp = min(15, −13.5 + 0.056 P/days) km s⁻¹ is the wind expansion velocity.
Two free parameters subtly affect the mass-loss rate. The first is a simple multiplicative factor, which is 1 by default (see model set 27). The second, a period shift ΔP, allows the onset of the superwind to be delayed, as in model set 33; it is zero by default.
- The Reimers mass-loss rate is given by

  Ṁ = 4 × 10⁻¹³ η (L/L⊙)(R/R⊙)/(M/M⊙) M⊙ yr⁻¹,

  where η is a parameter of order unity (Reimers 1975) which we vary in model sets 10, 11 and 12.
- The van Loon et al. (2005) rate. In model set 13 we use their split form appropriate to oxygen-rich red giants,

  log₁₀(Ṁ/M⊙ yr⁻¹) = −5.65 + 1.05 log₁₀(L/10⁴ L⊙) − 6.3 log₁₀(T_eff/3500 K),

  where L is the luminosity and T_eff the effective temperature. Note that we enforce a minimum mass-loss rate because the above formula approaches zero as the temperature rises (and the envelope mass becomes small) when a star approaches the white-dwarf cooling track.
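The three prescriptions above can be sketched as follows, using the published formulae (VW93, Reimers 1975, van Loon et al. 2005). The function names, unit conventions and the floor value `mdot_min` are our assumptions, and the coefficients should be checked against the original papers before use.

```python
import math

LSUN = 3.826e33        # solar luminosity, erg/s
MSUN = 1.989e33        # solar mass, g
C_CGS = 2.998e10       # speed of light, cm/s
SEC_PER_YR = 3.156e7

def mira_period(m, r):
    """VW93 Mira pulsation period in days (m, r in solar units)."""
    return 10.0 ** (-2.07 + 1.94 * math.log10(r) - 0.9 * math.log10(m))

def mdot_vw93(m, r, l, dp=0.0, mult=1.0):
    """VW93 mass-loss rate in Msun/yr, without the mass-correction
    term for M > 2.5 Msun, as in Karakas et al. (2002). dp is the
    period shift and mult the multiplier discussed in the text."""
    p = mira_period(m, r) + dp
    if p <= 500.0:
        mdot = 10.0 ** (-11.4 + 0.0123 * p)
    else:
        # superwind: Mdot = L / (c * v_exp), v_exp capped at 15 km/s
        v_exp = min(15.0, -13.5 + 0.056 * p) * 1.0e5   # cm/s
        mdot = l * LSUN / (C_CGS * v_exp)              # g/s
        mdot *= SEC_PER_YR / MSUN                      # Msun/yr
    return mult * mdot

def mdot_reimers(m, r, l, eta=1.0):
    """Reimers (1975) rate in Msun/yr (inputs in solar units)."""
    return 4.0e-13 * eta * l * r / m

def mdot_van_loon(l, teff, mdot_min=1.0e-14):
    """van Loon et al. (2005) oxygen-rich red-giant rate in Msun/yr.
    mdot_min is a placeholder floor; the paper's value is not
    reproduced here."""
    mdot = 10.0 ** (-5.65 + 1.05 * math.log10(l / 1.0e4)
                    - 6.3 * math.log10(teff / 3500.0))
    return max(mdot, mdot_min)
```

Note that the VW93 superwind rate scales linearly with luminosity once the expansion velocity saturates at 15 km s⁻¹, which is why the superwind dominates the final envelope ejection.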
Our default binary-star distribution is the combination of
- The initial mass function (IMF) of Kroupa et al. (1993, KTG93) for the initial primary mass M1,

  ξ(M1) ∝ a1 M1^p1 for m0 ≤ M1 < m1,
          a2 M1^p2 for m1 ≤ M1 < m2,
          a3 M1^p3 for M1 ≥ m2,

  where p1 = −1.3, p2 = −2.2, p3 = −2.7, m0 = 0.1 M⊙, m1 = 0.5 M⊙ and m2 = 1.0 M⊙. Continuity at m1 and m2, together with the normalisation of the IMF, gives the constants a1, a2 and a3.
- A distribution flat in q = M2/M1 for the initial secondary mass M2, where 0 < q ≤ 1.
- A distribution flat in ln a (i.e. probability ∝ 1/a) for the separation a, within fixed limits.
- Initially circular binaries (except for model set Ae5).
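The default initial distributions above can be drawn with a short Monte Carlo sketch. The IMF exponents and break masses follow the text; the upper mass limit `MMAX` and the separation limits `a_min`, `a_max` are placeholders we chose for illustration, since the text does not reproduce them.

```python
import math
import random

P1, P2, P3 = -1.3, -2.2, -2.7      # KTG93 exponents from the text
M0, M1B, M2B = 0.1, 0.5, 1.0       # break masses in Msun
MMAX = 80.0                        # assumed upper mass limit

def ktg93_pdf(m, a1=1.0):
    """Unnormalised KTG93 IMF, made continuous at the break masses."""
    a2 = a1 * M1B ** (P1 - P2)     # continuity at M1B
    a3 = a2 * M2B ** (P2 - P3)     # continuity at M2B
    if m < M0 or m > MMAX:
        return 0.0
    if m < M1B:
        return a1 * m ** P1
    if m < M2B:
        return a2 * m ** P2
    return a3 * m ** P3

def sample_binary(rng, a_min=3.0, a_max=1.0e4):
    """Draw (M1, q, a): M1 by rejection sampling of the IMF against
    a log-uniform proposal, q flat in (0, 1], and a flat in ln a
    (the separation limits are our placeholders)."""
    bound = M0 * ktg93_pdf(M0)     # max of m * pdf(m), reached at M0
    while True:
        m1 = math.exp(rng.uniform(math.log(M0), math.log(MMAX)))
        # accept with probability proportional to m1 * pdf(m1),
        # which corrects the 1/m proposal density to the IMF
        if rng.uniform(0.0, bound) < m1 * ktg93_pdf(m1):
            break
    q = 1.0 - rng.random()         # flat in q, in (0, 1]
    a = math.exp(rng.uniform(math.log(a_min), math.log(a_max)))
    return m1, q, a
```

Because every exponent is steeper than −1, m·ξ(m) is a decreasing function of mass, so the rejection bound at m0 is valid across the whole range and most accepted primaries fall below 0.5 M⊙, as expected for this IMF.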
When data exist for the same star from more than one source, we take the arithmetic mean of the values and add the errors in quadrature. In the case of logarithmic quantities we simply average the log values rather than attempt a more sophisticated approach; this makes little difference to our final results. In the case of data limits (e.g. x < 4) we ignore the data: few data are of this type and the general result is not affected.
We ignore error bars in the sense that a star whose measured value falls just below a selection threshold is not included in our selection, even though, within its error bar, it may in reality qualify. This is the price we pay for a simple selection procedure, and in the large-number limit (the database contains about 1300 stars) it is not a problem.
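As an illustration of the combination rule above, a minimal helper might look like the following. The literal quadrature sum mirrors the wording of the text; whether the result should additionally be divided by the number of sources (as for the standard error of a mean) is not stated here.

```python
import math

def combine_measurements(values, errors):
    """Combine repeated measurements of the same quantity for one
    star: arithmetic mean of the values (log quantities are averaged
    directly, as in the text) and errors added in quadrature."""
    n = len(values)
    mean = sum(values) / n
    err = math.sqrt(sum(e * e for e in errors))  # literal quadrature sum
    return mean, err
```

For two sources quoting 1.0 ± 0.3 and 3.0 ± 0.4, this returns a mean of 2.0 with a combined error of 0.5.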