Volume 526, February 2011
Number of pages: 31
Section: Cosmology (including clusters of galaxies)
Published online: 10 January 2011
Estimation of the SED of contaminating host-galaxy light is an essential step if spectral indicators in contaminated and uncontaminated spectra are to be compared. This is, in turn, unavoidable when comparing nearby SNe (usually with the SN clearly separated from the host-galaxy core) and distant SNe (where SN and galaxy light are degenerate). In Östman et al. (2010) we present the host-galaxy subtraction pipeline applied to the NTT/NOT SNe. In short, this method consists of matching a SN template with a number of galaxy eigencomponent spectra, including a slit-loss/reddening correction. Even if the observed SN SED deviates slightly from the SN templates used in the fit, the very large number of wavelength bins compared with the few fit parameters (five) allows these SN deviations to remain after the subtraction.
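As an illustration of why many wavelength bins constrained by only a few fit parameters leave the SN deviations intact, consider a minimal linear decomposition sketch. This is not the actual multiplicative Östman et al. (2010) pipeline (which also fits slit loss/reddening); all spectra and names here are synthetic toys.

```python
import numpy as np

# Illustrative sketch only: fit an observed spectrum as one SN template
# amplitude plus galaxy eigencomponent amplitudes by linear least squares
# over many wavelength bins (few parameters, many constraints).
def fit_host(flux, sn_template, eigengalaxies):
    """flux, sn_template: (n_bins,); eigengalaxies: (n_eigen, n_bins)."""
    A = np.vstack([sn_template, eigengalaxies]).T      # (n_bins, 1 + n_eigen)
    coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
    host_model = coeffs[1:] @ eigengalaxies            # reconstructed host light
    return flux - host_model, coeffs                   # SN-only estimate, amplitudes

# Toy example: 2000 bins, one fake eigencomponent, 30% added host light.
rng = np.random.default_rng(0)
wave = np.linspace(3500.0, 8000.0, 2000)
sn = 1.0 + 0.3 * np.sin(wave / 300.0)                  # fake SN spectrum
gal = 1.0 + (wave - wave[0]) / (wave[-1] - wave[0])    # fake galaxy shape
obs = sn + 0.3 * gal + rng.normal(0.0, 0.01, wave.size)
clean, coeffs = fit_host(obs, sn, gal[None, :])
```

Because the two components differ in shape across thousands of bins, the amplitudes are recovered accurately even with noise, and the residual spectrum retains any small deviations of the true SN from the template.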
However, the host subtraction increases the indicator measurement uncertainty and may introduce a bias, and it is important that both are estimated. In this Appendix we describe the extensive simulations that were run to study the effectiveness of the host subtraction. These simulations were used to calculate a systematic bias and uncertainty for every measurement, depending on the shape of the indicator and the contamination level.
As a second step of these simulations, we used proposed (metallicity) evolution models to study under which circumstances these would be detected, assuming the properties of the NTT/NOT data set.
The subtraction pipeline is described in detail in Östman et al. (2010). The input parameters are flux density and (optionally) error, observer frame wavelength, redshift and an epoch estimate. This pipeline thus operates identically for real and simulated spectra. A range of internal fit parameters can be changed, including which templates and host galaxy eigencomponent spectra are used as well as the nature of slit loss/extinction approximation. The fit parameters were optimised and fixed during a series of test runs.
To test the reliability of the estimated host galaxy spectra and the impact on spectral indicators, a large number of simulated contaminated spectra were created. Besides contamination, these simulations included realistic slit loss and noise levels. The synthetic spectra are created from:
A supernova spectrum. The SN spectra used as templates all have high S/N and low contamination. Their epochs are similar to the ones of the NTT/NOT spectra. Eleven different SN spectra are used: six of SN 2003du (epochs −6, −2, 4, 9, 10, 17) (Stanishev et al. 2007), one of SN 1998aq (Branch et al. 2003) at peak brightness, two of the subluminous SN 1999by (epochs −5 and 3) (Garnavich et al. 2004) and two of the peculiar and luminous SN 1999aa (epochs −5 and 0) (Garavini et al. 2004).
Reddening is added to the SN spectrum using the Cardelli et al. (1989) extinction law with a total-to-selective extinction ratio RV of 2.1 and a colour excess E(B − V) drawn from the distribution obtained from the NTT/NOT lightcurve fits.
A galaxy spectrum. Four galaxy templates of varying type (elliptical, S0, Sa and Sb) from Kinney et al. (1996) are used together with three real galaxy spectra observed at NTT at the same time as the SN spectra analysed here (host galaxy spectra for SDSS SN7527, SN13840 and SN15381). The contamination level is randomly chosen between 0 and 70% for the g band. These simulations were later extended in a second series where 50 randomly chosen SDSS galaxy spectra were used. Figures displayed here are based on the first run series, but results are similar when including the second set of galaxy spectra.
Redshift. The object redshift is randomly drawn from the NTT/NOT redshift distribution.
Slit loss is added to the SN spectrum. The differential slit loss functions are taken from Östman et al. (2010) and correspond to typical NTT/NOT situations and range from insignificant to severe.
Noise addition. A S/N value is randomly chosen from the NTT/NOT spectral S/N distribution. Poisson noise is added to the spectra until the target S/N is achieved. The shape of the noise is determined as a linear combination of the input spectrum and a randomly chosen NTT/NOT sky spectrum. The linear combination is regulated such that the highest S/N value in the NTT/NOT sample corresponds to no contribution from sky noise, the lowest S/N corresponds to complete dominance by sky noise and intermediate values to a combination of the two error sources.
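The recipe above can be sketched end to end as follows. The reddening curve, slit-loss function, galaxy/sky shapes and the scaling of the contamination fraction are simplified placeholders for illustration, not the Cardelli et al. (1989) law or the actual NTT/NOT functions, and the contamination is applied over the full wavelength range rather than the g band.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_synthetic(wave, sn_flux, gal_flux, sky_flux, ebv, contamination,
                   target_snr, sky_weight):
    # 1. Redden the SN spectrum (power-law stand-in for the CCM law, R_V = 2.1).
    a_lam = 2.1 * ebv * (5500.0 / wave) ** 1.3
    sn = sn_flux * 10.0 ** (-0.4 * a_lam)
    # 2. Add host light scaled to the requested contamination fraction.
    gal = gal_flux * contamination * sn.sum() / ((1.0 - contamination) * gal_flux.sum())
    # 3. Apply a mild differential slit loss.
    slit = 1.0 - 0.1 * (wave - wave[0]) / (wave[-1] - wave[0])
    total = (sn + gal) * slit
    # 4. Noise shaped by a mix of the object and sky spectra, scaled to target S/N.
    shape = (1.0 - sky_weight) * total / total.mean() + sky_weight * sky_flux / sky_flux.mean()
    sigma = shape * total.mean() / target_snr
    return total + rng.normal(0.0, 1.0, wave.size) * sigma

# Toy inputs standing in for real template, galaxy and sky spectra.
wave = np.linspace(3500.0, 8000.0, 1800)
sn_flux = 1.0 + 0.3 * np.sin(wave / 250.0)
gal_flux = 0.5 + (wave / wave[0]) ** 2
sky_flux = 1.0 + 0.2 * np.cos(wave / 40.0)
spec = make_synthetic(wave, sn_flux, gal_flux, sky_flux,
                      ebv=0.1, contamination=0.3, target_snr=15.0, sky_weight=0.5)
```

The `sky_weight` parameter mimics the interpolation described above, from noise dominated by the object (high S/N) to noise dominated by the sky (low S/N).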
All of the created synthetic spectra were then processed through the host subtraction pipeline and spectral indicators were measured. The measured spectral indicators could then be compared with the ones obtained from the original SN spectrum. The subtractions were thus evaluated only with respect to how well correct indicators were measured.
The simulation results can be analysed in a number of ways: looking at specific SN spectra, specific galaxy types, spectra with more or less slit loss or contamination, or any combination of these. For each of these subgroups, errors in all equivalent widths and velocities can be calculated.
In general the simulations are stable, with the following characteristics: a small bias at very low contamination levels that decreases with added contamination, and a random dispersion that increases with contamination. The size of these effects varies slightly from feature to feature. The small bias at low contamination means that the subtraction pipeline finds “something” to subtract even when no contamination was added. This is fully consistent with a small amount of host light already being present in the template spectra, but we cannot rule out that a part of this bias is caused by the subtraction method itself. In practice we do not perform host subtraction on spectra with very low contamination levels. In all cases the full bias as estimated in the simulations is retained, thus generally overestimating the bias levels.
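The per-bin bias and dispersion bookkeeping can be sketched as follows; the bin edges follow Fig. A.1, while the function and field names are illustrative.

```python
import numpy as np

# Group fractional indicator errors into contamination bins and summarise each
# bin by its mean (the bias) and its population RMS (the random dispersion).
def bin_errors(contamination, frac_error, edges=(0.0, 17.5, 35.0, 52.5, 70.0)):
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (contamination >= lo) & (contamination < hi)
        err = frac_error[sel]
        stats.append({"bin": (lo, hi),
                      "mean_contamination": contamination[sel].mean(),
                      "bias": err.mean(),            # average error in the bin
                      "prms": err.std(ddof=0)})      # population RMS
    return stats
```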
Sample simulation results are presented in Fig. A.1.
Sample host-contamination simulation results. For every simulated spectrum the final fractional error is calculated (fractional errors are used so that templates of different epochs and subtypes can be combined and analysed as a function of contamination). The panels show the distribution of errors, divided into four contamination bins (0−17.5, 17.5−35, 35−52.5 and 52.5−70% in the g band). The average contamination, average error and population RMS (Prms) are printed for each bin. The average error (shown as a dashed orange line) indicates a small bias, decreasing with contamination. The dispersion indicates a random error from host subtraction, increasing with contamination. These plots are based on pEW f3; other pEWs show similar results.
The simulations were evaluated with and without added noise. Noise was found to increase the error dispersion but not introduce any significant bias. The added dispersion was comparable to uncertainties yielded from the designated noise simulations. We thus separate errors from host contamination and noise. See Appendix B for a further discussion about noise and filtering. For final spectra the systematic uncertainties will be the sum in quadrature of the respective subtraction and noise systematic uncertainties.
A number of alternative host subtraction methods were tried, including two fundamentally different fitting methods: linear fits using all nearby SN spectra as SN templates, and photometry-fixed galaxy subtraction, where the host galaxy photometry is used to constrain the galaxy shape and proportion. Both methods relax the dependence on the SN template, the first by including a larger variety of templates and the second by not using any template at all. However, the multiplicative method including the slit-loss/reddening correction was found to be superior in most cases and generally more stable.
A number of different implementations of the subtraction pipeline were also tried. These included modifying the number of galaxy eigenspectra, the origin of these eigenspectra and changing constraints on the eigenspectra proportions. The host galaxy subtraction method described above was the final product of these tests.
However, there will be individual objects for which the host subtraction fails or performs less than ideally. This is a natural consequence of the degeneracy between SN, host galaxy and noise. For some of these objects alternative subtraction methods could have been better suited, but for consistency uniform host subtractions were used. The simulations were designed to estimate the bias caused by such failed subtractions.
Since it is unknown whether evolution exists and, if it does, how it affects the SN Ia SED, it is impossible to predict whether evolution could be detected with the NTT/NOT SNe. But we can still study proposed models to quantify how well such effects would be detected. Two different models were considered here: first, an ad hoc decrease of the depths of features 3 and 4, where the frac parameter regulates the percent decrease of these depths. This modification was inspired by the indications of changes in these features seen by Foley et al. (2008) and Sullivan et al. (2009). As a second set of models we use the spectral changes caused by one low- and one high-metallicity model simulated by Lentz et al. (2000). For spectrum templates with epochs earlier than −2.5, the day +15 (after explosion) model was used; otherwise the day +20 model.
All base SN templates used in the above simulations were modified according to the evolution models and processed through the subtraction and measurement pipelines again. The modifications as applied to the spectrum of SN 2003du observed on April 30, 2003 are displayed in Fig. A.2.
These models should not be considered realistic evolutionary models to be tested, but rather tests of what level of evolution can be detected given the host subtraction uncertainties. They are, however, examples of evolution that would not be detected by visual inspection of noisy data but could still affect SN Ia cosmology.
All measurements on “evolved”, host-galaxy-subtracted spectra are collected and compared with the true unevolved reference values. The differences can then be compared with the estimated statistical and systematic uncertainties, and the likelihood of detecting the evolution studied. Sample evolution detection probabilities for evolved SNe are shown in Fig. A.3.
These comparisons show that most evolved SNe would be detected. However, the detection limits we are searching for must be realistic: we do not expect all SNe at higher redshift to be evolved, but rather that the fraction of e.g. low-metallicity SNe will change. To study this limit we designed a further simulation based on the NTT/NOT redshift distribution. The probability of each SN being evolved according to one of the above models is set to be proportional to redshift, reaching 50% at the average redshift of the NTT/NOT data set. For each model we repeat the measurement 5000 times, in each randomly selecting which SNe are evolved. The total spectral indicator offset is calculated and compared with the uncertainties, thus obtaining a distribution of the evolution detection limit.
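This detection-limit Monte Carlo can be sketched as follows; the redshift sample, the indicator offset and the uncertainty are illustrative numbers, not values from the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each SN is flagged as "evolved" with probability proportional to redshift,
# reaching 50% at the sample mean redshift; the mean indicator offset per
# trial is compared with the uncertainty, giving a detection significance.
def detection_significance(redshifts, offset_if_evolved, sigma, n_trials=5000):
    p = np.clip(0.5 * redshifts / redshifts.mean(), 0.0, 1.0)
    sig = np.empty(n_trials)
    for i in range(n_trials):
        evolved = rng.random(redshifts.size) < p       # random evolved subset
        sig[i] = np.mean(evolved * offset_if_evolved) / sigma
    return sig                                          # distribution over trials

z = rng.uniform(0.05, 0.35, 30)                         # toy redshift sample
sig = detection_significance(z, offset_if_evolved=0.1, sigma=0.02)
```

The spread of `sig` over the 5000 trials plays the role of the evolution detection-limit distribution described above.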
In Table A.1, detection limits assuming all NTT/NOT SNe (including those with high contamination) are listed for a number of indicators and for the evolution/metallicity models discussed above. These limits are completely dominated by the systematic bias levels of the high-contamination events, since the systematic bias is treated as a floor set by the largest bias found. A more realistic and less conservative estimate is obtained by removing the highest bias/contamination events; these limits are given in Table A.2.
These results show that we would be sensitive to all but the very weakest of these evolution models using at least one indicator, albeit at a fairly low significance level.
Models of evolution/metallicity changes applied to SN 2003du. The f (frac) models consist of a decrease in the depths of the f3 and f4 features; Z 0.5 corresponds to the Lentz et al. (2000) model of increased metallicity and Z −1.5 to their model of decreased metallicity.
Sample study of how well evolution is detected in simulated spectra. The “30%” evolution model was applied to all template spectra and the measured indicators were compared with the unevolved measurements. The panels show the distribution of fractional differences, divided into the same contamination bins as in Fig. A.1. The total uncertainty in each bin (bias and dispersion) as estimated above is shown as an orange dashed line. Events where the reported difference is larger than the uncertainties would be seen as deviating; in this sense detectable events are shown as hatched bins. The fraction of detected events is shown in each panel and decreases with contamination.
Table A.1. Probability of detecting models for SN Ia evolution.
Table A.2. Probability of detecting models for SN Ia evolution.
Host contamination could affect velocity measurements either through introducing a false minimum or through modifying the position of the true minimum. Studies of simulated spectra show that velocity errors do increase with contamination, but below an r-band contamination of 60%, the errors are small compared to statistical and noise uncertainties.
Host subtraction methods in general perform similarly. The same subtractions as for pEWs are used (for consistency). Systematic uncertainties as estimated from the simulations are added to all measurements.
Random noise will degrade data quality, making measurements less accurate. For low S/N SN spectra, the conventional solution is to apply a filter to remove the high-frequency noise. This technique works well if small levels of filtering are used (filtering/smoothing are considered identical processes here), where the true shape is clearly visible. For noisy data it is no longer obvious what filter to use or how accurate results are.
By definition, pseudo-equivalent widths run from one wavelength extremum to another. This makes such measurements extremely sensitive to noise: if any noise peaks remain, the pseudo-continuum will be defined from them. To remove these, and obtain unbiased data, strong filtering is needed for low-S/N data. We would, however, not want to filter high-S/N spectra (at any redshift) too much, since this would destroy information. We would also like to estimate noise uncertainties.
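A minimal pEW measurement illustrates this sensitivity: the pseudo-continuum is a straight line between the flux maxima bounding the feature, so any surviving noise peak in a bounding range redefines it. The bounding-extremum search and straight-line continuum below are the standard construction; the wavelength ranges and names are illustrative.

```python
import numpy as np

def pseudo_ew(wave, flux, blue_range, red_range):
    """pEW: integral of the fractional depth below the pseudo-continuum."""
    def peak(limits):
        sel = (wave >= limits[0]) & (wave <= limits[1])
        i = np.argmax(flux[sel])               # a noise spike here biases the pEW
        return wave[sel][i], flux[sel][i]
    (w1, f1), (w2, f2) = peak(blue_range), peak(red_range)
    sel = (wave >= w1) & (wave <= w2)
    cont = f1 + (f2 - f1) * (wave[sel] - w1) / (w2 - w1)   # pseudo-continuum
    dlam = np.gradient(wave[sel])
    return np.sum((1.0 - flux[sel] / cont) * dlam)

# Noise-free toy feature: Gaussian absorption on a flat continuum.
wave = np.arange(5000.0, 6000.0, 1.0)
flux = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5500.0) / 50.0) ** 2)
pew = pseudo_ew(wave, flux, (5000.0, 5150.0), (5850.0, 6000.0))
```

For this noise-free Gaussian the pEW approaches the analytic value depth × σ × √(2π); with noise added, the `argmax` calls latch onto noise peaks, which is exactly why strong filtering is required at low S/N.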
A further complication caused by filtering is that errors in filtered bins are correlated.
Average pEW error for feature 3 (left panel) and feature 7 (right panel) at the epoch of maximum light. The noise level, expressed through the logarithm of the pseudo-S/N, increases along the y-axis and the filter strength along the x-axis. Darker shades show smaller errors.
A series of Monte Carlo simulations were run in order to (i) compare filter methods; (ii) determine filter parameters and (iii) estimate associated uncertainties (while avoiding having to determine filtered error correlations). These simulations are described below.
Three easily implemented filters are (1) the boxcar filter, which is simple averaging over a wavelength range; (2) the variance-weighted Gaussian filter, where the smoothed value in a pixel is determined from a surrounding region weighted by a Gaussian and by the inverse variance; and (3) the FFT filter, where all frequencies above a certain maximum frequency are removed from the spectrum.
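Minimal sketches of the three filters follow; the parameter conventions (bin widths, the Nyquist fraction for the FFT cut) are assumptions for illustration.

```python
import numpy as np

def boxcar(flux, width):
    """(1) Boxcar: simple moving average over `width` bins."""
    kernel = np.ones(width) / width
    return np.convolve(flux, kernel, mode="same")

def gaussian_weighted(flux, variance, sigma_bins):
    """(2) Gaussian filter weighted by Gaussian distance and inverse variance."""
    n = flux.size
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((idx - i) / sigma_bins) ** 2) / variance
        out[i] = np.sum(w * flux) / np.sum(w)
    return out

def fft_filter(flux, max_freq_frac):
    """(3) FFT filter: zero all frequencies above a fraction of the maximum."""
    f = np.fft.rfft(flux)
    cut = int(max_freq_frac * f.size)
    f[cut:] = 0.0
    return np.fft.irfft(f, n=flux.size)
```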
In order to determine which filter method works best and to find optimal filter parameters, MC simulations were run. Random noise was added to template SN spectra, after which the S/N was determined, the spectra filtered and the indicators measured. For each method the optimal filter parameters were found through minimisation against the true value. This process was repeated until the MC errors were sufficiently small. No single method with a single set of parameters worked over the complete range of features and S/N values, but all methods can yield unbiased values if the correct filter parameters are used. The correct filter parameters are determined by the actual noise level and the nature of the feature studied (broad or sharp).
Since all methods can be made to work but none will work with a single set of parameters, we selected the simplest method, the boxcar filter, as described below.
The above simulations showed that true pseudo-equivalent widths can be measured from noisy spectra after binning, but correct bin widths must be used. A range of MC simulations were run to determine the widths to use and the typical errors caused by noise. This procedure is detailed below.
Noise was generated with a given amplitude. A gradually stronger filter was applied, while measuring the relevant features at each stage. Through comparison with the true, noiseless values, the errors are obtained. For each iteration a “pseudo-S/N” is calculated as follows: a minimal boxcar (spanning three bins) is applied, and the pseudo-S/N is obtained by comparing this with the original spectrum. This value serves as an initial estimate of the noise level, and can later be compared with real spectra (adjusting for bin widths). The pseudo-S/N is feature-relative, calculated within the maximum boundaries of the feature in question.
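A sketch of this pseudo-S/N estimate follows; the exact normalisation is chosen for illustration (in the text the value is computed within each feature's boundaries).

```python
import numpy as np

# A minimal (3-bin) boxcar approximates the underlying signal; the residual
# of the raw spectrum against it sets the noise scale.
def pseudo_snr(flux):
    smooth = np.convolve(flux, np.ones(3) / 3.0, mode="same")
    resid = (flux - smooth)[1:-1]          # drop the zero-padded edge bins
    return smooth[1:-1].mean() / resid.std()
```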
This procedure is repeated 100 times for each noise amplitude. For each filter strength and pseudo-S/N we thus have a range of pEW errors, from which we obtain the average and dispersion. Two sample mappings of these errors are displayed in Fig. B.1 (for these maps we have used absolute errors). It can be seen that for any pseudo-S/N it is possible to define filter strengths yielding small errors (dark shades in the figure), but the optimal filter strength varies with pseudo-S/N.
These maps are used to find the correct filter for a given feature and pseudo-S/N. Separate maps are created for each feature, where broad features typically demand stronger filtering. Furthermore, the dispersion of pEW values in the optimal bin can be used to approximate the systematic error of performing pEW measurements on noisy spectra.
Note that it is the shape of the feature that determines correct filtering, and that this evolves with epoch. To correctly account for this, the above procedure was repeated for each epoch of the Hsiao templates (Hsiao et al. 2007). The templates were interpolated to 2.5 Å bins in all simulations.
Simulation results are written to a table. These provide, for every feature and lightcurve epoch, the best filter-width to use to minimise the risk for noise bias. Since only the pseudo-S/N is used, we do not require error spectra.
The application to real data can be summarised as:
A minimal boxcar is applied, through which the pseudo-S/N is determined.
By comparing Monte-Carlo runs for the Hsiao template of the same epoch and feature, the optimal boxcar width is determined.
The average MC error and dispersion around the reference values are taken as systematic errors from the simulation.
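The lookup in the steps above can be sketched as follows; the table structure (keyed by feature and epoch, rows sorted by tabulated pseudo-S/N) is assumed for illustration.

```python
# Given a spectrum's feature, epoch and pseudo-S/N, pick the optimal boxcar
# width and the associated systematic error from a precomputed MC table.
def lookup_filter(table, feature, epoch, pseudo_snr):
    """table: {(feature, epoch): [(snr, boxcar_width, sys_err), ...]}."""
    rows = table[(feature, round(epoch))]
    # Take the row with the nearest tabulated pseudo-S/N.
    best = min(rows, key=lambda row: abs(row[0] - pseudo_snr))
    return best[1], best[2]          # (boxcar width, systematic error)
```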
For the well-defined type Ia SN minima studied here, minimum positions are stable against noise as long as sufficiently wide bins are used. A constant bin width in velocity space can thus be used. However, determinations of minima will still be affected by noise to the degree that, on average, noisy data will have a larger dispersion. Both of these effects, the absence of bias and the increased dispersion, were studied using MC simulations of the Hsiao templates (Hsiao et al. 2007) with the same approach as for the pEWs. Random noise is added to the templates and the velocities are calculated after binning.
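A minimal velocity measurement after binning might look like the following; the rest wavelength and the non-relativistic Doppler conversion are common choices assumed here for illustration, not a quote of the pipeline.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def feature_velocity(wave, flux, search_range, rest_wavelength):
    """Blueshift velocity (km/s) of the absorption minimum in search_range."""
    sel = (wave >= search_range[0]) & (wave <= search_range[1])
    lam_min = wave[sel][np.argmin(flux[sel])]          # position of the minimum
    return C_KMS * (1.0 - lam_min / rest_wavelength)   # positive for blueshift

# Toy feature: minimum at 6100 Å measured against a 6355 Å rest wavelength
# (the Si II line often used for SN Ia velocities) gives roughly 12 000 km/s.
wave = np.arange(5800.0, 6400.0, 2.0)
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 6100.0) / 40.0) ** 2)
v = feature_velocity(wave, flux, (5900.0, 6300.0), 6355.0)
```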
For every template epoch and feature, both bias and dispersion are obtained as functions of pseudo-S/N. For velocities 2, 3, 5, 6 and 7 (and reasonable epoch intervals), these results are consistent with no bias and a gradual increase in dispersion with noise.
For each spectrum studied (in both the reference and NTT/NOT set), epoch and pseudo-S/N values were used to locate the corresponding MC dispersion, which was used as systematic velocity error.
For features with more complicated minima (feature 4) or possible additional high velocity absorption features (feature 1), simply determining the minima will not be enough. These features demand either stringent minima criteria or function fitting for optimal study. Automatic minima measurements will show a large scatter.
© ESO, 2011