EDP Sciences

This article has an erratum.

Issue
A&A
Volume 535, November 2011
Article Number A109
Number of page(s) 18
Section Astronomical instrumentation
DOI https://doi.org/10.1051/0004-6361/201117810
Published online 21 November 2011

© ESO, 2011

detect the neutronization burst, a short outbreak of νe’s released by electron capture on protons soon after collapse. Tantalizing signatures, such as the formation of a quark star or a black hole as well as the characteristics of shock waves, are investigated to illustrate IceCube’s capability for supernova detection.

Key words. neutrinos – supernovae: general – instrumentation: detectors

1. Introduction

On February 23, 1987, a burst of mainly electron anti-neutrinos with energies of a few tens of MeV emitted by the supernova SN1987A was recorded simultaneously by the Baksan (Alekseev et al. 1987), IMB (Bionta et al. 1987), and Kamiokande-II (Hirata et al. 1987, 1988) detectors, a few hours before its optical counterpart was discovered. With just 24 neutrinos collected, stringent limits on the mass of the ν̄e, its lifetime, its magnetic moment and the number of leptonic flavors could be derived (Kotake et al. 2006). As of now, SN1987A remains the only source of neutrinos that has been detected outside of our solar system. Although the optical detection of supernova explosions has a long history, detailed features of the gravitational collapse can only be studied with neutrinos, which carry away nearly 99% of the gravitational binding energy soon after the collapse. The current generation of detectors is capable of detecting many orders of magnitude more neutrinos and can thus study details of the gravitational collapse and neutrino properties.

The rate of galactic stellar collapses, including those obscured in the optical, is estimated by various methods to be ≈ (1–7)/100 years (Diehl et al. 2006; Strom 1994). A compilation in Giunti & Kim (2007) narrows the expected range to (1.7–2.5)/100 years by taking into account experimental and theoretical limits. The best experimental upper limit is < 9.3/100 years (Novoseltseva et al. 2009).

While differences in the onset of the neutrino emission between various models are small (Kachelriess et al. 2005), the models have yet to overcome problems with the supernova explosion mechanisms. The theoretical knowledge about the neutrino emission at times longer than several 100 μs after the deleptonization (Buras et al. 2003; Kitaura et al. 2006) is limited. However, three characteristic phases are expected: a rapid luminosity increase during collapse with the appearance of a shock breakout burst, an accretion phase during which the neutrino flux of all flavors is maximal, and a cooling phase. The duration of supernova neutrino emission is determined by the neutrino diffusion time scale in the dense matter inside the proto-neutron star. The exact features will depend on the progenitor mass, with modulations introduced by the dynamics of the collapse.

IceCube is primarily designed to observe TeV neutrino sources with a wide lattice of light sensors embedded in highly transparent glacier ice used as Cherenkov medium. However, it was recognized early by Pryor et al. (1988) and Halzen et al. (1996) that neutrino telescopes offer the possibility of monitoring our Galaxy for supernovae. In spite of the much lower neutrino energies involved in a supernova burst, Cherenkov light induced by neutrino interactions will increase the count rate of all light sensors above their average value. Although the increase in the noise rate in each light sensor is not statistically significant, the effect will be clearly seen once the rise is considered collectively over many sensors. Low photomultiplier noise rates, low photon absorption in the Cherenkov medium and a large number of sensors are essential.

IceCube is uniquely suited for this measurement due to its location and 1 km3 size. The noise rates in IceCube’s photomultiplier tubes average around 540 Hz since they are surrounded by inert and cold ice with depth dependent temperatures ranging from  − 43   °C to  − 20   °C. At depths between (1450–2450) m they are also largely shielded from cosmic rays. The noise rate is further reduced by the use of detector components with reduced radioactivity. The detected signal rate is essentially independent of the photon scattering length and depends linearly on the absorption length of  ≈ 100 m in ice.

The expected signal significance in IceCube is somewhat reduced due to two types of correlations between pulses that introduce supra-Poissonian fluctuations. The first correlation involves a single photomultiplier tube. It comes about because a radioactive decay in the pressure sphere can produce a burst of photons lasting several μs. The second correlation arises from the cosmic-ray muon background; a single cosmic ray shower can produce a bundle of muons which is seen by hundreds of optical modules.

The 5160 photomultipliers are sufficiently far apart such that the probability to detect light from a single interaction in more than one digital optical module (DOM) is small. Effectively, each DOM independently monitors several cubic meters of ice. The detection principle was demonstrated with the AMANDA experiment, IceCube’s predecessor (Ahrens et al. 2002).

The inverse beta process, ν̄e + p → n + e⁺, dominates supernova neutrino interactions at MeV energies in ice or water, leading to charged-particle tracks of about 0.5 cm · Eν/MeV length. Considering the approximately quadratic energy dependence of the cross section, the light yield per neutrino roughly scales with Eν³. Due to the low rate of galactic supernovae, it is imperative that the detector operates stably for a long time. IceCube was designed to operate for at least 10 years and is well suited for such a purpose owing to an automated online data acquisition, analysis software and alert system. As neutrinos may escape from an exploding supernova at much higher matter densities than photons, neutrinos will be observable several hours before their optical counterpart. The detailed observation of the onset of a supernova explosion is of much interest to astronomers. Since 2009, IceCube has been sending real-time datagrams to the Supernova Early Warning System (SNEWS) (Antonioli et al. 2004) when detecting supernova candidate events. SNEWS has been set up to broadcast a reliable alert to the astronomy community when a supernova has been detected by several neutrino detectors within seconds of each other. Currently, Super-Kamiokande (Fukuda et al. 2002; Ikeda et al. 2007), LVD (Aglietta et al. 2002), Borexino (Alimonti et al. 2009) and IceCube (Ahrens et al. 2004) contribute to SNEWS, with a number of other neutrino and gravitational wave detectors planning to join in the near future.
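The scaling argument can be checked with a short sketch, assuming a quadratic cross section and the ~0.5 cm/MeV track length quoted above; the normalization is arbitrary and only the relative scaling matters:

```python
# Illustrative sketch of the per-neutrino light-yield scaling: the detection
# probability follows the approximately quadratic cross section (~E^2) and
# the Cherenkov photon count follows the ~0.5 cm/MeV track length (~E),
# so the yield per incident neutrino grows roughly as E^3.

PHOTONS_PER_CM = 316.0     # Cherenkov photons per cm in the (300-600) nm band
TRACK_CM_PER_MEV = 0.5     # approximate track length per MeV from the text

def relative_light_yield(e_mev):
    """Relative light yield per incident neutrino (arbitrary units)."""
    sigma_rel = e_mev ** 2                               # ~ cross section
    photons = PHOTONS_PER_CM * TRACK_CM_PER_MEV * e_mev  # photons along track
    return sigma_rel * photons

# Doubling the neutrino energy gives roughly 2^3 = 8 times the yield.
ratio = relative_light_yield(30.0) / relative_light_yield(15.0)
print(f"yield(30 MeV) / yield(15 MeV) = {ratio:.1f}")
```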

This paper describes the technical details and expected physics capability of IceCube as a detector for core collapse supernovae. It also summarizes the performance of the detector while it was still under construction. The outline of the paper is as follows: Sect. 2 describes physics processes in supernovae for selected models and oscillation scenarios that are relevant to the performance studies presented in this paper, Sect. 3 describes the aspects of the IceCube detector relevant to the detection of MeV supernova neutrinos. In Sect. 4, we discuss the processes that lead to a detectable signal in IceCube as well as the online analysis that processes and monitors the data, triggers events and sends out alerts to SNEWS. Section 5 describes the performance of the detector over two years and the systematic uncertainties expected when assessing the sensitivity of the detector. Section 6 discusses IceCube’s potential in the study of astrophysical and neutrino properties, and finally, conclusions are given in Sect. 7.

2. Supernovae and neutrinos

After the core of an aging massive star ceases generating energy and the corresponding radiative pressure from nuclear fusion processes, it undergoes a sudden gravitational collapse as soon as its inactive core grows beyond the Chandrasekhar mass limit. After several steps to relieve thermal and degeneracy pressure from the dense electron gas, the collapse stops once nuclear densities are reached and an incompressible proto-neutron star is formed. Matter falling on its surface is promptly stopped and its momentum is inverted forming an outward moving shock wave.

Neutrinos of different flavors are initially trapped in their respective neutrino spheres as the mean free path of neutrinos is smaller than the size of the supernova core at densities larger than 10¹³ kg/m³. The shock wave following the collapse dissociates nuclei, which suddenly increases the number of protons, resulting in an increase in electron capture and the production of a burst of νe. The timescale of this neutronization burst (“deleptonization peak”) is on the order of 10 ms, during which much of the energy driving the shock wave is carried away. The shock stalls but is presumably soon revived by interactions of the large flux of neutrinos generated in the proto-neutron star. The models describing the prompt neutronization burst appear to be robust and consistent (Kachelriess et al. 2005). The proto-neutron star subsequently cools over ~20 s. The neutrino flux decreases until neutrinos are no longer produced in the cooled down proto-neutron star (see e.g. Fischer et al. 2010 and references therein).

The released gravitational potential energy is carried away by huge numbers of neutrinos and, to a small extent, by heating and expelling the star’s outer layers. Less than 1% of the gravitational binding energy of a supernova is emitted as kinetic energy of matter and optically visible radiation. The remaining 99% is released as neutrino energy, of which about 1% will be carried by electron neutrinos from the initial neutronization burst. Most neutrinos and antineutrinos, distributed among all flavors, are created during the subsequent cooling processes: the bulk of the energy (Burrows et al. 1992) is carried away by the intense neutrino burst produced predominantly through thermal Kelvin-Helmholtz cooling reactions (Suzuki 1991), with a time dependent all-flavor neutrino and anti-neutrino luminosity Lν(t). According to Thompson et al. (2003) and Buras et al. (2003), the mean energy is expected to be about (13–14) MeV for νe, (14–16) MeV for ν̄e and (20–21) MeV for all other flavors (νx); the Garching model (Kitaura et al. 2006) differs in that the mean energies for ν̄e and νx turn out to be approximately equal. For a supernova at d = 10 kpc distance and an average neutrino energy of 15 MeV, the summed flux of all neutrino and antineutrino types flowing through the detector is ≈ 10¹⁶ m⁻². More on the theory of core collapse supernovae can e.g. be found in Janka et al. (2006) and references therein.
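As an order-of-magnitude check of the quoted fluence, the following sketch converts an assumed canonical energy release of ~3 × 10⁴⁶ J (an assumption of this example, not a number taken from the text) into a neutrino fluence at 10 kpc:

```python
import math

# Order-of-magnitude check of the neutrino fluence at d = 10 kpc, assuming
# a canonical total energy release of ~3e46 J shared among all flavors with
# a mean energy of 15 MeV (the total energy is an assumption here).

E_TOTAL_J = 3e46                 # assumed total energy in neutrinos
E_MEAN_J = 15e6 * 1.602e-19      # 15 MeV in joules
D_M = 10 * 3.086e19              # 10 kpc in metres

n_neutrinos = E_TOTAL_J / E_MEAN_J
fluence = n_neutrinos / (4 * math.pi * D_M ** 2)
print(f"fluence ~ {fluence:.1e} neutrinos/m^2")   # of order 1e16 m^-2
```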

In the following paragraphs, we briefly introduce the Lawrence-Livermore (Totani et al. 1997) and Garching models (Kitaura et al. 2006) that are used as benchmarks. In addition, we introduce specific models by Dasgupta et al. (2010) and Sumiyoshi et al. (2007) that were selected to demonstrate IceCube’s physics performance in Sect. 6. We also discuss the effect of neutrino oscillations on the expected signals, and introduce the parametrization of the energy spectra chosen for this paper.

The spherically symmetric Lawrence-Livermore simulation was performed from the onset of the collapse to 18 s after the core bounce, encompassing the complete accretion phase and a large part of the cooling phase. It is modeled after SN 1987A and assumes a 20 M☉ progenitor. Of the total emitted energy, 16% is carried by ν̄e with an average energy of 15.3 MeV.

The newer spherically symmetric Garching simulations include more detailed information on neutrino energy spectra and use a sophisticated neutrino transport mechanism. They cover 0.80 s following the collapse of an O-Ne-Mg (8–10) M☉ progenitor star that is destabilized due to rapid electron capture on neon and magnesium. This class of stars may represent up to 30% of all core collapse supernovae. Recent simulations by Fischer et al. (2010) and Hüdepohl et al. (2010) extend over 22 s from the collapse of an 8.8 M☉ progenitor to the completed formation of the deleptonized neutron star. They are the only examples so far where one-dimensional simulations obtain neutrino-powered supernova explosions and two-dimensional simulations yield only minor dynamical and energetic modifications. In Table 4 we also refer to a two-dimensional axisymmetric simulation by Marek et al. (2009) of a 15 M☉ progenitor star that covers 0.38 s following the collapse. Results with full multi-angle neutrino transport in two dimensions have been reported by Brandt et al. (2011).

Following original work by Takahara & Sato (1988), recent simulations (Dasgupta et al. 2010) of certain stellar core-collapse supernovae predict a sharp ν̄e burst several hundred milliseconds after the prompt νe neutronization burst, associated with a quark-hadron phase transition at high baryon densities. A detection of this prominent feature would constitute direct evidence of quark matter.

The gravitational collapse of stars of less than solar metallicity exceeding 25 solar masses will lead to a limited stellar explosion, while stars exceeding 40 solar masses are not expected to explode at all. In both cases a black hole will develop after bounce. At this point, the neutrino emission quickly comes to an end, providing a unique signature for black hole formation (Sumiyoshi et al. 2007) and a model independent time-of-flight measurement of the neutrino mass (Beacom et al. 2001).

Neutrinos streaming out of the core will encounter matter densities ranging from 10¹³ kg/m³ to zero. Assuming an energy of 15 MeV, they pass through an MSW-resonance layer at ≈ 2 × 10⁶ kg/m³, affecting either neutrinos or antineutrinos depending on the neutrino mass hierarchy, followed by a second layer at ≈ 2 × 10⁴ kg/m³ associated with the solar neutrino mass-squared difference Δm²₂₁. Both mix the initial fluxes of νe, ν̄e and νx depending on the survival probabilities. Although the survival probabilities depend on the details of the density profiles and generic predictions are impossible, we consider three limiting cases as benchmarks to discuss the effect of the assumed neutrino hierarchy on the spectra observed with IceCube. Scenario A describes the normal neutrino hierarchy case and Scenario B represents the inverted hierarchy case with a static density profile of the supernova, both paired with a relatively large mixing angle θ13 > 0.9°. In Scenario C, the mixing angle θ13 is assumed to be very small (θ13 < 0.09°) and the hierarchy may be either normal or inverted. One should be aware that the predictions are affected by the unknown density profile of the collapsing star. In addition, forward or backward running single or multiple shock waves or bubbles can form within the supernova, causing steep density gradients and – in some cases – changes in the oscillation behavior (Tomàs et al. 2004; Choubey et al. 2006).

As the neutrinos propagate through the Earth, they undergo matter induced oscillations (MSW effect). The neutrino flux (Giunti & Kim 2007; Dighe et al. 2004) decreases by up to 8%. The effect depends sensitively on the zenith angle, the supernova model and the assumed neutrino properties. Given the systematic uncertainties, it will be difficult to establish this effect with IceCube. We will therefore include Earth oscillation effects in the systematic uncertainty.

For this paper, the supernova is considered to be close to a blackbody source of neutrinos while it is cooling down. The neutrino energies then follow a modified Fermi-Dirac distribution. Many model predictions discussed in this paper adopt the following parametrization for the neutrino differential flux at the position of the Earth at distance d from the supernova:

dΦν/dEν = Lν(t) / (4π d² ⟨Eν⟩) · f(Eν),   (1)

where

f(Eν) = (1 + αν)^(1+αν) / (Γ(1 + αν) ⟨Eν⟩) · (Eν/⟨Eν⟩)^αν · exp(−(1 + αν) Eν/⟨Eν⟩)   (2)

is the normalized energy distribution depending on a shape parameter αν (Keil et al. 2003). Theory provides the time dependent supernova luminosity Lν(t) and mean energy ⟨Eν⟩ for the neutrino species with corresponding energies Eν. Other model predictions are transferred to this framework by fitting the provided spectra.
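The quasi-thermal parametrization of Keil et al. (2003) can be written down and numerically checked as follows; the functional form f(E) ∝ (E/⟨E⟩)^α exp[−(α+1)E/⟨E⟩] is the published one, while the grid and example parameters below are arbitrary choices for this sketch:

```python
import math

# Sketch of the normalized Keil et al. (2003) spectrum used in Eq. (2):
# f(E) = (1+a)^(1+a) / (Gamma(1+a) <E>) * (E/<E>)^a * exp(-(1+a) E / <E>).

def keil_spectrum(e, e_mean, alpha):
    """Normalized energy distribution f(E); e and e_mean in MeV."""
    norm = (alpha + 1.0) ** (alpha + 1.0) / (math.gamma(alpha + 1.0) * e_mean)
    return norm * (e / e_mean) ** alpha * math.exp(-(alpha + 1.0) * e / e_mean)

# Numerical check on a fine grid: f integrates to 1 and has mean <E>.
e_mean, alpha = 15.0, 3.0        # example parameters, not from a model fit
de = 0.01
grid = [i * de for i in range(1, 20000)]
f = [keil_spectrum(e, e_mean, alpha) for e in grid]
total = sum(f) * de
mean = sum(e * fe for e, fe in zip(grid, f)) * de / total
print(f"integral = {total:.4f}, mean = {mean:.2f} MeV")
```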

3. The IceCube detector

IceCube (Ahrens et al. 2004) was installed in the Antarctic ice sheet at the geographic south pole between January, 2005 and December, 2010 by lowering cable assemblies, called strings, into holes drilled in the ice using hot water. IceCube instruments a volume of about 1 km³ of clear ice between depths of 1450 m and 2450 m below the surface with a coarse lattice of 5160 Digital Optical Modules (DOMs). Each DOM consists of a photomultiplier tube housed in a pressure-resistant glass sphere. Once the water in the holes refreezes, the DOMs are embedded into the ice sheet with good optical coupling. The DOMs are installed on 86 cable strings, of which 80 are separated by roughly 125 m, forming a triangular grid, with each string containing 60 DOMs vertically spaced by 17 m. More details on the detector can be found in Achterberg et al. (2006). The remaining 6 strings constitute the denser DeepCore sub-array. DeepCore strings are separated by approximately 60 m and are located near the center of IceCube. Each of these strings contains 60 high quantum efficiency DOMs, with the bottom 50 DOMs vertically spaced by 7 m and located between depths of (2107–2450) m below a dusty ice layer with reduced transparency. The remaining 10 DOMs, vertically separated by 10 m and located above the dust layer, instrument a volume between (1750–1860) m.

IceCube is complemented by a surface array called IceTop, consisting of a pair of DOMs encased in ice tanks near the top of each string. Due to their higher noise and sensitivity to the fluctuating solar particle flux (Abbasi et al. 2008), the IceTop DOMs do not contribute to the IceCube supernova detection system.

The DOM is the fundamental element in the IceCube architecture. Each DOM is housed in a 13′′ (33 cm) diameter, 0.5′′ (1.27 cm) thick borosilicate glass pressure sphere. It contains a Hamamatsu R7081-02 (R7081-MOD in case of DeepCore) 10′′ (25.4 cm) hemispherical photomultiplier tube (Abbasi et al. 2010) as well as several electronics boards containing a processor, memory, flash file system and realtime operating system that allows each DOM to operate as a complete and autonomous data acquisition system (Abbasi et al. 2009). It stores the digitized data internally and transmits the information to a surface data acquisition system on request.

The supernova detection relies on continuous measurements of photomultiplier rates. The rate information is stored and buffered on each DOM in a 4-bit counter in 1.6384 ms time bins (2¹⁶ cycles of the 40 MHz clock). The main data acquisition system (Abbasi et al. 2009) transfers the data asynchronously to the independently operating supernova data acquisition system (SNDAQ). For the real-time processing, the information is synchronized by a GPS clock and regrouped in 2 ms bins.
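Regrouping the native 1.6384 ms scaler bins into 2 ms bins can be sketched as follows; the proportional splitting of counts across bin boundaries is an assumption of this example, not necessarily how SNDAQ implements it:

```python
# Sketch of regrouping scaler counts from the DOM's native 1.6384 ms bins
# (2**16 cycles of the 40 MHz clock) into the 2 ms bins used downstream.
# Counts spanning a 2 ms boundary are split in proportion to the overlap.

NATIVE_BIN_S = 2 ** 16 / 40e6    # = 1.6384 ms
TARGET_BIN_S = 2e-3

def rebin(counts, t_end):
    """Distribute native-bin counts onto 2 ms bins covering [0, t_end)."""
    n_out = int(t_end / TARGET_BIN_S)
    out = [0.0] * n_out
    for i, c in enumerate(counts):
        start, stop = i * NATIVE_BIN_S, (i + 1) * NATIVE_BIN_S
        j = int(start / TARGET_BIN_S)
        while j < n_out and j * TARGET_BIN_S < stop:
            lo = max(start, j * TARGET_BIN_S)
            hi = min(stop, (j + 1) * TARGET_BIN_S)
            out[j] += c * (hi - lo) / NATIVE_BIN_S
            j += 1
    return out

native = [1] * 125               # 125 native bins cover ~204.8 ms of data
rebinned = rebin(native, 0.2)    # 100 output bins of 2 ms each
print(f"total counts in [0, 0.2 s): {sum(rebinned):.2f}")
```

For a uniform rate, the total count over the covered interval is conserved by construction.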

The south pole is out of reach for most communication satellites and high bandwidth connectivity is available only for about 6 h per day. Therefore, a dedicated Iridium-satellite (Pratt et al. 1999) connection is used by the SNDAQ host system to transmit urgent alerts. In that case, a short datagram is sent to the northern hemisphere. The receiving system parses the message and forwards information on the supernova candidate event to the international SNEWS group. The time delay between photons hitting the optical module and the arrival of the datagram at SNEWS stands at about 6 min, providing close to real-time monitoring and triggering. In order to test the signal path, an internal trigger threshold is adjusted to transmit 1–2 background triggers per day.

Due to satellite bandwidth constraints, the data are re-binned in 0.5 s intervals and then subjected to a statistical online analysis described in Sect. 4.2; the fine time information in 2 ms intervals is transmitted for a period starting 30 s before and ending 60 s after a trigger flagging a candidate supernova explosion. The system is surveyed by the IceCube experiment control and monitoring system (“IceCube Live”); supernova alerts are immediately distributed by e-mail to notify experts.

The optical and noise properties of the DOMs are crucial for the understanding of IceCube’s supernova detection and will hence be discussed in more detail in the following subsections.

3.1. Optical properties of the digital optical module

The photomultiplier was chosen on the basis of low dark counts and good time and charge resolution. Its bialkali photocathode has a spectral response in the range 300 nm to 600 nm with a peak quantum efficiency of (25 ± 1)% at 420 nm, well-matched to the Cherenkov signal spectrum and the optical properties of the glacial ice. Dark count rates of around 540 Hz are typical for standard efficiency DOMs at ice temperatures between −43 °C at 1450 m depth and −20 °C at 2450 m depth. The quantum efficiency of high efficiency DOMs is roughly 1.35 times higher, while the noise increases approximately by a factor 1.25. The glass of the pressure sphere was selected for high transmission in the sensitive region of the photomultiplier and a low rate of background photons from intrinsic radioactivity in the glass. Optical transparency extends well into the near-UV, dropping to a few percent at 310 nm.

Fig. 1

Overall DOM efficiency versus wavelength for head-on illumination of the 0.0856 m2 DOM cross section. The average value in the 300–600 nm range, weighted by the wavelength dependence of Cherenkov light emission, is  ≈ 7.1%.


The photomultiplier is mechanically attached to the glass pressure housing with a silicone elastomer gel (GE6156 RTV). This gel matches the refractive index of the glass (n = 1.48) to reduce optical losses at the medium interfaces. The spectral transparency of the gel extends to approximately 250 nm. The combined response of the glass, gel, and photomultiplier is a critical input to the IceCube Monte Carlo simulation package. The detection efficiency of the DOM to a head-on parallel light beam is shown in Fig. 1.

Most of the PMTs are operated at a gain of 107, so single photoelectrons produce pulses of approximately 8 mV amplitude and 10 ns width across the load impedance. The programmable front end pulse discriminator is set to 2 mV, a factor of  ≈ 10 above the rms noise level (Abbasi et al. 2010). On average, 85% of all single photo-electron pulses pass the discriminator threshold.

3.2. Noise properties of the digital optical module

Several effects contribute to the prevailing average rate of 540 Hz for standard efficiency DOMs. Atmospheric muons (16 Hz), thermal emission from the photocathode (<10 Hz), and photomultiplier induced afterpulses ( ≈ 30 Hz) all play a role, but the majority of hits are due to radioactive decays, of which a large fraction initiates bursts of hits lasting for up to 15 ms. These bursts are presumably scintillation of residual cerium in the glass of the photomultiplier and the pressure sphere energized by β and α decays. The ⁴⁰K content in the IceCube pressure spheres is specified to produce less than 100 Bq, leaving trace elements from uranium and thorium decay chains as the main source of radioactivity.
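A back-of-the-envelope budget with the rates quoted above shows that the radioactivity-induced component must account for roughly 90% of the total:

```python
# Budget for the 540 Hz average DOM noise rate using the components quoted
# in the text; whatever is left over is attributed to radioactive decays
# and the associated bursts.

total_hz = 540
muons_hz = 16          # atmospheric muons
thermal_hz = 10        # thermal photocathode emission (quoted upper bound)
afterpulse_hz = 30     # photomultiplier afterpulses

radioactive_hz = total_hz - muons_hz - thermal_hz - afterpulse_hz
print(f"radioactivity-related component: ~{radioactive_hz} Hz "
      f"({100 * radioactive_hz / total_hz:.0f}% of the total)")
```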

The main characteristics of DOM noise were determined using a minimum bias data set with pulses recorded by 120 IceCube DOMs. The observed time difference between sequential noise hits deviates from an exponential distribution expected for a Poissonian process (see Fig. 2). The inset shows a hit sequence from a single DOM. Photomultiplier related afterpulses, which occur with  ≈6% probability on time scales of 0.3–11 μs (Abbasi et al. 2010), cannot explain the high occupancy bursts. We infer that the bursts are caused by an event within a DOM, but external to the electron amplification of the photomultiplier.

These observations are consistent with a study by Meyer (2010), who showed that a photomultiplier with bialkali photocathode produces bursts that increase in rate and size as the photomultiplier is chilled. Such behavior could result from increased efficiency for radiative decay of excited states in the glass. In addition, Richardson’s law describes the increase in thermal emission with photomultiplier temperature. The deployment of IceCube DOMs on vertical strings places them in environments ranging from  −43   °C to  −20   °C (Price et al. 2002), warming with depth. DOM temperatures are some 10   °C warmer due to the energy dissipated in the electronics as monitored by a sensor mounted on the mainboard, but the photomultiplier and DOM glass temperature is somewhat uncertain.

Fig. 2

Probability density distribution of time differences between pulses for noise (bold line) and the exponential expectation for a Poissonian process fitted in the range 15 ms  < ΔT <  50 ms (thin line). The excess is due to bursts of correlated hits, as can be seen from the 50 ms long snapshot of hit times shown in the inset.


To confirm these temperature patterns, we divide the DOM noise into two contributions. The first is the rate of random arrivals as determined by fitting the slope of the interval distribution as in Fig. 2. The second is the rate of events contributing to the excess of short intervals. These contributions are further divided into six temperature bands, and displayed in Fig. 3. The fitted excess contributions are then compared to an empirical exponential ansatz (Meyer 2010),

Rburst = AC · G · exp(−T/Tr),   (3)

where T is the absolute temperature, AC = 0.055 m² is the cathode surface of the deployed R7081-02 photomultiplier (Hamamatsu 2007) and G = 5 × 10⁴ m⁻² s⁻¹ is a fixed constant taken from Meyer (2010). The fit results in Tr ≈ 115 K. The Poisson component is fitted to the Richardson-type law on thermal emission plus an ad-hoc constant noise term C,

RPoisson = A · T² · exp(−W/(kT)) + C,   (4)

where k denotes Boltzmann’s constant, W ≈ 0.5 eV is the work function for bialkali cathodes, and A, C are fit parameters. The curves match the observed temperature dependence of the noise rates fairly well, and support the hypothesis that the DOM noise is primarily due to a temperature dependent spectrum of bursts.
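Both components can be evaluated with the constants quoted in the text (AC, G, Tr, W); the Richardson amplitude A and offset C below are illustrative placeholders, not the published fit values:

```python
import math

# Evaluating the burst and Poisson noise components at the two temperature
# extremes of the detector. A_C, G, T_r and W are quoted in the text; the
# Richardson amplitude `a` and offset `c` are illustrative placeholders.

K_EV = 8.617e-5   # Boltzmann constant, eV/K
A_C = 0.055       # photocathode area, m^2
G = 5e4           # fixed constant from Meyer (2010), m^-2 s^-1
T_R = 115.0       # fitted scale temperature, K
W = 0.5           # work function of the bialkali cathode, eV

def burst_rate(t_k):
    """Empirical burst component: larger for colder DOMs."""
    return A_C * G * math.exp(-t_k / T_R)

def poisson_rate(t_k, a=1e-4, c=225.0):
    """Richardson-type thermal emission plus a constant term."""
    return a * t_k ** 2 * math.exp(-W / (K_EV * t_k)) + c

for t in (230.0, 253.0):   # roughly -43 C and -20 C
    print(f"T = {t:.0f} K: burst ~ {burst_rate(t):.0f} Hz, "
          f"Poisson ~ {poisson_rate(t):.0f} Hz")
```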

The data acquisition was designed to reduce the noise rate by eliminating the excess hits, while keeping the random arrivals. The signal-to-noise ratio of the measurement can be improved by enforcing an artificial dead time τ after every count, configured to 250 μs by a field programmable gate array in the DOM. This reduces the noise rate from 540 Hz to 286 Hz at the cost of some 13% dead time for signal. The choice of 250 μs optimizes sensitivity to the Lawrence-Livermore model (Totani et al. 1997) for distances up to 75 kpc, when neglecting the effect of afterpulses following the signal. A dead time τ > 110 μs guarantees that the 4-bit counters do not overflow. The rate decrease due to dead time can be corrected for; however, the corresponding uncertainty increases once the measured rate approaches 1/τ. An improved data acquisition that would avoid distortions of the rate measurement for very close supernovae is under discussion. Various dead time implementations, e.g. schemes masking hits that arrive within the dead time caused by a previous hit or schemes allowing each hit to restart the dead time, were tested with only slight differences observed. By eliminating the initial hits of the bursts, the noise rate can be reduced by up to 100 Hz. Other optimizations may be possible but require a thorough understanding of the effect on signal hits. Design and implementation of a new data acquisition system with more efficient noise rejection will be reported in a subsequent publication.
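A minimal sketch of the non-paralyzable scheme (hits within τ of the last accepted hit are masked and do not restart the dead time), applied to pure Poisson noise; because the correlated bursts are omitted here, the rate reduction is milder than the measured 540 Hz → 286 Hz:

```python
import random

# Non-paralyzable artificial dead time: after an accepted hit, all hits
# within tau are discarded and do not restart the dead time.

TAU = 250e-6  # artificial dead time, seconds

def apply_dead_time(hit_times, tau=TAU):
    """Return the accepted hits from a time-sorted list of hit times."""
    accepted = []
    last = -float("inf")
    for t in hit_times:
        if t - last >= tau:
            accepted.append(t)
            last = t
    return accepted

# Pure Poisson hits at 540 Hz for 100 s (bursts omitted in this sketch).
random.seed(1)
t, hits = 0.0, []
while t < 100.0:
    t += random.expovariate(540.0)
    hits.append(t)

accepted = apply_dead_time(hits)
print(f"{len(hits) / 100:.0f} Hz -> {len(accepted) / 100:.0f} Hz")
```

For a Poisson process the accepted rate should approach r/(1 + r·τ) ≈ 476 Hz at r = 540 Hz.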

Fig. 3

Measured average noise rates of 120 DOMs as a function of DOM temperature. The excess of short time intervals due to bursts (solid) is fitted to the empirical model of Meyer (2010). The Poissonian contribution (dashed) has not been corrected for the depth (and thus temperature) dependent contribution of atmospheric muons (see Fig. 7). The dotted line is a comparison to the predicted thermal noise from Richardson’s law plus a constant rate of C = 225 ± 6 Hz.


4. Neutrino interaction, detection and analysis

In this section we first discuss the processes that lead to a detectable signal in IceCube, starting with the relevant neutrino interaction cross sections and continuing with the Cherenkov light production, propagation and detection. The real-time analysis introduced to monitor collective rate changes in IceCube’s light sensors is described in the second part of this section.

4.1. Cherenkov photon signal in IceCube

Table 1

Major neutrino reactions.

Neutrinos of different species will be detected in IceCube via the interactions listed in Table 1. The table also includes the number of observed photon hits and the corresponding fractions as expected from the Garching model. The inverse beta reaction dominates in the ice. The small contribution by neutrino-electron scattering processes poses a challenge to the detection of the deleptonization peak. Note that the νe and ν̄e cross sections on ¹⁶O are strongly energy dependent due to the high reaction thresholds of 15.4 MeV and 11.4 MeV, respectively. While their contribution to the hit rate in case of the Garching model is small ( ≈ 3%), the contribution can be as high as 20% for a 40 M☉ progenitor (Sumiyoshi et al. 2007) with average neutrino energies of 25 MeV and beyond. The νe cross sections on the rare isotopes ¹⁸O and ¹⁷O, with ¹⁸O giving the dominant contribution, add a small signal due to their low reaction thresholds (Haxton 1987). As the cross sections are only given for electron energies between (5–13) MeV (Haxton 1999), they were extrapolated assuming an E² dependence. While the energy deposition due to positron annihilation, neutron capture and photon induced Compton electrons arising in the de-excitation of giant O* resonances in the neutral current interactions νX + ¹⁶O → νX + ¹⁶O* → νX + ¹⁵O/¹⁵N + n/p + γ (Langanke et al. 1996) has been included, we have not yet considered delayed β and βγ decays from excited nuclei. The cross section uncertainties of the reactions on protons and electrons are estimated in the references of Table 1 to be smaller than 1%; uncertainties on oxygen reactions are hard to assess and the cross sections may only be known up to a factor of two.

Reactions producing electrons or positrons in the final state radiate Nγ Cherenkov photons along their flight path x, as long as their kinetic energies exceed the Cherenkov threshold of 0.272 MeV. Integrating the Frank-Tamm formula between (300–600) nm and accounting for the dispersion in ice, one obtains Nγ = (316 ± 9) cm⁻¹·x (x in cm). For the inverse beta decay, the total average positron energy follows from the average ν̄e energy, reduced by the ≈ 1.3 MeV neutron-proton mass difference. Due to the approximately quadratic energy dependence of the interaction cross section, the observed positron energies are on average higher than those implied by the mean energy of the incoming neutrino spectrum; the number of Cherenkov photons rises steeply with the neutrino energy (see Fig. 4).
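Combining the Frank-Tamm yield of 316 photons/cm with an effective track length back-solved from the a = 178 MeV⁻¹ scaling quoted in the Fig. 4 caption gives a simple photon-yield estimate; the track-length constant is derived here for illustration, not taken from the simulation:

```python
# Photon yield along a positron track: 316 photons/cm (Frank-Tamm, 300-600 nm)
# times an effective track length per MeV back-solved from the a = 178 MeV^-1
# relation quoted in the text (illustrative, not a simulation result).

PHOTONS_PER_CM = 316.0
TRACK_CM_PER_MEV = 178.0 / 316.0   # ~0.56 cm/MeV, reproduces a = 178/MeV

def cherenkov_photons(e_mev):
    """Approximate number of (300-600) nm Cherenkov photons vs. energy."""
    return PHOTONS_PER_CM * TRACK_CM_PER_MEV * e_mev

print(f"{cherenkov_photons(1.0):.0f} photons per MeV")
print(f"{cherenkov_photons(15.0):.0f} photons for a 15 MeV positron")
```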

Fig. 4

Example for an anti-electron neutrino energy spectrum with α = 2.92 predicted for an 8.8 solar mass O-Ne-Mg core supernova collapse (Hüdepohl et al. 2010) one second after the onset of the burst (solid line). Also shown is the cross section weighted energy spectrum of produced positrons and electrons from inverse beta decay (dashed line) as well as a measure of the detectable energy, which is proportional to the number of Cherenkov photons Nγ in the (300–600) nm range (dotted line). The relation Nγ = a·Ee with a = 178 MeV⁻¹ is used.


The mean travel path for electrons from νe and positrons from ν̄e, including secondary leptons with energies above the Cherenkov threshold as well as positron annihilation, was determined with a GEANT-4 Monte Carlo simulation. The path was found to scale linearly with the positron energy; the corresponding relationship for electrons was determined to be consistent within errors.

The optical scattering and absorption in glacial ice at the South Pole have been studied extensively (Ackermann et al. 2006) by the AMANDA and IceCube Collaborations with pulsed and continuous light sources deployed in the ice together with the neutrino telescope. The detectors span depths of (1300–2500) m in the ice, where the scattering coefficient varies by a factor of seven and the absorptivity by a factor of three, depending on the wavelength. The data (Bramall et al. 2005) are consistent with the variations in dust impurity concentration seen in ice cores sampled at other Antarctic sites to track climatological changes. In the simulation applied for this paper, the ice is assumed to be homogeneous in the horizontal plane, despite an observed slight tilt of the ice layers.

We use two alternative procedures to calculate the number of detected signal hits from the number of neutrinos crossing the detector: the first approach is based on separate simulations of particle interactions, Cherenkov photon creation, propagation and detection; the second, a GEANT-3.21 GCALOR-based simulation (Zeitnitz et al. 1994), combines all steps in one program.

IceCube’s standard simulation of photon propagation within the ice relies on predetermined tables (Lundberg et al. 2007), created to track photons across the Antarctic ice. The tables store the detection probability and the arrival time distribution for given source and detector locations as well as their orientations. They include the source wavelength, angular and intensity information, DOM parameters such as the glass and gel transparency and the quantum efficiency of the photomultiplier tubes, and information about the ice such as the depth-dependent absorption and scattering lengths.

The signal hit rate per DOM for a specific reaction and target is given by

R = ϵdeadtime · ntarget · (Lν / (4π d² ⟨Eν⟩)) · ∫ dEν f(Eν) ∫ dEe (dσ/dEe)(Eν, Ee) · Nγ(Ee) · Vγeff ,    (5)

where ntarget is the density of targets in ice, d is the distance of the supernova, Lν its luminosity, f(Eν) the normalized neutrino energy distribution defined in Eq. (2), and Ee denotes the energy of electrons or positrons emerging from the neutrino reaction. Vγeff denotes the effective volume for a single photon and Nγ(Ee) ≈ 178 MeV⁻¹ · Ee is the energy dependent number of radiated Cherenkov photons; the numerical values depend on the selected wavelength range, chosen as (300–600) nm throughout this paper. The artificial dead time τ (see Sect. 3.2) reduces the total rate of hits. Comparing the observed signal, defined as the net increase over the nominal noise level, to the full rate of signal hits defines the dead time efficiency ϵdeadtime. The approximate expression ϵdeadtime ≈ 0.87/(1 + rSN·τ) is found as a function of the signal rate rSN by adding a Poissonian signal to the measured sequence of noise hits and applying a non-paralyzable dead time τ = 250 μs.
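The dead time correction can be sketched directly from the approximation quoted above:

```python
# Dead-time efficiency sketch: the text quotes the approximation
# eps_deadtime ~ 0.87 / (1 + r_SN * tau) for a non-paralyzable
# artificial dead time tau = 250 microseconds per DOM.

TAU = 250e-6  # s, artificial dead time

def deadtime_efficiency(r_sn_hz: float) -> float:
    """Fraction of supernova signal hits surviving the artificial
    dead time for a per-DOM signal rate r_sn_hz (in Hz)."""
    return 0.87 / (1.0 + r_sn_hz * TAU)
```

Even at vanishing signal rate the efficiency saturates at 0.87, since the dead time also removes part of the correlated hits; for a bright burst of 1 kHz per DOM it drops to ≈ 0.70.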

The single photon effective volume Vγeff varies strongly with the photon absorption. As a first approximation, it can be estimated as the product of the Cherenkov spectrum and DOM sensitivity weighted absorption length ( ≈ 100 m), the DOM geometric cross section (0.0856 m²), the Cherenkov spectrum weighted optical module sensitivity ( ≈ 0.071), the average angular sensitivity including cable shadowing effects ( ≈ 0.32), and the fraction of single photon hits passing the electronic DOM threshold ( ≈ 0.85).
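Multiplying out the factors listed above gives a rough figure for the single photon effective volume:

```python
# Back-of-envelope estimate of the single photon effective volume as
# the product of the factors listed above (values as quoted in the text).
absorption_length_m = 100.0    # spectrum/sensitivity weighted absorption length
dom_cross_section_m2 = 0.0856  # geometric DOM cross section
module_sensitivity = 0.071     # Cherenkov spectrum weighted sensitivity
angular_sensitivity = 0.32     # average, including cable shadowing
threshold_fraction = 0.85      # single photon hits passing the DOM threshold

v_eff_photon_m3 = (absorption_length_m * dom_cross_section_m2 *
                   module_sensitivity * angular_sensitivity *
                   threshold_fraction)  # roughly 0.17 m^3 per photon
```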

The single photon effective volume was simulated by randomly placing 10^7 photons with (300–600) nm wavelengths within a sphere of 250 m radius around each DOM. We made the simplifying assumption that the Cherenkov light arrives at the DOMs isotropically from all directions. Note that the directions of positrons from the dominant inverse beta decay reaction are only very weakly correlated with those of the incoming neutrinos.

The single photon effective volume was determined as a function of depth in the ice (see Fig. 5), and an average value per DOM was obtained by averaging over all DOMs in one string. The systematic uncertainty is discussed in Sect. 5.5.

Fig. 5

Effective volume per DOM (left axis) for detection of Cherenkov photons with (300–600) nm wavelength plotted as a function of depth. The effective positron volume can be read off the right axis. DeepCore strings are not included in these plots.


The energy dependent effective volume for detecting an electron or positron is obtained by multiplying Vγeff with the number Nγ(E) of Cherenkov photons. The mean number of photons recorded by an optical module then follows from folding this effective volume with the target density and the neutrino energy distribution. For positrons with a cross section weighted average energy of 20 MeV (see Fig. 4) one would obtain an average effective volume of ≈ 590 m³ for standard efficiency DOMs. This volume corresponds to an envisioned sphere of ≈ 5.2 m radius centered at the optical module position, with full sensitivity inside and zero outside. With 5160 optical modules deployed, IceCube thus roughly corresponds to a dedicated 3.5 Mton supernova search detector in terms of geometry. Due to the presence of noise, a fair comparison in terms of statistical accuracy needs to take into account the signal over background ratio as a function of time and distance. To give an example, a study of the initial 380 ms of the burst in the Lawrence Livermore model (see Table 4) at distances of 10 kpc (5 kpc) would require a 0.45 (1.6) Mton background free detector to statistically compete with IceCube.
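The sphere equivalence can be cross-checked numerically. The inputs below are assumptions for illustration: a single photon effective volume of 0.165 m³ (the product of the factors listed earlier) and an average positron energy of 20 MeV with Nγ = 178 photons/MeV.

```python
import math

# Rough cross-check of the quoted sphere equivalence (illustrative
# inputs, not simulation results).
v_photon_m3 = 0.165                 # assumed single photon effective volume
n_gamma = 178.0 * 20.0              # photons from a 20 MeV positron
v_eff_m3 = v_photon_m3 * n_gamma    # effective positron volume, ~590 m^3
radius_m = (3.0 * v_eff_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)  # ~5.2 m
```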

The second approach was to apply a GEANT-3.21 GCALOR based simulation of individual events that includes νe and ν̄e interactions on protons, electrons and 16/17/18O, positron annihilation and neutron capture, the photon propagation in the ice including the effect of dust layers, the detector geometry, and the DOM response (Richard 2008). The cross section parametrization of Vogel & Beacom (1999), which is in good agreement with Strumia & Vissani (2003), was used. Positron annihilation and hydrogen capture of neutrons produce photons of 0.51 MeV and 2.22 MeV energy, respectively; these add, predominantly through Compton scattering and subsequent Cherenkov emission, ≈ 1 MeV to the recorded energy. Rates from neutrino interactions on electrons show a 20% dependence on the incoming neutrino direction, due to the small angle between neutrinos and scattered electrons and the directional dependence of the DOM efficiencies. Figure 6 shows the clustering of detected inverse beta decay interactions at the positions of the detector strings, visualizing the effective volumes.

Fig. 6

Detected neutrino inverse beta decay interaction vertices projected onto the horizontal plane based on a GEANT-3.21 simulation with 10 million neutrino interactions.


The use of events with two or more DOMs detecting photons from the same positron to improve upon IceCube’s sensitivity at large supernova distances and to track relative changes in the average neutrino energy will be discussed in a future paper. If several photons arrive close in time at the same DOM they will be counted as one hit; if one of the photons is delayed by scattering it will be rejected by the artificial dead time requirement. The two independent approaches for the determination of the detected number of events agree within 10%.

One may obtain a rate estimate from measured data by scaling the 11 events Kamiokande-II observed during the supernova SN1987A neutrino burst to IceCube’s effective volume of Antarctic ice. Assuming the ν energy spectrum of Vissani & Paglioroli (2009), accounting for the Kamiokande-II energy threshold and positron detection efficiency, and taking into account the loss due to IceCube’s artificial dead time we determine a signal expectation of 113 ± 36 detected photons per IceCube module within the first 15 s for a SN1987A like supernova near the galactic center at 10 kpc distance. The results are consistent with earlier simulations (Feser 2004; Jacobsen 1996) performed for AMANDA that assumed homogeneous ice, after correcting for the different photomultiplier sensitive areas, optical module transparencies and dust layers in the ice.

4.2. Real-time analysis method

The analysis monitors the collective rate increase Δμ in all DOMs induced by Cherenkov photons uniformly distributed in the ice. As discussed in Sect. 4.1, the photons are radiated by e± produced by reacting supernova neutrinos. Counting Ni pulses during a given time interval Δt, rates ri = Ni/Δt are derived for each DOM i, with i ranging from 1 to the total number of operational optical modules NDOM. For sufficiently large Δt, the distributions of the ri can be described by lognormal distributions that, for simplicity, are approximated by Gaussian distributions with rate expectation values ⟨ri⟩ and corresponding standard deviation expectation values ⟨σi⟩. These expectation values are computed from moving 300 s time intervals before and after the investigated time interval; shorter intervals would reduce the sensitivity of the analysis. At the beginning and the end of a SNDAQ run, asymmetric intervals are used. The time windows exclude 30 s before and after the investigated bin in order to reduce the impact of a wide signal on the mean rates.
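The baseline procedure above can be sketched as follows. The flat per-DOM array layout and function shape are assumptions for illustration; bin width, window length and exclusion region follow the text.

```python
import numpy as np

# Sketch of the baseline estimate: <r_i> and <sigma_i> are computed
# from 300 s of data on either side of the investigated bin,
# excluding 30 s before and after it.

DT = 0.5      # s, analysis bin width
WINDOW = 600  # bins: 300 s on each side
EXCLUDE = 60  # bins: 30 s exclusion on each side

def baseline(rates: np.ndarray, k: int):
    """Mean and sample standard deviation of the rates of one DOM
    around bin k, excluding the +-30 s region surrounding the bin."""
    left = rates[max(0, k - EXCLUDE - WINDOW): max(0, k - EXCLUDE)]
    right = rates[k + 1 + EXCLUDE: k + 1 + EXCLUDE + WINDOW]
    sample = np.concatenate([left, right])
    return sample.mean(), sample.std(ddof=1)
```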

The most likely collective rate deviation Δμ of all DOM noise rates ri from their individual expectations ⟨ri⟩ (determined under the null hypothesis of no signal) is obtained by maximizing the likelihood

ℒ = ∏i (2π⟨σi⟩²)^(−1/2) · exp( −(ri − ⟨ri⟩ − ϵiΔμ)² / (2⟨σi⟩²) ) .    (6)

Here ϵi denotes a correction for module and depth dependent detection probabilities. An analytic minimization of −lnℒ leads to

Δμ = σ²Δμ · Σi ϵi (ri − ⟨ri⟩)/⟨σi⟩²    (7)

with an approximate uncertainty of

σ²Δμ = ( Σi ϵi²/⟨σi⟩² )⁻¹ .    (8)

Note that Δμ has the structure of a weighted average: each optical module contributes the deviation from its expected noise rate, weighted by ϵi/⟨σi⟩². Assuming uncorrelated background noise and a large number of contributing DOMs, the significance ξ = Δμ/σΔμ should approximately follow a Gaussian distribution with unit width centered at zero. In practice, the width turns out to be larger (see Sect. 5.3). The likelihood that a deviation is caused by an isotropic and homogeneous illumination of the ice can be assessed from the probability of

χ² = Σi (ri − ⟨ri⟩ − ϵiΔμ)²/⟨σi⟩² .    (9)

In order to suppress high rate deviations due to a temporary malfunction of individual detector modules, we reject supernova candidate events with a χ²-probability < 0.001.
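The weighted-average structure of Eqs. (7) and (8) translates into a few lines of numpy. This is a minimal sketch; the array inputs are assumptions for illustration.

```python
import numpy as np

# Most likely collective rate deviation, its uncertainty and the
# significance xi, with per-DOM efficiency corrections eps_i.

def collective_deviation(r, r_mean, sigma, eps):
    """Return (dmu, sigma_dmu, xi) for per-DOM rates r, expectations
    r_mean, standard deviations sigma and corrections eps."""
    w = eps / sigma**2
    var_dmu = 1.0 / np.sum(eps * w)            # Eq. (8)
    dmu = var_dmu * np.sum(w * (r - r_mean))   # Eq. (7)
    sigma_dmu = np.sqrt(var_dmu)
    return dmu, sigma_dmu, dmu / sigma_dmu     # xi = significance
```

For a uniform upward shift of all rates, the estimator recovers the shift exactly, with an uncertainty shrinking as the square root of the number of modules.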

For short time bases Δt, the Gaussian approximation is no longer valid and Poissonian probabilities must be used. The collective rate deviation is then obtained from the stationarity condition of the Poissonian log-likelihood,

Σi Ni ϵi/(⟨ri⟩ + ϵiΔμ) = Δt Σi ϵi ,    (10)

which can no longer be solved analytically. The same holds for the corresponding uncertainty, which is derived by identifying a drop in lnℒ of 0.5. The required numerical minimization prevents an online analysis of the raw data in 2 ms time intervals. However, fine time data in intervals of 30 s before and 60 s after a trigger are transmitted by satellite to perform a more detailed analysis offline. For instance, the onset of the neutrino emission can be determined with better than 5 ms accuracy for supernovae closer than 15 kpc, conservatively assuming low mass O-Ne-Mg core supernovae (Hüdepohl et al. 2010). For similar studies see Pagliaroli et al. (2009a) and Halzen & Raffelt (2009). This information can then be used to triangulate the supernova direction with other neutrino experiments.
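One possible way to solve the Poissonian condition numerically is bisection on the (monotonic) gradient of the log-likelihood; the code below is a sketch under that formulation, with illustrative bounds.

```python
import numpy as np

# Numerical solution of the Poissonian maximum likelihood condition
#   sum_i N_i eps_i / (<r_i> + eps_i dmu) = dt * sum_i eps_i.
# The left-hand side decreases monotonically in dmu, so bisection
# suffices; the bounds must keep all <r_i> + eps_i*dmu positive.

def poisson_dmu(counts, r_mean, eps, dt, lo=-100.0, hi=1e4):
    def grad(dmu):
        return np.sum(counts * eps / (r_mean + eps * dmu)) - dt * np.sum(eps)
    for _ in range(200):  # bisection on the monotonic gradient
        mid = 0.5 * (lo + hi)
        if grad(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```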

The optimal time base Δt for detecting faint signals depends on the expected signal shape. A simple generic description incorporates a fast rise of the neutrino flux followed by an exponential decrease with time constant τ, as expected during proto-neutron star cooling. Maximizing the significance

ξ(Δt) ∝ (1/√Δt) ∫₀^Δt e^(−t/τ) dt    (11)

leads to Δt_opt ≈ 1.26 τ. As the realtime analysis operates on bins of 0.5 s length, a time window length of 4 s has been chosen as the best available setting for this particular model assumption. Assuming the Livermore model (Totani et al. 1997), with a pronounced flux during the first seconds due to the high mass progenitor, the optimal time window is determined to be 1.6 s.
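The optimum of the significance in Eq. (11) can be verified numerically: in units of x = Δt/τ the figure of merit is (1 − e^(−x))/√x, which peaks near x ≈ 1.26.

```python
import numpy as np

# Numerical check of the optimal window length for an exponentially
# decaying signal: xi(dt) ~ (1 - exp(-x)) / sqrt(x) with x = dt/tau,
# maximized near x ~ 1.26, i.e. dt_opt ~ 1.26 tau.

x = np.linspace(0.01, 5.0, 100001)
xi = (1.0 - np.exp(-x)) / np.sqrt(x)
x_opt = x[np.argmax(xi)]
```

For a cooling time constant of, say, τ ≈ 3 s (an assumed value for illustration), this gives Δt_opt ≈ 3.8 s, close to the 4 s window used by the realtime analysis.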

To cover these model uncertainties, additional analyses with time bases of 0.5 s and 10 s run in parallel with the one with Δt = 4 s. The 0.5 s analysis aims at short neutrino bursts (e.g. from soft gamma ray repeater sources or from supernovae collapsing into a black hole). The 10 s time base accounts for the observed duration of the neutrino detection from supernova SN1987A by Kamiokande-II (Hirata et al. 1987). By removing the cut on χ² for the 0.5 s binning, the trigger has been made sensitive to partial illuminations of the detector. This opens the possibility of recording hypothetical exotic particles that emit considerable amounts of light and thereby act as a slowly moving light source (such as ultra-heavy magnetic monopoles in some theories).

The collective rate deviation Δμ and its uncertainty σΔμ in the time bases of 4 s and 10 s are calculated using sliding windows in 0.5 s steps and extracting the maximal significance. This procedure ensures that the signal detection efficiency is not reduced by binning effects.

5. Detector performance

In this section we will characterize the detector performance based on two years of data taking experience, discuss detector qualification criteria and summarize the expected systematic uncertainties. The data were taken with 22 operating strings (211 days between Aug. 2, 2007 and April 5, 2008) and with 40 operating strings (345 days between April 9, 2008 and April 15, 2009).

5.1. DOM stability requirements

The stability of DOM noise rates is crucial for IceCube’s sensitivity to supernovae. Faulty modules are removed from the analysis by automatic procedures applied in real time. In the 40 string configuration, 41 out of 2400 deployed DOMs showed no signal ( ≈ 1.7%); all module rates fulfilled the requirement ⟨ri⟩ < 10 000 Hz. Operational modules are removed from the analysis if they exhibit a variance ⟨σi⟩² much larger than the Poissonian expectation ⟨ri⟩ or a high skewness |s|. In the very rare case where the number of qualified modules drops below a threshold of 100, the corresponding time periods are discarded as a safeguard against sending false alarms to SNEWS.

The filter results in Table 2 show the excellent data quality of IceCube. Taking the 40 string configuration as an example, 98% of the disqualifications were due to just 11 DOMs. Module disqualification therefore has a negligible effect on the signal significance, which scales as the square root of the number of active DOMs.

Table 2

Module disqualification.

5.2. Long term stability

Fig. 7

Top: rate of a typical DOM as a function of time, covering 556 days of lifetime measured in 0.5 s bins (baseline suppressed). The line corresponds to a rate fit according to Eq. (12). Bottom: parameter c2 and estimated muon induced rate as a function of depth. The variation with depth is mostly due to the optical properties of the ice and to muons ranging out.


The DOM rates are characterized by an exponential decrease over long time periods and a slight seasonal modulation. For the purpose of this analysis, the formula

r(t) = c0 + c1 · e^(−t/τ) + c2 · s(t) ,    (12)

with an annual modulation term s(t) of unit amplitude, represents the effects sufficiently well, as can be seen in Fig. 7. The decay of the rate is likely due to a decrease of triboluminescence in the ice with time, a byproduct of the freezing process. For DOMs that have been in the ice for more than 3 years, the fall time τ exceeds 40 years, except for very deep DOMs where the freezing process takes longer (τ ≈ 4 years). In any case, the effect is negligible for the analysis, which only requires a stable rate within the analysis time window of 10 min. The slightly skewed rate distribution of a single DOM is better described by a lognormal distribution than by a Gaussian (see Fig. 8, with 250 μs dead time applied). The average rates for standard efficiency and high efficiency DOMs are determined to be 286 Hz and 359 Hz, respectively. Thanks to tight quality control, variations between DOMs of the same type are small, with standard deviations of 26 Hz and 36 Hz, respectively. The seasonal DOM rate modulation is assumed to arise from a change in the atmospheric muon flux (Tilav et al. 2009). Fitting the time varying component, the parameter c2 can be extracted (see Fig. 7); it tracks the effect of dust layers similarly to what was observed in the determination of effective volumes (see Fig. 5). If one interprets the effect as being due to stratospheric temperature variations, measured to modulate the muon flux by Δr ≈ 8.3% in 2008, the averaged muonic contribution to single DOM rates is ⟨rμ⟩ ≈ c2/Δr ≈ 16 Hz. As will be discussed in the following section, statistical fluctuations in the atmospheric muon rate, despite this small average contribution to the DOM rate, distort the significance spectrum considerably.

Fig. 8

Rate distribution of a typical standard efficiency DOM taken over 29 consecutive days. Each measurement corresponds to 0.5 s integration time. Gaussian and lognormal fits are shown.


5.3. Background significance distribution

For purely uncorrelated background and high statistics, the significance ξ is expected to be Gaussian distributed with width σ = 1 (see Sect. 4.2). As can be seen in Fig. 9, the measured distribution is broader than expected and is fairly well fitted by a Gaussian of width σ = 1.27. The broadening increases with the size of the detector and has reached σ = 1.43 with 79 operating strings. It is caused by non-Poissonian fluctuations in the number of hits deposited by atmospheric muons: highly energetic muons or muon bundles clustering in time leave correlated hits that will in general pass the χ² cut. In the offline analysis, one can partly remove this effect, as the number of muon induced coincident hits in neighboring DOMs is recorded for all triggered events. For the 79 (40) string configuration, the broadening is thus reduced to σ = 1.06 (1.05). As this section describes the results of the online analysis, we do not apply this correction in the discussions below.

An effective significance threshold of ξ = 6.0 provides an internal trigger for testing the system once or twice per day, while a threshold of ξ = 7.1 satisfies the SNEWS requirement of approximately one false background trigger per 10 days. The Gaussian curve shown in Fig. 9 predicts one false background trigger within ten years at a threshold of ξ = 11. These thresholds are also depicted in Fig. 12. The entries at ξ = 8 and ξ = 9.5 are due to test runs with artificial light sources.

5.4. Future improvements

Further optimizations may be applied to the data acquisition and analysis in the future, e.g. by incorporating a more sophisticated method to remove correlated noise, by excluding the bin-by-bin contribution of measured cosmic ray muon hits to the online rate measurement, by storing time stamps of all hits in case of a significant alarm to e.g. improve on the timing resolution and to track the average neutrino energy (Baum et al. 2011; Demiroers et al. 2011), and by employing temporal templates in likelihood or cross-correlation studies.

Fig. 9

Significance distribution in 0.5 s binning for a detector uptime of 556 days with 22 and 40 strings deployed. The two outliers at ξ = 8 and 9.5 occurred during test runs employing artificial light. The dashed line shows a Gaussian fit with σ = 1.27.


5.5. Systematics

Table 3

Summary of statistical and systematic uncertainties.

There are three types of systematics relevant to this paper. The first comprises time dependent changes in the noise rate of all or a subset of DOMs that can mimic a supernova signal. These include high voltage variations, long-term trends such as photomultiplier aging, weather effects and other experimental effects. For supernova monitoring, realtime analysis and triggering, long-term trends are accounted for by calculating the rate expectation values ⟨ri⟩ and their standard deviations ⟨σi⟩ from a rolling average of the noise rate of each DOM over 300 s on either side of the time of interest, as described in Sect. 4.2. The second type affects our understanding of the overall sensitivity of the detector; ice properties, the wavelength dependent quantum efficiency and the DOM thresholds all fall under this category. The third type is due to our current knowledge of the relevant cross sections, the distance to supernovae, the neutrino-type dependent luminosities and energies, as well as oscillation effects in the star and in the Earth.

Detector stability and environmental effects

The detector behaves very stably under normal operation. Periods during drilling, tests with artificial light sources, periods with data acquisition problems and a few noisy modules are excluded from the analysis (see Sect. 5.1). As discussed in Sect. 5.2, annual variations as well as shorter term modulations of the atmospheric muon flux change the observed rates; these changes are, however, tracked by the rolling average. As hits from muons penetrating the detector are recorded simultaneously by the data acquisition system, they can be subtracted from the supernova rate measurements offline. Overall, the uncertainty on the supernova sensitivity associated with the detector stability is estimated to be small (1.6%).

The data were checked for other external sources of rate changes, such as seismic activity and varying magnetic or electric fields, as tracked by magnetometers and riometers at the South Pole. Only magnetic field variations show a slight, albeit insignificant, influence on the rate deviation of −1.3 × 10⁻⁶ Hz/nT. The influence is 30 times lower in IceCube than in AMANDA due to a mu-metal wire mesh shielding in IceCube DOMs.

Ice properties and sensitivity of the detector

Dust and air bubbles in the natural ice medium cause photons to scatter, while dust and the ice itself determine the absorption length. The range of ice densities ρice = (919.6 ± 1.6) kg/m3 (Price et al. 2002) reflects the 0.4% density decrease due to the temperature increase between (1.4–2.4) km depth. Scattering dominates in the shallow ice above 1400 m and possibly in the ice of the hole around the DOMs, which refreezes soon after deployment. Uncertainties in the optical properties of the hole ice are estimated to affect the effective volume determination by  <1%. More important are the uncertainties in the description of ice properties of the Antarctic glacier.

The distributions of the photon arrival times and of the number of photons received at AMANDA modules from artificial light sources were used to derive the scattering length at different depths. Pulsed and continuous LED and laser sources give complementary measurements. The measurements of ice properties are consistent with each other to within 6%, including statistical and systematic uncertainties, both for scattering and absorption. The information on photon propagation is stored in tables, contributing an estimated 1% uncertainty due to the finite binning. Our knowledge of the ice properties and the corresponding simulation methods continue to improve. Variations of DOM optical sensitivities and effects of the photomultiplier threshold on single photo-electron pulses lead to an ≈ 10% uncertainty. We assumed a (7 ± 3)% loss of light due to cables that shadow the photomultiplier surface. The effects discussed in this paragraph add up to an overall 12% uncertainty.

The track length of a positron or electron, including that of secondaries with kinetic energies above the Cherenkov threshold of 0.272 MeV, depends linearly on the initial lepton energy. The statistical uncertainty of the GEANT-4 calculation, including a systematic difference between electrons and positrons, is 0.3%. The implementations of low energy electromagnetic processes have been cross checked between GEANT and NIST ESTAR-ICRU37 compilations. Good agreement has been found in particular for electron ranges (Amako et al. 2005). NIST quotes a (2–5)% systematic uncertainty on their implementation of electromagnetic cross sections in the energy range relevant to supernovae. Event to event statistical fluctuations in the track length and in the number of Cherenkov photons ( ≈ 2%) are negligible when investigating the ensemble of all DOMs.

External sources of systematics

The estimated uncertainties of the cross sections are listed in Table 1. Those associated with oxygen scattering processes are large and difficult to assess. Due to the strong energy dependence of 16O neutrino cross sections, the impact of this uncertainty depends on the energy spectra of particular models and the assumed oscillation scenarios. Processes involving only νe scattering are particularly affected. The total systematic uncertainty from detector effects and cross sections on the total rate of all neutrinos (electron neutrinos) is  ≈ 14% (25%).

The distance to stars in our Galaxy is typically known to 25% accuracy (Scheffler & Elsasser 1998). However, the distance of a supernova can in principle be measured by interpreting its light curve with an accuracy of (5–10)% (Eastman et al. 1996). Unfortunately, a considerable fraction of supernovae occurring in our Galaxy may be obscured by dust at optical wavelengths.

The uncertainties in the supernova collapse models are large and difficult to assess. The νe rate from the neutronization burst is largely independent of the progenitor mass; the corresponding uncertainties are estimated to be around 10%; uncertainties arising from neutrino oscillations are estimated to be below 5% for a normal hierarchy (Kachelriess et al. 2005).

Oscillations in the Earth strongly depend on the incoming neutrino direction and may lead – depending on the neutrino hierarchy – to a maximal rate decrease of 8% and 3% during the cooling phases of the Lawrence Livermore and Garching models, respectively. The differences between various oscillation scenarios may be as large as 30% or even 50% in the case of black hole formation.

6. Performance simulations

In this section we will discuss the capability of IceCube to characterize details of the core collapse of massive stars and of the supernova remnant, as well as the insights IceCube may provide into the properties of neutrinos and their interactions. There remain significant uncertainties in our understanding of the neutrino emission from supernova explosions, necessitating comparisons between several models to map the parameter space. In order to illustrate IceCube’s performance, we will refer to specific models chosen to span the possible range of supernova progenitor masses and neutrino energy spectra. We will also refer to more speculative models in order to demonstrate IceCube’s high statistical precision in the detection of modulations of the neutrino light curve from astrophysical effects.

When discussing the complete accretion and cooling phase extending to 15 s, we refer to recent O-Ne-Mg core models (Hüdepohl et al. 2010) and the older Lawrence-Livermore model (Totani et al. 1997) as examples with low and high progenitor star masses. The calculations consider only the radial dimension as a parameter. When discussing the first 800 ms of the burst we also refer to the Garching model (Kitaura et al. 2006) as an example of calculations with sophisticated transport mechanisms. Because it assumes only half the initial star mass, (8–10) M⊙ instead of 20 M⊙, it predicts fewer neutrinos than the Lawrence-Livermore model.

In order to be compatible with other studies, we will usually show experimentally predicted neutrino light curves for distances of 10 kpc, roughly corresponding to the center of our Galaxy. Depending on the model for the supernova precursor distribution, between 44% (Ahlers et al. 2009) and 53% (Bahcall & Piran 1983) of all core collapse candidate stars in the Milky Way are expected to occur within this distance. About 90% of all supernovae are predicted to occur within 15.4 kpc (Mirizzi et al. 2006) to 17.5 kpc (Ahlers et al. 2009) distance from the Earth.

In the study of star matter oscillation effects, we restrict ourselves to the comparisons of the three scenarios A–C for neutrino hierarchy and θ13 mixing angles that were introduced in Sect. 2. For some comparisons, we also show distributions with star matter oscillations turned off.

All simulations are performed for the final IceCube array with 4800 standard and 360 high efficiency DOMs. We assume that 2% of the DOMs are excluded from the analysis, either because they are not working or because they give unstable rates. The background noise was accounted for in two different ways. For the determination of the significance and the galaxy coverage, the simulated signal was randomized assuming a Poissonian distribution, added to noise data taken from experimental measurements, and analyzed with the real-time reconstruction programs. For the simulation and comparison of various models, we added the calculated and randomized signal rates to a noise floor drawn from a Gaussian with mean value and standard deviation derived from data.

Due to correlated pulses from radioactive decays and atmospheric muons, the measured sample standard deviation in data taken with 79 strings is ≈ 1.3 and ≈ 1.7 times larger than the Poissonian expectation for 2 ms and 500 ms bins, respectively. Roughly half of the hits introduced by atmospheric muons can be subtracted from the total noise rate in the offline analysis, as the number of coincident hits in neighboring DOMs is recorded for all triggered events. We apply this correction to all Monte Carlo analyses described in this section, as it lowers the standard deviation, with the residual excess slightly dependent on the binning.

Unless noted otherwise, we will use a likelihood ratio method to determine the range within which models can be distinguished. From sets of several thousand test experiments, we will typically determine limits at the 90% confidence level, while requiring that the tested scenario is detected in at least 50% of the cases. Note that the ranges obtained should be interpreted as optimal as we assume that the model shapes are perfectly known and only the overall flux is left to vary; we also disregard the possibility that multiple effects, such as matter induced neutrino oscillations and neutrino self-interactions, could co-exist and thus may be hard to disentangle.

6.1. Expected supernova signal

Evaluating Eq. (5) one obtains the rate spectra of Fig. 10 for a supernova at 10 kpc distance. With a maximal signal-over-noise ratio of  ≈ 55 for the Lawrence-Livermore model, the neutrino burst can clearly be detected with IceCube. Also, the still hypothetical accretion phase lasting from (0–0.5) s can be separated from the subsequent cooling phase with high statistical precision. The study of the cooling phase is limited by the photomultiplier noise in particular for the case of the light O-Ne-Mg model by Hüdepohl et al. (2010).

Fig. 10

Expected rate distribution at 10 kpc distance for the Lawrence-Livermore model (dashed line) and O-Ne-Mg model by Hüdepohl et al. (2010) with the full set of neutrino opacities (solid line). The 1   σ-band corresponding to measured detector noise (hatched area) has a width of about  ± 330 counts.


The oscillation scenario B for an inverted neutrino mass hierarchy shows the largest signal for the Lawrence-Livermore and Garching models because energetic ν̄μ and ν̄τ will oscillate into ν̄e, harden their spectrum and thus increase the detection probability. The scenario without any oscillation is presented as a reference and leads to the weakest signal. Scenario A (normal hierarchy) and Scenario C (very small θ13 < 0.09°) are hard to distinguish due to their very similar effect on neutrino mixing.

Fig. 11

Top: expected rate distribution at 10 kpc supernova distance for oscillation scenarios A (normal hierarchy) and B (inverted hierarchy). Fluxes and energies in the left plot are taken from the Lawrence-Livermore model and in the right plot from the Garching model using the equation of state of Lattimer & Swesty (1991). Scenario C (not shown) is almost indistinguishable from Scenario A. The case of no oscillation is given as a reference. Bottom: expected average signal rate distribution at 10 kpc distance in the finest 2 ms binning for Scenarios A and B using the Garching model; the unlikely case of no oscillation is given as a reference. The left plot shows the expected νe induced signal. As can be seen from the right plot, this signal is no longer apparent once the large contribution from inverse beta decay and the expected DOM noise are added. The 1σ-bands corresponding to measured detector noise (hatched area) have a width of about ±215 counts for a 20 ms binning and ±70 counts for a 2 ms binning.


Clear differences between the oscillation scenarios in absolute rate and shape appear in Fig. 11. Assuming that the model shapes are known but not necessarily the overall normalization, the inverted hierarchy can be distinguished from the null hypothesis of a normal hierarchy up to distances of 16 kpc.

6.2. Significance and Galaxy coverage

The simulation of an expected signal from a supernova within the Milky Way has to take into account the number of likely progenitor stars in the Galaxy as a function of the distance from Earth. The expected significances of supernova signals according to the Lawrence-Livermore model for three oscillation scenarios are shown in Fig. 12. For this particular model, the significances for the 4 s and 10 s binnings turn out to be approximately 20% and 50% lower than for the 0.5 s binning, respectively. For the graph, the supernova progenitor distribution predicted by Bahcall & Piran (1983) was used. For the Magellanic Clouds, which contain roughly 5% of the stars in the Milky Way, a uniform star distribution along the diameters of the galaxies was assumed for simplicity.

IceCube is able to detect supernovae in the Large Magellanic Cloud (LMC) with an average significance of (5.7 ± 1.5)σ in a 0.5 s binning, assuming the Lawrence-Livermore model; the uncertainty reflects the different oscillation scenarios. Supernovae in the Small Magellanic Cloud (SMC) can be detected with an average significance of (3.2 ± 1.1)σ and will in general not trigger an alarm to SNEWS, as indicated by the horizontal line in Fig. 12. IceCube will observe supernovae in the entire Milky Way with a significance of at least 12σ at 30 kpc distance.
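The distance dependence behind these numbers can be sketched in a few lines. A minimal illustration, assuming only that the signal falls as 1/d² over a distance-independent noise floor, and taking the ≈200σ significance at 10 kpc (0.5 s binning, Lawrence-Livermore model, quoted in the Conclusion) as the reference value; the actual curve in Fig. 12 additionally includes oscillation effects and the progenitor distribution:

```python
# Hedged sketch: the signal scales as 1/d^2 while the detector noise is
# distance independent, so the significance also falls off as 1/d^2.
# The 200 sigma reference at 10 kpc is taken from the text; everything
# else follows from scaling alone.

def significance(distance_kpc, xi_at_10kpc=200.0):
    """Significance of a galactic supernova, scaled from its 10 kpc value."""
    return xi_at_10kpc * (10.0 / distance_kpc) ** 2

for d in (10, 30, 50):  # galactic center, galactic edge, LMC
    print(f"{d:3d} kpc: {significance(d):6.1f} sigma")  # ~200, ~22, ~8
```

The pure-scaling values roughly reproduce the 200, 20, and 6 standard deviations quoted for 10, 30, and 50 kpc.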

6.3. Onset of neutrino production

The analysis of the deleptonization peak that immediately follows the collapse is of considerable interest, since its magnitude and time profile are rather independent of the initial star mass and of the nuclear equation of state; the variation is estimated by Keil et al. (2003) to be around 6%. The electron neutrino luminosity may thus be used as a standard candle to measure the distance to the supernova.

As the deleptonization peak lasts for only about 10 ms, the data are evaluated in the finest available time binning of 2 ms, as depicted in Fig. 11. The deleptonization signal is detected via the elastic scattering reaction νe + e− → νe + e−, whose cross section times the number of targets is ≈50 times smaller than for the dominant inverse beta decay interaction. As the ν̄e flux rises rapidly following the collapse, the deleptonization peak remains almost completely hidden, especially when neutrinos oscillate in the star. In this case the subtle structure may be resolved only for sufficiently small distances.

Largely independently of the model, each oscillation scenario shows a characteristic slope of the rate increase around the deleptonization peak. Quantifying this by a series of several thousand simulations for the Garching and Lawrence-Livermore models, and considering oscillation Scenarios A–C as well as the case of no oscillation, it is possible to establish the inverted hierarchy (Scenario B) with respect to the normal hierarchy (Scenario A) at 90% C.L. for distances up to 6 kpc (corresponding to 21% of all progenitor stars, when accounting for the effect of spiral arms; Ahlers et al. 2009).
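The likelihood-ratio comparison underlying these simulations can be illustrated with a toy example. The sketch below is not the collaboration's analysis code: the five-bin rate templates are invented placeholders, not the actual Garching or Lawrence-Livermore predictions, and the "data" are simply fixed at the Scenario B expectation (an Asimov data set) to show which way the ratio points.

```python
import math

# Toy sketch of the likelihood-ratio method: given binned counts n_i and
# two template rate shapes (normal vs. inverted hierarchy), the Poisson
# log-likelihood ratio decides which scenario the data favour.

def log_likelihood(counts, template):
    # Poisson log-likelihood, dropping the counts-only log(n!) constant
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, template))

mu_A = [100.0, 140.0, 180.0, 160.0, 130.0]  # hypothetical Scenario A shape
mu_B = [100.0, 170.0, 220.0, 180.0, 130.0]  # hypothetical Scenario B shape

data = mu_B  # Asimov data set: counts fixed at the Scenario B expectation

llr = log_likelihood(data, mu_B) - log_likelihood(data, mu_A)
print(f"log-likelihood ratio (B vs. A): {llr:.2f}")  # > 0 favours B
```

In a real study the test statistic distribution is built from many simulated bursts, which is how the 90% C.L. distance reach is obtained.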

Fig. 12

Significance versus distance assuming the Lawrence-Livermore model. The significances are increased by neutrino oscillations in the star, typically by 15% in the case of a normal hierarchy (Scenario A) and by 40% in the case of an inverted hierarchy (Scenario B). The Magellanic Clouds as well as the center and edge of the Milky Way are marked. The density of the data points reflects the star distribution.


6.4. Shock waves

For an inverted hierarchy (Scenario B), the rate distribution should reveal the effects of forward and backward moving shock waves traveling through the collapsing star during the cooling phase, (3–10) s after bounce. Assuming the specific model of Tomàs et al. (2004, see Fig. 13), scenarios with a static density profile and one forward shock wave can be distinguished at 90% C.L. up to distances of 13 kpc; the distance reduces to 10 kpc in a scenario with one forward and one reverse shock wave (not shown).

Fig. 13

Effect of a forward moving shock wave applied to a supernova at 10 kpc distance, modelled according to the Lawrence-Livermore model and assuming an inverted hierarchy with θ13 > 0.9°. A forward shock wave can be distinguished from a static density profile and from the case of no star matter effect. The 1σ-band corresponding to measured detector noise (hatched area) has a width of about ±1150 counts.


6.5. Quark star and black hole formation

IceCube is particularly well suited to study fine details of the neutrino flux as a function of time. As an example, Fig. 14 shows a simulation based on the prediction of Dasgupta et al. (2010) for the formation of a quark star. The model predicts a sudden spike in the flux lasting for a few ms while the neutron star turns into a quark star; the time of the QCD phase transition can be determined with sub-ms accuracy. A likelihood ratio test gives a deviation larger than 5σ from the hypothesis of no quark star formation for distances up to 30 kpc. The height and shape of the peak depend on the neutrino hierarchy; Scenarios A and B can be distinguished at 90% C.L. up to distances of 30 kpc.
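A search for such a short spike can be sketched as a simple excess scan over the 2 ms binned rates. In the toy below, the ≈70 count noise width per 2 ms bin follows the Fig. 14 caption, while the pedestal level and the spike amplitude are invented placeholders rather than model predictions:

```python
import random
random.seed(42)

# Toy illustration of the quark-star signature: a few-ms spike riding on
# the DOM noise pedestal, searched for in 2 ms bins. sigma follows the
# ~70 count width quoted in the text; pedestal and spike amplitude are
# invented placeholders.

pedestal, sigma = 2950.0, 70.0       # counts per 2 ms bin
rates = [random.gauss(pedestal, sigma) for _ in range(100)]
spike_bin = 57                       # hypothetical bin of the transition
rates[spike_bin] += 1500.0           # invented spike amplitude, >> 5 sigma

# flag every bin deviating upward by more than 5 sigma from the pedestal
candidates = [i for i, r in enumerate(rates) if (r - pedestal) / sigma > 5]
print(candidates)
```

With a spike this far above the noise, only the transition bin survives the 5σ cut, which is why the timing can be pinned down with sub-ms accuracy.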

Fig. 14

Comparison of neutrino light curves with a quark-hadron phase transition for a 10 M⊙ progenitor at 10 kpc distance. Three neutrino oscillation scenarios are shown (see legend). The observation of the sharp ν̄e induced burst, 257 ms < t < 261 ms after the onset of neutrino emission, would constitute direct evidence of quark matter. The hatched 1σ-band corresponding to detector noise has a width of about ±70 counts.


Figure 15 shows a simulation based on the prediction of Sumiyoshi et al. (2007) for the formation of a black hole following the collapse of a 40 solar mass progenitor star. The neutrinos reach energies of up to 27 MeV (νe and ν̄e) and 40 MeV (νμ and ντ), carry a correspondingly large detection probability, and thus produce very clear evidence for the formation of the black hole, after which the neutrino emission is expected to fade exponentially (not realized in the simulation). For Fig. 15, a hard equation of state (Shen 1998) was chosen, leading to black hole formation after 1.3 s. The corresponding drop can be identified at higher than 90% C.L. for all stars in our Galaxy and the Magellanic Clouds.

Fig. 15

Expected neutrino signal from the gravitational collapse of a non-rotating massive star of 40 solar masses into a black hole at 10 kpc distance, for a hard equation of state (Shen 1998) following Sumiyoshi et al. (2007). The 1σ-band corresponding to detector noise (hatched area) has a width of about ±70 counts.


6.6. Neutrino hierarchy sensitivity and rate summary

The number of standard deviations with which the normal and inverted neutrino hierarchies (Scenarios A and B) can be distinguished is plotted in Fig. 16 as a function of the supernova distance for selected models. The values represent the optimal cases in which the model shapes (but not necessarily the absolute fluxes) are perfectly known. Table 4 lists the number of neutrino induced photon hits that would be recorded by IceCube on top of the DOM noise for various supernova models. Note that the number of expected signal hits scales with 1/distance²; the dependence of the detection significance on distance can be read from Fig. 12.

Fig. 16

Number of standard deviations with which Scenarios A (normal hierarchy) and B (inverted hierarchy) can be distinguished in at least 50% of all cases, as a function of supernova distance, for some of the models listed in Table 4. A likelihood ratio method was used assuming known model shapes.


Table 4

Expected rates.

7. Conclusion

A high-statistics observation of the supernova neutrino flux would provide valuable information on astrophysics and the properties of neutrinos. IceCube was completed in December 2010 and monitors ≈1 km³ of deep Antarctic ice for particle induced photons with 5160 photomultiplier tubes. Since 2009 it has superseded AMANDA in the SNEWS network. With a 250 μs artificial dead time setting, the average DOM noise rate is 286 Hz. The rates remain constant over time with a small modulation induced by changes in the atmospheric muon flux; they hardly vary across the detector once the DOMs have been frozen in for a sufficiently long period. The data taking is very reliable and covers the whole calendar year, including periods when new strings were deployed. The uptime has continuously improved toward a goal of >98% and reached 96.7% in 2009. IceCube's sensitivity corresponds to a megaton-scale detector for galactic supernovae, triggering on supernovae with about 200, 20, and 6 standard deviations at the galactic center (10 kpc), the galactic edge (30 kpc), and the Large Magellanic Cloud (50 kpc), respectively. IceCube cannot determine the type, energy, and direction of individual neutrinos, and the signal is extracted statistically from rates that include a noise pedestal. On the other hand, IceCube is currently the world's best detector for establishing subtle features in the temporal development of the neutrino flux. The statistical uncertainties at 10 kpc distance in 20 ms bins around the signal maximum are about 1.5% and 3% for the Lawrence-Livermore and Garching models, respectively.

Depending on the model, in particular the progenitor star mass, the assumed neutrino hierarchy and the neutrino mixing, the total number of recorded neutrino induced photons from a burst 10 kpc away ranges from ≈0.17 × 10⁶ (8.8 M⊙ O-Ne-Mg core) and ≈0.8 × 10⁶ (20 M⊙ iron core) to ≈3.4 × 10⁶ for a 40 M⊙ progenitor turning into a black hole. For a supernova in the center of our Galaxy, IceCube's large statistics would allow for a clear distinction between the accretion and cooling phases, an estimation of the progenitor mass from the shape of the neutrino light curve, and the observation of short-term modulations due to turbulent phenomena or forward and reverse shocks during the cooling phase. The deleptonization peak associated with the neutron star formation, however, may be hard to observe since the electron neutrino cross section in ice is small. IceCube will be able to distinguish inverted and normal hierarchies for the Garching, Lawrence-Livermore and black hole models for a large fraction of supernova bursts in our Galaxy, provided that the model shapes are known and θ13 > 0.9°. The slope of the rising neutrino flux following the collapse can be used to distinguish the two hierarchies in a less model dependent way for distances up to 6 kpc at 90% C.L. In the case of the inverted hierarchy, coherent neutrino oscillation will enhance the detectable flux considerably. A strikingly sharp spike in the flux, detectable by IceCube for all stars within the Milky Way, would provide clear proof of the transition from a neutron star to a quark star, as would a sudden drop of the neutrino flux in the case of black hole formation.

Acknowledgments

We acknowledge the support from the following agencies: US National Science Foundation-Office of Polar Programs, US National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin – Madison, the Open Science Grid (OSG) grid infrastructure; US Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; National Science and Engineering Research Council of Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Research Department of Plasmas with Complex Interactions (Bochum), Germany, Research Center Elementary Forces and Mathematical Foundations (Mainz), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; A. Groß acknowledges support by the EU Marie Curie OIF Program; J. P. Rodrigues acknowledges support by the Capes Foundation, Ministry of Education of Brazil. We would like to thank G. Fogli, H. T. Janka, P. Mertsch, B. Müller, G. G. Raffelt, K. Sumiyoshi, I. Tamborra, and R. Tomàs for providing supernova model data and for helpful discussions.

References

All Tables

Table 1

Major neutrino reactions.

Table 2

Module disqualification.

Table 3

Summary of statistical and systematic uncertainties.


All Figures

Fig. 1

Overall DOM efficiency versus wavelength for head-on illumination of the 0.0856 m² DOM cross section. The average value in the 300–600 nm range, weighted by the wavelength dependence of Cherenkov light emission, is ≈7.1%.

Fig. 2

Probability density distribution of time differences between pulses for noise (bold line) and the exponential expectation for a Poissonian process fitted in the range 15 ms < ΔT < 50 ms (thin line). The excess is due to bursts of correlated hits, as can be seen from the 50 ms long snapshot of hit times shown in the inset.

Fig. 3

Measured average noise rates of 120 DOMs as a function of DOM temperature. The excess of short time intervals due to bursts (solid) is fitted with the empirical model of Meyer (2010). The Poissonian contribution (dashed) has not been corrected for the depth (and thus temperature) dependent contribution of atmospheric muons (see Fig. 7). The dotted line is a comparison to the predicted thermal noise from Richardson's law plus a constant rate of C = 225 ± 6 Hz.

Fig. 4

Example of an electron anti-neutrino energy spectrum with α = 2.92 predicted for an 8.8 solar mass O-Ne-Mg core supernova collapse (Hüdepohl et al. 2010) one second after the onset of the burst (solid line). Also shown are the cross section weighted energy spectrum of produced positrons and electrons from inverse beta decay (dashed line) and a measure of the detectable energy, which is proportional to the number of Cherenkov photons Nγ in the (300–600) nm range (dotted line). The relation Nγ = a·Ee with a = 178 MeV⁻¹ is used.

Fig. 5

Effective volume per DOM (left axis) for detection of Cherenkov photons with (300–600) nm wavelength plotted as a function of depth. The effective positron volume can be read off the right axis. DeepCore strings are not included in these plots.

Fig. 6

Detected neutrino inverse beta decay interaction vertices projected onto the horizontal plane based on a GEANT-3.21 simulation with 10 million neutrino interactions.

Fig. 7

Top: rate of a typical DOM as a function of time, covering 556 days of lifetime measured in 0.5 s bins (baseline suppressed). The line corresponds to a rate fit according to Eq. (12). Bottom: parameter c2 and estimated muon induced rate as a function of depth. The variation with depth is mostly due to the optical properties of the ice and to muons ranging out.

Fig. 8

Rate distribution of a typical standard efficiency DOM taken over 29 consecutive days. Each measurement corresponds to 0.5 s integration time. Gaussian and lognormal fits are shown.

Fig. 9

Significance distribution in 0.5 s binning for a detector uptime of 556 days with 22 and 40 strings deployed. The two outliers at ξ = 8 and 9.5 occurred during test runs employing artificial light. The dashed line shows a Gaussian fit with σ = 1.27.

