A&A 479, 347-353 (2008)
DOI: 10.1051/0004-6361:20078898

Mass limits for heavy neutrinos

E. Elfgren - S. Fredriksson

Department of Physics, Luleå University of Technology, 971 87 Luleå, Sweden

Received 22 October 2007 / Accepted 10 December 2007

Abstract
Context. Neutrinos heavier than $M_Z/2\sim 45$ GeV are not excluded by particle physics data. Stable neutrinos heavier than this might contribute to the cosmic gamma ray background through annihilation in distant galaxies, as well as to the dark matter content of the universe.
Aims. We calculate the evolution of the heavy neutrino density in the universe as a function of its mass, $M_N$, and then the subsequent gamma ray spectrum from annihilation of distant $N\bar{N}$ (from $0<z<5$).
Methods. The evolution of the heavy neutrino density in the universe is calculated numerically. To obtain the enhancement due to structure formation in the universe, we approximate the distribution of N to be proportional to that of dark matter in the GalICS model. The calculated gamma ray spectrum is compared to the measured EGRET data.
Results. A conservative exclusion region for the heavy neutrino mass is 100 to 200 GeV, both from the EGRET data and from our re-evaluation of the Kamiokande data. The heavy neutrino contribution to dark matter is found to be at most 15%.

Key words: elementary particles - neutrinos - cosmology: dark matter - gamma rays: observations

1 Introduction

The motivation for a fourth generation neutrino comes from the standard model of particle physics. In fact, there is nothing in the standard model stating that there should be exactly three generations of leptons (or of quarks, for that matter).

The present limits on the mass of a fourth generation of neutrinos are only conclusive for $M_N\la M_Z/2 \approx 46$ GeV (Yao et al. 2006, p. 35). This limit is obtained from measurements of the invisible width of the $Z^0$ peak at LEP, which give the number of light neutrino species as $N_\nu = 2.9841$ $\pm$ 0.0083 (The LEP Collaborations 2001).

In Maltoni et al. (2000), a fourth generation of fermions is found to be possible for $M_{N}\sim 50$ GeV, while heavier fermions are shown to be unlikely. However, this constraint is only valid if there is mixing between the generations (Novikov et al. 2002), and since this is not necessarily true, we will not take it for certain.

In the context of cosmology and astrophysics there are other constraints. Light neutrinos, with $M_N\la 1$ MeV, are relativistic when they decouple, whereas heavier neutrinos are not. The light neutrinos must have $\sum m_\nu\la 46$ eV in order for $\Omega_\nu h^2<1$ to be valid (Hannestad 2006b). For the dark matter (DM) content calculated by Spergel et al. (2003), the bound is $\sum m_\nu\la 12$ eV. The number of light neutrino species is also constrained to $N_\nu = 4.2^{+1.2}_{-1.7}$ by the cosmic microwave background (CMB), large scale structure (LSS), and type Ia supernova (SN Ia) observations at 95% confidence (Hannestad 2006a).

Neutrinos heavier than about 1 MeV, however, become non-relativistic before they decouple, and therefore their number density drops dramatically; see for example Dolgov & Zeldovich (1981). This will be discussed in more detail in Sect. 2.

The most important astrophysical bound on heavy neutrinos comes from Kamiokande (Mori et al. 1992) and this will be considered separately in the end.

In Fargion et al. (1995), it is found that the mass range $60\la M_N \la 115$ GeV is excluded by heavy neutrino annihilation in the galactic halo. However, according to Dolgov (2002, p. 57) this constraint is based on an exaggerated value of the density enhancement in our galaxy. Other works constraining the heavy neutrino mass include Fargion et al. (1998,1999) and Belotsky et al. (2004). There has also been a study of the gamma ray spectrum of dark matter (DM) in general (Ando et al. 2007).

For an exhaustive review of modern neutrino cosmology, including current constraints on heavy neutrinos, see Dolgov (2002). It is concluded that there are no convincing limits on neutrinos in the mass range $50\la M_N \la 1000$ GeV. A review of some cosmological implications of neutrino masses and mixing angles can be found in Kainulainen & Olive (2003).

In this paper we consider a stable fourth-generation heavy neutrino with mass $M_N \ga 50$ GeV possessing the standard weak interaction. We assume that other particles of a fourth generation are heavier and thus do not influence the calculations.

We assume a $\Lambda$CDM universe with $\Omega_{\rm tot} = \Omega_{\rm m} + \Omega_\Lambda = 1$, where $\Omega_{\rm m} = \Omega_{\rm b} + \Omega_{\rm DM} = 0.135/h^2$, $\Omega_{\rm b} = 0.0226/h^2$ and h = 0.71 (Spergel et al. 2003), using WMAP data in combination with other CMB datasets and large-scale structure observations (2dFGRS + Lyman $\alpha$). Throughout the article we use natural units, so that the speed of light, Planck's reduced constant and Boltzmann's constant equal unity, $c = \hbar = k_{\rm B} = 1$.

If heavy neutrinos ($M_N \ga 50$ GeV) exist, they were created in the early universe. They were in thermal equilibrium in the early stages of the hot big bang, but froze out relatively early. After freeze-out, the annihilation of $N\bar{N}$ continued at an ever decreasing rate until today. Since those photons that were produced before the decoupling of photons are lost in the CMB, only the subsequent $N\bar{N}$ annihilations contribute to the photon background as measured on earth.

The photon intensity from $N\bar{N}$-annihilation is governed by the number density of heavy neutrinos, $n_N$, whose mean value decreases as $R^{-3}$, where R is the expansion factor of the universe. However, in structures such as galaxies the density does not change dramatically, and since the number of such structures grows with time, this compensates for the lower mean density. Note that the photons are also redshifted by a factor R due to their passage through space-time. This also means that the closer annihilations will give photons with higher observed energy than the farther ones.

   
2 Evolution of neutrino density

Let us recapitulate the results of Dolgov & Zeldovich (1981).

The cosmic evolution of the number density, $n_{\rm X}$, of a particle X can, in general, be written as

 \begin{displaymath}%
\dot n_{\rm X} = -n_{\rm X}^2 \left<\sigma v\right> - 3H(t)n_{\rm X} + \psi(t),
\end{displaymath} (1)

where $\left<\sigma v\right>$ is the thermally averaged product of the mean velocity and total annihilation cross section for the particle, and $H(t)=\dot R/R$ is the Hubble parameter. The term  $-3H(t)n_{\rm X}$ represents the expansion of the universe, and the production term is $\psi(t) = n_{\rm Xeq}^2 \left<\sigma v\right>$, where $n_{\rm Xeq}$ is the equilibrium concentration of particle X.

If we write $r_{\rm X} = n_{\rm X}/n_\gamma$, Eq. (1) can be expressed as

 \begin{displaymath}%
\dot r_{\rm X} = -\left< \sigma v\right> n_\gamma(r_{\rm X}^2 - r_{\rm Xeq}^2),
\end{displaymath} (2)

where

 \begin{displaymath}%
r_{\rm Xeq} \approx \left\{ \begin{array}{ll}
1 & \textrm{if $\theta>1$}\\
g_{\rm s}\,\frac{(2\pi)^{-3/2}}{2\zeta(3)/\pi^2}\,\theta^{-3/2}\,{\rm e}^{-1/\theta} & \textrm{if $\theta<1$}.
\end{array} \right.
\end{displaymath} (3)

Here $\theta = T/m_{\rm X}$, $\zeta(3)\approx 1.2020569$ is the Riemann zeta function, T is the temperature, $m_{\rm X}$ is the mass of particle X and $g_{\rm s}$ is the number of spin states. For photons and electrons, $g_{\rm s}=2$, while for massless left-handed neutrinos, $g_{\rm s}=1$. For reference, $\frac{(2\pi)^{-3/2}}{2\cdot \zeta(3)/\pi^2} \approx \frac 14$.

The value of the relative equilibrium concentration, $r_{\rm Xeq}$, is derived from

\begin{displaymath}%
r_{\rm Xeq} \equiv n_\gamma^{-1} n_{\rm eq} = \frac{1}{2T^3\zeta(3)/\pi^2}\, g_{\rm s}\, \frac{1}{(2\pi)^3} \int \frac{4\pi p^2{\rm d}p }{{\rm e}^{E/T}+1},
\end{displaymath} (4)

where the expressions for $n_\gamma$ and $n_{\rm eq}$ were taken from Dolgov (2002, Eq. (30)).
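As an illustration, Eqs. (3) and (4) can be evaluated in a few lines. The Python sketch below assumes the non-relativistic Maxwell-Boltzmann form quoted above; the function name and test values are ours, not the paper's.

```python
import math

ZETA3 = 1.2020569  # Riemann zeta(3), as quoted in the text

def r_eq(theta, g_s=2):
    """Relative equilibrium concentration r_Xeq = n_eq/n_gamma, Eq. (3).
    theta = T/m_X; g_s is the number of spin states."""
    if theta >= 1:
        return 1.0  # relativistic limit: of order unity
    prefactor = (2 * math.pi) ** -1.5 / (2 * ZETA3 / math.pi ** 2)
    return g_s * prefactor * theta ** -1.5 * math.exp(-1.0 / theta)

# The reference constant quoted in the text is approximately 1/4:
prefactor = (2 * math.pi) ** -1.5 / (2 * ZETA3 / math.pi ** 2)
print(prefactor)         # ~0.26
print(r_eq(1.0 / 30.0))  # strongly Boltzmann-suppressed at theta ~ 1/30
```

The exponential suppression at $\theta\sim 1/30$ is what makes the relic density so sensitive to the exact freeze-out temperature.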

According to Dolgov & Zeldovich (1981, Eq. (2.9)), freeze-out (equilibrium destruction) occurs when the rate of change of the equilibrium concentration due to the temperature decrease is higher than the reaction rates, which means that $2\left<\sigma v\right> n_\gamma r_{\rm Xeq} t T / m~>~1$. Until freeze-out, the relative particle density follows the equilibrium density closely: $r_{\rm fX} \approx r_{\rm Xeq}$. Hence, the relative density at the moment of freeze-out is

 \begin{displaymath}%
r_{\rm fX} = (2\left<\sigma v\right> n_\gamma t_{\rm f}\theta_{\rm f})^{-1} \approx r_{\rm Xeq},
\end{displaymath} (5)

where $t_{\rm f}$ and $\theta_{\rm f}=T_{\rm f}/m_{\rm X}$ are the time and relative temperature at freeze-out.

As the temperature decreases, the production term $r_{\rm Xeq}$ drops exponentially, such that the relic concentration of X will be more or less independent of $r_{\rm Xeq}$. With this approximation ( $r_{\rm Xeq} = 0$), Eq. (2) can be solved for $t\rightarrow \infty$:

 \begin{displaymath}%
r_{\rm0X} \approx \frac{1}{2\left<\sigma v\right> n_\gamma(T_{\rm f})\, \frac{3.677\times 10^{18}}{\sqrt{g_*(T_{\rm f})}}\, T_{\rm f}^{-2}(1+\theta_{\rm f})},
\end{displaymath} (6)

where we have used $tT^2 \approx 3.677$ $\times$ $10^{18}/\!\sqrt{g_*}$ (Dolgov 2002, Eq. (37)), with $g_*(T_{\rm f})$ from Kolb & Turner (1990, p. 65) being the number of relativistic species in thermal contact with the photons. Furthermore, $n_\gamma(t_0) = 2T_0^3\zeta(3)/\pi^2 \approx 0.24 T_0^3$ is the photon density today, where the photon temperature today is T0=2.725 K (Mather et al. 1999). According to the standard model of particle physics, g*=106.75 for $T\ga 100$ GeV ( $g_* \approx 200$ for supersymmetric models at yet higher temperatures). If we assume that $\theta_{\rm f} \ll 1$ (which we will later show to be reasonable), we obtain $r_{\rm0X} \approx r_{\rm fX} \theta_{\rm f}$, which differs by a factor of two from the result of Dolgov & Zeldovich (1981, Eq. (2.11)). This is natural if their result refers to the combined density $n_{N+\bar N}$, since our $r_{\rm0X}$ is valid for N and $\bar N$ separately.

In order to take into account the increase in temperature due to entropy conservation after freeze-out of particle X, we must take

 \begin{displaymath}%
\left(\frac{n_{\rm0X}}{{\rm m}^{-3}}\right) = r_{\rm0X}\,\frac{g_{\rm *S}(T_0)}{g_{\rm *Sf}}\, n_\gamma(t_0) \approx \frac{g_{\rm *S}(T_0)\, n_\gamma(t_0)}{1.8\times 10^{18}\left<\sigma v\right> T_{\rm f}(1+T_{\rm f}/m_{\rm X})\sqrt{g_{\rm *f}}}\cdot
\end{displaymath} (7)

(In fact $g_{\rm *f}^{-1/2}$ should be written $g_{\rm *Sf}^{-1}\cdot g_{\rm *f}^{1/2}$ but for $T_{\rm f}>0.1$ GeV, $g_{\rm *Sf}=g_{\rm *f}$.)

We now turn to the case of heavy neutrinos. Since we wish to avoid the lengthy calculations of the cross sections of heavy neutrinos (Enqvist et al. 1989), we use Fargion et al. (1995, Fig. 1 and Eq. (4)) and solve for $\left<\sigma v\right>$. We assume that they use $g_*=g_*(T_{\rm f}) \approx g_*(M_N/30)$, but the exact value does not change the result in any significant way. The resulting $\left<\sigma v\right>$ is presented in Fig. 1. The cross section drops from $M_N\sim 45$ GeV, where the $Z^0$ resonance peaks, until the W+W- annihilation channel starts to dominate at $M_N \ga 100$ GeV.


Figure 1: The cross section times the velocity (in m$^3$/s) of heavy neutrino annihilation $N\bar{N}$ as a function of their mass (in GeV) at freeze-out, $T=T_{\rm f}$.

According to Fargion et al. (1995), the cross sections of heavy neutrinos can be estimated using the annihilation channels


  $\displaystyle N\bar N \rightarrow Z^0 \rightarrow f\bar f$ (8)
    $\displaystyle N\bar N \rightarrow Z^0 \rightarrow W^+W^-.$ (9)

There are several other possible annihilation channels for $ N\bar N \rightarrow W^+W^-$, like $N\bar N \rightarrow L\bar L,~ H^0H^0,~ Z^0Z^0 \rightarrow W^+W^-$, and also interference between L and Z0, as well as between L and H0. However, in the limit $s \rightarrow 4M_N^2$, which is valid for cosmological heavy neutrinos, the dominant channel is the s-channel $N\bar N \rightarrow Z^0$ (Enqvist et al. 1989, p. 656). Furthermore, the other annihilation products, $N\bar N \rightarrow H^0H^0,~ Z^0Z^0$, are suppressed with respect to W+W--production (Enqvist et al. 1989, p. 651, 656). Hence, the above estimate of $\left<\sigma v\right>$ should be fairly accurate. If anything, it is slightly underestimated.

Using Eqs. (5) and (3), we can solve for $T_{\rm f}=\theta_{\rm f}\cdot M_N$. The result is presented in Fig. 2. Note that although it looks like a straight line, it is really slightly curved. We notice that $T_{\rm f}/M_N \sim 1/30$, which shows our assumption $M_N\gg T_{\rm f}$ to be valid. This is also in agreement with previous results, see e.g. Kolb & Turner (1990), where a value of $T_{\rm f}/M_N \sim 1/20$ is quoted.
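The origin of the ubiquitous $T_{\rm f}/M_N \sim 1/20$-$1/30$ can be illustrated with the standard fixed-point estimate of Kolb & Turner (1990); the sketch below is that textbook iteration, not the exact Eq.-(5) procedure used here, and the cross section value is an assumed, illustrative number rather than one read off Fig. 1.

```python
import math

M_PL = 1.22e19   # Planck mass in GeV
G_STAR = 106.75  # relativistic degrees of freedom for T >~ 100 GeV

def x_freeze_out(m_n, sigma_v, g_s=2, iterations=30):
    """Fixed-point iteration for x_f = M_N/T_f (Kolb & Turner style):
    x_f = ln[0.038 (g_s/sqrt(g_*)) M_pl M_N <sigma v>] - 0.5 ln(x_f),
    with sigma_v in natural units (GeV^-2)."""
    c = 0.038 * (g_s / math.sqrt(G_STAR)) * M_PL * m_n * sigma_v
    x = 20.0  # starting guess; the iteration converges quickly
    for _ in range(iterations):
        x = math.log(c) - 0.5 * math.log(x)
    return x

# Assumed weak-scale cross section ~1e-8 GeV^-2 for M_N = 150 GeV:
x_f = x_freeze_out(150.0, 1e-8)
print(x_f)  # ~24, i.e. T_f/M_N ~ 1/24
```

Because $x_{\rm f}$ depends only logarithmically on the cross section, even an order-of-magnitude change in $\left<\sigma v\right>$ shifts $T_{\rm f}/M_N$ only modestly.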


Figure 2: The freeze-out temperature of heavy neutrinos (in GeV, where 1 GeV $\approx 1.16\times 10^{13}$ K) as a function of their mass (in GeV).

We now return to Eq. (7) and apply it to the case of a heavy neutrino. We plot the resulting relative relic neutrino density as a function of the mass MN in Fig. 3 using $\Omega_N = 2M_N\cdot n_N(T_0)/\rho_{\rm c}$, where $\rho_{\rm c}\approx 9.47\times10^{-27}$ kg/m3 is the critical density of the universe. The resulting heavy neutrino density is very similar to the one obtained by Fargion et al. (1995, Fig. 1). The numerical simulation also shown in the figure will be the subject of the next section.
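The conversion from a relic number density to $\Omega_N$ is a one-liner. In this sketch the input density is a hypothetical value chosen only to show the arithmetic, with $\rho_{\rm c}$ taken from the text.

```python
GEV_IN_KG = 1.78266e-27  # 1 GeV/c^2 expressed in kg
RHO_C = 9.47e-27         # critical density (kg/m^3), from the text

def omega_n(m_n_gev, n_n_per_m3):
    """Omega_N = 2 M_N n_N / rho_c; the factor 2 counts N and Nbar."""
    return 2.0 * m_n_gev * GEV_IN_KG * n_n_per_m3 / RHO_C

# An assumed number density of ~7.6e-4 m^-3 gives Omega_N ~ 0.04
# for M_N = 140 GeV:
print(omega_n(140.0, 7.6e-4))
```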


Figure 3: The relic relative density of heavy neutrinos as a function of their mass (in GeV).

   
3 Numerical simulation of the neutrino density

For comparison, we evaluate the evolution of the heavy neutrino density numerically. Equation (1) can be rewritten in terms of the temperature, T:

 \begin{displaymath}%
\frac{{\rm d}n}{{\rm d}T} = -\frac{{\rm d}t}{{\rm d}T} \left[3H(T)n(T) + \left<\sigma v\right> \left(n(T)^2 - n_{\rm eq}(T)^2\right)\right],
\end{displaymath} (10)

where

\begin{displaymath}%
n_{\rm eq}(T) = r_{\rm eq}n_\gamma = 2T^3\left(\frac{M_N}{2\pi T}\right)^{3/2} {\rm e}^{-M_N/T}, \quad\quad (T<M_N)
\end{displaymath} (11)

and the relation between time and temperature is given by

\begin{displaymath}%
\frac{{\rm d}t}{{\rm d}T} = \frac{-1}{H(T)}\left( \frac{1}{T} + \frac{{\rm d}g_{\rm *S}/{\rm d}T}{3g_{\rm *S}}\right)\cdot
\end{displaymath} (12)

Here the Hubble parameter is $H(T)=H_0\sqrt{\Omega(T)}$, where the total relative energy density of the universe is

\begin{displaymath}%
\Omega(T)= \Omega_R(T)\cdot R^{-4} + \Omega_{\rm M}\cdot R^{-3} + \Omega_k\cdot R^{-2} + \Omega_\Lambda.
\end{displaymath} (13)

The curvature term $\Omega_k = 0$ and the radiation density is

\begin{displaymath}%
\Omega_R(T) = \Omega_{R} \frac{g_*(T)}{g_*(T_0)}
\end{displaymath} (14)

due to the reheating as particles freeze out. The reheating also means that $R=g_{\rm *S}^{-1/3}T_0/T$ (Kolb & Turner 1990, p. 68). The number of relativistic species still in thermal contact with the photons, $g_{\rm *S}(T)$, is given in Coleman & Roos (2003, Fig. 1). For the critical region 0.15<T<0.30 GeV, their Eqs. (8) and (9) have been used to calculate ${\rm d}g_{\rm *S}/{\rm d}T$. This updated value of $g_{\rm *S}(T)$ is needed to evaluate ${\rm d}g_{\rm *S}/{\rm d}T$ properly.

Using a fifth-order Runge-Kutta method with adaptive stepsize control, taken from Numerical Recipes (Press et al. 1992, Ch. 16.2), we solve for n(T) in Eq. (10) using the initial condition $n_i=n_{\rm eq}(T_i=M_N/15)$, which is well within the region of thermal equilibrium for the heavy neutrinos. The resulting relative relic neutrino density is presented in Fig. 3, where $\Omega_N = 2M_N\cdot n_N(T_0)/\rho_{\rm c}$ as before. We notice that the peak of the curve is $\Omega_N(M_N=140~{\rm GeV}) \approx 0.04$, which would then account for $\sim$15% of the dark matter content of the universe.
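The structure of this calculation can be illustrated with a toy, dimensionless version of Eq. (10): writing $x=M_N/T$ and $Y\propto n/T^3$, and ignoring the $g_{\rm *S}(T)$ bookkeeping, the Boltzmann equation reduces to ${\rm d}Y/{\rm d}x = -(\lambda/x^2)(Y^2-Y_{\rm eq}^2)$. The coupling $\lambda=10^{10}$ below is an arbitrary illustrative value, and a fixed-step classical fourth-order Runge-Kutta stands in for the adaptive fifth-order solver used in the text.

```python
import math

def y_eq(x):
    """Dimensionless equilibrium abundance, cf. Eq. (11) with g_s = 2."""
    return 2.0 * (x / (2.0 * math.pi)) ** 1.5 * math.exp(-x)

def solve_freeze_out(lam=1e10, x0=15.0, x1=400.0, steps=80000):
    """Integrate dY/dx = -(lam/x^2)(Y^2 - Y_eq^2) with classical RK4,
    starting in thermal equilibrium at x0 = M_N/T_i = 15 as in the text."""
    def f(x, y):
        return -(lam / (x * x)) * (y * y - y_eq(x) ** 2)
    h = (x1 - x0) / steps
    x, y = x0, y_eq(x0)
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(x + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x += h
    return y

y_relic = solve_freeze_out()
print(y_relic)  # frozen-out abundance, roughly of order x_f/lam
```

The solution tracks $Y_{\rm eq}$ until $x\sim 20$ and then flattens out, reproducing the smooth (rather than sharp) decoupling visible in Fig. 4.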

For comparison, we plot the number density of heavy neutrinos (in m$^{-3}$) as a function of T for masses 50, 70, 90, 150, 500 and 1000 GeV in Fig. 4. As we can see, the transition between the thermal equilibrium density and the completely decoupled neutrino density is not sharp. This is one of the reasons for the difference between the analytical and the numerical relative density in Fig. 3. Another reason for the difference is the inclusion of the change in $g_{\rm *S}$ in the evaluation of ${\rm d}t/{\rm d}T$. The evolution of $g_{\rm *S}(T)$ is the cause of the small ``knee'' in Fig. 4 seen at $T\sim0.2$ GeV (the reheating from the quark-hadron transition). Furthermore, when electrons fall out of thermal equilibrium at $T\sim 1$ MeV there is another small knee, again reducing the heavy neutrino density somewhat.


Figure 4: The number density of heavy neutrinos (in m$^{-3}$) as a function of T for masses 50, 70, 90, 150, 500 and 1000 GeV (increasing from left to right in the upper right corner). The dashed vertical lines represent the calculated value of $T_{\rm f}$ in Fig. 2. Below T=0.01 GeV, the curves evolve as $(T/T_0)^3\cdot g_{\rm *S}$.

   
4 Dark matter simulations

In Sect. 3, we calculated the mean density of neutrinos in the universe as a function of redshift and the mass of the heavy neutrinos. However, the neutrino annihilation rate, and thus the intensity from their gamma spectrum, is proportional to the square of the neutrino density. This means that inhomogeneities in the universe will tend to enhance the gamma ray signal.

In this section we describe how we calculate the inhomogeneities as a function of space and time, assuming only gravitational interaction between the dark matter consisting of heavy neutrinos and other DM particles. The clumping factor (also known as the boost factor) can then be used to calculate the actual intensity

\begin{displaymath}%
\frac{{\rm d}I}{{\rm d}z} = C(z)\frac{{\rm d}I_0}{{\rm d}z},
\end{displaymath} (15)

where ${\rm d}I_0/{\rm d}z$ is the intensity contribution from redshift slice ${\rm d}z$ for a homogeneous universe and C(z) is the enhancement due to the clumping at redshift z.

The clumping factor has been calculated in different settings before, ranging from $\sim$5 for local clustering (Berezinsky et al. 2006) to two orders of magnitude for mini-halos (Diemand et al. 2005). For a discussion of the accuracy of approximating the enhancement with a single clumping parameter, see Lavalle et al. (2007), though they focus on antiprotons.

The spatial and temporal distribution of DM in the universe is calculated with the GalICS program. The cosmological N-body simulation that we are referring to throughout this paper is done with the parallel tree-code developed by Ninin (1999). The initial mass power spectrum is taken to be a scale-free ( $n_{\rm s} = 1$) one, evolved as predicted by Bardeen et al. (1986) and normalized to the present-day abundance of rich clusters with $\sigma_8$ = 0.88 (Eke et al. 1996). The DM density field was calculated from z=35.59 to z=0, giving 100 ``snapshots'', spaced logarithmically in the expansion factor.

The basic principle of the simulations is to distribute $N^3$ DM particles of mass $M_{{\rm DM}}$ in a box of volume $L^3$. Then, as time passes, the particles interact gravitationally, clumping together and forming structures. When at least 20 particles are gathered, they are considered to be a DM halo. No forces other than gravitation are assumed to be present, and the boundary conditions are taken to be periodic.

In the GalICS simulations the side of the box was L=100 $h^{-1}$ Mpc, and the number of particles was set to $256^3$, which implies a particle mass of $\sim$5.51 $\times$ $10^{9}~h^{-1}~M_\odot$. Furthermore, for the simulation of DM, the cosmological parameters were set to $\Omega_\Lambda = 2/3$, $\Omega_{\rm m} = 1/3$ and h=2/3. The simulations of the DM were done before the results from WMAP were published, which explains the difference between these parameters and the values used elsewhere in this paper, as stated in the introduction. Nevertheless, the difference is only a couple of percent and should not seriously alter the results.
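The quoted particle mass follows directly from the box parameters as $M_{\rm DM} = \Omega_{\rm m}\rho_{\rm c}L^3/N^3$. The cross-check below uses the standard value $\rho_{\rm c} = 2.775\times 10^{11}~h^2~M_\odot~{\rm Mpc}^{-3}$, which is our input rather than a number quoted in the paper.

```python
RHO_C = 2.775e11     # critical density in h^2 M_sun / Mpc^3 (standard value)
OMEGA_M = 1.0 / 3.0  # matter density of the GalICS run
L = 100.0            # box side in h^-1 Mpc
N = 256              # particles per dimension

# All powers of h cancel into a single h^-1 in the particle mass:
m_particle = OMEGA_M * RHO_C * L ** 3 / N ** 3  # in h^-1 M_sun
print(m_particle)  # ~5.51e9, matching the quoted value
```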

Between the initial halo formation at $z\sim 11$ and the current epoch in the universe, there are 72 snapshots. In each snapshot a friend-of-friend algorithm was used to identify virialized groups of at least 20 DM particles. For high redshifts, it is clear that the mass resolution is insufficient. Fortunately, the first 20-particle DM clump appears at z=11.2, while the bulk of the clumping comes from $z\la 5$, where the lack of resolution is no longer a problem.

In order to make a correct large-scale prediction of the distribution of the DM, the size of the box would have to be of Hubble size, i.e., $\sim$3000 $h^{-1}$ Mpc. However, for a given simulation time, increasing the size of the box while maintaining the same number of particles would mean a loss in mass resolution, which is not acceptable if we want to reproduce a fairly realistic scenario for the evolution of the universe.

We will make the approximation that our single box, at different time-steps, can represent the line of sight, and since we are only interested in the general properties of the dark matter clumping, this approximation should be acceptable.

4.1 Validity of simulation

GalICS is a hybrid model for hierarchical galaxy formation, combining the outputs of large cosmological N-body simulations with simple, semi-analytic recipes to describe the fate of the baryons within DM halos. The simulations produce a detailed merging tree for the DM halos, including complete knowledge of the statistical properties arising from the gravitational forces.

The distribution of galaxies resulting from this GalICS simulation has been compared with the 2dFGRS (Colless et al. 2001) and the Sloan Digital Sky Survey (Szapudi et al. 2001) and found to be realistic on angular scales of $3' \la \theta \la 30'$, see Blaizot et al. (2006). The discrepancy in the spatial correlation function for other values of $\theta$ can be explained by the limits of the numerical simulation. Obviously, any information on scales larger than the size of the box ($\sim$45') is not reliable. The model has also proven to give sensible results for Lyman break galaxies at z=3 (Blaizot et al. 2004). It is also possible to model active galactic nuclei (Cattaneo et al. 2005).

Since it is possible to reproduce reasonable correlations from semi-analytic modelling of galaxy formation within this simulation at z=0-3, we now attempt to do so also for somewhat higher redshifts.

4.2 Clumping of dark matter

We proceed to calculate the clumping factor C(z). The inhomogeneities of the DM distribution can be calculated using the relative clumping of dark matter halos: $\bar\rho_i = \rho_i/\rho_{\rm mean}$, where $\rho_{\rm mean}$ is the mean density of the dark matter in the universe and $\rho_i$ is the mean density of DM halo i.

As matter contracts, the density increases, but since the gamma ray emitting volume decreases correspondingly, the net effect of the quadratic dependence on the density is a linear enhancement. This means that the DM halos will emit as:

\begin{displaymath}%
\frac {I_{\rm halos}}{I_0} = \frac{\sum_{i}m_i\bar\rho_i}{m_{\rm tot}}\cdot C_{\rm halo},
\end{displaymath} (16)

where I0 is the intensity for a homogeneous universe, the summation runs over all DM halos (whose total mass is $\sum_i m_i = m_{\rm halos}$), and $m_{\rm tot}$ is the total DM mass in the box. The factor  $C_{\rm halo}$ accounts for the modification from the form and properties of the halo itself. A simple conical DM distribution would give $C_{\rm halo}=1.6$. The more realistic distribution $\rho(r)=\rho_0\cdot[(1+r)(1+r^2)]^{-1}$, where r is the radial coordinate relative to the halo radius, gives $C_{\rm halo}=1.1$. However, the radiation from within the denser part of the halo will also be subject to more absorption, and so for the sake of simplicity we use $C_{\rm halo}=1$. We notice that the average relative density over all the halos in the simulation is fairly constant, $\left< \bar\rho_i \right> \sim 70$ for z<5.
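The two quoted values of $C_{\rm halo}$ can be verified numerically. For a spherical halo of unit radius, our reading of the form factor is $C_{\rm halo} = \frac{1}{3}\int\rho^2 r^2\,{\rm d}r \,/\, (\int\rho r^2\,{\rm d}r)^2$, i.e. the emission enhancement of the profile relative to a uniform sphere of the same mass; the sketch below reproduces 1.6 for the conical profile and $\approx$1.1 for the more realistic one.

```python
def c_halo(rho, n=100000):
    """Form factor of a spherical halo of unit radius:
    C = (1/3) Int[rho^2 r^2 dr] / (Int[rho r^2 dr])^2, midpoint rule."""
    h = 1.0 / n
    s1 = s2 = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        s1 += rho(r) * r * r * h       # mass integral
        s2 += rho(r) ** 2 * r * r * h  # emission goes as density squared
    return s2 / 3.0 / (s1 * s1)

c_cone = c_halo(lambda r: 1.0 - r)                         # conical profile
c_real = c_halo(lambda r: 1.0 / ((1 + r) * (1 + r ** 2)))  # realistic profile
print(c_cone, c_real)  # ~1.6 and ~1.1
```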

Simultaneously, the contribution from the DM background (the DM particles that are not in halos) decreases, both because its density is reduced by a factor $(m_{\rm tot}-m_{\rm halos})/m_{\rm tot}$ and because its share of the total mass $m_{\rm tot}$ in the box shrinks:

\begin{displaymath}%
\frac {I_{\rm DM-background}}{I_0} = \left( \frac{m_{\rm tot}-m_{\rm halos}}{m_{\rm tot}}\right)^2\cdot
\end{displaymath} (17)

This means that the total clumping factor is

\begin{displaymath}%
C = \frac {I_{\rm halos}}{I_0} + \frac {I_{\rm DM-background}}{I_0} = \frac{\sum_{i}m_i\bar\rho_i}{m_{\rm tot}}\,C_{\rm halo} + \left( \frac{m_{\rm tot}-m_{\rm halos}}{m_{\rm tot}}\right)^2,
\end{displaymath} (18)

where the second term starts at unity, after which it decreases and quickly becomes negligible with respect to the first term, which starts at zero but then rapidly increases. The total clumping is plotted in Fig. 5 along with the competing $(n_N/{\rm m}^{-3})^2$ effect, as well as the product of the two, all as functions of the redshift z. The number density of heavy neutrinos in the figure is taken for the mass $M_N=150$ GeV. We notice that the clumping enhancement remains $\sim$30 for z<1 and that the clumping is $\sim$1 for z>5. This is mainly due to the proportion of mass within the halos compared to the total DM mass. The clumping enhancement lies between the two extreme values of Berezinsky et al. (2006) and Diemand et al. (2005) quoted above.
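For concreteness, Eq. (18) can be evaluated for an assumed halo population. The masses and relative densities below are illustrative toy numbers only, with the halo term weighted by the halo mass fraction so that $C\rightarrow 1$ when little mass is in halos, as described above.

```python
def clumping(halos, m_tot, c_halo=1.0):
    """Total clumping factor, cf. Eq. (18).
    halos: list of (mass, mean relative density rho_i/rho_mean)."""
    m_halos = sum(m for m, _ in halos)
    halo_term = sum(m * rel for m, rel in halos) / m_tot * c_halo
    background_term = ((m_tot - m_halos) / m_tot) ** 2
    return halo_term + background_term

# Early times: little mass in halos, so C ~ 1 (background dominates):
print(clumping([(1.0, 70.0)], m_tot=1000.0))  # ~1.07
# Late times: most mass in halos with <rho_i> ~ 70, so C is large:
print(clumping([(400.0, 70.0), (300.0, 70.0)], m_tot=1000.0))  # ~49
```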

In fact, the clumping factor can be even higher if other halo shapes are assumed with smaller radii (Ullio et al. 2002). The densities in the halos considered in the present work have been evaluated at the virial radius.

We also point out that before the reionization, at $z\ga 5$, there is absorption by neutral hydrogen in the intergalactic medium (IGM), also known as the Gunn-Peterson effect (Gunn & Peterson 1965). This means that photons from higher redshifts are strongly attenuated. For z=5.3, the emission drops by roughly a factor of 10, and for $z\sim 6$ the opacity is $\tau_{\rm eff}>20$ (Becker et al. 2001). Hence, any gamma ray signal prior to this epoch would have been absorbed.


Figure 5: The clumping factor (C, dotted line) compared to the competing effect of the decreasing heavy neutrino number density squared ($n_N^2$, dashed line) for $M_N=150$ GeV, and the product of the two (solid line). Different neutrino masses scale as in Fig. 4.

5 Photon distribution from ${\rm N\bar N}$-collisions

In order to evaluate the photon spectrum from $N\bar{N}$-collisions we use PYTHIA version 6.410 (Sjöstrand et al. 2006). According to Enqvist et al. (1989, Eq. (13)) the centre of mass energy squared is $E_{\rm CM}^2 = 4M_N^2 + 6M_N T_{\rm f}$ and $T_{\rm f} \approx M_N/30$ as estimated above.

We generate 100,000 $N\bar{N}$ events for each mass MN=50,60,...,1000 GeV and calculate the photon spectrum and mean photon multiplicity and energy. We assume that $N\bar{N}$ collisions at these energies and masses can be approximated by $\nu_\tau\bar\nu_\tau$ collisions at the same $E_{\rm CM}^2$. This is obviously not equivalent, but $N\bar{N}$ cannot be directly simulated in PYTHIA. Nevertheless, with the approximations used in calculating $\left<\sigma v\right>$, the only difference between $\nu_\tau\bar\nu_\tau$ and $N\bar{N}$ collisions (except in the cross section) is the t-channel production of W+W- through $\tau$. However, since the heavy neutrinos are non-relativistic when they collide, the two Ws will be produced back-to-back, which means that the inclusion of the t-channel is unimportant.

In order to verify this, we have studied the difference in the photon spectrum for W decay at 0 and 90 degrees. Although the difference between the two cases grows with the mass, even at $M_N=1000$ GeV it is not large enough to change our conclusions.

The resulting photon distribution is presented in Fig. 6. We note that the photon energies peak at $E_{\rm CM}/2$, which is natural since the decaying particles can each have at most half of the centre of mass energy. The curves continue to increase as $\propto E^{-1}$ as E decreases further. Note that the noise in the curves at lower E is due to the limited statistics for these rare events, but it does not affect the outcome of the calculations. We also calculate the mean photon energy and find it to be $\bar E_\gamma \approx 0.21 E_{\rm CM}$ for all masses. The curves are normalized such that the integral over $\frac{{\rm d}n_\gamma}{{\rm d}E}$ is unity. The average number of photons, $N_\gamma$, produced in an $N\bar{N}$-collision is shown in Fig. 7. The sharp rise in the curve at $M_N\sim100$ GeV is due to the jets from the emerging W+W--production.


Figure 6: The relative energy distributions of photons from $N\bar{N}$-collisions for heavy neutrino masses $M_N=$ 50, 70, 90, 150, 500, 1000 GeV. $E_{\rm CM}=2M_N$ is the centre of mass energy.


Figure 7: The average number of photons produced in an $N\bar{N}$-collision as a function of the heavy neutrino mass $M_N$ in GeV.

6 Gamma ray spectrum

The $N\bar{N}$-collisions from the reionization at $z_i\sim 5$ until today give an integrated, somewhat redshifted, gamma spectrum for a heavy neutrino with a given mass:

\begin{displaymath}%
I = \int_{T_i}^{T_0} C(T)\, n^2\left<\sigma v\right> N_\gamma \left.\frac{{\rm d}n_\gamma}{{\rm d}E}\right\vert_{E\frac{T_0}{T}} \frac{{\rm d}t}{{\rm d}T}\, {\rm d}T,
\end{displaymath} (19)

where C(T) is the clumping factor in Fig. 5 and $\frac{{\rm d}n_\gamma}{{\rm d}E} $ is the photon distribution in Fig. 6. T0=2.725 K is the temperature of the CMB today and Ti is the reionization temperature, which we set to $T_i = 5\cdot T_0$.
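The bookkeeping in Eq. (19) can be sketched as a sum over redshift slices. In the snippet below, $C(z)$, $n(z)$, the emitted spectrum and $\left<\sigma v\right>$ are crude stand-ins for the curves of Figs. 1 and 4-6 (assumed shapes, chosen only to show the structure of the integral), and a photon observed at energy E is matched to its blueshifted emission energy $E(1+z)$.

```python
import math

H0 = 2.3e-18                  # Hubble constant in s^-1 (h = 0.71)
OMEGA_M, OMEGA_L = 0.27, 0.73

def dt_dz(z):
    """|dt/dz| for a flat LambdaCDM universe."""
    hz = H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)
    return 1.0 / ((1 + z) * hz)

def intensity(e_obs, clump, n_of_z, spectrum, sigma_v, z_max=5.0, steps=500):
    """Discretized version of Eq. (19) as an integral over redshift."""
    total, dz = 0.0, z_max / steps
    for i in range(steps):
        z = (i + 0.5) * dz
        e_emit = e_obs * (1 + z)         # photon was emitted blueshifted
        rate = n_of_z(z) ** 2 * sigma_v  # annihilations per volume and time
        total += clump(z) * rate * spectrum(e_emit) * dt_dz(z) * dz
    return total

# Toy inputs (assumed shapes, not the paper's curves):
i_1gev = intensity(1.0,
                   clump=lambda z: 30.0 / (1 + z),
                   n_of_z=lambda z: 7.6e-4 * (1 + z) ** 3,
                   spectrum=lambda e: math.exp(-e / 50.0) / e,
                   sigma_v=1e-33)
print(i_1gev)
```

With a falling emitted spectrum, the observed intensity decreases with energy, as the higher-z slices sample the spectrum ever further blueward.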

The resulting $E^2I$ is presented in Fig. 8. When we compare the calculated heavy neutrino signal with data from EGRET (Sreekumar et al. 1998), we see that only neutrino masses around $M_N\sim100$ or 200 GeV would be detectable, and then only as a small bump in the data around $E_\gamma\sim 1$ GeV. For intermediate neutrino masses, the signal would exceed the observed gamma ray data. In Fig. 9, the peak intensity for the different heavy neutrino masses is plotted, together with the EGRET data at the corresponding energies, with error bars. The data represent the observed diffuse emission at high latitudes ( $\left\vert b\right\vert>10$ degrees), where first the known point sources were removed and then the diffuse emission of our galaxy was subtracted.

We have also compared the height of the curves, both with and without clumping, and the integrated difference is roughly a factor of 30.


Figure 8: Cosmic gamma radiation from photons produced in $N\bar{N}$-collisions as a function of photon energy for neutrino masses $M_N=$50, 70, 100, 140, 200, 500, 1000 GeV. The dotted line represents $M_N=50$ GeV and the dot-dashed line $M_N=1$ TeV. The solid lines are the masses in between. The circles represent data from EGRET (Sreekumar et al. 1998), with error bars, as derived for extragalactic sources.


Figure 9: Maximum cosmic gamma radiation from photons produced in $N\bar{N}$-collisions as a function of the neutrino mass (in GeV). The marked region is excluded, since $\Omega _N>\Omega _{\rm DM}$ within it. The data (with error bars) are taken at the energy corresponding to the maximum in Fig. 8.

   
7 Discussion and conclusions

The numerical calculation of the evolution of the heavy neutrino number density indicates that in the mass region $100\la M_N \la 200$ GeV, cosmological heavy neutrinos would give a cosmic gamma ray signal that exceeds the measurements by the EGRET telescope (Sreekumar et al. 1998). Note that the clumping factor used for these limits is rather conservative. In Ullio et al. (2002), this factor is much larger, which would also produce a stronger limit on the heavy neutrino mass.

We can also compare our neutrino density with the results from the Kamiokande collaboration (Mori et al. 1992). We scale the neutrino signal in their Fig. 2 to $\Omega_N/\Omega_{\rm DM}$, where we use h0=0.71, $\Omega_{\rm m}=0.2678$ and $\Omega_{\rm b}=0.044$. This is shown in Fig. 10, where we compare our numerical results for the relic neutrino density to the observed muon flux in the Kamiokande detector. This gives an exclusion region of $80\la M_N \la 400$ GeV. Our analytical results, which are comparable to the traditional relic neutrino densities, are about a factor of two lower, giving an exclusion region of $90\la M_N \la 300$ GeV. The model that gives these limits (Gould 1987) is rather complicated and not verified experimentally, so these results should not be taken too strictly. Note also that in the three-year WMAP analysis (Spergel et al. 2007), the value of $\Omega_{\rm DM}$ depends on which other data the WMAP data are combined with. For WMAP+CFHTLS, $\Omega_{\rm DM}$ can be as high as 0.279, and for WMAP+CBI+VSA it can be as low as 0.155. The higher of these possibilities would give an exclusion region of $85\la M_N \la 350$ GeV; the lower would give an exclusion region of $75\la M_N \la 500$ GeV. A conservative limit based on the Kamiokande data gives the exclusion region $100\la M_N \la 200$ GeV.
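The rescaling above can be sketched numerically. This is an illustrative calculation (the variable names are ours), assuming $\Omega_{\rm DM}=\Omega_{\rm m}-\Omega_{\rm b}$ with the adopted parameters, and showing how the predicted muon flux, which scales as $\Omega_N/\Omega_{\rm DM}$, shifts under the alternative WMAP-derived values:

```python
# Illustrative sketch (not from the paper): dark-matter density used to
# rescale the Kamiokande prediction, assuming Omega_DM = Omega_m - Omega_b.
omega_m = 0.2678   # total matter density (adopted value)
omega_b = 0.044    # baryon density (adopted value)
omega_dm = omega_m - omega_b
print(f"Omega_DM = {omega_dm:.4f}")

# The predicted signal scales as Omega_N / Omega_DM, so a larger Omega_DM
# weakens the limit and a smaller one strengthens it.
for label, odm in [("WMAP+CFHTLS", 0.279), ("WMAP+CBI+VSA", 0.155)]:
    rescale = omega_dm / odm
    print(f"{label}: signal rescaled by a factor {rescale:.2f}")
```

The factor below unity for WMAP+CFHTLS (weaker signal, narrower exclusion region) and above unity for WMAP+CBI+VSA (stronger signal, wider exclusion region) matches the direction of the shifted mass limits quoted above.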


  \begin{figure}
\par\includegraphics[width=8.6cm,clip]{8898fig10.eps} \end{figure} Figure 10: Predicted signal from enhanced $N\bar{N}$ annihilation in the earth and the sun compared to the measured signal in the Kamiokande detector. On the y-axis: the number of muons (per 100 m2 year) produced by muon neutrinos resulting from heavy neutrino collisions in the sun and the earth, as evaluated by Mori et al. (1992), but scaled to our $\Omega _N(M_N)$. On the x-axis: the heavy neutrino mass in GeV.

If a heavy neutrino exists with a mass $M_N\sim100$ GeV or $M_N\sim200$ GeV, it would give a small bump in the data at $E_\gamma\sim 1$ GeV. Currently, the data points are too far apart and the error bars too large to either exclude or confirm the existence of such a heavy neutrino. Most of this part of the gamma ray spectrum is usually attributed to blazars, which have the right spectral index, $\sim$2 (Mukherjee et al. 1997).

We note that there could be an enhancement in the signal due to the higher DM densities within galaxies compared to the mean density in the halos. On the other hand, within galaxies there will also be an attenuation due to neutral hydrogen, reducing the enhancement. There will also be a certain degree of extinction of the signal due to neutral hydrogen along the line of sight, but even if we assume complete extinction above z=4, the resulting spectrum decreases by only about 20%.

We are also aware of the ongoing debate concerning antiprotons, namely whether the DM interpretation of the EGRET gamma excess is compatible with antiproton measurements (Bergström et al. 2006; de Boer et al. 2006). We note the argument by de Boer that antiprotons are sensitive to electromagnetic fields, and hence their flux need not be directly related to that of the photons, even if they, too, were produced by $N\bar{N}$ annihilation.

With the advent of the Large Hadron Collider, we also point out that it may be possible to detect a heavy neutrino indirectly through the invisible Higgs boson decay into heavy neutrinos (Belotsky et al. 2003).

It will of course be interesting to see the results of the Gamma-ray Large Area Space Telescope (GLAST). It has a field of view about twice as wide (more than 2.5 steradians) and a sensitivity about 50 times that of EGRET at 100 MeV, and even more at higher energies. Its two-year limit for source detection in an all-sky survey is $1.6\times 10^{-9}$ photons cm$^{-2}$ s$^{-1}$ (at energies >100 MeV). It will be able to locate sources to positional accuracies of 30 arcsec to 5 arcmin. This precision could well be enough to detect a heavy neutrino signal in the form of a small bump at $E\sim 1$ GeV in the gamma spectrum, if a heavy neutrino with a mass of $\sim$100 or 200 GeV exists.

There are also some other possible consequences of heavy neutrinos that may be worth investigating. The DM simulations could be used to estimate the spatial correlations that the gamma rays would have and to calculate a power spectrum for the heavy neutrinos. This could be interesting at least for masses $M_N\sim100$ GeV and $M_N\sim200$ GeV. The annihilation of the heavy neutrinos could also help to explain the reionization of the universe. Another possibly interesting application of heavy neutrinos is the large look-back time they provide (Silk & Stodolsky 2006), with a decoupling temperature of $\ga 10^{13}$ K (Enqvist et al. 1989).

Acknowledgements
E.E. would like to express his gratitude to Konstantine Belotsky, Lars Bergström, Michael Bradley, Alexander Dolgov, Kari Enqvist, Kimmo Kainulainen and Torbjörn Sjöstrand for useful discussions, helpful comments and explanations. We are both grateful to the GalICS group, who have provided the complete dark matter simulations, and to the Swedish National Graduate School in Space Technology for financial contributions.

References

 

Copyright ESO 2008