A&A 375, 711-738 (2001)
DOI: 10.1051/0004-6361:20010706
1 -
Observatoire de Genève,
Chemin des Maillettes 51,
1290 Sauverny,
Switzerland
2 - Physikalisches Institut,
Universität Bern,
Sidlerstrasse 5,
3012 Bern,
Switzerland
Received 8 February 2001 / Accepted 23 April 2001
Abstract
We have developed a new simulation code aimed at studying the
stellar dynamics of a galactic central star cluster surrounding a
massive black hole. In order to include all the relevant physical
ingredients (2-body relaxation, stellar mass spectrum, collisions,
tidal disruption, ...), we chose to revive a numerical scheme
pioneered by Hénon in the 70's (Hénon 1971a,b, 1973).
It is basically a Monte Carlo resolution of the Fokker-Planck
equation. It can cope with any stellar mass spectrum or velocity
distribution. Being a particle-based method, it also allows one to take
stellar collisions into account in a very realistic way. This first
paper covers the basic version of our code, which treats the
relaxation-driven evolution of a stellar cluster without a central BH.
A technical description of the code is presented, as well as the
results of test computations. Thanks to the use of a binary tree to
store potential and rank information and of variable time steps,
cluster models with up to $2\times 10^6$ particles can be simulated
on a standard personal computer, and the CPU time required scales
as $N_{\rm p}\ln N_{\rm p}$ with the particle number $N_{\rm p}$.
Furthermore, the number of simulated stars need not
be equal to $N_{\rm p}$ but can be arbitrarily larger. A
companion paper will treat further physical elements, mostly
relevant to galactic nuclei.
Key words: methods: numerical - stellar dynamics - galaxies: nuclei - galaxies: star clusters
The study of the dynamics of galactic nuclei is likely to become a
field to which more and more interest will be dedicated in the near
future. Numerous observational studies, using high resolution
photometric and spectroscopic data, point to the presence of
"massive dark objects'' in the centre of most bright galaxies
(see reviews by Kormendy & Richstone 1995; Rees 1997; Ford et al. 1998; Magorrian et al. 1998; Ho 1999; van der Marel 1999; de Zeeuw 2001), including our
own Milky Way (Eckart & Genzel 1996, 1997; Genzel et al. 1997; Ghez et al. 1998, 1999b,a; Genzel et al. 2000).
As the precision of the measurements and the rigor of their
statistical analysis both increase, the conclusion that these central
mass concentrations are in fact black holes (BHs) with masses ranging
from a few $10^6\,M_\odot$ to a few $10^9\,M_\odot$ becomes difficult to
evade, at least in the most conspicuous cases (Maoz 1998). Hence,
the time is ripe for examining in more detail the mutual influence
between the central BH and the stellar nucleus in which it is
engulfed.
A few particular questions we wish to answer concern the mutual influence between the central BH and the surrounding stellar nucleus, and the rates of the collisional and disruptive events we are primarily interested in.
Thus, the evolution of galactic nuclei is a complex problem. As it is mostly of stellar dynamical nature, our approach is grounded in a new computer code for cluster dynamics. Its "core'' version treats the relaxational evolution of spherical clusters and is the subject of the present paper. The inclusion of further physical ingredients, such as stellar collisions and tidal disruptions, will be explained in a complementary article (Freitag & Benz 2001d, see also Freitag & Benz 2001b,c). In order to obtain a realistic prescription for the outcome of stellar collisions, we compiled a huge database of results from collision simulations performed with a Smoothed Particle Hydrodynamics (SPH) code (Benz 1990). This work will be described in a third paper (Freitag & Benz 2001a).
This paper is organized as follows. In the next section, we briefly review the available numerical schemes that have been applied in the past to simulate the evolution of a star cluster and explain the reasons that led us to choose one of these methods. Sections 3 to 6 contain a detailed description of the code. Test calculations are presented in Sect. 7. Finally, in Sect. 8, we summarize our work and discuss future improvements.
In the previous section, we briefly reviewed the main processes that should play an important role in the evolution of galactic nuclei. To treat them realistically, any stellar dynamics code has to meet the following specifications: it must account for 2-body relaxation, cope with an arbitrary stellar mass spectrum and velocity distribution, allow the inclusion of stellar collisions and tidal disruptions, and remain computationally tractable for clusters of $10^6$-$10^8$ stars.
As they directly integrate the particles' equations of motion,
N-body simulations are conceptually straightforward and do not rely
on any important simplifying assumptions. Unfortunately, to correctly
compute relaxation effects, forces have to be evaluated by direct
summation (see, however, Jernigan & Porter 1989; McMillan & Aarseth 1993, for possible alternatives). Hence, even with specialized hardware such
as GRAPE boards (Makino 1996; Taiji et al. 1996; Spurzem & Aarseth 1996), following the
trajectory of several $10^6$ stars over tens of relaxation times
remains impossible (but see Makino 2000 for a peek at the
future of such systems). A solution would be to extrapolate the
results of an N-body simulation with a limited number of particles
to a higher N. However, this has been shown to be both tricky and
risky (McMillan 1993; Heggie 1997; Aarseth & Heggie 1998; Heggie et al. 1999). The difficulty resides
in the fact that various processes (relaxation, evaporation, stellar
evolution)
have time scales and relative importances that depend
in different ways on the number of simulated stars. Thus, in
principle, simulating a cluster of $N_\ast$ stars with a lower number
of particles, $N_{\rm p} < N_\ast$, could only be done if all these processes
are somehow scaled to the correct $N_\ast$.
Unfortunately, such
scalings are difficult to apply, precisely because the N-body method
treats gravitation in a direct, "microscopic'' way with very little
room for arbitrary adjustments. Furthermore, these modifications
would rely on the same kind of theory, and hence, of simplifying
assumptions, as other simulation methods.
To circumvent these difficulties, a class of methods has been developed in which a stellar cluster containing a very large number of bodies is regarded as a fluid (Larson 1970b,a; Lynden-Bell & Eggleton 1980; Louis & Spurzem 1991; Giersz & Spurzem 1994, 2000, amongst many others). Such simulations have proved to be very successful in discovering new phenomena like gravo-thermal oscillations (Bettwieser & Sugimoto 1984) but rely on many strong simplifying assumptions. Most of them are shared by Fokker-Planck and Monte Carlo methods (see next paragraphs). However, this approach, which is based on velocity-moments of the collisional Boltzmann equation, further assumes, as a closure relation, some local prescription for heat conduction through the stellar cluster. This is appropriate for a standard gas, where the collision mean free path is much smaller than the system's size (Hensler et al. 1995). Quite surprisingly, it still seems to be valid for a stellar cluster even though the radial excursion of a typical star is not "microscopic'' (Bettwieser & Sugimoto 1985; Giersz & Spurzem 1994; Spurzem & Takahashi 1995). However, we discarded such an approach because, due to its continuous nature, integrating the effects of collisions in it seems difficult.
A very popular intermediate approach is the "direct'' numerical resolution of the Fokker-Planck equation (FPE) (Binney & Tremaine 1987; Spitzer 1987; Saslaw 1985, for example), either by a finite difference scheme (Cohn 1979, 1980; Lee 1987; Murphy et al. 1991; Takahashi 1995; Drukier et al. 1999, and many other works) or using finite elements (Takahashi 1993, 1995). Here again, the main difficulty resides in the treatment of collisions. From a practical point of view, these methods represent the cluster as a small set of continuous distribution functions for discrete values of the stellar mass. A realistic modeling of collisional effects would then require one to multiply the number of these mass classes, at the price of an important increase in code complexity and computation time. From a theoretical point of view, collisions don't comply with the requirement of small orbital changes that is needed to derive the FPE.
So, we have finally turned our attention to the so-called "Monte Carlo'' (MC) schemes. Even though their underlying hypotheses are similar to those leading to the FPE, being particle methods, they inherit some of the N-body philosophy which allows us to extend their realm beyond the set of problems to which direct FPE resolutions apply. In the Monte Carlo approach, the evolution of the (spherically symmetric) cluster is computed by following a sample of test-particles that represent spherical star shells. They move according to the smooth overall cluster (+BH) gravitational potential and are given small orbital perturbations in order to simulate relaxation. These encounters are randomly generated with a probability distribution chosen in such a way that they comply with the diffusion coefficients appearing in the FPE.
The most recently invented MC code is the "Cornell'' program by Shapiro and collaborators (Shapiro 1985, and references therein), with which these authors conducted seminal studies of the evolution of a star cluster hosting a central BH. At the end of each time step, the distribution function (DF) is identified with the distribution of test-particles in phase-space. Then the potential is re-computed, and so are the diffusion coefficients, which are tabulated over phase-space. This makes it possible to evolve the DF one step further by applying a new series of perturbations to the test-particles' orbits. This code features ingenious improvements, such as the ability to follow the orbital motion of test-particles threatened by tidal disruption and to "clone'' test-particles in the central regions in order to improve the numerical resolution. Despite its demonstrated power, we did not adopt this technique because of its already considerable complexity, which would increase significantly if additional physics such as stellar collisions, a stellar mass spectrum, etc., were introduced.
Spitzer and collaborators (Spitzer 1975; Spitzer & Shull 1975a,b; Spitzer & Mathieu 1980) and
Hénon (1966, 1971b, 1971a, 1973) developed
simpler MC schemes which show more potential for our purpose. The
simulation of relaxation proceeds through 2-body gravitational
encounters between neighboring shell-like particles. The deflections
are tailored to lead to the same diffusion of orbits that a great
number of very small interactions would cause in a real cluster.
After the encounter, the shell is placed at some radius R on its
modified orbit. At that time, the potential is updated so it remains
consistent with the positions of the shells. The main difference
between the "Princeton'' code by Spitzer et al. and Hénon's
algorithm is that the former uses a fraction of the orbital time as
its time step while the latter uses a fraction of the relaxation time.
It follows that out-of-equilibrium dynamical processes, like violent
relaxation, can be computed with the Princeton code whereas Hénon's
code can only be applied to systems in virial equilibrium, which
imposes an age greatly in excess of the relaxation time. Needless to
say, this restriction is rewarded by an important gain in speed. In
this scheme, instead of being determined by the equations of orbital
motion, the radius R of a given shell is randomly chosen according
to a probability distribution that measures the time spent at each
radius on its orbit: ${\rm d}P/{\rm d}R \propto 1/|V_r(R)|$, where $V_r(R)$ is its radial
velocity at R. R-dependent time steps can be used to track the huge
variation of the relaxation time from the cluster's centre to its
outskirts.
We chose to follow the approach developed by Hénon to write our own Monte Carlo code. It indeed appeared as an optimal compromise in terms of physical realism and computational speed. On the one hand, it allows for all the key physical ingredients listed at the beginning of this section. On the other hand, high resolution simulations are carried out in a few hours to a few days on a standard personal computer. This will enable a wide exploration of the parameter space.
Since Hénon's work, this approach has been extensively modified and
successfully applied to the study of the dynamical evolution of
globular clusters by Stodółkiewicz
(1982, 1986) and Giersz
(1998, 2000a, 2000b). Another Hénon-like MC
code has recently been written by Joshi et al. (Joshi et al.
2000; Rasio 2000; Watters et al. 2000; Joshi et al. 2001). As far as we know, however, no one
has ever applied this simulation method to galactic nuclei.
The Monte Carlo scheme relies on the central assumption that the
stellar cluster is always in dynamical equilibrium. This is the case
for well relaxed systems. Sufficient observational resolution has only
recently been obtained to allow an estimate of the relaxation times in
the nearest galactic nuclei. Lauer et al. (1998) report relaxation times
for the nuclei of M 31, M 32 and M 33. As for the Milky Way,
Alexander (1999) deduces the relaxation time at the core radius, a value that does
not change significantly at smaller radii if a density
cusp model is assumed with the parametrisation of Genzel et al. (1996) for
the velocity dispersion. Genzel et al. (1994) get a central value,
but its meaning is unclear, as the dynamical influence of the BH was
neglected in its derivation. These few values indicate that, perhaps,
only a subset of all galactic nuclei are amenable to the kind of
approach we are developing. However, these very dense environments
with relaxation times lower than the Hubble time are precisely the
ones which expectedly lead to non-vanishing rates for the disruptive
events we are primarily interested in. Furthermore, note that,
contrary to the case of globular clusters, stellar evolution of a
population of massive stars (e.g. in a starburst) can probably not
disrupt dynamical equilibrium (through mass-loss effects) as its time
scale ($\sim$ a few $10^6$ yrs) is longer than orbital periods in a
galactic nucleus ($\sim$ a few $10^5$ yrs).
In our work, we focus on the long-term evolution of star clusters, on
time scales much exceeding the dynamical (crossing) time,
$T_{\rm dyn} \simeq \sqrt{R_{\rm cl}^3/(G M_{\rm cl})}$,
where $M_{\rm cl}$ is the cluster's total mass and $R_{\rm cl}$ a
quantity indicating its size (for instance the half-mass radius). We
thus make no attempt at describing evolutionary processes that occur
on a $T_{\rm dyn}$ time scale, most noticeably phase mixing and
violent relaxation, which are thought to rule early life phases of
stellar systems (see, for instance, Sect. 4.7 of Binney & Tremaine 1987 or
Sect. 5.5 of Meylan & Heggie 1997 and references therein). Hence, we
assume that the cluster has reached a state of dynamical equilibrium.
Its subsequent evolution, driven by processes with time scales
$\gg T_{\rm dyn}$ (2-body relaxation, collisions, tidal disruptions and
stellar evolution), passes through a sequence of such states.
This reasonable restriction allowed Hénon to devise a simulation scheme whose time step is a fraction of the relaxation time instead of the dynamical time. Naturally, this leads to an enormous gain in computation speed compared to codes that resolve orbital processes like N-body programs or the Princeton Monte Carlo code (Spitzer & Hart 1971; Spitzer & Thuan 1972).
Another strong simplifying assumption the scheme heavily relies on is that of spherical symmetry. This makes the cluster's structure effectively one-dimensional, which allows a simple and efficient representation for the gravitational potential (see Sect. 5.1) and the stars' orbits, and furthermore leads to a straightforward determination of neighboring particle pairs. Of course, such a geometry greatly facilitates the computation of any quantity describing the cluster's state, such as density, velocity dispersion and so on. An obvious drawback of this assumption is that it forbids the proper treatment of any non-spherical feature, such as overall rotation (Arabadjis 1997; Einsel 1998; Einsel & Spurzem 1999; Spurzem & Einsel 1999; Boily 2000), an oscillating or a binary black hole (Begelman et al. 1980; Lin & Tremaine 1980; Makino 1997; Quinlan & Hernquist 1997; Gould & Rix 2000; Merritt et al. 2000) or an accretion disk interacting with the star cluster (Syer et al. 1991; Rauch 1995; Armitage et al. 1996; Vokrouhlický & Karas 1998a,b).
In Hénon's scheme, the numerical realization of the cluster is a set
of spherical thin shells, each of which is given a mass M, a radius
R, a specific angular momentum J and a specific kinetic energy
T. In the remainder of the paper, we refer to these
particles as "super-stars'' because we regard them as spherical
layers of synchronized stars that share the same stellar properties,
orbital parameters and orbital phase, whose mass is spread over the
shell and which experience the same processes (relaxation,
collision, ...) at the same time. In recent implementations of
Hénon's method by Giersz (1998) and Joshi et al. (2000), each
particle represents only one star. This avoids scaling problems
connected with the computation of the rate of 2- (or many) body
processes but would impose too large a number of particles for
galactic nuclei simulations ($10^6$-$10^8$ stars). In our code, each
super-star consists of many stars. Hence, a cluster containing
$N_\ast$ stars may be represented by a smaller number of
super-stars, $N_{\rm p} \le N_\ast$.
The number of stars per
super-star, $N_\ast/N_{\rm p}$,
is the same for every
super-star, in order to conserve energy and angular momentum (as
well as mass when collisions are included) when simulating 2-body
processes.
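For illustration, the state carried by each super-star can be gathered in a small record. The following Python sketch is our own, not the paper's implementation; the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SuperStar:
    """A spherical shell of N_star/N_p synchronized stars (one particle)."""
    M: float        # shell mass
    R: float        # current radius of the shell
    J: float        # specific angular momentum
    T: float        # specific kinetic energy
    t: float = 0.0  # individual clock (synchronization is only statistical)
```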
The super-stars' radii being known, the potential can be computed at
any time and any place so that the orbital energies of all super-stars
are straightforwardly deduced from their kinetic energies and
positions. Hence the set of super-stars can be regarded as a
discretized representation of the one-particle distribution function
(DF) f which, by virtue of Jeans' theorem, only depends on the specific
orbital energy E and angular momentum J: $f = f(E, J)$.
The main difference between the MC code and a spherical 1D N-body simulation is that the former does not explicitly follow the continuous orbital motion of particles which preserves E and J. However these orbital constants, as well as other properties of the super-stars, are modified by "collisional'' processes to be incorporated explicitly, namely 2-body relaxation, stellar collisions and tidal disruptions. So, in the version of the code described here, the MC simulation proceeds through millions to billions of steps, each of them consisting of the selection of super-stars (Sect. 4.2.2), the modification of their properties to simulate the effects of relaxation (Sect. 4.2) and the selection of radial positions corresponding to their new orbits (Sect. 5.2).
In the rest of this paper, unless otherwise stated, we use the
"N-body'' units as prescribed by Heggie & Mathieu (1986). In this system,
$G = 1, \qquad M_{\rm cl}(0) = 1, \qquad E_{\rm cl}(0) = -\frac{1}{4}. \qquad (1, 2, 3)$

See the next section for further explanations of the relaxation time and, in particular, the value of the constant $\gamma$ entering the Coulomb logarithm $\ln(\gamma N_\ast)$.
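As a small worked example (our own sketch; constants rounded, helper names hypothetical), the physical scales corresponding to one N-body unit follow from the cluster's mass and a length scale via $U_t = \sqrt{U_l^3/(G\,U_m)}$:

```python
import math

G_SI = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30      # kg
PARSEC = 3.086e16     # m

def nbody_units(M_cl_msun: float, R_length_pc: float):
    """Return the physical mass, length and time units (kg, m, s)
    corresponding to one N-body unit, for a cluster of the given
    total mass and characteristic (e.g. virial) radius."""
    U_m = M_cl_msun * M_SUN
    U_l = R_length_pc * PARSEC
    U_t = math.sqrt(U_l ** 3 / (G_SI * U_m))
    return U_m, U_l, U_t

# Example: a 1e6 Msun nucleus with a 1 pc characteristic radius
_, _, U_t = nbody_units(1e6, 1.0)
print(f"one N-body time unit = {U_t / 3.156e7:.3g} yr")
```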
The theory of relaxation in stellar systems can be found in classical textbooks (Hénon 1973; Saslaw 1985; Spitzer 1987; Binney & Tremaine 1987, for instance) and will not be presented here. However, the treatment of relaxation is the backbone of Hénon's Monte Carlo scheme. Hence a short summary of these issues is particularly worthwhile, not only to expose the inner workings of the MC method, but also to understand the limitations it suffers from (shared with other statistical approaches to cluster dynamics) that stem from relaxation theory's simplifying assumptions. Furthermore, its specific advantages are also best explained in that framework.
The basic idea behind the concept of relaxation is that the
gravitational potential of a stellar system containing a large number
of bodies can be described as the sum of a dominating smooth
contribution ($\Phi_{\rm smooth}$) plus a small "granular'' part
($\delta\Phi$). When only the former is taken into account, the
phase-space DF of the cluster $f$
obeys the collisionless Boltzmann equation. In the long run,
however, the fluctuating $\delta\Phi$ makes E and J slowly change
and the DF evolve. The basic simplifying assumption underlying
relaxation theory is to treat the effects of $\delta\Phi$ as the sum
of multiple uncorrelated 2-body hyperbolic gravitational encounters
with small deviation angles. Under these assumptions, if a test star
"1'' travels through a homogeneous field of stars "2'' which all
share the same properties (masses and velocities) during a time $\delta t$,
its trajectory will deviate from the initial direction by an angle $\theta$
with the following statistical properties:
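In standard relaxation theory (Spitzer 1987; Binney & Tremaine 1987, for instance), these properties read, up to prefactor conventions that may differ from the paper's exact choice:

$\langle \theta \rangle = 0, \qquad \langle \theta^2 \rangle = 8\pi \ln\Lambda \; G^2 \left(m_1 + m_2\right)^2 n_2 \, v_{\rm rel}^{-3} \, \delta t,$

where $n_2$ is the number density of field stars, $v_{\rm rel}$ the relative velocity and $\ln\Lambda$ the Coulomb logarithm; the mean square deflection is the quantity referred to below as Eq. (5).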
As Eq. (5) is of central importance for the
simulation of relaxation in the Monte Carlo code, we comment on the
main assumptions on which its derivation relies. First, as already
stated, deflections are assumed to be of very small amplitude and
uncorrelated with each other. The contribution of encounters with
impact parameters between $b_1$ and $b_2$ is of order $\ln(b_2/b_1)$,
so equal logarithmic intervals of b contribute equally to $\langle\theta^2\rangle$ and most of
the relaxation is indeed created by weak "distant encounters''.
Moreover, the derivation applies in principle only to homogeneous
systems with a finite size. However, a real cluster is grossly
inhomogeneous, with large density gradients. Applying
Eq. (5) for realistic clusters forces us into
assuming the "local approximation'', i.e. stating that typically
.
Then, not only can we neglect the effect of
during an encounter and treat the trajectories as
Keplerian hyperbolae, but, as an added benefit, we can use the
local properties of the cluster (density and velocity
distribution) as representative of field stars met by the
test-particle. Admittedly, this is a bold assumption only partially
justified by the "
'' argument. The validity of these
approximations has been assessed by comparing results of codes based
on relaxation theory with N-body simulations which do not rely on
such assumptions
(Giersz & Heggie 1994a; Spurzem & Aarseth 1996; Giersz 1998; Portegies Zwart et al. 1998; Takahashi & Portegies Zwart 1998; Spurzem 1999).
Contrary to Fokker-Planck codes, Hénon's method was devised to avoid
the computational burden and the necessary simplifications connected
with the numerical evaluation of diffusion coefficients (DCs). It does
so through a direct use of Eq. (5) whose repeated
application to a particular super-star "1'' is equivalent to a Monte
Carlo integration of the DCs, provided the properties of field particles "2'' are
correctly sampled. Under the usual assumption that encounters are
local, this latter constraint is obeyed if we take these properties to
be those of the closest neighboring super-star. Furthermore, this
allows us to actually modify the velocities of both super-stars at a
time, each one acting as a representative from the "field'' for the
other. Hence, in the Hénon code (as well as in ours), super-stars are
evolved in symmetrical pairs. This not only speeds up the
simulations by a factor $\sim 2$, but also ensures proper local
conservation of energy, a feature which turned out to be a
prerequisite for correct cluster evolution. Unfortunately, this
pairwise approach also imposes heavy constraints on the code's
structure and (perhaps) abilities, as we shall show later on.
So the elementary ingredients at the heart of Hénon's scheme are
simulated 2-body gravitational deflections between neighboring
super-stars. However, instead of being direct one-to-one counterparts
to real individual encounters - which would lead to much too slow a
code, with a (huge) number of computational steps scaling as the actual
number of physical encounters - these are actually "super-encounters'',
devised to statistically reproduce the cumulative effects of the
numerous physical deflections taking place in the real system over a
time span $\delta t$. Thus, such a numerical encounter has a double
nature. First, it is computed as a (virtual) 2-body gravitational
interaction with deflection angle $\theta_{\rm SE}$ in the pair CM
frame. But being in charge of representing all the (small-angle)
deflections that test-star "1'' experiences during $\delta t$ when
meeting field-stars "2'', it also has to obey
Eqs. (7, 8). Consequently, $\theta_{\rm SE}$
has to equate the root mean squared cumulative
deflection, $\theta_{\rm SE} = \langle\theta^2\rangle^{1/2}$, with $\langle\theta^2\rangle$ given by Eq. (5).
We now describe the computation of a particular numerical encounter. It decomposes into the following steps: (1) selection of the interacting pair; (2) transformation of the velocities to the pair's CM frame; (3) determination of the deflection angle $\theta_{\rm SE}$ and random orientation of the encounter plane; (4) transformation of the post-encounter velocities back to the cluster frame.
The computations in steps 2-4 are described in
Appendix C. The only physical content of this whole
procedure resides in the determination of $\theta_{\rm SE}$.
Everything else is a matter of elementary frame transformations and
correct random sampling of free parameters in the Monte Carlo spirit.
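To make the double nature of a super-encounter concrete, here is a minimal Python sketch of ours (the function name and interface are hypothetical, and the actual frame conventions are those of Appendix C). Rotating the relative velocity by $\theta_{\rm SE}$ about a randomly oriented axis conserves total momentum and kinetic energy exactly:

```python
import numpy as np

def super_encounter(m1, v1, m2, v2, theta_se, rng):
    """Virtual 2-body deflection of two super-stars by the effective
    angle theta_se in their centre-of-mass (CM) frame.

    v1, v2 are 3D velocity vectors (numpy arrays). Since only the
    direction of the relative velocity changes, total momentum and
    total kinetic energy are conserved exactly."""
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)    # CM velocity
    w = v1 - v2                               # relative velocity
    w_norm = np.linalg.norm(w)

    # orthonormal basis (e1, e2, e3) with e1 along w
    e1 = w / w_norm
    trial = np.array([1.0, 0.0, 0.0])
    if abs(e1 @ trial) > 0.9:                 # avoid a near-parallel trial axis
        trial = np.array([0.0, 1.0, 0.0])
    e2 = np.cross(e1, trial)
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)

    phi = rng.uniform(0.0, 2.0 * np.pi)       # random encounter-plane angle

    # rotate w by theta_se; |w| is unchanged
    w_new = w_norm * (np.cos(theta_se) * e1
                      + np.sin(theta_se) * (np.cos(phi) * e2 + np.sin(phi) * e3))

    v1_new = v_cm + m2 / (m1 + m2) * w_new
    v2_new = v_cm - m1 / (m1 + m2) * w_new
    return v1_new, v2_new
```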
With no other physical process than relaxation included, each individual step in our algorithm comprises three operations: the selection of a pair of neighboring super-stars (Sect. 4.2.2), the modification of their properties to simulate the effects of relaxation (Sect. 4.2), and the selection of radial positions corresponding to their new orbits (Sect. 5.2); a sketch of such a step is given below.
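The following sketch (ours; every helper on the hypothetical `cluster` object is illustrative, not the authors' API) strings these three operations together. For brevity, it assumes each super-star carries a 3D velocity vector `v`, whereas the paper's particles store (J, T), from which velocities are reconstructed; it reuses the `super_encounter` and `sample_orbital_radius` helpers sketched here and in Sect. 5.2:

```python
def mc_step(cluster, rng):
    """One elementary Monte Carlo step (relaxation only)."""
    # 1. select a pair of neighbouring super-stars, with probability
    #    inversely proportional to the local time step (Sect. 4.2.2)
    i = cluster.select_rank(rng)                    # hypothetical helper
    s1, s2 = cluster.pair_at(i)

    # 2. super-encounter: perturb both velocities at once (Sect. 4.2)
    dt = cluster.time_step(i)
    theta_se = cluster.rms_deflection(s1, s2, dt)   # from Eq. (5)
    s1.v, s2.v = super_encounter(s1.M, s1.v, s2.M, s2.v, theta_se, rng)
    s1.t += dt
    s2.t += dt

    # 3. place each super-star on its new orbit and update the
    #    potential immediately (Sects. 5.1 and 5.2)
    for s in (s1, s2):
        E, J = cluster.orbit_constants(s)
        r_peri, r_apo = cluster.turning_points(E, J)
        s.R = sample_orbital_radius(E, J, cluster.phi, r_peri, r_apo, rng)
        cluster.update_potential(s)
```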
The choice procedure is mainly constrained by the necessity of
allowing super-stars to have individual time-steps $\delta t$ that
reflect the enormous variations of the relaxation time between the
central and outer parts of the stellar cluster. When collisions are
included, the dynamical range of evolution times can be even
larger. Unless this specification is met, the code's efficiency would
be very low, as the overall $\delta t$ would have to reflect the very
short central evolution time. One could also be concerned by the
orbital time exceeding $\delta t$ for a large fraction of super-stars,
a situation inconsistent with the "orbital average'' approach
implicitly assumed in the Monte Carlo scheme. However, this problem is
actually nonexistent in a purely relaxational system, whose evolution
- under the assumptions made in the standard relaxation theory - is
independent of the dynamical time, provided the ratio
$T_{\rm rel}/T_{\rm dyn}$ is large enough ($\ga 100$).
The other important constraint is the need to evolve both super-stars
in an interacting pair. If the same time step is not used for both
super-stars, energy is not conserved and a very poor cluster simulation
ensues.
But adjacent super-stars only form a pair during a single interaction
and then break apart, as each one is attributed a new radius. So,
momentarily neighboring super-stars have to be given similar $\delta t$.
This strongly suggests the use of local time steps, i.e. $\delta t$
should be a function of R alone.
Naturally, the time steps have to be sufficiently smaller than the
time scale for the physical processes driving the cluster evolution,
namely the relaxation in the present case. Hence, we impose:
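In a standard form (a sketch consistent with the quantities defined just below; the numerical prefactor follows Spitzer 1987 and the paper's exact convention may differ), these conditions read

$\delta t(R) \le f_{\delta t}\, T_{\rm rel}(R), \qquad (10)$

with

$T_{\rm rel}(R) \simeq 0.065\, \frac{\langle v^2 \rangle^{3/2}}{G^2\, n\, \langle m \rangle^2\, \ln(\gamma N_\ast)}, \qquad (11)$

where $f_{\delta t} \ll 1$ is a numerical parameter.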
In Eq. (11), n (star number density), $\langle v^2\rangle$
(average squared velocity) and $\langle m\rangle$
(average stellar mass) are R-dependent properties of the cluster. As
the only role of $T_{\rm rel}$ is to provide short enough
$\delta t$s, an approximate evaluation of these quantities (using a
coarse mesh or a sliding average) is sufficient. On the other hand,
too short $\delta t$s would fruitlessly slow down the code and
should be avoided by considering $f_{\delta t} T_{\rm rel}$ in
Eq. (10) not only as an upper limit for the time step
but also as an optimal value to be approached as closely as possible.
As the members of a pair arrived at their present position at
different times but have to leave it at the same time, once the
super-encounter is performed, imposing the same $\delta t$ to both
super-stars is impossible. So, building on the statistical nature of the
scheme, instead of trying to maintain a super-star at radius R during exactly
$\delta t(R)$, we only require the mean waiting
time for super-stars at R to be $\delta t(R)$.
As explained by
Hénon (1973), this constraint is fulfilled if the probability for a
pair lying at R to be selected and evolved (and thus, taken away
from R) during a time span $\delta t_{\rm step}$
is $P_{\rm select}(R) = \delta t_{\rm step}/\delta t(R)$ (Eqs. 12-15).
Figure 1: Selection probability in a Plummer cluster consisting of 4000 super-stars.
As evaporation, collisions and tidal disruptions remove stars from
the cluster, the number of super-stars $N_{\rm p}$ generally
decreases between two successive computations of the selection
probabilities. To avoid this problem, the selection probabilities
and the time steps are actually defined as functions of the
fractional rank $i/N_{\rm p}$ rather than of the rank itself.
For the sake of efficiency, $P_{\rm select}$ should be a function
that is quickly evaluated. We explored two solutions. We first
mimicked Hénon's recipe, which uses simple analytical functional forms.
However, being rather arbitrary, Hénon's parameterization could lead
to values of $P_{\rm select}$ that poorly match the shape of $1/\delta t(R)$,
with the effect of forcing low $\delta t$s
and hence slowing down the simulation. This concern motivated
another approach: the full rank range is sliced into a few (typically 20)
bins, in each of which the selection probability is set to a
constant (either proportional to the maximum of $1/\delta t$
in the bin or to its mean value). Such a piecewise approximation is
naturally expected to adjust better to any cluster structure, as shown
in Fig. 1; a sketch of this binned sampling follows.
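A minimal Python sketch of this binned selection (ours; the names are illustrative), including the requirement, discussed below, that the selection probability decrease with rank:

```python
import numpy as np

def build_selection_bins(inv_dt, n_bins=20, use_max=True):
    """Piecewise-constant approximation of the optimal selection
    probability, which is proportional to 1/dt(R) per rank.
    `inv_dt` holds 1/dt for each rank (sorted by radius)."""
    n = len(inv_dt)
    edges = np.linspace(0, n, n_bins + 1, dtype=int)
    level = np.empty(n_bins)
    for b in range(n_bins):
        seg = inv_dt[edges[b]:edges[b + 1]]
        level[b] = seg.max() if use_max else seg.mean()
    # the selection probability must decrease with rank (Appendix B)
    level = np.maximum.accumulate(level[::-1])[::-1]
    weights = level * np.diff(edges)          # total weight of each bin
    return edges, weights / weights.sum()

def sample_rank(edges, bin_weights, rng):
    """Pick a bin according to its weight, then a rank uniformly in it."""
    b = rng.choice(len(bin_weights), p=bin_weights)
    return int(rng.integers(edges[b], edges[b + 1]))
```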
An additional constraint to be mentioned is the need to restrict the
ratio of the longest time step to the shortest one,
$\delta t_{\rm max}/\delta t_{\rm min}$, to ensure that the outermost
shells (the cluster's halo) evolve correctly. Otherwise these
super-stars, most probably placed near their apo-centre positions,
where relaxation (and hence the selection probability) is vanishing, would
never be given any opportunity to visit more central regions, where
they can react to the adiabatic relaxation-induced modification of the
central potential and experience 2-body encounters. Finally, for reasons to be presented in
Appendix B, it is necessary that the selection
probability be a decreasing function of R (i.e. of the rank). In
practice, these added constraints are applied to modify unnormalised
selection probabilities which are then rescaled to 1. Once the
probabilities have been worked out, we ensure
inequality (10) is satisfied everywhere by rescaling the time steps accordingly.
Figure 2: Evolution of the time dispersion of super-stars during the simulation of a core-collapsing cluster. Left panel: time versus step number; the median time and the dispersion around it are shown.
As super-stars are randomly chosen to be evolved and advanced in time,
strict synchronization is never realized (except at t=0). Each
super-star k has its own individual time t(k) and
synchronization is only achieved statistically by requiring
that, at every stage in the cluster's evolution, the expectation
values E(t(k)) of all t(k)s are the same. An equivalent
statement is to impose an equal expectation value
for the individual time
increase of any super-star during any evolution step. At the beginning
of a given step, the super-stars are ranked according to their distance
to the centre. Selection probability and time step depend only on the
rank number i(k), so that the expected time increase per step,
$P_{\rm select}(i)\,\delta t(i)$, is the same for every rank (Eqs. 21, 22).
In Sect. 4.1, we explain how relaxation theory,
as adopted in this work, relies on the assumption that the cluster's
gravitational potential $\Phi$ can be described as a dominating smooth
contribution $\Phi_{\rm smooth}$, whose evolution time scale is much
longer than the typical orbital time, plus a relatively small
fluctuating $\delta\Phi$. This latter contribution being further
reduced to the sum of numerous 2-body encounters, we are left with the
numerical representation of $\Phi_{\rm smooth}$.
As spherical symmetry introduces many simplifications, going beyond
this central approximation, deeply built into Hénon's scheme, seems
nearly impossible. Its most prominent merit is to ensure that stellar
orbits, when considered on time scales much shorter than $T_{\rm rel}$,
are planar rosettes that are easy to deal with.
Therefore, angular $\delta\Phi$-fluctuations are removed by construction, and
we represent $\Phi_{\rm smooth}$ as the sum of the contributions of the
super-stars, i.e., spherical infinitely thin shells of stars. As a
consequence, radial graininess is still present, but its effect turns
out to be insignificant compared to "genuine''
relaxation. To support this claim, we switched
off simulated physical relaxation and got a cluster showing no sign of
evolution (apart from Monte Carlo noise) for a number of steps at
least three times larger than needed to accomplish deep core collapse
when relaxation is included.
Between two successive super-stars of rank i and i+1, the smooth
potential felt by a thin shell of mass M with radius $R_i < R < R_{i+1}$
is then simply

$\Phi(R) = A_i + \frac{B_i}{R}, \qquad A_i = -G \sum_{j=i+1}^{N_{\rm p}} \frac{M_j}{R_j}, \qquad B_i = -G \sum_{j=1}^{i} M_j. \qquad (23)$
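As a direct illustration of Eq. (23), here is a naive $O(N_{\rm p})$ direct sum (our own sketch; the actual code uses the binary tree described below instead):

```python
import numpy as np

def phi_shells(R, radii, masses, G=1.0):
    """Smooth potential of a set of spherical shells at test radius R,
    per Eq. (23): shells inside R act as a central point mass (B/R term),
    shells outside R each contribute a constant (A term)."""
    radii = np.asarray(radii)
    masses = np.asarray(masses)
    inner = radii < R
    B = -G * masses[inner].sum()
    A = -G * (masses[~inner] / radii[~inner]).sum()
    return A + B / R
```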
At each step in the numerical simulation, two super-stars are evolved
which are given new radii and masses (if collisions or stellar
evolution is simulated). An easy way of obtaining exact overall energy
conservation and proper account of the adiabatic energy drift of
super-stars is to update the Ai and Bi coefficients after every
such orbital displacement. Doing so also ensures that we never put a
super-star at a radius which turns out to be forbidden (either lower
than peri-centre or larger than apo-centre) in the updated potential.
To sum it up, this choice spares us much trouble connected with a
potential that lags behind the super-star distribution.
Stodółkiewicz (1982) and Giersz (1998) describe these difficulties, as
well as recipes to overcome them in their programs. Similar problems
are certainly present in the code of Joshi et al. (2000) as they recompute
the potential only after all the super-stars have been assigned new
radii. However, performing potential updates only after a large number
of super-star moves has advantages of its own. In particular, in such
a code, the computing time should scale linearly with the number of
super-stars (for a complete cluster evolution). This also allowed Joshi et al. (2000)
to develop parallelized versions of the Monte Carlo scheme.
If we implement the $A_i$s and $B_i$s as linear arrays, however, a
large fraction of their $N_{\rm p}$ elements would have to be
modified after each step, so the number of numerical operations
required to evolve the system to a given physical time would scale
like $N_{\rm p}^2$. This steep dependency can be avoided by
using a binary tree data structure to store the potential (and
ranking) information (Sedgewick 1988). This essential adaptation
was alluded to by Hénon himself (Hénon 1973), who however never
published it.
At any given time, each super-star is represented by a node in the
tree. Each node is connected to (at most) two other nodes that we
shall call its left and right children, which are themselves, when
present, the "roots'' of their father's left and right "sub-trees''.
The rules underlying the tree structure are that all the nodes in
the left "child-tree'' of a given node correspond to super-stars
with lower radii and all the nodes in its right "child-tree''
to super-stars with higher radii. If we define ${\cal L}(k)$
and ${\cal R}(k)$ to be the sets of nodes in the left and right
child-trees of node k, then the per-node quantities introduced below
(Eq. 25) are sums over these sets.
Figure 3: Diagram of a binary tree with 12 super-stars, showing the node properties that encode potential and rank information.
The spherical potential is represented by mass and mass-over-radius
coefficients attached to nodes. A third value allows the determination
of the radial rank of any super-star. These properties, illustrated in
Fig. 3, are defined as sums over each node's subtree, of the form

$m^{({\rm sub})}_k = M_k + \sum_{j \in {\cal L}(k) \cup {\cal R}(k)} M_j, \qquad \phi^{({\rm sub})}_k = \frac{M_k}{R_k} + \sum_{j \in {\cal L}(k) \cup {\cal R}(k)} \frac{M_j}{R_j}, \qquad n^{({\rm sub})}_k = 1 + |{\cal L}(k)| + |{\cal R}(k)|. \qquad (25)$
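A compact Python sketch of such an augmented binary tree (our illustration, not the published implementation; tree rebalancing, which a production code needs to guarantee logarithmic depth, is omitted):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    R: float                     # shell radius (search key)
    M: float                     # shell mass
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    # aggregates over the node's whole subtree (itself included):
    n_sub: int = 1               # number of shells   -> ranks
    m_sub: float = 0.0           # sum of M_j         -> B coefficients
    moR_sub: float = 0.0         # sum of M_j / R_j   -> A coefficients

def refresh(node: Node) -> None:
    """Recompute a node's aggregates from its children."""
    node.n_sub, node.m_sub, node.moR_sub = 1, node.M, node.M / node.R
    for child in (node.left, node.right):
        if child is not None:
            node.n_sub += child.n_sub
            node.m_sub += child.m_sub
            node.moR_sub += child.moR_sub

def insert(root: Optional[Node], new: Node) -> Node:
    """Insert a shell, keeping keys ordered and aggregates up to date."""
    if root is None:
        refresh(new)
        return new
    if new.R < root.R:
        root.left = insert(root.left, new)
    else:
        root.right = insert(root.right, new)
    refresh(root)
    return root

def rank_and_sums(root: Optional[Node], R: float):
    """One descent from the root returns, for a test radius R:
    the rank (number of shells with R_j < R), the sum of M_j over
    inner shells and the sum of M_j / R_j over outer shells, so that
    Phi(R) = -G * (inner_mass / R + outer_moR), as in Eq. (23)."""
    rank, inner_mass, outer_moR = 0, 0.0, 0.0
    node = root
    while node is not None:
        if R <= node.R:                 # node + right subtree are outer
            outer_moR += node.M / node.R
            if node.right is not None:
                outer_moR += node.right.moR_sub
            node = node.left
        else:                           # node + left subtree are inner
            rank += 1
            inner_mass += node.M
            if node.left is not None:
                rank += node.left.n_sub
                inner_mass += node.left.m_sub
            node = node.right
    return rank, inner_mass, outer_moR
```

A single descent thus yields, in a number of operations proportional to the tree depth, both the rank of a test radius and the sums entering the $A_i$ and $B_i$ coefficients of Eq. (23); moving a super-star only requires updating the aggregates along the affected paths.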
Let us consider a star with known energy E and angular momentum J
orbiting in a spherically symmetric potential $\Phi(R)$.
Its distance to the centre R oscillates between $R_{\rm p}$ (peri-centre
radius) and $R_{\rm a}$ (apo-centre radius), which are the two solutions of
the equation $2\left[E - \Phi(R)\right] - J^2/R^2 = 0$.
Our Monte Carlo scheme avoids explicit computation of the orbital
motion. It instead achieves correct statistical sampling of the orbit of any
given super-star by ensuring that the expectation value for the
fraction of time spent at R complies with Eq. (27). Let
the sought-for probability of placing the super-star at radius R
be $P_{\rm place}(R)$. According to
Eq. (19), if the super-star is placed at R, it will
stay there for an average time $\delta t(R)$.
Then, combining both relations, the average
ratio of times spent at two different radii $R_1, R_2$ on the orbit is

$\frac{t(R_1)}{t(R_2)} = \frac{P_{\rm place}(R_1)\,\delta t(R_1)}{P_{\rm place}(R_2)\,\delta t(R_2)}, \qquad (28, 29)$

which complies with the orbital weighting ${\rm d}P/{\rm d}R \propto 1/|V_r|$ if $P_{\rm place}(R) \propto 1/\left[\delta t(R)\,|V_r(R)|\right]$.
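A possible rejection-sampling realisation of this weighting (our own sketch; Hénon 1973 describes a more refined comparison-function method, and `phi` here is a user-supplied callable for $\Phi(R)$):

```python
import numpy as np

def sample_orbital_radius(E, J, phi, r_peri, r_apo, rng, n_scan=64):
    """Draw a radius R on the orbit (E, J) with dP/dR proportional
    to 1/|V_r|. r_peri, r_apo solve 2*(E - phi(R)) - J**2 / R**2 = 0.

    The substitution R(u) = a - b*cos(u), u in [0, pi], with
    a = (r_apo + r_peri)/2 and b = (r_apo - r_peri)/2, removes the
    integrable 1/|V_r| singularities at the turning points, leaving a
    bounded density g(u) ~ sin(u)/|V_r| suitable for rejection sampling."""
    a = 0.5 * (r_apo + r_peri)
    b = 0.5 * (r_apo - r_peri)

    def g(u):
        R = a - b * np.cos(u)
        vr2 = 2.0 * (E - phi(R)) - (J / R) ** 2
        if vr2 <= 0.0:               # round-off right at a turning point
            return 0.0
        return b * np.sin(u) / np.sqrt(vr2)

    # crude numerical envelope; an analytical comparison function
    # would be used in a production code
    g_max = 1.2 * max(g(u) for u in np.linspace(1e-3, np.pi - 1e-3, n_scan))

    while True:                      # rejection loop
        u = rng.uniform(0.0, np.pi)
        if rng.uniform(0.0, g_max) < g(u):
            return a - b * np.cos(u)
```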
Despite its long history, the theoretical understanding of the processes leading stars to escape a stellar cluster is still not complete (Binney & Tremaine 1987, Sect. 8.4.1; Meylan & Heggie 1997, Sect. 7.3; Heggie 2000). Even without considering interactions with binaries, the global picture seems a bit confusing. Nevertheless, for an isolated cluster, the basic mechanism is easy to grasp at a "microscopic'' description level: a star can escape after it has experienced a 2-body encounter which resulted in an energy gain large enough to unbind it, i.e., to bring it to positive energy. Much of the confusion about the prediction of overall escape rates amounts to figuring out whether rare large-angle scatterings that are neglected by the standard relaxation theory could dominate this rate. Indeed, it can be argued that, in an isolated cluster, stars that are only slightly bound and could be kicked away by weak scatterings populate orbits with huge periods and spend most of their time near the apo-centre, where encounters are vanishingly rare (Hénon 1960). According to that picture, the escape rate could not be predicted by the "standard'' relaxation theory, because individual "not-so-small'' angle scatterings would dominate it.
If this is true, as the MC treatment of 2-body encounters relies on the assumption of small relative changes in orbital parameters, the method cannot be expected to give reliable results for the escape rate from an isolated cluster (Hénon 1971a). Some numerical solution to that problem was introduced by Giersz (1998). However, when the cluster's initial conditions are set to represent a galactic nucleus, the fraction of stars that evaporate during $10^{10}$ years is probably very small, so that a precise account of this phenomenon is not really required. It would anyway not make much sense to devise a complicated treatment of evaporation from the nucleus while we neglect the inverse process, i.e. the capture of stars from the galactic bulge. Our procedure is simply to remove any star whose energy is positive after a relaxation/collision process. As can be seen in Fig. 6, this simple prescription leads to an amount of evaporation in good agreement with the result of Giersz (1998).
However, for the sake of comparison with globular cluster
simulations, we also introduced the effects of an external (galactic)
tidal field. Due to the sphericity constraint, the three dimensional
nature of such a perturbation cannot be respected. We resort to the
usual radial truncation approach and consider that a super-star with
apo-centre radius larger than the so-called tidal radius $R_{\rm tid}$
immediately leaves the cluster. The value of $R_{\rm tid}$
is about the size of the cluster's Roche lobe,
$R_{\rm tid} = c\,(M_{\rm cl}/M_{\rm gal})^{1/3} D_{\rm gal}$,
with $D_{\rm gal}$ being the
distance to the centre of the parent galaxy, whose mass is $M_{\rm gal}$,
and c of order unity. This criterion
is clearly a quite unrealistic simplification, but we do not question
it in our work as it is used only for comparison purposes. As the
transition from an apo-centre radius slightly below $R_{\rm tid}$
to a value slightly larger does not imply large changes in the shape
of the orbit, the escape rate in this model is certainly dominated
by small angle, diffusion-like relaxation and must be correctly
captured by our MC approach.
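In code, this truncation criterion is a one-line test; a sketch of ours with hypothetical names, using the Roche-lobe scaling above:

```python
def escapes_tidally(r_apo, M_cl, M_gal, D_gal, c=1.0):
    """True if a super-star's apo-centre exceeds the tidal radius
    R_tid = c * (M_cl / M_gal)**(1/3) * D_gal (Roche-lobe scaling)."""
    r_tid = c * (M_cl / M_gal) ** (1.0 / 3.0) * D_gal
    return r_apo > r_tid
```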
The formation, evolution and dynamical role of binaries in star clusters are complex and fascinating subjects. An impressive number of works have been aimed at the study of binaries in globular clusters (see, for instance Hut et al. 1992, for a review). On the other hand, not much has been done concerning galactic nuclei (see Gerhard 1994).
From a dynamical point of view, only hard binaries, i.e. star
couples whose orbital velocity $v_{\rm orb}$ is larger than the
velocity dispersion $\sigma_v$ in the cluster, have to be considered.
In dense systems, they act as a heat source by giving up orbital
energy and contracting (thus hardening further) during interactions with
other stars. Of course, the fraction of primordial binaries to be
labeled as "hard'' is a priori much higher in globular
clusters ($\sigma_v$ of order $10\,{\rm km\,s^{-1}}$) than
in galactic nuclei ($\sigma_v \ga 100\,{\rm km\,s^{-1}}$).
Binaries can also be formed dynamically, either by tidal energy
dissipation during a close 2-body encounter ("tidal binaries'') or as
the result of the gravitational interaction between 3 stars ("3-body
binaries''). The cross section for forming a tidal binary strongly
decreases with the relative velocity (at infinity) $v_{\rm rel}$
(Kim & Lee 1999), so that, in galactic nuclei, such processes imply
hydrodynamic contact interactions that are likely to result in mergers
(Lee & Nelson 1988; Benz & Hills 1992; Lai et al. 1993). Thus these events are implicitly treated in our code as a subset of all the collisions (Freitag & Benz 2001d). An interesting counter-example to which these arguments do not apply is
the nucleus of the nearby spiral galaxy M 33, whose central velocity
dispersion is as low as $\sim 20\,{\rm km\,s^{-1}}$
(Lauer et al. 1998), so that tidal binaries should have formed at an
appreciable rate (Hernquist et al. 1991).
The formation rate of 3-body binaries in galactic nuclei is also
strongly quenched as compared to globular clusters. Indeed, for a
self-gravitating system, the total number of binaries formed through
this mechanism per relaxation time is only of the order of $1/N$
(Binney & Tremaine 1987; Goodman & Hut 1993) and can be completely neglected unless evolution, through mass segregation
and core collapse, leads to the formation of a dense self-gravitating
core containing only a few tens of stars (Lee 1995). Finally,
another, somewhat exotic, possibility is the formation of hard
binaries through energy loss by gravitational wave emission during
close fly-bys between two compact stars (Lee 1993, for
instance). Note that, if present, hard binaries would not
only have a dynamical role but may also destroy giant stars by
colliding with them (Davies et al. 1998).
For these reasons, we feel justified in not embarking on the considerable
burden that the incorporation of binary processes in a Monte Carlo scheme
would necessitate. However, this has been achieved with a high level
of realism by Stodółkiewicz (1986) and Giersz (1998, 2000b).
Such a detailed approach is required to obtain reliable rates for
binary processes of interest, like super-giant destruction by
encounters with binaries (Davies et al. 1998), but, if needed, the basic
dynamical effect of binaries as a heat source could be accounted for
much more easily using the same recipes that proved suitable in direct
Fokker-Planck methods (Lee et al. 1991, for instance).
It should be noted that, even in the absence of any explicit
simulation of binary heating, core collapse is anyway halted and
reversed in most Monte Carlo simulations of globular cluster
evolution! This is due to an effect already described by
Hénon (1975) and Stodółkiewicz (1982): a stiff micro-core consisting of
one or a few ($\la 5$) super-stars becomes self-gravitating and
misleadingly mimics a small set of hard binaries by contracting and
giving up energy to other super-stars. Due to the self regulating nature
of cluster re-expansion (Goodman 1989), this leads to a
post-collapse evolution of the overall cluster structure that is
extremely similar to what binaries produce. Unfortunately this does
not hold true for the very core whose evolution (for instance, whether
it experiences gravothermal oscillations or not, see
Heggie 1994) depends on the nature of the heat source.
In this section, we briefly describe the results of a series of test simulations we conducted in order to check the various aspects of our code and the results it produces.
The most basic test to be passed is to make sure that when relaxation
and other physical processes are turned off, no evolution occurs in a
cluster model whose initial conditions obey dynamical equilibrium and
radial stability. Beyond stability by itself, the main concern is
about spurious relaxation introduced by the discrete
representation of the cluster by a set of super-stars. In other words,
the supposedly "smooth'' potential
still presents radial
graininess that could induce some kind of unwanted relaxation
(Hénon 1971a). It can easily be shown that the time scale over
which the effects of this spurious relaxation may become of importance
is of order
.
This effect has been tested in computations presented in Appendix D. Their result is that, provided the number of super-stars is larger than a few thousand, there is no sign of significant spurious evolution after a number of numerical steps larger than what is required in any "standard'' simulation. No relaxation being simulated, these bare-bones steps only consist of orbital displacements as described in Sect. 5.2. Consequently, it appears that these radial movements do not introduce appreciable spurious relaxation and that the orbital sampling proceeds correctly.
Table 1: Core collapse times for the isolated single-mass Plummer model, as obtained with various numerical methods (HMC: Hénon-type Monte Carlo; FP: Fokker-Planck), in units of the initial half-mass relaxation time.

Reference | Numerical method | Core collapse time
Hénon (1973, 1975) | HMC | -
Spitzer & Shull (1975a) | Princeton MC | -
Cohn (1979) | Anisotropic FP | 15.9
Marchant & Shapiro (1980) | Cornell MC | 14.7
Cohn (1980) | Isotropic FP | 15.7
Stodółkiewicz (1982) | HMC | -
Takahashi (1993) | Isotropic FP | 15.6
Giersz & Heggie (1994b) | N-body | -
Takahashi (1995) | Anisotropic FP | 17.6
Spurzem & Aarseth (1996) | N-body | 18.2 (2k), 18.0 (10k)
Makino (1996) | N-body | 16.9 (8k), 18.3 (16k), 17.7 (32k)
Quinlan (1996) | Isotropic FP | 15.4
Giersz (private communication) | HMC | -
Lee (private communication) | Isotropic FP | 16.1
Drukier et al. (1999) | Anisotropic FP | 17.8
Joshi et al. (2000) | HMC | 15.2 (100k)
This work | HMC | 17.8 (512k), 17.9 (2000k)
Figure 4: Code benchmarking. CPU time to deep core collapse as a function of the number of super-stars $N_{\rm p}$ used in the calculation.
Figure 5: Evolution of Lagrangian radii in a core collapse simulation of an isolated single-mass Plummer model.
Figure 6: Comparison of our collapse simulation (solid lines) with a computation by Mirek Giersz (dash-dotted lines). Our simulation is the same as in Fig. 5. Giersz used 100000 super-stars. The top panel shows the Lagrangian radii. The bottom panel depicts the total mass. Contrary to ours, Giersz's code is able to go past core collapse by simulating the formation of 3-body binaries and their giving up energy to other stars.
The next-to-simplest step was to plug in 2-body relaxation and to find out whether we could reproduce the well studied evolution of an isolated star cluster in which all stars have the same mass. We chose a Plummer model because it has been extensively used in the literature. Previous results for the collapse time are reviewed in Table 1. We refer to these many references for a description and explanation of the physics of core collapse and concentrate on some diagrams that describe our simulation of this system and allow comparisons with others.
The results shown here are taken from calculations with $512\,000$
and $2\times 10^6$ super-stars, which took slightly more than 100 and 500
CPU hours, respectively, to complete on a PC with a 400 MHz
Pentium II processor. Benchmarking of the code is presented in
Fig. 4, where we plot the CPU time required to
attain a deeply collapsed value of the central potential as a
function of the number of super-stars used in the calculation. As most
computing time is spent in binary tree traversals
with a number of operations that scales logarithmically, we fitted
this data with a relation of the form

$T_{\rm CPU} \propto N_{\rm p} \ln N_{\rm p}. \qquad (31)$
In Fig. 5, we present the evolution of
Lagrangian radii, i.e., radii of spheres that contain a given fraction
(0.1% to 99%) of the cluster's mass. In
Fig. 6, a subset of these radii are used in a
comparison with a simulation by Mirek Giersz (private communication).
We also plot the evolution of the decreasing total mass. Clearly, the
agreement is quite satisfactory. The most obvious difference lies in
our run leading to a somewhat slower evolution, which translates into a
core collapse time $T_{\rm cc}$ larger by 2-3%. Given
the considerable dispersion present in the literature for the value of
$T_{\rm cc}$,
we judge this discrepancy to be only of minor
importance. First, due to the stochastic nature of Monte Carlo
simulations, various runs with the same code but using different
random sequences yield results that differ slightly from each other.
This effect is probably of minor importance for particle
numbers as high as ours but may affect Giersz's
data. More importantly,
we stress that although it also stems from Hénon's scheme,
Giersz's code inherited the deep modifications proposed by
Stodółkiewicz and is actually very different from our
implementation. Also, as Giersz thoroughly and successfully
tested his program against N-body data, this comparison is very
valuable in assessing the quality of our own code.
Figure 7: Evolution of the density profile during core collapse for the same simulation as Figs. 5 and 6. The dashed line shows the slope of the power law corresponding to the self-similar deep collapse solution of the Fokker-Planck equation (Heggie & Stevenson 1988).
Figure 7 is a more direct representation of the
same information. It shows the density profile $\rho(R)$ at successive
evolution phases, deeper and deeper in the collapse. According to
semi-analytical (Lynden-Bell & Eggleton 1980; Heggie & Stevenson 1988, for instance) and numerical
(Cohn 1980; Louis & Spurzem 1991; Takahashi 1995; Joshi et al. 2000, amongst others) computations,
the central parts of the cluster evolve self-similarly during the late
phase of core collapse according to a power-law density profile
$\rho \propto R^{-\alpha}$, with $\alpha \simeq 2.2$, that extends inwards. To
illustrate and confirm this behavior, in
Fig. 8, we plot the logarithmic density gradient
${\rm d}\ln\rho/{\rm d}\ln R$ versus R and compare with curves
obtained by Takahashi (1995) with an anisotropic Fokker-Planck
code. Although the noisy aspect of our Monte Carlo data expectedly
contrasts with the smooth curves from Takahashi's finite-difference
code, the agreement is clear. The progressive development of a
$\rho \propto R^{-2.2}$ power law in our simulation is thus well established.
As further evidence for the good performance of our code, in
Fig. 9, we follow
the increase of the central density $\rho_{\rm c}$ and central 3D velocity
dispersion $\sigma_{\rm c}$ in our model and compare them with
data from an isotropic Fokker-Planck computation by H. M. Lee (private
communication). The collapse times of the two simulations being quite
different (see Table 1), we
rescaled the time scale by $T_{\rm cc}$ in these diagrams to
get more meaningful comparisons. Here again, the level of agreement is
more than satisfactory. Incidentally, we note that a power-law
relation between $\rho_{\rm c}$ and $\sigma_{\rm c}$ is quickly established,
as previously noted by many authors
(Cohn 1979, 1980; Marchant & Shapiro 1980; Takahashi 1995).
Finally, in Fig. 10, we investigate the evolution of
anisotropy, measured by the usual parameter
$A = 2 - \langle v_{\rm t}^2 \rangle / \langle v_{\rm r}^2 \rangle$,
where $v_{\rm t}$ and $v_{\rm r}$ are the tangential and radial
components of the stellar velocity.
anisotropy at every Lagrangian radius is clearly visible. Our curves
are very similar to those obtained by Takahashi (1996) and
Drukier et al. (1999) using anisotropic FP codes. Globally, although the
agreement is not as close as in previous diagrams, this relative
mismatch is weakened when examined in the light of the differences
between both FP simulations. It thus seems reasonable to think that
these differences could be due to the further simplifying assumptions
required in the derivation of direct anisotropic FP schemes.
We also include the Monte Carlo simulation by
Giersz (1998) in the comparison. Due to the relatively low number
of particles used by this author ($10^5$), this data contains large
statistical fluctuations. It is nonetheless clear that it shows
significantly more radial anisotropy in the central regions at early
times, compared to the other simulations. The reason for this
difference is unknown to us but may be due to the use of radial
"super-zones'' by Giersz to define block time-steps. In his scheme,
super-stars from an inner super-zone are not allowed to skip to an
outer zone, unless both zones happen to be synchronized at the end
of their respective time-steps. This clearly could lead to some
"particle restraining'' in the central parts but it is unclear why
this effect would appear in the anisotropy profiles but not in the
density data. Comparison with data from an N-body code would
allow us to settle these questions. Unfortunately, present-day
accessible N values (a few $10^4$) yield anisotropy curves still too
noisy to be of real use (Spurzem & Aarseth 1996, for instance).
Figure 8: Logarithmic density gradient for successive stages of the core collapse of the Plummer model. Points represent our data for the same times as in Fig. 7 (except the initial state, which is not represented here). Curves are from a simulation by Takahashi (1995) for stages in core collapse with about the same central density increase. The leftmost curve corresponds to a collapse phase slightly deeper than attained by the Monte Carlo simulation. Dotted vertical lines show the decreasing values of the core radius.
Figure 9: a) Evolution of the central density during the core collapse of the Plummer model.
Figure 10: Evolution of the average anisotropy parameter within shells bracketed by Lagrangian spheres containing 15 to 20%, 45 to 50%, 70 to 75% and 90 to 95% of the cluster mass. Solid lines show the data from our simulation.
To summarize this subsection, we can safely conclude that, when applied to the highly idealized relaxation-driven evolution of a single-mass cluster, our Monte Carlo implementation produces results in very nice agreement with many other modern numerical methods and theoretical predictions. To step closer to physical realism, we now present the results for multi-mass models.
Figure 11: Collapse times for cluster models consisting of 2 components: MS stars with mass m1 and stellar BHs with mass m2. The initial clusters are Plummer models without segregation.
Figure 12: Evolution of the Lagrangian radii of the components for two models of Fig. 11. The solid and dashed lines are for stars of mass $m_1$ and $m_2$, respectively.
As a first validation of our code in the multi-mass regime, we
simulated clusters with two mass components. Such models have been
used by Lee (1995) to study the fate of stellar black holes (SBHs)
in galactic nuclei. He assumed a simple stellar population with all
main sequence stars having the same mass $m_1$ and a given fraction of
SBHs of mass $m_2$. For a broad range in the total mass
fraction of SBHs, his 1-D Fokker-Planck simulations show a collapse
time for the SBH sub-system that is reduced by factors of tens
compared to the single-mass case. As shown in
Fig. 11, we successfully reproduce his results
for various mass fractions of SBHs. Note that the MC method is unable
to simulate reliably clusters containing a very small SBH fraction if
the number of super-stars has to be much lower than the number of
simulated stars. For instance, for the smallest SBH mass fractions
considered here, one of our cluster realisations would contain
only 35 super-stars representing SBHs, a number clearly
too small to resolve the collapse of the central sub-cluster of
SBHs. To illustrate the process of mass segregation, in
Fig. 12 we plot the evolution of the Lagrangian
radii of both mass components for two of these cluster models.
Figure 13: Evolution of the Lagrangian radii for a cluster with initial conditions according to Heggie's "collaborative experiment''. Solid lines are the result of our simulation with 256000 super-stars. Dashed lines with black dots are from a computation by Mirek Giersz (10, 50 and 100% radii). The tidal radius is labelled as "100%''. "N-body'' units are used.
Figure 14: Mass segregation diagram for a cluster with initial conditions according to Douglas Heggie's "collaborative experiment''. Each curve shows the evolution of the average mass of stars in a sphere that contains a given fraction of the total cluster's mass, as indicated by labels on the right. Solid lines are the result of our simulation with 256000 super-stars. Dashed lines with black dots are from a computation by Mirek Giersz.
To study the evolution of a cluster with a continuous mass function,
we simulated a model with initial conditions set according to Heggie's
"collaborative experiment''
(Heggie et al. 1999). This is a King model with $W_0 = 3$
(Binney & Tremaine 1987, p. 232). A spherical tidal truncation is imposed
at the tidal radius, and the mass spectrum is a power law extending
from 0.1 to 1.5 $M_\odot$.
Our code lacks the ability to simulate the formation of binaries and their heating effect. However, as explained in Sect. 6.2, these processes do not switch on before the core has collapsed down to a few tens of stars. As a consequence, we should be able to tackle the evolution of this system up to deep core collapse.
Many researchers, using a variety of simulation methods, from gas
models to N-body codes, have taken part in the "collaborative
experiment''. Their results show a very important dispersion. For
instance, the obtained core-collapse times range from 9 to more than
14 Gyrs, while the values for the cluster's mass at this time also
show a considerable spread! In our simulations
with 16000 to 256000 super-stars, we find a collapse time of 12.5 to
13.4 Gyrs.
Discrepancies of a factor 2 can even be found amongst simulations using the same scheme, e.g. N-body codes. There is a clear tendency for N-body codes to yield shorter core-collapse times than other, statistical, methods. Another perplexing fact is that the results of N-body simulations do not converge to those of statistical methods as N increases, contrary to naive expectations.
The cause of these unexpected results has been traced by Fukushige & Heggie (2000) to the combination of two facts. First, for the realistic, non-spherical tidal potential used in those simulations, stars with energies above the escape energy can stay in the cluster for many dynamical times before they actually leave it. Second, the way the models were initiated led to clusters containing, from the beginning, a large fraction of such "potential escapers'', instead of being in equilibrium in the tidal field.
Given this confusing picture, it seems more sensible to compare our results to those produced by a similar computational approach. Mirek Giersz applied his Monte Carlo code (Giersz 1998, 2000b) to this system. In Fig. 13, we show the evolution of the Lagrangian radii for his simulation (up to the binary-induced rebound) and ours. Similarly, Fig. 14 compares the evolution of the average mass of stars. This latter diagram clearly shows how strong mass segregation effects are in multi-mass clusters. The relatively good agreement apparent in these figures supports our code's ability to handle star clusters with a mass spectrum.
In this paper, we have presented the new stellar-dynamics code we have recently developed. It can be seen as a Monte Carlo resolution of the Fokker-Planck equation for a spherical star cluster. Although it stems from a scheme invented by Hénon in the early 70's, this approach was deemed optimal for our planned study of the long-term evolution of dense galactic nuclei hosting massive black holes. Its main advantages are a high computational efficiency (compared to N-body codes), on the one hand, and the ability to incorporate many physical effects with a high level of realism (compared to direct Fokker-Planck resolution or to gas methods), on the other hand. These features explain the recent revival of Hénon's approach in the realm of globular cluster dynamics by Giersz (1998) and Joshi et al. (2000). To the best of our knowledge, however, we are the first to apply it to galactic nuclei (see Freitag & Benz 2001b,d).
The version of the code presented here only includes 2-body relaxation. Spherically symmetric self-gravitation is computed exactly. Arbitrary mass spectra and velocity distributions, isotropic or not, can be handled without introducing any extra computational burden. The test computations we carried out allow us to be confident in the way our code simulates the long-term, relaxation-driven evolution of spherical star clusters.
The computational speed of our code is highly satisfactory. The evolution of a single-mass globular cluster with 512000 super-stars up to core collapse takes about 5 CPU days on a 400 MHz Pentium-II processor. This can be compared with the three months of computation required by Makino (1996) to integrate a cluster with 32000 particles on a GRAPE-4 computer specially designed to compute forces in an N-body algorithm. However, the significance of such a comparison is somewhat blurred by the fact that Makino integrated the system past core collapse and that the hardware in use is so different. Nevertheless, the speed superiority of our Monte Carlo scheme over N-body really lies in a CPU time scaling as $N\log N$ instead of $N^{2-3}$ (the exponent reported by Makino lies in this range). Furthermore, Monte Carlo simulations do not have to resolve orbital time-scales; their time step is a fraction of the relaxation time, which is of order $10^5$ times larger for a million-star self-gravitating cluster. This ensures that Monte Carlo simulations will remain competitive in the next few years, even after the advent of special-purpose N-body computers with greatly increased performance like the GRAPE-6 system (Makino 1998, 2000). Monte Carlo codes like ours are bound to become the tool of choice to explore the dynamics of star clusters, by allowing investigators to run many simulations with a variety of initial conditions and physical processes. Run-of-the-mill personal computers are sufficient to get quick results without sacrificing too much of the physical realism.
2-body relaxation is only one amongst the many physical processes that are thought to contribute to the long-term evolution of dense galactic nuclei, or that are of high interest in themselves even without a global impact on the cluster. Here, we list the most important of them and comment on those not already discussed in Sect. 1. The order of this list broadly reflects the probable order of inclusion of these effects in our simulations.
In the following paper of this series (Freitag & Benz 2001d), we will describe how stellar collisions and tidal disruptions are treated. The next effects to which we shall turn are stellar evolution, included in a simplified way in the latest version of the code (Freitag 2000), and GR captures, for which encouraging results have already been obtained (Freitag 2001).
Acknowledgements
We warmly thank Mirek Giersz and Hyung Mok Lee for helpful discussions and for providing unpublished simulation data. Mirek Giersz was also a very efficient referee whose comments and suggestions helped to improve the manuscript of this paper. Most simulations were realised on the GRAVITOR "Beowulf'' computing farm at Geneva Observatory, with invaluable help from Daniel Pfenniger. This work has been supported in part by the Swiss National Science Foundation.
Step 2. The situation is depicted at the top of Fig. A.1. A local reference frame, at rest in the cluster, is defined with one axis pointing in the radial direction from the cluster's centre, the other two completing an orthonormal set. From the specific angular momenta $J_i$, kinetic energies $T_i$ and distances to the centre $R_i$ of the super-stars, the moduli of the radial and tangential components of the stars' velocities are deduced:

$v_t^{(i)} = J_i/R_i, \qquad v_r^{(i)} = \sqrt{2\,T_i - (J_i/R_i)^2}.$   (A.1)
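A direct transcription of Eq. (A.1), as a short Python sketch (the function name is ours):

    import math

    def velocity_components(J, T, R):
        """Moduli of the radial and tangential velocity of a super-star,
        from its specific angular momentum J, specific kinetic energy T
        and distance to the centre R (Eq. A.1)."""
        v_t = J / R
        v_r2 = 2.0 * T - v_t**2
        # Guard against tiny negative values from round-off near turning points.
        v_r = math.sqrt(max(v_r2, 0.0))
        return v_r, v_t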
Step 3. We now switch to the encounter CM reference frame (see bottom panel of Fig. A.1), which has the same axis orientation as the cluster frame but translates with the centre-of-mass velocity $\vec{v}_{\rm CM} = (M_1\vec{v}_1 + M_2\vec{v}_2)/(M_1+M_2)$, where $M_{1,2}$ are the masses of the stars. We use the notation $\vec{v}_i$ for the stars' velocities in the cluster frame and $\vec{u}_i = \vec{v}_i - \vec{v}_{\rm CM}$ when we express them in the encounter frame. A prime ($'$) denotes quantities after the encounter. We build an orthonormal vector set $(\hat{e}_1, \hat{e}_2, \hat{e}_3)$ with $\hat{e}_1 = \vec{u}_1/|\vec{u}_1|$. The orientation of the orbital plane, spanned by $(\hat{e}_1, \hat{e}_{\rm rot})$, is set through a randomly chosen angle $\phi$ defining $\hat{e}_{\rm rot} = \cos\phi\,\hat{e}_2 + \sin\phi\,\hat{e}_3$. In this frame, the effect of the gravitational encounter is simply to rotate the initial velocities (at infinity) $\vec{u}_1$ and $\vec{u}_2$ by the deflection angle $\theta$, so:

$\vec{u}_1' = |\vec{u}_1|\,(\cos\theta\,\hat{e}_1 + \sin\theta\,\hat{e}_{\rm rot})$   (A.2)

$\vec{u}_2' = -|\vec{u}_2|\,(\cos\theta\,\hat{e}_1 + \sin\theta\,\hat{e}_{\rm rot})$   (A.3)

$\vec{v}_i' = \vec{v}_{\rm CM} + \vec{u}_i'.$   (A.4)
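The following sketch (Python with numpy; all names are ours) assembles this velocity update. The deflection angle theta is assumed to be supplied by the relaxation prescription:

    import numpy as np

    rng = np.random.default_rng()

    def super_encounter(v1, v2, m1, m2, theta):
        """Apply an encounter deflection of angle theta to two velocities
        (3-vectors) given in the cluster frame: a sketch of Appendix A,
        step 3. The CM velocity is conserved; in the CM frame both
        velocities are rotated by theta within a randomly oriented plane."""
        v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
        u1 = v1 - v_cm                      # CM-frame velocity of star 1
        e1 = u1 / np.linalg.norm(u1)        # unit vector along u1

        # Build (e1, e2, e3) orthonormal; any vector not parallel to e1 works.
        trial = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(trial, e1)) > 0.9:
            trial = np.array([0.0, 1.0, 0.0])
        e2 = np.cross(e1, trial)
        e2 /= np.linalg.norm(e2)
        e3 = np.cross(e1, e2)

        # Random orientation of the orbital plane (Eq. A.2-A.3).
        phi = rng.uniform(0.0, 2.0 * np.pi)
        e_rot = np.cos(phi) * e2 + np.sin(phi) * e3

        # Rotate both CM-frame velocities (u2 stays anti-parallel to u1).
        direction = np.cos(theta) * e1 + np.sin(theta) * e_rot
        u1p = np.linalg.norm(u1) * direction
        u2p = -np.linalg.norm(v2 - v_cm) * direction

        return v_cm + u1p, v_cm + u2p       # back to the cluster frame (Eq. A.4)

Since $|\vec{u}_i'| = |\vec{u}_i|$ and $m_1\vec{u}_1' + m_2\vec{u}_2' = 0$, this update conserves both the kinetic energy and the momentum of the pair, as required.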
Figure B.1: Diagram of the random selection of an orbital position using a rejection method with a Keplerian comparison function.
Here, we explain how we determine a new radius R for a super-star after it has experienced a super-encounter (see Sect. 5.2).
Our goal is to generate random R values whose distribution density complies with Eq. (30). The main difficulty lies in the fact that this density, $f(R)$ say, is not an easily computable function. First, the potential really depends on the rank rather than on R and has generally no simple analytical expression (see Sect. 4.2.2). Furthermore, $f$ is a function of the potential, which is not known analytically either. Obtaining the value of the rank or the potential (through the local A and B coefficients) at any given R implies a tree traversal. Given these intricacies, we do not even attempt to transform a uniform-deviate random variable through the inverse function of the cumulative probability, and turn to a rejection method (Press et al. 1992, Sect. 7.3). The trick is to find a comparison function $\tilde f(R)$, a more docile probability density which can be made a uniform upper bound to $f$ ($C\tilde f(R) \ge f(R)$ for some constant $C$). We then proceed by generating a random number $R_{\rm t}$ according to $\tilde f$ and another, $x$, uniformly distributed between 0 and 1. If $x \le f(R_{\rm t})/(C\tilde f(R_{\rm t}))$, $R_{\rm t}$ is kept, otherwise it is rejected and we try again. The accepted values follow the $f$ distribution. Of course, the closer the comparison function is to $f$, the fewer rejection steps are needed and the more efficient is the method (remember that a tree traversal is realised for each $f$ evaluation).
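The generic rejection loop just described can be sketched as follows (Python; all names are ours):

    import random

    def rejection_sample(f, f_tilde_sample, f_tilde, C, max_tries=10000):
        """Generic rejection method, as described above. f is the target
        density, f_tilde a comparison density with C*f_tilde >= f
        everywhere, and f_tilde_sample draws a candidate from f_tilde."""
        for _ in range(max_tries):
            r = f_tilde_sample()              # candidate from the comparison density
            x = random.random()               # uniform deviate in (0, 1)
            if x * C * f_tilde(r) <= f(r):    # accept with probability f/(C*f_tilde)
                return r
        raise RuntimeError("rejection sampling did not converge")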
Figure B.2: Example of the shape of the probability density for the orbital radius R and of the Keplerian comparison function that bounds it.
By construction, the quantity to be bounded is constrained to decrease with rank/radius, so an easy but inefficient upper bound is its value at peri-centre. As for the comparison function $\tilde f$, we first applied Hénon's variable change (from R to a variable s that regularises the turning points) to remove the divergences at peri- and apo-centre, but failed to find a safe upper bound for the resulting probability density of s. We instead use the fact that a shifted Keplerian potential $\Phi_{\rm K}(R) = -a/R + b$, equal to $\Phi$ at $R_{\rm p}$ and $R_{\rm a}$, is everywhere higher (or equal) in between, so that

$\frac{{\rm d}P}{{\rm d}R} \propto \frac{1}{|v_r(R)|} \le \frac{1}{|v_{r,{\rm K}}(R)|}, \qquad v_{r,{\rm K}}^2(R) = 2\,[E - \Phi_{\rm K}(R)] - J^2/R^2,$   (B.1)

where the conditions $\Phi_{\rm K}(R_{\rm p}) = \Phi(R_{\rm p})$ and $\Phi_{\rm K}(R_{\rm a}) = \Phi(R_{\rm a})$ fix the coefficients:

$a = \frac{\Phi(R_{\rm a}) - \Phi(R_{\rm p})}{1/R_{\rm p} - 1/R_{\rm a}},$   (B.2)

$b = \Phi(R_{\rm p}) + a/R_{\rm p}.$   (B.3)

The right-hand side of Eq. (B.1) is, after normalisation, the position density along a Keplerian orbit with the same turning points, which can be sampled analytically.
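Because the comparison density is that of a Keplerian orbit, a candidate radius can be drawn analytically through the eccentric anomaly. The sketch below is ours; in particular, solving Kepler's equation is our way of realising the sampling, which the text above does not spell out:

    import math, random

    def sample_kepler_radius(r_peri, r_apo):
        """Draw a radius along a Keplerian orbit with the given turning
        points, with probability proportional to the time spent at that
        radius, i.e. to 1/|v_rK|. Used as the comparison stage of the
        rejection method sketched above."""
        a = 0.5 * (r_peri + r_apo)                 # semi-major axis
        e = (r_apo - r_peri) / (r_apo + r_peri)    # eccentricity
        M = random.uniform(0.0, math.pi)           # mean anomaly (half orbit suffices)
        E = M                                      # solve Kepler's equation M = E - e*sin(E)
        for _ in range(50):
            dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
            E -= dE
            if abs(dE) < 1e-12:
                break
        return a * (1.0 - e * math.cos(E))         # radius at eccentric anomaly E

Since the mean anomaly is proportional to time along the orbit, drawing it uniformly yields exactly the $1/|v_{r,{\rm K}}|$ radius density.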
Figure C.2: Flow charts for elementary binary tree routines. a) Search for the super-star with rank i; on exit, A and B are its potential coefficients (see Sect. 5.1). b) Insertion of a super-star into the tree. c) Removal of a super-star. d) Search for the node whose radius lies immediately below a given radius (used for the peri-/apo-centre determination).
In this Appendix, we present in some detail the implementation of the binary tree in charge of storing the potential and ranking information of the super-star cluster.
The logical structure of the tree is implemented by three integer arrays: l_son, r_son and father. l_son(k) is the number of the "left son'' node of node k, and so on; see Fig. C.1. When the tree is (re-)built, each node k is attributed a super-star super_star(k). When this particle is evolved and moved to another radius, the node becomes empty and another leaf node is added to host the super-star. Leaving empty nodes simplifies the tree update procedures at the cost of increasing the tree's size. This introduces some numerical overhead, as it causes a faster increase of the number of hierarchical levels (particularly in the central regions where the time steps are shorter, see Fig. C.1), but this inconvenience is probably not a serious concern because the tree is rebuilt from scratch from time to time in order to keep it reasonably well balanced. This operation is computationally cheap compared to the numerous tree traversals and would be called for even if empty nodes were not created.
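A minimal sketch of this bookkeeping (in Python; the class and method names are ours, the array names follow the text):

    # Sketch of the array-based tree described above. Nodes vacated by
    # moved super-stars are left empty; a fresh leaf hosts the super-star
    # at its new radius.

    class RadiusTree:
        def __init__(self, capacity, first_star, first_radius):
            self.l_son = [0] * (capacity + 1)       # l_son[k]: left son of node k (0 = none)
            self.r_son = [0] * (capacity + 1)       # r_son[k]: right son of node k
            self.father = [0] * (capacity + 1)      # father[k]: father of node k
            self.radius = [0.0] * (capacity + 1)    # radius keyed at node k
            self.super_star = [0] * (capacity + 1)  # particle at node k (0 = empty node)
            self.n_nodes = 1                        # node 1 is the root
            self.radius[1] = first_radius
            self.super_star[1] = first_star

        def insert(self, star, R):
            """Descend from the root comparing radii; attach a new leaf."""
            k = 1
            while True:
                side = self.l_son if R < self.radius[k] else self.r_son
                if side[k] == 0:
                    break
                k = side[k]
            self.n_nodes += 1
            new = self.n_nodes
            side[k] = new
            self.father[new] = k
            self.radius[new] = R
            self.super_star[new] = star
            return new

        def move(self, k, R_new):
            """Evolve the super-star at node k to a new radius: the old
            node becomes empty and a new leaf is created."""
            star, self.super_star[k] = self.super_star[k], 0
            return self.insert(star, R_new)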
To find a super-star with rank i and compute the potential at its position, one traverses the tree from the root to the corresponding node using the algorithm sketched in Fig. C.2a. At the end of this tree traversal, the current node index points to the proper node and the super-star's potential follows from the local A and B coefficients. A very similar routine is used to compute the potential at any arbitrary radius R. Flow charts for the addition/removal of a super-star into/from the binary tree are depicted in Figs. C.2b,c.
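The rank search can be sketched as follows. We assume here, as our own choice (the text does not detail the cached quantities), that every node k also stores, for its whole subtree, the number of stored super-stars n_sub[k], their total mass m_sub[k] and the sum of m/R, moR_sub[k]; units with G = 1, and the sign conventions may differ from the code's:

    def find_rank(tree, i):
        """Return (node, A, B), where node holds the super-star of rank i,
        A is the total mass at radii <= its radius (its own mass included)
        and B is the sum of m_j/R_j over super-stars at larger radii."""
        k = tree.root
        A = 0.0
        B = 0.0
        while True:
            left, right = tree.l_son[k], tree.r_son[k]
            n_left = tree.n_sub[left] if left else 0
            occ = 1 if tree.super_star[k] else 0   # empty nodes hold no star
            if i <= n_left:
                # Rank i lies in the left subtree: node k and the whole
                # right subtree are exterior and contribute to B.
                B += occ * tree.mass[k] / tree.radius[k]
                if right:
                    B += tree.moR_sub[right]
                k = left
            elif occ and i == n_left + 1:
                # Node k itself holds the rank-i super-star.
                A += (tree.m_sub[left] if left else 0.0) + tree.mass[k]
                if right:
                    B += tree.moR_sub[right]
                return k, A, B
            else:
                # Rank i lies in the right subtree: the left subtree and
                # node k are enclosed and contribute to A.
                A += (tree.m_sub[left] if left else 0.0) + occ * tree.mass[k]
                i -= n_left + occ
                k = right

With such cached subtree aggregates, both the rank and the potential coefficients are obtained in a single O(log N) traversal.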
Another important role of the binary tree is the computation of peri- and apo-centre radii for a given super-star, an obvious prerequisite to the radial placing procedure described in Sect. 5.2. Note that a high level of precision is called for: unphysical cluster evolution could be expected to arise if the super-stars' orbits suffered from any systematic bias. For lack of explicit knowledge of $\Phi(R)$, solving Eq. (26) is not straightforward. The main operation, depicted in Fig. C.2d, is a tree traversal in search of the node whose radius lies immediately below the peri-centre, i.e. with $R_i \le R_{\rm p} < R_{i+1}$. This provides us with the local A and B potential coefficients. Hence, computing $R_{\rm p}$ reduces to the solution of the simple equation:

$2\,[E - \Phi(R)] - \frac{J^2}{R^2} = 0 \quad {\rm with} \quad \Phi(R) = -\frac{A}{R} + B,$   (C.1)

which is a quadratic equation in 1/R.
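Under the sign convention assumed in Eq. (C.1) above, multiplying by $R^2$ gives an ordinary quadratic whose roots are the turning points. A sketch (ours):

    import math

    def turning_points(E, J, A, B):
        """Solve v_r = 0 for the locally Keplerian potential
        Phi(R) = -A/R + B (Eq. C.1):  2(E - B) R^2 + 2 A R - J^2 = 0.
        Returns (R_peri, R_apo) for a bound orbit (E < B); in the real
        procedure only the root lying in the tree segment that supplied
        A and B is valid."""
        alpha = 2.0 * (E - B)           # negative for a bound orbit
        disc = A * A + alpha * J * J    # quarter of the quadratic discriminant
        if disc < 0.0:
            raise ValueError("no turning point in this potential segment")
        root = math.sqrt(disc)
        r1 = (-A + root) / alpha
        r2 = (-A - root) / alpha
        return min(r1, r2), max(r1, r2)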
To measure the amplitude of the spurious relaxation due to the fact that we represent a smooth potential by a set of super-stars, i.e. of spherical shells with zero thickness, we carried out a few simulations in which relaxation was switched off. In such cases, the algorithm reduces to moving the super-stars on their orbits again and again. Ideally, the structure of the cluster should not display any evolution beyond statistical fluctuations. Figure D.1 shows the results of such tests, realised with 4000 and 64000 particles. The computations were stopped when the total number of steps divided by the number of super-stars reached 4000. For comparison, our core-collapse simulation for a Plummer model (Sect. 7.2) required an average of 2300 steps per super-star. From these relaxation-free tests, we conclude that the amount of spurious relaxation is negligible if the number of super-stars is of order 16000 or larger.
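The diagnostic behind such tests can be sketched as follows (Python with numpy; names are ours): the Lagrangian radii of a relaxation-free run should stay constant up to statistical fluctuations, so any secular drift measures the spurious relaxation.

    import numpy as np

    def lagrangian_radii(radii, masses, fractions=(0.1, 0.5, 0.9)):
        """Radii of the spheres enclosing the given fractions of the
        total mass. Monitoring these over a relaxation-free run reveals
        any spurious evolution of the cluster structure."""
        order = np.argsort(radii)
        cum = np.cumsum(np.asarray(masses)[order])
        targets = np.asarray(fractions) * cum[-1]
        idx = np.searchsorted(cum, targets)
        return np.asarray(radii)[order][idx]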