A&A, Volume 513, April 2010
Article Number: A36
Number of pages: 28
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/200913514
Published online: 21 April 2010
EvoL: the new Padova Tree-SPH parallel code for cosmological simulations
I. Basic code: gravity and hydrodynamics
E. Merlin - U. Buonomo - T. Grassi - L. Piovan - C. Chiosi
Department of Astronomy, University of Padova, Vicolo dell'Osservatorio 3, 35122 Padova, Italy
Received 20 October 2009 / Accepted 22 December 2009
Abstract
Context. We present the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution, EVOL. The basic Tree + SPH code is presented and analysed, together with an overview of the software architecture.
Aims. EVOL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic and sub-galactic scales.
Methods. EVOL is a fully Lagrangian self-adaptive
code, based on the classical oct-tree by Barnes & Hut (1986,
Nature, 324, 446) and on the smoothed particle hydrodynamics algorithm
(SPH, Lucy 1977, AJ, 82, 1013). It includes special features like
adaptive softening lengths with correcting extra-terms, and modern
formulations of SPH and artificial viscosity. It is designed to be run
in parallel on multiple CPUs to optimise the performance and save
computational time.
Results. We describe the code in detail, and present the results of a number of standard hydrodynamical tests.
Key words: methods: simulations
1 Introduction
The importance of numerical simulations in modern science is constantly growing, because of the complexity, the multi-scaling properties, and the non-linearity of many physical phenomena. When analytical predictions are not possible, we are forced to compute the evolution of physical systems numerically. Typical examples in an astrophysical context are the problems of structure and galaxy formation and evolution. Over the past two decades, a number of issues concerning the formation of cosmic systems and their evolution along the history of the Universe were clarified, thanks to highly sophisticated cosmological and galaxy-sized numerical simulations. However, an equivalent number of new questions have been raised, so that a complete understanding of how galaxies and clusters formed and evolved along the Hubble time is still out of our reach. This is especially true in the light of many recent observations that often appear at odds with well established theories (see for instance the long debated question of the monolithic versus hierarchical mechanism of galaxy formation and their many variants) and to require new theoretical scenarios able to match the observational data.
For this purpose ever more accurate and detailed numerical simulations are still needed. They are indeed the best tool at our disposal to investigate such complex phenomena as the formation and evolution of galaxies within a consistent cosmological context. A number of astrophysical codes for galaxy-sized and cosmological simulations are freely and publicly available. They display a huge range of options and functions; each of them is best suited to a set of particular problems and experiments, and may suffer from one or more drawbacks. Among the best known, we recall FLASH (Fryxell et al. 2000), GADGET (Springel et al. 2001) and its second release GADGET2 (Springel 2005), GASOLINE (Wadsley et al. 2004), ENZO (O'Shea et al. 2004), HYDRA (Couchman et al. 1995), and VINE (Wetzstein et al. 2008).
EVOL is the new release of the Padova N-body code (PD-TSPH, Carraro et al. 1998; Lia et al. 2002; Merlin & Chiosi 2007). It is a flexible, fully Lagrangian, parallel, and self-adaptive N-body code, written in Fortran95 in a straightforward and user-friendly format.
EVOL describes the dynamical evolution of a system of interacting discrete masses (particles) moving under their mutual gravitational forces and/or the gravitational action of external bodies, plus, when appropriate, under mutual hydrodynamical interactions. These particles can represent real discrete bodies or a fluid contained in volume elements of suitable size. A numerical simulation of a physical system follows its temporal evolution using small but finite time-steps, reducing the equations of motion to a finite-difference problem. In the Lagrangian description, no reference grid is superposed on the volume under consideration, while the particles move under their mutual (or external) interactions. Each particle carries a mass, a position, a velocity, plus (when necessary) a list of physical features like density, temperature, chemical composition, etc.
To simulate the dynamical evolution of a region of the Universe, one
has to properly model the main material components, namely
Dark Matter, gas, and stars, representing them with different
species of particles. Moreover, a physically robust model for the
fundamental interactions (in addition to gravity) is required at
the various scales of interest. Finally, a suitable cosmological
framework is necessary, together with a coherent setting of the
boundary conditions and with an efficient algorithm to follow the
temporal evolution of the system. EVOL is designed to
respond to all of these requirements, improving the previous
versions of the Padova Tree-SPH code in many
aspects. In the following sections, the main features of
EVOL are presented in detail, together with a review of
some general considerations about the adopted algorithms whenever
appropriate.
This paper presents the basic features of the code; namely, the Tree-SPH (i.e. oct-tree plus smoothed particle hydrodynamics) formalism, its implementation in the parallel architecture of the code, and the results of a number of classic hydrodynamic tests. The non-standard algorithms (e.g. radiative cooling functions, chemical evolution, star formation, energy feedback) will be presented in a following companion paper (Merlin et al. 2010, in preparation).
The plan of the paper is as follows. In Sect. 2 we describe the aspects of the code related to the gravitational interaction. In Sect. 3 we deal with the hydrodynamical treatment of fluids and its approximation in the SPH language. Section 4 illustrates some aspects related to the integration technique, like the time steps, the periodic boundary conditions, and the parallelisation of the code. Section 5 contains and discusses numerous tests aimed at assessing the performance of the code. Section 6 summarises some concluding remarks. Finally, Appendices A and B contain some technical details related to the N-dimensional spline kernel (Appendix A) and the equations of motion and energy conservation (Appendix B).
2 Description of the code: gravity
Gravity is the leading force behind the formation of cosmic
structures, on many scales of interest, from galaxy clusters down to
individual stars and planets. Classical gravitation is a well
understood interaction. As long as general relativity, particle
physics or exotic treatments like modified dynamics are not
considered, it is described by
Newton's law of universal gravitation,

\vec{F}_{ij} = -G\, m_i m_j\, \frac{\vec{r}_i - \vec{r}_j}{|\vec{r}_i - \vec{r}_j|^3},   (1)

where G is the gravitational constant. Given a continuous density field \rho(\vec{r}), the corresponding gravitational potential \Phi(\vec{r}) obeys the Poisson equation,

\nabla^2 \Phi(\vec{r}) = 4\pi G\, \rho(\vec{r}),   (2)

and the acceleration of a test body is

\frac{{\rm d}\vec{v}}{{\rm d}t} = -\nabla \Phi(\vec{r}).   (3)
The gravitational force exerted on a given body (i.e., particle) by the whole system of bodies within a simulation can be obtained by the vector sum of all the particles' contributions (without considering, for the moment, the action of the infinite region external to the computational volume; this can indeed be taken into account using periodic boundary conditions, see Sect. 4.3). This is simply the straightforward application of Eq. (1). However, in practice this approach is not efficient, and may also lead to artificial divergences, as explained in the following sections.
2.1 Softening of the gravitational force
Close encounters between particles can cause numerical errors,
essentially because of the time and mass resolution limits.
Moreover, when dealing with continuous fluids rather than with single, isolated objects,
one tries to solve Eq. (2) rather than Eq. (1),
and must therefore try to model a smooth density field given a distribution
of particles with mass. In addition, dense clumps of particles may also steal considerable amounts of
computational time. To cope with all these issues, it is common practice to
soften the gravitational force between close pairs of
bodies: if the distance between two particles becomes smaller than a
suitable softening length \epsilon,
the force exerted on
each body is corrected and progressively reduced to zero with
decreasing distance.
Different forms of the force softening function can be used. A possible choice is to compute the pairwise force as if the mass of each particle were spread in space according to the standard cubic spline kernel,

W(r, \epsilon) = \frac{\sigma}{\epsilon^{\nu}} \begin{cases} 1 - \frac{3}{2}\left(\frac{r}{\epsilon}\right)^2 + \frac{3}{4}\left(\frac{r}{\epsilon}\right)^3 & 0 \le r/\epsilon < 1 \\ \frac{1}{4}\left(2 - \frac{r}{\epsilon}\right)^3 & 1 \le r/\epsilon < 2 \\ 0 & r/\epsilon \ge 2, \end{cases}   (4)

where \nu is the number of spatial dimensions and \sigma the corresponding normalisation constant (see Appendix A for the N-dimensional expressions). Note that in Eq. (4) the softening length \epsilon sets the spatial scale of the kernel: the mutual force is progressively reduced for separations smaller than 2\epsilon and remains exactly Newtonian outside this radius.
The softening lengths can be fixed in space and time, or may be allowed to vary with the local conditions. If the softening length is kept constant, the choice of its numerical value is of primary importance: too small a softening length will result in noisy force estimates, while too high a value will systematically bias the force in an unphysical manner
(Romeo 1998; Price & Monaghan 2007; Merritt 1996; Athanassoula et al. 2000, and see also Sect. 5.9). Unfortunately, the
``optimal'' value for the softening length depends on the particular
system being simulated, and is in general not known a priori. A
standard solution is to assign a softening length to each particle
proportional to its mass, keeping it fixed throughout the whole
simulation, or letting it vary
with time or redshift, if a cosmological background is included.
A clear advantage of keeping \epsilon fixed in space is that energy is naturally conserved, but on the other hand the smallest resolvable length scale is fixed at the beginning of the simulation and remains the same over the whole spatial domain, independently of the real evolution of the density field. A collapsing or expanding body may quickly reach a size where the flow is dominated by the softening in one region, while in another the flow may become unphysically point-like. Obviously, the accuracy can be greatly improved if \epsilon is allowed to vary according to the local particle number density (see e.g. Dehnen 2001). Moreover, in principle, if particles in a simulation are used to sample a continuous fluid (whose physics is determined by the Navier-Stokes or the Boltzmann equations), the properties of these points should always change according to the local properties of the fluid they are sampling, to optimise the
self-adaptive nature of Lagrangian methods. On the other hand, if
particles represent discrete objects (single stars, or galaxies in a
cluster, etc.), their softening lengths might perhaps be considered
an intrinsic property of such objects and may be kept constant,
depending on their mass and not on the local density of particles.
In cosmological and galaxy-sized simulations, gas and Dark Matter
particles are point-masses sampling the underlying density field,
and stellar particles represent clusters of thousands of stars prone
to the gravitational action of the nearby distribution of matter;
thus a fully adaptive approach seems to be adequate to describe the
evolution of these fluids. However, it can be easily shown that simply letting \epsilon change freely would result in a poor
conservation of global energy and momentum, even if in some cases
the errors could be small (see e.g. Wetzstein et al. 2008; Price & Monaghan 2007).
To cope with this, EVOL allows for the adaptive evolution
of individual softening lengths, but includes in the equations of motion
of particles additional self-consistent correcting terms.
These terms can be derived (see Price & Monaghan 2007) starting from an analysis of the Lagrangian describing a self-gravitating, softened, collisionless system of N particles, which is given by

L = \sum_j m_j \left[ \frac{1}{2} v_j^2 - \frac{1}{2}\Phi_j \right],   (5)

where \Phi_j is the gravitational potential at the position of particle j,

\Phi_j = -G \sum_k m_k\, \phi(|\vec{r}_j - \vec{r}_k|, \epsilon),   (6)

and \phi(r, \epsilon) is the softened potential kernel, related to the density kernel W of Eq. (4) through Poisson's equation,

\nabla^2 \phi(r, \epsilon) = -4\pi\, W(r, \epsilon),   (7)

where again \epsilon is the softening length, so that \phi \to 1/r and the interaction becomes exactly Newtonian for r \ge 2\epsilon.

The equations of motion are obtained from the Euler-Lagrange equations,

\frac{{\rm d}}{{\rm d}t}\left(\frac{\partial L}{\partial \vec{v}_i}\right) - \frac{\partial L}{\partial \vec{r}_i} = 0,   (8)

which give

m_i \frac{{\rm d}\vec{v}_i}{{\rm d}t} = \frac{\partial L}{\partial \vec{r}_i}.   (9)
The gravitational part of the Lagrangian is

L_{\rm grav} = -\frac{1}{2} \sum_j m_j \Phi_j,   (10)

where for the sake of clarity we write \phi_{jk}(\epsilon_j) \equiv \phi(|\vec{r}_j - \vec{r}_k|, \epsilon_j), with the potential symmetrised in the softening lengths, \Phi_j = -\frac{G}{2}\sum_k m_k \left[\phi_{jk}(\epsilon_j) + \phi_{jk}(\epsilon_k)\right]. Taking the derivative with respect to the coordinates of particle i gives

\frac{\partial L_{\rm grav}}{\partial \vec{r}_i} = -G\, m_i \sum_j m_j\, \frac{\phi'_{ij}(\epsilon_i) + \phi'_{ij}(\epsilon_j)}{2}\,\hat{\vec{r}}_{ij} - \frac{G}{2} \sum_j m_j \sum_k m_k\, \frac{\partial \phi_{jk}(\epsilon_j)}{\partial \epsilon_j}\, \frac{\partial \epsilon_j}{\partial \vec{r}_i},   (11)

where \phi' \equiv -\partial\phi/\partial r is the force kernel and the second addend collects the contributions arising from the spatial variation of the softening lengths. The corresponding acceleration is

\frac{{\rm d}\vec{v}_i}{{\rm d}t} = -G \sum_j m_j\, \frac{\phi'_{ij}(\epsilon_i) + \phi'_{ij}(\epsilon_j)}{2}\,\hat{\vec{r}}_{ij} - \frac{G}{2 m_i} \sum_j m_j \sum_k m_k\, \frac{\partial \phi_{jk}(\epsilon_j)}{\partial \epsilon_j}\, \frac{\partial \epsilon_j}{\partial \vec{r}_i}.   (12)
To obtain the required terms in the second addend on the right-hand side of Eq. (11), the softening length must be related to the particle coordinates. This can be done through the kernel interpolation rule, which estimates any field A(\vec r) as

A(\vec r) = \int A(\vec r')\, W(|\vec r - \vec r'|, \epsilon)\, {\rm d}\vec r',   (13)

where W is the density kernel of Eq. (4). This is the same rule as at the basis of the SPH scheme (see Sect. 3). Approximating the integral by a summation over particles for practical purposes, one obtains

A_i \approx \sum_j \frac{m_j}{\rho_j}\, A_j\, W(|\vec r_i - \vec r_j|, \epsilon_i),   (14)

where i marks the ``active'' particle that lies at the position at which the function is evaluated, and j are its neighbouring particles (or simply neighbours). The error made in doing so depends on the degree of disorder of the particles and is in general O(h^2) (Monaghan 1992). Thus

n_i = \sum_j W(|\vec r_i - \vec r_j|, \epsilon_i),   (15)

where A = n; so there is no need to know the density of neighbouring particles in advance to compute the density of the active one.
To link \epsilon and n, a safe prescription is to use

\epsilon_i = \eta \left(\frac{1}{n_i}\right)^{1/\nu},   (16)

where \eta is a dimensionless parameter which fixes the average number of neighbours contained within the softening sphere of radius 2\epsilon_i,

N_{\rm nei} \simeq \frac{32\pi}{3}\, \eta^3   (17)

in three dimensions, with N_{\rm nei} of the order of a few tens.

The term needed in Eq. (11) is

\frac{\partial \epsilon_i}{\partial \vec r_i} = \frac{\partial \epsilon_i}{\partial n_i}\, \frac{\partial n_i}{\partial \vec r_i}.   (18)

The first factor is given by the derivative of Eq. (16),

\frac{\partial \epsilon_i}{\partial n_i} = -\frac{\epsilon_i}{\nu\, n_i},   (19)

whereas the second factor is the spatial derivative of Eq. (15), which results in

\frac{\partial n_i}{\partial \vec r_i} = \frac{1}{\Omega_i} \sum_j \frac{\partial W_{ij}(\epsilon_i)}{\partial \vec r_i},   (20)

where

\Omega_i = 1 - \frac{\partial \epsilon_i}{\partial n_i} \sum_j \frac{\partial W_{ij}(\epsilon_i)}{\partial \epsilon_i}.   (21)
Rearranging and putting

\zeta_i \equiv \frac{\partial \epsilon_i}{\partial n_i} \sum_j m_j\, \frac{\partial \phi_{ij}(\epsilon_i)}{\partial \epsilon_i},

where \phi_{ij}(\epsilon_i) \equiv \phi(|\vec r_i - \vec r_j|, \epsilon_i), the acceleration of particle i finally becomes

\frac{{\rm d}\vec v_i}{{\rm d}t} = -G \sum_j m_j\, \frac{\phi'_{ij}(\epsilon_i) + \phi'_{ij}(\epsilon_j)}{2}\,\hat{\vec r}_{ij} - \frac{G}{2} \sum_j m_j \left[ \frac{\zeta_i}{\Omega_i}\, \frac{\partial W_{ij}(\epsilon_i)}{\partial \vec r_i} + \frac{\zeta_j}{\Omega_j}\, \frac{\partial W_{ij}(\epsilon_j)}{\partial \vec r_i} \right],   (22)

where, as usual, W_{ij}(\epsilon_i) \equiv W(|\vec r_i - \vec r_j|, \epsilon_i), \hat{\vec r}_{ij} \equiv (\vec r_i - \vec r_j)/|\vec r_i - \vec r_j|, and \Omega_i is given by Eq. (21).

In Eq. (22) (and Eq. (12)), the first term is the classical gravitational interaction, whereas the second term is a new, extra-term which restores energy conservation for varying softening lengths. Since \zeta is defined as a negative quantity for positive kernels, this term acts as a negative pressure, increasing the gravitational force. To obtain the \zeta and \Omega correcting terms, each particle i must perform a loop over the other particles, summing the contributions from those that satisfy the criterion |\vec r_i - \vec r_j| < 2\epsilon_i.
The relation between \epsilon and n defined by Eq. (16) leads to a non-linear equation which can be solved self-consistently for each particle. Price & Monaghan (2007) proposed an iterative method in which, beginning from a starting guess for both n and \epsilon, the solution of the equation

f(\epsilon_i) \equiv n_i^{\rm sum}(\epsilon_i) - n_i(\epsilon_i) = 0,   (24)

where n_i^{\rm sum}(\epsilon_i) is the summation of Eq. (15) and n_i(\epsilon_i) = (\eta/\epsilon_i)^{\nu} follows from Eq. (16), is sought by means of Newton-Raphson iterations,

\epsilon_i^{\rm new} = \epsilon_i - \frac{f(\epsilon_i)}{f'(\epsilon_i)},   (25)

with

f'(\epsilon_i) = \frac{\nu\, n_i}{\epsilon_i}\, \Omega_i,   (26)

the iteration being stopped when the relative change of \epsilon_i falls below a small prescribed tolerance.
To increase the efficiency of this process, a predicted value of both n and \epsilon at the new time-step can be used as the starting guess of the iteration. This formulation can be obtained taking the time derivative of Eq. (15), which yields

\frac{{\rm d}n_i}{{\rm d}t} = \frac{1}{\Omega_i} \sum_j \vec v_{ij} \cdot \frac{\partial W_{ij}(\epsilon_i)}{\partial \vec r_i},   (27)

while the corresponding variation of the softening length is

\frac{{\rm d}\epsilon_i}{{\rm d}t} = \frac{\partial \epsilon_i}{\partial n_i}\, \frac{{\rm d}n_i}{{\rm d}t} = -\frac{\epsilon_i}{\nu\, n_i}\, \frac{{\rm d}n_i}{{\rm d}t},   (28)

so that the predicted values at the new time-step \Delta t are simply

n_i^{\rm pred} = n_i + \frac{{\rm d}n_i}{{\rm d}t}\,\Delta t, \qquad \epsilon_i^{\rm pred} = \epsilon_i + \frac{{\rm d}\epsilon_i}{{\rm d}t}\,\Delta t.   (29)

Combining Eqs. (28) with (27), one can also see that the velocity divergence at the particle i location is given by

\nabla \cdot \vec v_i = -\frac{1}{n_i\, \Omega_i} \sum_j \vec v_{ij} \cdot \frac{\partial W_{ij}(\epsilon_i)}{\partial \vec r_i},   (30)

where \vec v_{ij} \equiv \vec v_i - \vec v_j.

The adoption of this adaptive softening length formalism with the correcting extra-terms in the equation of motion results in small errors, always comparable in magnitude to those obtained with the ``optimal'' choice of a fixed softening length.
At the beginning of a simulation, the user can select whether he/she prefers to adopt the constant or the adaptive softening lengths formalism, by switching a suitable flag on or off.
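As an illustration of the iterative procedure of Eqs. (15), (16) and (24)-(26), the following Python sketch solves for the softening length of a single particle by Newton-Raphson, given the distances of its neighbours. It assumes the 3-D cubic spline kernel and a brute-force neighbour list; the function names, the tolerance and the value of \eta are illustrative choices, not those of the Fortran95 EVOL implementation.

```python
import numpy as np

def w_and_dwdeps(r, eps):
    """Cubic spline kernel W(r, eps) and its derivative dW/d(eps), 3-D (sigma = 1/pi)."""
    q = r / eps
    sigma = 1.0 / np.pi
    f = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    fp = np.where(q < 1.0, -3.0*q + 2.25*q**2,
         np.where(q < 2.0, -0.75*(2.0 - q)**2, 0.0))
    w = sigma / eps**3 * f
    dwdeps = -sigma / eps**4 * (3.0*f + q*fp)
    return w, dwdeps

def solve_softening(r_neigh, eps0, eta=1.2, tol=1e-4, itmax=50):
    """Newton-Raphson solution of f(eps) = sum_j W(r_ij, eps) - (eta/eps)**3 = 0,
    i.e. Eqs. (15)-(16) with nu = 3 (illustrative sketch, brute-force neighbours)."""
    eps = eps0
    for _ in range(itmax):
        w, dwdeps = w_and_dwdeps(r_neigh, eps)
        f = w.sum() - (eta / eps)**3
        fprime = dwdeps.sum() + 3.0 * (eta / eps)**3 / eps
        eps_new = eps - f / fprime
        if abs(eps_new - eps) < tol * eps:
            return eps_new
        eps = eps_new
    return eps

# Example: neighbour distances drawn from a uniform random cloud
rng = np.random.default_rng(1)
r_neigh = np.linalg.norm(rng.uniform(-0.5, 0.5, size=(500, 3)), axis=1)
print(solve_softening(r_neigh, eps0=0.1))
```

The same routine, with h in place of \epsilon, solves the analogous system formed by Eqs. (40) and (41) for SPH particles (Sect. 3).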
2.2 Hierarchical oct-tree structure
A direct summation of the contribution of all particles should in
principle be performed for each particle at each time-step to
correctly compute the gravitational interaction, which would lead to a
computational cost increasing as N^2 (N being the number of
particles). A convenient alternative to this unpractical approach
are the so-called tree structures, in which particles are
arranged in a hierarchy of groups or ``cells''. In this way, when the
force on a particle is computed, the contribution by distant groups
of bodies can be approximated by their lowest multipole moments,
reducing the computational cost of evaluating the total force to O(N \log N).
In the classical Barnes & Hut (1986) scheme,
the computational spatial domain is hierarchically partitioned into
a sequence of cubes, where each cube contains eight siblings, each
with half the side-length of the parent cube. These cubes form the
nodes of an oct-tree structure. The tree is constructed in a way that
each node (cube) contains either a desired number of particles
(usually one, or one per particles type - Dark Matter, gas, stars), or is
the progenitor of further nodes, in which case the node carries the
monopole and quadrupole moments of all the particles that lie inside
the cube. The force computation then proceeds by walking along the
tree and summing up the appropriate force contributions from tree
nodes. In the standard tree walk, the global gravitational force
exerted by a node of linear size l on a particle i is considered only if

l/r < \theta,

where r is the distance of the node from the active particle and \theta is an accuracy parameter (usually \theta \lesssim 1): if a node fulfils the criterion, the tree walk
along this branch can be terminated, otherwise it is ``opened'', and
the walk is continued with all its siblings. The contribution from
individual particles is thus considered only when they are
sufficiently close to the particle i.
To cope with some problems pointed out by Salmon & Warren (1994) about
the worst-case behaviour of this standard criterion for commonly
employed opening angles, Dubinski (1996) introduced the
simple modification: a node is accepted only if

r > \frac{l}{\theta} + \delta,

where \delta is the distance between the geometrical centre of the node and its centre of mass; this protects against pathological cases in which the centre of mass lies close to the edge of its cell.
In practice, this whole scheme only includes the monopole moment of the force exerted by distant groups of particles. Higher orders can be included, and the common practice is to include the quadrupole moment corrections (EVOL indeed includes them). Still higher multipole orders would result in a worst performance without significant gains in computational accuracy (McMillan & Aarseth 1993).
Note that if one has the total mass, the centre of mass and the weighted average softening length of a node of the oct-tree structure, the softened expression of the gravitational force can be straightforwardly computed treating the cell as a single particle if the opening criterion is fulfilled but the cell is still sufficiently near for the force to need to be softened.
As first suggested by Hernquist & Katz (1989), the tree structure can be also used to obtain the individual lists of neighbours for each body. At each time step each (active) particle can build its list of interacting neighbouring particles while walking the tree and opening only sufficiently nearby cells, until nearby bodies are reached and linked.
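To illustrate how the opening criterion drives the force computation, here is a schematic Python tree walk that accumulates monopole accelerations from accepted nodes and opens the others; the Node class, the purely Newtonian (unsoftened) leaf interaction and the default \theta are illustrative assumptions, not the EVOL routines.

```python
import numpy as np

class Node:
    """Minimal oct-tree node: either a leaf holding one particle, or an
    internal cube of side `size` with total mass, centre of mass and children."""
    def __init__(self, size, mass, com, children=(), particle=None):
        self.size = size          # linear size l of the cube
        self.mass = mass          # total mass inside the node
        self.com = np.asarray(com, dtype=float)   # centre of mass
        self.children = children  # sub-nodes (empty for leaves)
        self.particle = particle  # particle index if this node is a leaf

def accel_from_tree(node, pos, theta=0.7, G=1.0):
    """Monopole acceleration on a particle at `pos`; a node is opened when l/r >= theta."""
    d = node.com - pos
    r = np.linalg.norm(d)
    if r == 0.0:                  # the node contains the target particle itself
        return sum((accel_from_tree(c, pos, theta, G) for c in node.children),
                   np.zeros(3))
    if node.particle is not None or node.size / r < theta:
        return G * node.mass * d / r**3       # accept: leaf, or node far enough
    # otherwise open the node and continue the walk with its children
    return sum((accel_from_tree(c, pos, theta, G) for c in node.children),
               np.zeros(3))
```

The same recursive descent, with the acceptance test replaced by a distance check against the search radius, can be used to gather the neighbour list mentioned above.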
3 Description of the code: hydrodynamics
Astrophysical gaseous plasmas can generally be reasonably approximated as highly compressible, inviscid fluids, in which, however, a fundamental role is played by violent phenomena like strong shocks, high-energy point-like explosions and/or supersonic turbulent motions. EVOL follows the basic gas physics by means of the SPH method (Monaghan 1992; Lucy 1977), in a modern formulation based on the review by Rosswog (2009), to which the reader is referred for details. Nevertheless, some different features are present in our implementation, and we summarise them below, along with a short review of the whole SPH algorithm.
To model fluid hydrodynamics by means of a discrete description of a continuous fluid, in the SPH approach the properties of single particles are smoothed in real space through the kernel function W and are thus weighted by the contributions of neighbouring particles. In this way, the physical properties of each point in real space can be obtained by the summation over particles of their individual, discrete properties. Note that, contrary to what happens when softening the gravitational force (which is gradually reduced to zero within the softening sphere), in this case the smoothing sphere is the only ``active'' region, and only particles inside this region contribute to the computation of the local properties of the central particle.
Starting again from the interpolation rule described in Sect. 2.1, but replacing the softening length \epsilon by a suitable smoothing length h, the integral interpolating the function A(\vec r) becomes

A(\vec r) = \int A(\vec r')\, W(|\vec r - \vec r'|, h)\, {\rm d}\vec r'   (31)

(the kernel function W can be the density kernel of Eq. (4)). The relative discrete approximation becomes

A_i \approx \sum_j \frac{m_j}{\rho_j}\, A_j\, W(|\vec r_i - \vec r_j|, h_i),   (32)

and the physical density at any point can be computed setting A = \rho, i.e.

\rho(\vec r) = \sum_j m_j\, W(|\vec r - \vec r_j|, h),   (33)

which, evaluated at the particle positions, gives

\rho_i = \sum_j m_j\, W(|\vec r_i - \vec r_j|, h_i)   (34)

(here and throughout this section, it is implied that all summations and computations are extended over SPH particles only).
A differentiable interpolating function can be constructed from its values at the interpolation points, i.e. the particles,

\nabla A_i \approx \sum_j \frac{m_j}{\rho_j}\, A_j\, \nabla_i W_{ij}(h_i).   (35)

In any case better accuracy is found rewriting the formulae with the density inside the operators, using the general rule

\nabla A = \frac{1}{\rho}\left[\nabla(\rho A) - A\,\nabla\rho\right]   (36)

(e.g., Monaghan 1992). Thus the divergence of the velocity is customarily estimated by means of

(\nabla \cdot \vec v)_i = \frac{1}{\rho_i} \sum_j m_j\, \vec v_{ji} \cdot \nabla_i W_{ij}(h_i),   (37)
where \vec v_{ji} \equiv \vec v_j - \vec v_i and \nabla_i W_{ij} is the gradient of the kernel with respect to the coordinates of particle i. Combined with the exact relation {\rm d}\rho/{\rm d}t = -\rho\,\nabla\cdot\vec v, this gives the SPH form of the continuity equation,

\frac{{\rm d}\rho_i}{{\rm d}t} = \sum_j m_j\, \vec v_{ij} \cdot \nabla_i W_{ij}(h_i),   (38)

with \vec v_{ij} \equiv \vec v_i - \vec v_j.
The dynamical evolution of a continuous fluid is governed by the well known laws of conservation: the continuity equation which ensures conservation of mass, the Euler equation which represents the conservation of momentum, and the equation of energy conservation (plus a suitable equation of state to close the system). These equations must be written in discretised form to be used with the SPH formalism, and can be obtained by means of the Lagrangian analysis, following the same strategy used to obtain the gravitational accelerations.
The continuity equation is generally replaced by Eq. (34). Like in the case of the gravitational softening length \epsilon, the smoothing length h may in principle be a constant parameter, but the efficiency and accuracy of the SPH method is greatly improved by adapting the resolution lengths to the local density of particles. A self-consistent method is to adopt the same algorithm described in Sect. 2.1, i.e. obtaining h from

h_i = \eta \left(\frac{1}{n_i}\right)^{1/\nu},   (39)

where of course n is the local number density of SPH particles only, and relating h to n by requiring that a fixed (kernel-weighted) number of particles N_{\rm SPH} is contained within a smoothing sphere, i.e.

\frac{4}{3}\pi\, (2h_i)^3\, n_i = N_{\rm SPH}   (40)

in three dimensions. Note that in this case, while still obtaining the mass density by summing Eq. (34), one can compute the number density of particles at particle i's location, i.e.

n_i = \sum_j W(|\vec r_i - \vec r_j|, h_i),   (41)

which is clearly formally equivalent to Eq. (15).
It is worth recalling here that in the first versions of SPH with
a spatially-varying resolution, the spatial variation of the smoothing
length was generally not considered in the the equations of motion,
and this resulted in secular errors in the conservation of entropy.
The inclusion of extra correcting terms (the so-called \nabla h
terms) is therefore important to ensure the
conservation of both energy and entropy. For example,
Serna et al. (1996) studied the pressure-driven expansion of a gaseous
sphere, finding that while the energy conservation is generally good
even without the inclusion of the corrections (and sometimes gets
slightly worse if these are included!), non-negligible errors can
be found in the entropy conservation, while this does not occur
with the inclusion of the extra-terms. Although the effects of
entropy violation in SPH codes are not completely clear and need to
be analysed in much more detail, especially in simulations where
galaxies are formed in a cosmological framework, there is much
evidence that the problem must be taken into consideration.
Alimi et al. (2003) have analysed this issue for the
collapse of isolated objects and have found that if correcting
terms are neglected the density peaks associated with central cores
or shock fronts can be significantly overestimated.
These \nabla h terms were first introduced explicitly
(Nelson & Papaloizou 1994), with a double-summation added to the canonical
SPH equations. Later an implicit formulation was obtained
(Springel & Hernquist 2002; Monaghan 2002), starting from the Lagrangian
derivation of the equations of motion and self-consistently
obtaining correcting terms which account for the variation of h.
Obviously these terms are formally similar to those obtained for the
locally varying gravitational softening lengths (Sect. 2.1).
Following Monaghan (2002), the Lagrangian for non-relativistic
fluid dynamics can be written as
L = \int \rho \left( \frac{1}{2} v^2 - u \right) {\rm d}V   (42)

(the gravitational part is not included here since SPH does not account for gravity), which in the SPH formalism becomes

L = \sum_j m_j \left( \frac{1}{2} v_j^2 - u_j \right),   (43)

where u is the specific internal energy.
As already done in Eq. (8), the equations of motion of a particle can be obtained from the Euler-Lagrange equations, giving

m_i \frac{{\rm d}\vec v_i}{{\rm d}t} = -\sum_j m_j \left.\frac{\partial u_j}{\partial \rho_j}\right|_s \frac{\partial \rho_j}{\partial \vec r_i}.   (44)

To obtain the term \partial\rho_j/\partial\vec r_i, the spatial derivative of the density summation Eq. (34) must be taken, allowing for the spatial variation of the smoothing lengths; this gives

\frac{\partial \rho_j}{\partial \vec r_i} = \frac{1}{\Omega_j} \sum_k m_k\, \nabla_i W_{jk}(h_j)\, (\delta_{ji} - \delta_{ki}),   (45)

where \delta is the Kronecker symbol and

\Omega_j = 1 - \frac{\partial h_j}{\partial \rho_j} \sum_k m_k \frac{\partial W_{jk}(h_j)}{\partial h_j}   (46)

is an extra-term that accounts for the dependence of the kernel function on the smoothing length. Note that \Omega_j reduces to unity when the smoothing lengths are kept fixed in space and time.
In the case of particles with different masses, the term \partial\rho_j/\partial\vec r_i is

\frac{\partial \rho_j}{\partial \vec r_i} = \frac{1}{\Omega_j} \sum_k m_k\, \nabla_i W_{jk}(h_j)\, (\delta_{ji} - \delta_{ki}),   (47)

where

\Omega_j = 1 - \frac{\partial h_j}{\partial n_j} \sum_k \frac{\partial W_{jk}(h_j)}{\partial h_j}   (48)

and

\frac{\partial h_j}{\partial n_j} = -\frac{h_j}{\nu\, n_j}   (49)

(Price 2009, private communication). These terms can be easily computed for each particle while looping over neighbours to obtain the density from Eq. (34).
The term \partial u_j/\partial \rho_j in Eq. (44) can be derived from the first law of thermodynamics,

{\rm d}U = {\rm d}Q - P\,{\rm d}V,

dropping the dQ term since only adiabatic processes are considered here. Rewriting it in terms of specific quantities, the volume V becomes the volume ``per unit mass'', i.e. V = 1/\rho, and {\rm d}V = -{\rm d}\rho/\rho^2. Thus, if no dissipation is present so that the specific entropy is constant,

{\rm d}u = \frac{P}{\rho^2}\,{\rm d}\rho,   (50)

and

\left.\frac{\partial u_j}{\partial \rho_j}\right|_s = \frac{P_j}{\rho_j^2}.   (51)

Inserting Eqs. (51) and (47) into Eq. (44), the equations of motion finally become

\frac{{\rm d}\vec v_i}{{\rm d}t} = -\sum_j m_j \left[ \frac{P_i}{\Omega_i \rho_i^2}\, \nabla_i W_{ij}(h_i) + \frac{P_j}{\Omega_j \rho_j^2}\, \nabla_i W_{ij}(h_j) \right].   (52)

The corresponding equation for the thermal energy is

\frac{{\rm d}u_i}{{\rm d}t} = \frac{P_i}{\Omega_i \rho_i^2} \sum_j m_j\, \vec v_{ij} \cdot \nabla_i W_{ij}(h_i).   (53)

It can be shown that if all SPH particles have the same mass, Eqs. (52) and (53) reduce to the standard Eqs. (2.10) and (2.23) of Monaghan (2002).
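A direct, brute-force evaluation of Eqs. (52) and (53) may help to fix ideas; the Python sketch below loops over all particle pairs, assuming the densities, pressures and \Omega factors have already been computed (dissipative terms are omitted). It is only an illustration of the structure of the equations, not the EVOL implementation.

```python
import numpy as np

def grad_w(dr, h):
    """Gradient of the 3-D cubic spline kernel with respect to r_i (dr = r_i - r_j)."""
    r = np.linalg.norm(dr)
    if r == 0.0 or r >= 2.0 * h:
        return np.zeros(3)
    q = r / h
    sigma = 1.0 / np.pi
    dfdq = -3.0*q + 2.25*q**2 if q < 1.0 else -0.75*(2.0 - q)**2
    return sigma / h**4 * dfdq * dr / r

def sph_rates(pos, vel, mass, rho, P, h, omega):
    """Accelerations (Eq. 52) and du/dt (Eq. 53) by brute-force pair summation."""
    n = len(mass)
    acc = np.zeros((n, 3))
    dudt = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dwi = grad_w(pos[i] - pos[j], h[i])   # kernel gradient computed with h_i
            dwj = grad_w(pos[i] - pos[j], h[j])   # kernel gradient computed with h_j
            acc[i] -= mass[j] * (P[i] / (omega[i] * rho[i]**2) * dwi +
                                 P[j] / (omega[j] * rho[j]**2) * dwj)
            dudt[i] += P[i] / (omega[i] * rho[i]**2) * mass[j] * np.dot(vel[i] - vel[j], dwi)
    return acc, dudt
```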
Equations (41) (density summation) and (40) form a non-linear system to be solved for h and n, adopting the method described in Sect. 2.1.
A drawback of this scheme can be encountered when a sink
of internal energy such as radiative cooling is included. In this
case, very dense clumps of cold particles can form and remain stable
for long periods of time. Since the contribution of neighbouring
particles to the density summation (Eq. (41)) is weighted by
their distance from the active particle, if this clump is in the
outskirts of the smoothing region, each body will give a very
small contribution to the summation, and this will result in an
unacceptably long neighbour list. For these cases the adoption of a
less accurate but faster method is appropriate. The easiest way to
select h is to require that a constant number of neighbouring
particles is contained within a sphere of radius 2h (perhaps
allowing for small deviations). This type of procedure was
generally adopted in the first SPH codes, and provided sufficiently
accurate results. If the scheme in question is adopted
(EVOL may choose between the two algorithms), the \nabla h terms
are neglected, because the relation Eq. (40) is no longer
strictly valid. Below we will refer to this scheme as to
the standard SPH scheme, and to the previous one based on the
n-h (or \epsilon-n) relation as to the Lagrangian SPH
scheme.
A test to check the ability of the different algorithms to capture
the correct description of an (ideally) homogeneous density field
has been performed with particles of different masses mixed
together in the domain
0 < x < 1, 0 < y < 0.1, 0 < z < 0.1, so as to obtain two different physical densities, one for x < 0.5 and one for x \ge 0.5. For this purpose, a first set of particles was placed on a regular grid, with masses chosen so as to give the required density on each side of the discontinuity. Then a second set of particles, again with different masses on the two sides, was placed on another superposed regular grid, shifted along all three dimensions with respect to the previous one by a fixed fraction of the grid spacing.
Four different schemes were then used to compute the density of particles:
- standard SPH, mass density summation;
- standard SPH, number density summation;
- Lagrangian SPH, mass density summation;
- Lagrangian SPH, number density summation;
Figure 1: Determination of the initial density adopting four different SPH algorithms. Left to right, top to bottom: standard SPH, mass density summation; Lagrangian SPH, mass density summation; standard SPH, number density summation; Lagrangian SPH, number density summation. Blue dots: high mass particles; red crosses: low mass particles.
3.1 Symmetrisation
To ensure the conservation of energy and momenta, a proper symmetrisation of the equations of motion is required. It is worth noticing that while in the first formulations of SPH such symmetrisation had to be imposed ``ad hoc'', in the Lagrangian derivation it is naturally built-in without any further assumption.
However, the symmetrisation is strictly necessary only when pairwise forces act on couples of particles. Thus it is not needed when obtaining the density from Eqs. (41) and (34), or when the internal energy of a particle is evolved according to Eq. (53). Indeed, symmetrisation of the energy equation may lead to unphysical negative temperatures, for example when cold particles are pushed by a strong source of pressure: in this case, the mechanical expansion leads to a loss of internal energy which, if equally subdivided between the cold and hot phase, may exceed the small energy budget of the cold bodies (see e.g. Springel & Hernquist 2002).
Since symmetrisation is not needed in the density summation
algorithm, the smoothing length of neighbouring particles is not
required to compute the density of a particle i. This allows us an
exact calculation by simply gathering the positions and masses of
neighbours, which are known at the beginning of the time-step. On
the other hand, after the density loop each particle has its new
smoothing length h, and it is now possible to use these values in
the acceleration calculations, where symmetrisation is needed.
However, while in the previous case only particles within 2h_i were
considered neighbours of the particle i, in this latter case the
symmetrisation scheme requires a particle j to be considered
a neighbour of particle i if |\vec r_i - \vec r_j| < \max(2h_i, 2h_j).
Note that with this scheme two routines of neighbour
searching are necessary: the first one in the density summation, and
the second in the force calculation. In practice, when the
evaluation of density is being performed, a gather view of
SPH is adopted (see Hernquist & Katz 1989); thus, each particle i searches its neighbours using only its own smoothing length h_i,
until the convergence process is completed. In this way, if a
particle finds too high or too low a number of neighbours (and
therefore must change its smoothing length to repeat the search) it
is not necessary to inform the other particles (including those
belonging to other processors) about the change that is being made.
This would in principle be necessary if a standard
gather/scatter mode is adopted. Thus each particle
evaluates its density independently, adjusting its smoothing length
on its own, and no further communication among processors is necessary
after the initial gathering of particles positions.
Moreover, the search is initially performed with an effective
searching radius which is slightly larger than the correct value
2h_i. In this way, the neighbour list will include slightly more
particles than effectively needed. However, if at the end of the
loop the smoothing length has to be increased by a small factor, it
is not necessary to repeat the tree-walk to reconstruct the
neighbour list. Of course, if the adaptive softening length scheme
is adopted, this tree-walk is used to update h and n for SPH particles only, and \epsilon and n for all particles.
After evaluating the forces (hydrodynamical forces and, if the adaptive
softening length scheme is adopted, gravitational forces too),
a new search for neighbours must be made, in which a
particle is considered neighbour to another one if the distance
between them is less than the maximum between the two searching
radii. During this second tree-walk, the gravitational force is
evaluated together with the construction of the particle's neighbour
list. Finally, correcting terms are computed for the hydrodynamic
equations of motion (\nabla h terms) and, if necessary, for
gravitational interactions (if softening lengths are adaptive),
summing the neighbouring particles' contributions.
3.2 Smoothing and softening lengths for SPH particles
SPH particles under self-gravity have two parameters characterising
their physical properties: the softening length \epsilon,
which tunes
their gravitational interactions with nearby particles, and the
smoothing length h, used to smooth their hydrodynamical features.
In principle these two quantities need not to be related
to one another, because they refer to two distinct actions, and have
somehow opposite functions as noted above. However, Bate & Burkert (1997)
claimed that a quasi-stable clump of gas could become un-physically stable against collapse if \epsilon > h, because in this case pressure forces would dominate over the strongly softened gravity; on the other hand, if \epsilon < h, collapse on scales smaller than the formal resolution of SPH may be induced, causing artificial fragmentation. They recommend that gravitational and hydrodynamical forces should be resolved at the same scale length, imposing \epsilon = h (requiring both of them to be fixed, or \epsilon to be adaptive like h, with the introduction of the correcting terms
described above). In other studies (e.g. Thacker & Couchman 2000) the
softening length is fixed, but the smoothing length is allowed to
vary, imposing a lower limit close to the value of the softening
length (this is a typical assumption in SPH simulations of galaxy
formation).
Be that as it may, Williams et al. (2004) have studied the effects of this
procedure, finding that it can cause a number of problems in many hydrodynamical studies. The adoption of high values of h can
result in over-smoothed density fields, i.e. a decrease in the
computed values of the density
and of the density gradients
,
which in turn un-physically decreases the computed pressure
accelerations in the Euler equation. Problems with angular momentum
transfer may also arise. Therefore they suggest to allow h to
freely vary. This eventually yields improved accuracy and also,
contrary to expectations, a significant saving in computational time
(because the number of neighbours is kept fixed instead of steeply
increasing as it may happen in dense environments with fixed h).
Moreover, they pointed out that in contrast with the claims by
Bate & Burkert (1997), the smoothing length should not be kept equal to
the softening length, because in many physical situations (shocks
for example) hydrodynamical forces dominate over gravity and likely
need to be properly resolved on size scales smaller than the
softening length.
Finally, a numerical problem arises when one tries to keep \epsilon = h
in cosmological simulations.
makes it impossible to keep the number of both hydrodynamical and gravitational neighbours constant. For example, a
collapsed Dark Matter halo could retain few gaseous particles
(because of shocks and/or stellar feedback), and these will thus
have many gravitational neighbours, but few hydrodynamic neighbours
with a unique softening/smoothing length. The opposite problem could
arise for very dense clumps of cold gas that may form
behind shocks without a corresponding concentration of Dark Matter.
Consequently both \epsilon and h are left free to vary independently in EVOL. Future tests may be done trying to keep the two parameters linked, for example allowing for some variation in both their ratio \epsilon/h (imposing it to be
around, but not exactly, 1) and in the number of
hydrodynamical and gravitational neighbours.
3.3 Discontinuities
3.3.1 Artificial viscosity
The SPH formalism in its straight form is not able to handle discontinuities. By construction, SPH smooths out local properties on a spatial scale of the order of the smoothing length h. Still, strong shocks are of primary importance in a number of astrophysical systems. Accordingly, to model a real discontinuity in the density and/or pressure field ad hoc dissipational, pressure-like terms must be added to the perfect fluid equations to spread the discontinuities over the numerically resolvable length. This is usually achieved by introducing an artificial viscosity, a numerical artifact that is not meant to mimic physical viscosity, but to reproduce on a large scale the action of smaller, unresolved scale physics with a formulation consistent with the Navier-Stokes terms.
Among the many formulations that have been proposed, the most commonly used is obtained adding a term \Pi_{ij} to Eqs. (52) and (53), so that

\frac{{\rm d}\vec v_i}{{\rm d}t} = -\sum_j m_j \left[ \frac{P_i}{\Omega_i \rho_i^2}\,\nabla_i W_{ij}(h_i) + \frac{P_j}{\Omega_j \rho_j^2}\,\nabla_i W_{ij}(h_j) + \Pi_{ij}\,\bar\nabla_i W_{ij} \right]   (56)

and

\frac{{\rm d}u_i}{{\rm d}t} = \frac{P_i}{\Omega_i \rho_i^2} \sum_j m_j\, \vec v_{ij}\cdot\nabla_i W_{ij}(h_i) + \frac{1}{2}\sum_j m_j\, \Pi_{ij}\, \vec v_{ij}\cdot\bar\nabla_i W_{ij}.   (57)

Here \bar\nabla_i W_{ij} \equiv \frac{1}{2}\left[\nabla_i W_{ij}(h_i) + \nabla_i W_{ij}(h_j)\right], and \Pi_{ij} is the artificial viscosity term, which in its standard form reads

\Pi_{ij} = \begin{cases} \dfrac{-\alpha\,\bar c_{ij}\,\mu_{ij} + \beta\,\mu_{ij}^2}{\bar\rho_{ij}} & {\rm if}\ \vec v_{ij}\cdot\vec r_{ij} < 0 \\ 0 & {\rm otherwise,} \end{cases}   (58)

and

\mu_{ij} = \frac{h_{ij}\, \vec v_{ij}\cdot\vec r_{ij}}{|\vec r_{ij}|^2 + 0.01\, h_{ij}^2},   (59)

\bar c_{ij} = \frac{c_i + c_j}{2}, \qquad \bar\rho_{ij} = \frac{\rho_i + \rho_j}{2}, \qquad h_{ij} = \frac{h_i + h_j}{2}   (60)

(Monaghan 1988), where c is the sound speed and \alpha and \beta are dimensionless parameters regulating the strength of the dissipation. The small term in the denominator of Eq. (59) prevents numerical divergences for very close pairs, while the quadratic (\beta) term provides the extra dissipation needed to handle strong shocks and to prevent particle interpenetration.
A more recent formulation is the so-called ``signal velocity''
viscosity (Monaghan 1997), which works better for small pair
separations between particles, being a first-order expression of h/r; it reads

\Pi_{ij} = -\frac{\alpha}{2}\, \frac{v^{\rm sig}_{ij}\, w_{ij}}{\bar\rho_{ij}},   (61)

with the signal velocity defined as

v^{\rm sig}_{ij} = c_i + c_j - 3\, w_{ij},   (62)

where w_{ij} = \vec v_{ij}\cdot\hat{\vec r}_{ij} if \vec v_{ij}\cdot\vec r_{ij} < 0, and w_{ij} = 0 otherwise.
One or the other of the two formulations above can be used in EVOL.
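For a single pair of particles, the signal-velocity viscosity of Eqs. (61)-(62) can be sketched as follows (Python, illustrative only; \alpha is taken as constant here).

```python
import numpy as np

def pi_visc_signal(v_i, v_j, r_i, r_j, c_i, c_j, rho_i, rho_j, alpha=1.0):
    """Signal-velocity artificial viscosity (Eqs. 61-62): non-zero only
    for approaching pairs (v_ij . r_ij < 0)."""
    dr = r_i - r_j
    dv = v_i - v_j
    vr = np.dot(dv, dr)
    if vr >= 0.0:
        return 0.0                      # receding pair: no viscosity
    w_ij = vr / np.linalg.norm(dr)      # projected approach velocity (negative)
    v_sig = c_i + c_j - 3.0 * w_ij      # signal velocity, Eq. (62)
    rho_bar = 0.5 * (rho_i + rho_j)
    return -0.5 * alpha * v_sig * w_ij / rho_bar   # Eq. (61), positive for approach
```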
In general, artificial dissipation should be applied only where really necessary. However, it has long been recognized that artificial viscosity un-physically boosts the post-shock shear viscosity, suppressing structures in the velocity field on scales well above the nominal resolution limit, and damping the generation of turbulence by fluid instabilities.
To avoid the overestimate of the shear viscosity, Balsara (1995) proposed to multiply the chosen expression of \Pi_{ij} by a function f_{ij} = (f_i + f_j)/2; the proposed expression for the correcting term f_i for a particle i is

f_i = \frac{|\nabla\cdot\vec v|_i}{|\nabla\cdot\vec v|_i + |\nabla\times\vec v|_i + 0.0001\, c_i/h_i},   (63)

where the divergence and the curl of the velocity field are estimated with the standard SPH operators, and the last term in the denominator prevents numerical divergences.
Another possible cure to the problem is the following. The most commonly used values for the parameters \alpha and \beta in Eq. (57) are \alpha \simeq 1 and \beta = 2\alpha. Bulk viscosity is primarily produced by \alpha, and this sometimes over-smoothens the post-shock turbulence. To cope with this, Morris & Monaghan (1997) have proposed a modification of Eq. (57), in which the parameter \alpha depends on the particle and changes with time according to

\frac{{\rm d}\alpha_i}{{\rm d}t} = -\frac{\alpha_i - \alpha_{\rm min}}{\tau_i} + G_i,   (64)

where \alpha_{\rm min} is a floor value, \tau_i = h_i/(C\, v^{\rm sig}_i) is a decay time-scale (C being a constant of order 0.1-0.2), v^{\rm sig}_i = \max_j[v_{ij}^{\rm sig}], and G_i = 0.75\, f_i\, \max[0, -\nabla\cdot\vec v_i] is a source term which raises the viscosity when a particle enters a compression, f_i being the Balsara factor of Eq. (63).
Finally, we include the cut-off to the viscous acceleration introduced by Springel et al. (2001, their Eq. (50)), to avoid unphysical repulsive forces.
3.3.2 Artificial thermal conductivity
As Price (2008) pointed out, an approach similar to that described above should be used to smooth out discontinuities in all physical variables. In particular an artificial thermal conductivity is necessary to resolve discontinuities in thermal energy (even if only a few formulations of SPH in the literature take this into account).
The artificial thermal conductivity is represented by the term

\left(\frac{{\rm d}u_i}{{\rm d}t}\right)_{\rm cond} = \sum_j m_j\, \alpha_u\, \frac{v^{\rm sig,u}_{ij}}{\bar\rho_{ij}}\, (u_i - u_j)\, \hat{\vec r}_{ij}\cdot \bar\nabla_i W_{ij}.   (65)

Here \alpha_u is the conduction parameter, \bar\nabla_i W_{ij} is the average of the kernel gradients computed with h_i and h_j, and v^{\rm sig,u}_{ij} is a signal velocity appropriate for the conduction term. Following Price (2008), we adopt

v^{\rm sig,u}_{ij} = \sqrt{\frac{|P_i - P_j|}{\bar\rho_{ij}}},   (66)

where P_i and P_j are the particle pressures, so that thermal energy is smoothed only across pressure discontinuities. An alternative choice is

v^{\rm sig,u}_{ij} = |\vec v_{ij}\cdot\hat{\vec r}_{ij}|,   (67)

which reduces particle noise; the conduction parameter \alpha_u can be kept constant or evolved in time with an equation analogous to Eq. (64),

\frac{{\rm d}\alpha_{u,i}}{{\rm d}t} = -\frac{\alpha_{u,i} - \alpha_{u,{\rm min}}}{\tau_i} + S_{u,i}.   (68)
We note in passing that the source term S_{u,i} suggested by Price & Monaghan (2005) may also be adopted for the time evolution of \alpha_u.
The term of Eq. (65) must finally be added to the energy equation Eq. (57), giving the complete evolution equation for the specific internal energy,

\frac{{\rm d}u_i}{{\rm d}t} = \frac{P_i}{\Omega_i \rho_i^2} \sum_j m_j\, \vec v_{ij}\cdot\nabla_i W_{ij}(h_i) + \frac{1}{2}\sum_j m_j\, \Pi_{ij}\, \vec v_{ij}\cdot\bar\nabla_i W_{ij} + \left(\frac{{\rm d}u_i}{{\rm d}t}\right)_{\rm cond}.   (69)
As in the case of artificial viscosity, it is worth noting that this conductive term is not intended to reproduce a physical dissipation; instead, it is a merely numerical artifact introduced to smooth out unresolvable discontinuities.
3.4 Entropy equation
Instead of looking at the thermal energy, we may follow the
evolution of a function of the total particle entropy, for instance
the entropic function
A_i \equiv \frac{P_i}{\rho_i^{\gamma}},   (70)

where \gamma is the adiabatic index. Its evolution is governed by

\frac{{\rm d}A_i}{{\rm d}t} = \frac{\gamma - 1}{\rho_i^{\gamma - 1}} \left[ \left(\frac{{\rm d}u_i}{{\rm d}t}\right)_{\rm visc} + S \right],   (71)

where S is the source function describing the energy losses or gains due to non-adiabatic physics apart from viscosity. If S = 0 and no viscous dissipation is present, the entropic function of each particle is strictly conserved,

\frac{{\rm d}A_i}{{\rm d}t} = 0.   (72)

The entropic function can be used to detect shocks, since its value only changes with the strength of the viscous dissipation.
In general, if the energy equation is used (integrated) the
entropy is not exactly conserved, and vice versa. Moreover, if the
density is evolved according to the continuity Eq. (38) and the thermal energy according to Eq. (53), both energy and entropy are conserved; but in
this case the mass is not conserved (to ensure this,
the density must be computed with Eq. (34)).
This problem can be mitigated with the inclusion of the \nabla h terms in the SPH method, as already discussed.
3.5 X-SPH
As an optional alternative, EVOL may adopt the smoothing of
the velocity field by replacing the normal equation of motion, {\rm d}\vec r_i/{\rm d}t = \vec v_i, with

\frac{{\rm d}\vec r_i}{{\rm d}t} = \vec v_i + \epsilon_{\rm X} \sum_j \frac{m_j}{\bar\rho_{ij}}\, (\vec v_j - \vec v_i)\, \bar W_{ij}   (73)

(Monaghan 1992), with 0 \le \epsilon_{\rm X} \le 1 (typically \epsilon_{\rm X} \simeq 0.5). In this way each particle moves with a velocity closer to the average velocity of its neighbours, which helps to keep the particle distribution ordered in shearing or nearly pressureless flows.
4 Description of the code: integration
4.1 Leapfrog integrator
Particle positions and velocities are advanced in time by means of a
standard leapfrog integrator, in the so-called
kick-drift-kick (KDK) version,
where the K operator evolves
velocities and the D operator evolves positions:
\vec v_i^{\,n+1/2} = \vec v_i^{\,n} + \vec a_i^{\,n}\, \frac{\Delta t}{2},

\vec r_i^{\,n+1} = \vec r_i^{\,n} + \vec v_i^{\,n+1/2}\, \Delta t,

\vec v_i^{\,n+1} = \vec v_i^{\,n+1/2} + \vec a_i^{\,n+1}\, \frac{\Delta t}{2}.
Predicted values of the physical quantities at the end of each time-step are also computed at the beginning of the same step, to guarantee the synchronisation of the calculation of the accelerations and other quantities. In particular, the synchronised predicted velocity is \vec v_i^{\rm pred} = \vec v_i^{\,n} + \vec a_i^{\,n}\, \Delta t.
Moreover, if the individual time-stepping is adopted (see below, Sect. 4.2.1), non-active particles use predicted quantities to give their own contributions to forces and interactions.
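A single KDK step then takes the schematic form below (Python sketch; accel stands for whatever routine returns the accelerations and is an assumption of this example).

```python
import numpy as np

def kdk_step(r, v, dt, accel):
    """One kick-drift-kick leapfrog step: half kick, full drift, half kick."""
    a = accel(r)
    v_half = v + 0.5 * dt * a          # K: kick velocities by dt/2
    r_new = r + dt * v_half            # D: drift positions by dt
    a_new = accel(r_new)               # accelerations at the new positions
    v_new = v_half + 0.5 * dt * a_new  # K: second half kick
    return r_new, v_new

# Example: test orbit in a simple central 1/r^2 acceleration field
accel = lambda r: -r / np.linalg.norm(r)**3
r, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
for _ in range(1000):
    r, v = kdk_step(r, v, 1e-3, accel)
print(r, v)
```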
4.2 Time-stepping
A trade-off between accuracy and efficiency can be reached by a suitable choice of the time-step. One would like to obtain realistic results in a reasonable amount of CPU time. To do so, numerical stability must be ensured, together with a proper description of all relevant physical phenomena, at the same time reducing the computational cost keeping the time-step as large as possible.
For this purpose, at the end of each step, the optimal
time-step for each particle is given by the shortest among the times
\Delta t_{1,i} = C_1 \sqrt{\frac{\ell_i}{|\vec a_i|}},

\Delta t_{2,i} = C_2\, \frac{\ell_i}{|\vec v_i|},

\Delta t_{3,i} = C_3\, \frac{\ell_i}{|{\rm d}\ell_i/{\rm d}t|},

with obvious meaning of the symbols; the parameters C_1, C_2 and C_3 are dimensionless accuracy constants, and \ell_i is the resolution length of the particle (the softening length \epsilon_i for collisionless particles, the smoothing length h_i for gas particles).
We can compute two more values, the first obtained from the well-known Courant condition and the other constructed to avoid large jumps in thermal energy:

\Delta t_{{\rm C},i} = C_{\rm CFL}\, \frac{h_i}{\max_j v^{\rm sig}_{ij}},

\Delta t_{u,i} = C_u\, \frac{u_i}{|{\rm d}u_i/{\rm d}t|},

where v^{\rm sig}_{ij} is the pairwise signal velocity of Eq. (62) and C_{\rm CFL}, C_u are further dimensionless accuracy parameters.
If the simulation is run with a global time-stepping scheme, all particles are advanced adopting the minimum of all individual time-steps calculated in this way; synchronisation is clearly guaranteed by definition.
If desired, a minimum time-step can be imposed. In this case, however, thresholds should be adopted to avoid numerical divergences in accelerations and/or internal energies. This of course leads to a poorer description of the physical evolution of the system; anyway, in some cases a threshold may be necessary to avoid very short time-steps due to violent high energy phenomena.
4.2.1 Individual time-stepping
Cosmological systems usually display a wide range of densities and temperatures, so that an accurate description of very different physical states is necessary within the same simulation. Adopting the global time-stepping scheme may cause a huge waste of computational time, since gas particles in low-density environments and collisionless particles generally require much longer time-steps than the typical high-density gas particle.
In EVOL an individual time-stepping
algorithm can be adopted, which makes better use of the computational resources.
It is based on a powers-of-two scheme, as in Hernquist & Katz (1989).
All the individual particles' real time-steps, \Delta t_i, are chosen to be a power-of-two subdivision of the longest allowed time-step \Delta t_{\rm max} (which is fixed at the beginning of the simulation), i.e.

\Delta t_i = \frac{\Delta t_{\rm max}}{2^{n_i}},   (74)

where n_i is chosen as the minimum integer for which the condition \Delta t_{\rm max}/2^{n_i} \le \Delta t_i^{\rm opt} is satisfied, \Delta t_i^{\rm opt} being the optimal time-step computed as in Sect. 4.2.
A particle is then marked as ``active'' when its evolution time t_i + \Delta t_i coincides with the global system time t_{\rm sys}, the latter being updated as t_{\rm sys} \to t_{\rm sys} + \Delta t_{\rm sys}, where \Delta t_{\rm sys} = \min_i(\Delta t_i).
At each time-step only active
particles re-compute their density and update their accelerations
(and velocities) via a summation over the tree and neighboring
particles. Non-active particles keep their acceleration and velocity
fixed and only update their position using a prediction scheme at
the beginning of the step. Note that some other evolutionary
features (i.e. cooling or chemical evolution) are instead updated at
every time-step for all particles.
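The power-of-two binning of Eq. (74) and the activity test can be sketched as follows (Python, illustrative; dt_opt is assumed to come from the criteria of Sect. 4.2, and the variable names are ours).

```python
import numpy as np

def assign_bins(dt_opt, dt_max, n_max=30):
    """Smallest integer n_i such that dt_max / 2**n_i <= dt_opt (Eq. 74)."""
    n = np.ceil(np.log2(dt_max / np.asarray(dt_opt))).astype(int)
    return np.clip(n, 0, n_max)

def is_active(t_last, n, t_sys, dt_max):
    """A particle is active when the global time reaches the end of its own step."""
    return t_sys >= t_last + dt_max / 2.0**n

# Example
dt_opt = np.array([0.3, 0.011, 0.08])
print(assign_bins(dt_opt, dt_max=1.0))   # -> [2, 7, 4]
```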
Since in the presence of a sudden release of very hot material (e.g. during a supernova explosion) one or a few particles may have to drastically and abruptly reduce their time-step, the adoption of an individual time-stepping scheme would clearly lead to unphysical results, since neighbouring non-active particles would not notice any change until they become active again. This is indeed a key issue, very poorly investigated up to now. To cope with this, a special recipe has been added, forcing non-active particles to ``wake up'' if they are being shocked by a neighbour active particle, or in general if their time-step is too long (say more than 4-8 times longer) with respect to the minimum time-step among their neighbours. In this way all the neighbouring particles are soon informed of the change and are able to react properly when an active particle suddenly changes its physical state. Saitoh & Makino (2008) recently studied the problem and arrived at similar conclusions, recommending a similar prescription when modelling systems in which sudden releases of energy are expected.
Note that when a non-active particle is woken up, energy and momentum are not perfectly conserved, since its evolution during the current time-step is not completed. This error is negligible if compared to the errors introduced by ignoring this correction though.
4.3 Periodic boundary conditions
Periodic boundary conditions are implemented in EVOL using
the well known Ewald (1921) method, i.e. adding to the
acceleration of each particle an extra-term due to the replicas of
particles and nodes, expanding in Fourier series the potential and
the density fluctuation, as described in Hernquist et al. (1991) and
in Springel et al. (2001). If \vec x is the coordinate of the point of force-evaluation relative to a particle or a node j with mass m_j, the additional acceleration which must be added to account for the action of the infinite replicas of j is given by

\vec a_{\rm c}(\vec x) = m_j \left\{ \frac{\vec x}{|\vec x|^3} - \sum_{\vec n} \frac{\vec x - \vec n L}{|\vec x - \vec n L|^3} \left[ {\rm erfc}(\alpha |\vec x - \vec n L|) + \frac{2\alpha |\vec x - \vec n L|}{\sqrt{\pi}}\, {\rm e}^{-\alpha^2 |\vec x - \vec n L|^2} \right] - \frac{2}{L^2} \sum_{\vec h \ne 0} \frac{\vec h}{|\vec h|^2}\, {\rm e}^{-\pi^2 |\vec h|^2 / (\alpha^2 L^2)}\, \sin\!\left( \frac{2\pi}{L}\, \vec h \cdot \vec x \right) \right\},   (75)

where the first sum runs over all the integer triplets \vec n labelling the periodic replicas, the second over all the non-zero integer wave-vectors \vec h, L is the size of the periodic box, and \alpha is an arbitrary parameter regulating the relative convergence rate of the real-space and Fourier-space sums (\alpha \simeq 2/L is a convenient choice). In practice, the correction field is tabulated on a grid at the beginning of the run and then interpolated at the required positions during the simulation.
The periodicity in the SPH scheme is obtained straightforwardly finding neighbours across the spatial domain and taking the modulus of their distance to obtain their nearest image with respect to the active particle. In this way, no ghost particles need to be introduced.
4.4 Parallelisation
Figure 2: Data structures in EVOL. Void cells are included to allow for new particles to be created and/or imported from other CPUs. Ghost structures are reallocated at each time-step.
EVOL uses the MPI communication protocol to run in parallel on multiple
CPUs. First of all, particles must be subdivided among the CPUs, so
that each processor has to deal only with a limited subset of the
total number of bodies. For this purpose the spatial
domain is subdivided at each time-step using the orthogonal recursive
bisection (ORB) scheme. In practice, each particle keeps track of
the time spent to compute its properties during the previous
time-step in which it has been active, storing it in a work-load
variable. At the next time-step the spatial domain is subdivided
trying to assign the same total amount of work-load to each CPU
(only considering active particles if the individual time-stepping
option is selected), re-assigning bodies near the borders among the
CPUs. To achieve this the domain is first cut along the x-axis, at
the coordinate x_{\rm cut} that best satisfies the equal work-load condition among the two groups of CPUs formed by those with geometrical centres at x < x_{\rm cut} and those with x \ge x_{\rm cut}. In practice, each CPU exchanges particles only if its borders lie across x_{\rm cut}.
Then the two
groups of CPUs are themselves subdivided into two new groups, this
time cutting along the y-axis (at two different
coordinates, because they are independent from one another as they
have already exchanged particles along the x-axis). The process is
repeated as long as required (depending on the number of available CPUs),
cutting recursively along the three dimensions. It should be noted
that this scheme allows one to use only a power-of-two number of
CPUs, whereas other procedures (e.g. a Peano-Hilbert domain
decomposition, see e.g. Springel 2005) can more efficiently
exploit the available resources.
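The first ORB cut amounts to finding the coordinate that splits the total work-load into two (nearly) equal halves. The following Python fragment illustrates the idea with a weighted median along x; it is a serial sketch, not the MPI-parallel EVOL routine.

```python
import numpy as np

def workload_cut(x, work):
    """Return x_cut such that particles with x < x_cut carry ~half of the total work."""
    order = np.argsort(x)
    cum = np.cumsum(work[order])
    k = np.searchsorted(cum, 0.5 * cum[-1])
    return x[order][min(k, len(x) - 1)]

# Example: uniform positions, work concentrated at small x
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10000)
work = np.where(x < 0.3, 4.0, 1.0)    # the "dense" region costs more per particle
print(workload_cut(x, work))          # cut lies well below x = 0.5
```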
It may sometimes happen that a CPU wants to acquire more particles than permitted by the dimensions of its arrays. In this case, data structures are re-allocated using temporary arrays to store extra-data, as described in the next section.
Apart from the allocation of particles to the CPUs, the other task for the parallel architecture is to spread the information about particles belonging to different processors. This is achieved by means of ``ghost'' structures, in which these properties are stored when necessary. A first harvest among CPUs is made at each time step before computing SPH densities: at this point, positions, masses and other useful physical features of nearby nodes and particles, belonging to other processors, are needed. Each CPU ``talks'' only to another CPU per time, first importing all the data-structures belonging to it in temporary arrays, and then saving within its ``ghost-tree'' structure only the data relative to those nodes and particles which will be actually used. For example, if a ``ghost node'' is sufficiently far away from the active CPU geometrical position so that the gravitational opening criterion is satisfied, there will be no need to open it to compute gravitational forces, and no other data relative to the particles and sub-nodes it contains will be stored in the ghost-tree.
A second communication among CPUs is necessary before computing accelerations. Here each particle needs to know the exact value of the smoothing and softening lengths (which have been updated during the density evaluation stage) as well as many other physical values of neighbouring particles belonging to different CPUs. The communication scheme is exactly the same as before, involving ghost structures. It was found that instead of upgrading the ghost-tree built before the density evaluation, a much faster approach is to completely re-build it.
The dimensions of the ghost arrays are estimated at the beginning of the simulation and every N time-steps (usually N=100), by means of a ``preliminary walk'' among all CPUs. In intermediate steps, the dimensions of these arrays are modified on the basis of the previous step evaluation and construction of ghost structures.
4.5 Data structures
All data structures are dynamically adapted to the size of the problem under exploration. In practice, all arrays are allocated at the beginning of a simulation so that their dimension is equal to the number of particles per processor, plus some empty margin to allow for an increase of the number of particles to be handled by the processor (see Fig. 2).
Whenever a set of arrays within a processor becomes full of data and has no more room for new particles, the whole data structure of that processor is re-scaled increasing its dimensions, to allow for new particles to be included. Of course, the opposite case (i.e. if the number of particles decreases, leaving too many void places) is also considered.
This is done by first storing the data in excess in temporary T arrays. Then the whole ``old'' data structure O is copied onto new temporary N arrays, which have the new required dimensions. Finally, T arrays are copied into the N structure; this substitutes the O structure, which is then deleted. Clearly, this procedure is quite memory-consuming, but ideally it has to be carried out only a few times per simulation, at least in standard cases. Moreover, it is performed after the ``ghost'' scheme described above has been completed, and ghost-structures have been deallocated, leaving room for other memory-expensive routines.
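Schematically, the re-allocation procedure (store the overflow, allocate a larger structure with some margin, copy the old data, append the overflow) looks as follows; the Python below is only an analogue of the Fortran allocatable-array handling, with an arbitrary margin parameter.

```python
import numpy as np

def grow_structure(old, overflow, margin=0.2):
    """Re-allocate a per-CPU particle array with some empty margin and
    append the particles temporarily stored in `overflow`."""
    n_used = len(old) + len(overflow)
    new_size = int(np.ceil(n_used * (1.0 + margin)))   # leave room for future imports
    new = np.empty((new_size,) + old.shape[1:], dtype=old.dtype)
    new[:len(old)] = old                               # copy the old structure O
    new[len(old):n_used] = overflow                    # append the temporary T data
    return new, n_used                                 # array plus number of filled slots

# Example
old = np.zeros((100, 3))
extra = np.ones((10, 3))
arr, n = grow_structure(old, extra)
print(arr.shape, n)    # (132, 3) 110
```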
5 Tests of the code
An extended set of hydrodynamical tests was performed to check the performance of EVOL under different demanding conditions:
- rarefaction and expansion problem (Einfeldt et al. 1991, 1D);
- shock tube problem (Sod 1978, both in 1D and in 3D);
- interacting blast waves (Woodward & Colella 1984, 1D);
- strong shock collapse (Noh 1987, 2D);
- Kelvin-Helmholtz instability problem (2D);
- Gresho vorticity problem (Gresho & Chan 1990, 2D);
- point-like explosion and blast wave (Sedov-Taylor problem, 3D);
- adiabatic collapse (Evrard 1988, 3D);
- isothermal collapse (Boss & Bodenheimer 1979, 3D);
- collision of two polytropic spheres (3D);
- evolution of a two-component fluid (3D).







5.1 Rarefaction wave
We begin our analysis with a test designed to check the
code in extreme, potentially dangerous
low-density regions, where some iterative Riemann solvers can return
negative values for the pressure/density. In the Einfeldt rarefaction
test (Einfeldt et al. 1991) the two halves of the computational domain
0<x<1 move in opposite directions, and thereby create a region of
very low density near the initial velocity discontinuity at x=0.5.
The initial conditions of this test are
\rho = 1, \qquad P = 0.4, \qquad v_x = \begin{cases} -2 & x < 0.5 \\ +2 & x \ge 0.5, \end{cases}   (76)

with an adiabatic index \gamma = 1.4.
The results at t=0.2 for a 1-D, 1000-particle calculation are shown in Fig. 3. Clearly, the low-density region is well described by the code.
Figure 3: Left to right, top to bottom: density, internal energy, x-velocity, and pressure profiles at t=0.2 for the Einfeldt test. All quantities are in code units.
5.2 Shock tube
The classic Riemann problem to check the shock-capturing capability is Sod's shock tube test (Sod 1978). We ran many different cases changing numerical parameters to check the response to different settings. All tests were started from un-smoothed initial conditions, leaving the code to smear out the initial discontinuities.
5.2.1 1-D tests
The first group of tests was performed in one dimension. 1000 equal-mass particles were placed on the x axis in the domain 0 < x < 1, and the inter-particle spacing was changed to obtain the setting

(\rho, P, v_x) = \begin{cases} (1, 1, 0) & x < 0.5 \\ (0.125, 0.1, 0) & x \ge 0.5 \end{cases}   (77)

(\gamma = 1.4; all quantities are in code units). Four runs with different numerical settings were then performed:
- T1 - standard viscosity (with \alpha = 1 and \beta = 2) and no artificial conduction;
- T2 - standard viscosity plus artificial conduction;
- T3 - viscosity with variable \alpha plus artificial conduction;
- T4 - the same as T2, but with the density discontinuity obtained by using particles with different masses on a regular lattice instead of equal-mass particles on a shrunk grid. To do so, particles on the left of the tube have the mass m = 10^{-3}, whereas those on the right have m = 1.25 \times 10^{-4}.
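An initial condition of the kind used in runs T1-T3 (equal-mass particles whose spacing encodes the density jump of Eq. (77)) can be generated as in the sketch below; the particle numbers and the derived thermal energies are illustrative, not necessarily the exact values used in the tests.

```python
import numpy as np

def sod_ic_1d(n_left=800, rho_l=1.0, rho_r=0.125, p_l=1.0, p_r=0.1, gamma=1.4):
    """Equal-mass 1-D Sod tube: the particle spacing encodes the density jump."""
    m = rho_l * 0.5 / n_left                   # particle mass set by the left state
    n_right = int(round(rho_r * 0.5 / m))      # the right half holds fewer particles
    x_l = (np.arange(n_left) + 0.5) * 0.5 / n_left
    x_r = 0.5 + (np.arange(n_right) + 0.5) * 0.5 / n_right
    x = np.concatenate([x_l, x_r])
    u = np.concatenate([np.full(n_left, p_l / ((gamma - 1.0) * rho_l)),
                        np.full(n_right, p_r / ((gamma - 1.0) * rho_r))])
    v = np.zeros_like(x)
    return x, v, u, np.full_like(x, m)

x, v, u, m = sod_ic_1d()
print(len(x), u[0], u[-1])
```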
The overall behaviour is good in all cases. We note the typical blip in
the pressure profile and the wall-like overshoot in thermal energy
at the contact discontinuity in the T1 run. The introduction of the
artificial thermal conductivity (T2) clearly helps to mitigate the
problem at the expense of a slightly shallower thermal energy profile.
Introducing the variable \alpha-viscosity
(T3), the reduced viscosity causes post-shock oscillations in the
density (and energy), and makes the velocity field noisier and
turbulent, as expected.
Little differences can be seen in the T4 run, except for a slightly smoother density determination at the central contact discontinuity, which also increases the blip problem. The best results are evidently obtained with the T2 configuration.
Figure 4: Left to right, top to bottom: density, internal energy, x-velocity and pressure profiles at t=0.12 for test T1 (see text for details). Red solid line: exact solution. All quantities are in code units.
Figure 5: The same as in Fig. 4, but for test T2 (see text for details).
Figure 6: The same as in Fig. 4, but for test T3 (see text for details).
Figure 7: The same as in Fig. 4, but for test T4 (see text for details).
5.2.2 3-D tests
As pointed out by Steinmetz & Mueller (1993), performing shock tube
calculations with 1-D codes with the particles initially displaced on a
regular grid essentially avoids the problems caused by numerical
diffusivity, but in realistic cosmological 3-D simulations the
situation is clearly more intricated. To investigate this issue, we
switch to a 3-D description of the shock tube problem.
The initial conditions are set using a relaxed
glass distribution of 60 000 particles in a box with
dimensions [0, 1] along the x axis and (periodic) [0, 0.1]along the y and z axis. Particles are assigned a mass
and
on the two sides on the discontinuity at x=0.5, with all other parameters
as in the 1-D case. A first run (T5) was
performed with the T1 configuration (standard viscosity, no artificial
conduction), while a second one (T6) was
done including the artificial conduction term.
Figure 8: The same as in Fig. 4, but for test T5 (see text for details).
Figure 9: The same as in Fig. 4, but for test T6 (see text for details).
Figures 8 and 9 show the state of the system at t=0.12 for both cases (note that all particles are plotted, not mean values). As in the 1-D case, the overall agreement with the theoretical expectation is quite good. However, while the inclusion of the conductive term on the one hand yields sharper profiles and reduces the pressure blip, on the other hand it gives a smoother description of the contact discontinuities (see the density profile at x=0.55). We argue that part of this worsening may be due to the use of particles with different masses, as a similar result was found for the corresponding 1-D test (see T4 above). A limit on the conduction efficiency could give better results, but an ad hoc adjustment would have to be applied to each case.
5.3 Interacting blast waves
Woodward & Colella (1984) proposed a severe test in 1-D: two strong blast waves develop out of a gas of uniform density that is initially at rest in the domain 0<x<1, with a pressure P = 1000 for x < 0.1, P = 100 for x > 0.9 and P = 0.01 in between (the adiabatic index is γ = 1.4). The boundary conditions are reflective on both sides; this can be achieved by introducing ghost particles created on-the-fly. Evolving in time, the two blast waves initiate multiple interactions of strong shocks, rarefactions and reflections.
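A minimal sketch of how reflective boundaries can be realised with on-the-fly ghost particles (this is an illustration of the idea, not the code's actual routine): every particle within one kernel support of a wall is mirrored across it with reversed velocity.

```python
import numpy as np

def add_reflective_ghosts(x, v, m, support, x_min=0.0, x_max=1.0):
    """Mirror particles near the two walls of the 1-D domain [x_min, x_max].
    Ghosts carry the same mass and a reversed velocity, so that the momentum
    flux across the wall cancels. 'support' is the kernel support radius."""
    left = x < x_min + support
    right = x > x_max - support
    x_ghost = np.concatenate([2.0 * x_min - x[left], 2.0 * x_max - x[right]])
    v_ghost = np.concatenate([-v[left], -v[right]])
    m_ghost = np.concatenate([m[left], m[right]])
    return (np.concatenate([x, x_ghost]),
            np.concatenate([v, v_ghost]),
            np.concatenate([m, m_ghost]))
```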
We performed two runs: one at high resolution with 1000 equally spaced particles, and one at low resolution with only 200 particles. In both cases, the code included the artificial conduction and the viscosity with a constant α parameter.
Figure 10 shows the density x-profiles for the high
resolution case at t=0.010, 0.016, 0.026, 0.028, 0.030, 0.032,
0.034, 0.038. The results can be compared with those of
Woodward & Colella (1984), Steinmetz & Mueller (1993) and
Springel (2010). The complex evolution of the shock and
rarefaction waves is reproduced with good accuracy and sharp
profiles, in nice agreement with the theoretical expectations. The
very narrow density peak at t=0.028 is well reproduced and only slightly underestimated.
Figure 11 shows the comparison of the density profiles of the high and low resolution cases at t=0.038. The lower resolution causes a substantial smoothing of the contact discontinuities; nevertheless, the overall behaviour is still reproduced, shock fronts are located at the correct positions, and the gross features of the density profile are roughly reproduced.
Figure 10: Left to right, top to bottom: density profiles at t=0.010, t=0.016, t=0.026, t=0.028, t=0.030, t=0.032, t=0.034, t=0.038 for the Woodward high-resolution test (see text for details). All quantities are in code units.
5.4 Kelvin-Helmholtz instability
The Kelvin-Helmholtz (KH) instability problem is a very demanding test. Recently, Price (2008) replied to the claim by Agertz et al. (2007) that the SPH algorithm is not able to correctly model contact discontinuities such as those arising in the KH problem. He showed that the inclusion of the conductive term is sufficient to correctly reproduce the instability. We tried to reproduce his results by running a similar test with the 2-D version of EVOL.
We set up a regular lattice of 256^2 particles in the periodic domain 0<x<1, 0<y<1. The initial conditions were similar to those adopted by Price (2008). The masses of the particles were assigned in such a way that the density in the strip |y - 0.5| < 0.25 is higher than elsewhere (note that in Price 2008 this density difference was instead obtained by changing the disposition of equal-mass particles). The regions were brought to pressure equilibrium with P = 2.5. In this case, although the initial gradients in density and thermal energy were not smoothed, the thermal energy was assigned to the particles after the first density calculation, to ensure a continuous initial pressure field.
The initial velocity field was a shear flow along the x direction, with opposite velocities in the two layers (Eq. (78)), on top of which a small sinusoidal perturbation of the y velocity was added (Eq. (79)); the two runs K1 and K2 differ in the strength of this initial perturbation.
Figures 12 and 13 show the temporal evolution of the instability in the two cases. Clearly, our code is able to reproduce the expected development of whirlpools and mixing between the fluids; the evolution is faster for a stronger initial perturbation. For comparison, we also ran a test with the K2 initial conditions but with artificial conduction switched off; Fig. 14 shows the comparison of the final density fields (at t=1.79) and of the initial pressure fields. In the latter, the blip that inhibits mixing is clearly visible at the discontinuity layer: the resulting spurious surface tension keeps the two fluids separated, a problem efficiently solved by the artificial conduction term.
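A minimal sketch of such a KH initial state is given below. The shear speed, perturbation amplitude and wavenumber are illustrative values in the spirit of Price (2008), not the actual parameters of Eqs. (78)-(79); note also that here the thermal energy is computed from the nominal densities rather than from a first SPH density pass.

```python
import numpy as np

def kh_initial_conditions(n=256, rho_in=2.0, rho_out=1.0, p0=2.5,
                          v_shear=0.5, amp=0.025, k=2):
    """Sketch of a KH setup on a regular lattice in the unit periodic box.
    The density contrast is carried by the particle masses; 'amp' and 'k'
    are hypothetical perturbation parameters, gamma = 5/3 is assumed."""
    x, y = np.meshgrid((np.arange(n) + 0.5) / n, (np.arange(n) + 0.5) / n)
    x, y = x.ravel(), y.ravel()
    inner = np.abs(y - 0.5) < 0.25
    rho = np.where(inner, rho_in, rho_out)
    mass = rho / n**2                                  # mass per lattice cell
    vx = np.where(inner, -v_shear, v_shear)            # opposite shear flows
    vy = amp * np.sin(2.0 * np.pi * k * x)             # seed perturbation
    u = p0 / ((5.0 / 3.0 - 1.0) * rho)                 # pressure equilibrium
    return x, y, mass, vx, vy, u
```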
Figure 11: Comparison of the high-resolution (solid line) and low-resolution (dots) density profiles at t=0.038 for the Woodward test (see text for details). The quantities are in code units.
5.5 Strong shock collapse
The strong shock collapse (Noh 1987) is another very demanding test and is generally considered a serious challenge for AMR numerical codes. We ran the test in 2-D, placing 125 000 particles on a regular grid and retaining those within a given radius to model a gas disc with an initially uniform density and a negligible internal energy; we then added a uniform radial velocity directed towards the origin. A spherical shock wave with a formally infinite Mach number developed from the centre and travelled outwards, leaving a constant-density region inside the shock (in 2-D, the post-shock density is ((γ+1)/(γ-1))^2 times the initial one). The test was run on four parallel CPUs.
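A minimal sketch of this kind of setup, with illustrative particle numbers and radii rather than the paper's values:

```python
import numpy as np

def noh_initial_conditions(n_side=354, r_max=0.5):
    """Sketch of a 2-D Noh setup: a regular lattice trimmed to a disc of
    uniform density, negligible internal energy, and unit radial inflow."""
    coords = (np.arange(n_side) + 0.5) / n_side - 0.5
    x, y = np.meshgrid(coords, coords)
    x, y = x.ravel(), y.ravel()
    keep = x**2 + y**2 <= r_max**2             # retain particles inside the disc
    x, y = x[keep], y[keep]
    r = np.sqrt(x**2 + y**2)
    vx, vy = -x / r, -y / r                     # unit velocity towards the origin
    u = np.full(x.size, 1.0e-6)                 # (nearly) cold gas
    return x, y, vx, vy, u
```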
In Fig. 15, the density in the x>0, y>0 region is shown at t = 0.9885, while in Fig. 16 the density-radius relation at the same time is plotted for all particles.
There is very good agreement with the theoretical expectation, although some scatter is present in the particle densities within the shocked region, where the density is moreover slightly underestimated (but this is a common feature for this kind of test: see e.g. Springel 2010). The shock velocity is instead slightly overestimated, and some spurious structures form near the shock layer due to the initial regular arrangement of particles. Apart from these minor flaws, though, the code is able to handle the very strong shock without any particular difficulty.
Figure 12: Kelvin-Helmholtz instability according to test K1. Left to right, top to bottom: density field at t=0.768, t=1.158, t=1.549 and t=1.940. All quantities are in code units.
Figure 13: Kelvin-Helmholtz instability according to test K2. Left to right, top to bottom: density field at t=0.377, t=0.767, t=1.158 and t=1.549. All quantities are in code units.
Figure 14: Kelvin-Helmholtz instability according to test K2, with (left) and without (right) artificial thermal conductivity. Top panels: initial pressure field. Bottom panels: density fields at t=1.79. All quantities are in code units.
5.6 Gresho vortex problem
The Gresho triangular vortex problem (Gresho & Chan 1990) is a test of the conservation of angular momentum and vorticity. In its 2-D version (Liska & Wendroff 2003) it follows the evolution of a vortex described by an azimuthal velocity profile (Eq. (80)) in a gas of constant density, with a pressure profile (Eq. (81)) chosen to exactly balance the centrifugal force, so that the vortex should remain independent of time.
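The following sketch gives the standard Gresho-Chan profile (as used, e.g., by Liska & Wendroff 2003 and Springel 2010), which we assume corresponds to Eqs. (80)-(81):

```python
import numpy as np

def gresho_profile(r):
    """Standard Gresho-Chan vortex: azimuthal velocity and the pressure that
    balances the centrifugal force, as functions of radius; density is 1."""
    r = np.asarray(r, dtype=float)
    v_phi = np.where(r < 0.2, 5.0 * r,
             np.where(r < 0.4, 2.0 - 5.0 * r, 0.0))
    log_term = np.log(np.maximum(r, 0.2) / 0.2)        # only used for 0.2 <= r < 0.4
    p = np.where(r < 0.2, 5.0 + 12.5 * r**2,
        np.where(r < 0.4, 9.0 + 12.5 * r**2 - 20.0 * r + 4.0 * log_term,
                 3.0 + 4.0 * np.log(2.0)))
    return v_phi, p
```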
As noted by Springel (2010), this test seems to be a serious obstacle for SPH codes. Due to shear forces, the angular momentum tends to be transferred from the inner to the outer regions of the vortex, eventually causing the rotational motion to spread to the r>0.4 regions; these finally dissipate their energy through interactions with the counter-rotating regions of the periodic replicas. With the GADGET3 code, the vortex did not survive up to t=3. AMR codes generally give a better description, although under some conditions (for example, the addition of a global bulk velocity, which should leave the system unchanged because of Galilean invariance) they also show some flaws.
We tried to run the test starting from two different initial settings of particles since, as pointed out by Monaghan (2006), the initial particle setting may have some influence on the subsequent evolution of the system. Thus, in the first case we set up a regular Cartesian lattice of 10 000 particles (in the periodic domain -0.5<x<0.5, -0.5<y<0.5), while in the second case we put the same number of particles on concentric rings, allowing for a more ordered description of the rotational features of the gas (see Fig. 17 and the sketch below).
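A minimal, purely illustrative sketch of the ring-based setting (the actual ring spacing and particle counts used in the test are not those assumed here):

```python
import numpy as np

def ring_lattice(n_rings=56, r_max=0.5):
    """Place particles on concentric circles with roughly constant
    inter-particle spacing, filling the disc inscribed in the unit box;
    this samples a rotating flow more regularly than a Cartesian grid."""
    spacing = r_max / n_rings
    xs, ys = [0.0], [0.0]                      # one particle at the centre
    for i in range(1, n_rings + 1):
        r = i * spacing
        n_on_ring = max(1, int(round(2.0 * np.pi * r / spacing)))
        phi = 2.0 * np.pi * np.arange(n_on_ring) / n_on_ring
        xs.extend(r * np.cos(phi))
        ys.extend(r * np.sin(phi))
    return np.array(xs), np.array(ys)
```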
The results are shown in Fig. 18. We found a residual vorticity still present at t=2.7, although strongly degraded and reduced in magnitude. As expected, a substantial amount of the angular momentum was passed to the outer regions of the volume under consideration. This was confirmed by comparing the initial to the final pressure fields (Fig. 19): at t=2.7 the pressure exerted by the outskirts is higher, due to the increased temperature of the regions bordering the periodic replicas. This leads to a compression of the internal region, which in turn must increase its pressure as well.
However, we could not find in the literature other direct comparisons for this test performed with an SPH code. It must be pointed out that the resolution in these tests is quite low, and a larger number of particles may help to improve the results.
5.7 Point explosion
The presence of the artificial conduction becomes crucial in the point explosion experiment, a classic test to check the ability of a code to follow the evolution of strong shocks in three dimensions. In this test, a spherically symmetric Sedov-Taylor blast-wave develops and must be modelled consistently with the known analytical solution.
Figure 15: The Noh problem: density field in the first quadrant at t=0.9885. All quantities are in code units. See text for details.
Figure 16: The Noh problem: density vs. radius relation for all particles at t=0.9885. Solid line: theoretical expectation. All quantities are in code units. See text for details.
The following test runs were made (using the Lagrangian SPH scheme):
- S1 - standard viscosity with a constant α parameter, no artificial conduction;
- S2 - viscosity with a variable α parameter plus artificial conduction;
- S3 - as S2, but without the correcting terms;
- S4 - as S2, with the individual time-stepping scheme;
- S5 - as S2, with higher resolution (51^3 particles).
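A minimal sketch of a Sedov-Taylor initial condition of this kind, with illustrative particle numbers and an assumed unit blast energy deposited in the single particle nearest the centre:

```python
import numpy as np

def sedov_initial_conditions(n_side=33, e_blast=1.0, u_floor=1.0e-6):
    """Uniform cubic lattice of cold gas in the unit box; the blast energy is
    dumped into the particle closest to the centre as thermal energy."""
    g = (np.arange(n_side) + 0.5) / n_side - 0.5
    x, y, z = map(np.ravel, np.meshgrid(g, g, g))
    m = np.full(x.size, 1.0 / x.size)           # unit total mass, uniform density
    u = np.full(x.size, u_floor)                 # negligible background energy
    centre = np.argmin(x**2 + y**2 + z**2)       # particle nearest the origin
    u[centre] = e_blast / m[centre]              # deposit the blast energy there
    return np.column_stack([x, y, z]), m, u
```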
Figure 17: Positions of all particles at t=0.02 (top) and t=2.7 (bottom) for the two different initial configurations for the Gresho test: grid (left) and rings (right). All quantities are in code units. See text for details.
Figure 18: Initial (black dots) and final (red small points) azimuthal velocity profiles for the two Gresho tests: grid (top) and rings (bottom). All quantities are in code units. See the text for details.
Figure 19: Initial (t=0.02, left) and final (t=2.7, right) pressure fields in the grid Gresho test. Note the increased pressure in the outskirts (white) and the central regions of the vortex. All quantities are in code units. See the text for details.
Figure 20: Top: spatial distribution of particles in a slice taken at z=0 and t=0.09 for the Sedov blast wave of test S1. Bottom: radial density profile for the same test. Points: SPH particles; solid line: analytical prediction. All quantities are in code units.
Figure 21: The same as in Fig. 20, but for test S2.
Figure 22: Energy conservation E(t)/E(t0) for the Sedov test with artificial conductivity and variable α-viscosity; the two curves correspond to the runs with and without the correcting terms (see text for details).
Figure 23: The same as in Fig. 20, but for test S5.
Figure 20 shows the xy positions of particles belonging to the z=0 plane at t=0.09 for the S1 test and the corresponding radial density profile compared to the analytical expectation. A strong noise (which develops soon after the initial explosion) is evident in the particle positions, due to the huge unresolved discontinuity in thermal energy. The description of the shock evolution is quite poor, although its gross features are still captured.
Figure 21 presents the same data for the S2 run, in which both the variable α-viscosity and the artificial conduction were switched on. The increased precision in the description of the problem is immediately evident: the conductive term acts by smoothing out the thermal energy budget of the central particle, strongly reducing the noise and allowing a symmetric and ordered dynamical evolution. The radial density profile (in which single particles, and not averages, are plotted) shows that the shock is well described. The density peak is a factor of ~2 lower than the analytical expectation, due to the intrinsic smoothing nature of the SPH technique.
Another test (S3) was carried out dropping the correcting terms (see Sect. 3). While the differences in particle positions and density profiles are almost negligible, the comparison of the total energy conservation E(t)/E(t0) in the two cases (Fig. 22) shows that the correction helps to contain losses due to inaccurate determinations of the gradient.
A fourth test (S4) was also run to check the behaviour of the individual time-stepping algorithm; the results are virtually indistinguishable from those of the previous case (and are not shown here). This is remarkable because all particles must be ``woken up'' (with the method described in Sect. 4.2.1), as they are still and cold at the beginning of the simulation.
Finally, a test (S5) with an increased numerical resolution was performed (Fig. 23). As expected, in this case the shock boundary was sharper and closer to the peak value of the analytical solution, while the particle positions again accurately describe the evolution of the system.
5.8 Self-gravitating collapse
The collapse of a gaseous self-gravitating sphere is another standard SPH 3-D test (Evrard 1988). The combined action of hydrodynamics and self-gravity leads the system, a sphere of gas initially far from equilibrium with a negligible internal energy and a density profile scaling as 1/r, to a collapse in which most of its kinetic energy is converted into heat. An outward-moving shock develops, a slow expansion follows, and later a core-halo structure develops with nearly isothermal inner regions and outer regions which cool adiabatically. To set the initial conditions, 10 000 equal-mass particles were placed in a sphere in such a way that their initial radial density profile scales as 1/r (a sketch of one way to do this is given below). Another sphere with 40 000 particles, and consequently smaller particle masses (higher resolution), was also prepared. Two cases were examined: the collisionless collapse and the adiabatic collapse.
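A minimal sketch of one way to realise a 1/r density profile with equal-mass particles (an assumption about the construction, not necessarily the procedure used in EVOL): draw points uniformly in a sphere and stretch them radially so that the enclosed mass grows as r^2.

```python
import numpy as np

def evrard_sphere(n=10000, radius=1.0, total_mass=1.0, seed=0):
    """Equal-mass particle realisation of rho ~ 1/r inside a sphere."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1)[:, None]        # random directions
    u = rng.random(n)
    r_uniform = radius * u ** (1.0 / 3.0)                 # uniform-density radii
    r = radius * (r_uniform / radius) ** 1.5              # M(r) ~ r^3 -> M(r) ~ r^2
    masses = np.full(n, total_mass / n)
    return dirs * r[:, None], masses
```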
Figure 24: Collisionless collapse with adaptive softening lengths. Top: spatial distribution of the particles projected onto the xy plane after 12 000 steps; middle: softening lengths (code units) as a function of the radial coordinate; bottom: temporal conservation of the total energies of the systems E(t)/E(t0) for the un-corrected (left) and corrected (right) tests.
5.8.1 Collisionless collapse
To check the performance of the adaptive softening length algorithm, tests were run in which the hydrodynamical interactions were switched off to mimic a collisionless collapse. A first case included the adaptive softening algorithm but neglected the correcting terms presented in Sect. 2.1; this provided the reference case. The second one also included the effects of the correcting terms.
Figure 24 shows in the upper panels the spatial
distribution of particles projected onto the xy plane during
the collapse of the sphere (after 12 000 steps),
for the un-corrected (left) and corrected (right)
cases: no evident difference is present. The same
holds (middle panels) for the value of the softening lengths of
particles as a function of the radial coordinate. However, the
temporal conservation of the total energies of the systems
E(t)/E(t0) in both cases were quite different (bottom panels of
Fig. 24). When the correcting terms were not included, a secular error in the energy conservation developed, while the error was about ten times smaller (albeit not completely eliminated) with the introduction of the extra terms.
5.8.2 Adiabatic collapse
The standard hydrodynamical tests (Evrard 1988) were calculated with the following settings:
- E1 - constant softening lengths (set from the total radius R and mass M of the sphere at the beginning of the simulation); Lagrangian SPH scheme, with constant viscosity and no artificial conduction;
- E2 - as E1, except for using adaptive softening lengths;
- E3 - adaptive softening lengths, viscosity with a variable α parameter, and artificial conduction;
- E4 - as E3, higher resolution (40 000 particles).
Figure 25: Energy trends in the cold collapse test. Light lines: red: test E1; black: test E2; blue: test E3. Heavy dotted blue line: test E4 (see text for details). Top to bottom: thermal, kinetic, total and potential energies. All quantities are in code units.
Figure 26: Magnification of the energy conservation E(t)/E(t0) in the cold collapse tests. Black: test E1; red: test E2. See text for details.
The density and internal energy profiles at t=0.8 for the E1 to E3 runs are shown in Fig. 28; the comparison between the E3 and the high-resolution E4 runs is shown in Fig. 29 (the shock is moving outwards, leaving an inner heated core). The discontinuity in density is smoothed out by the kernel averaging, which in the current implementation uses a high number of neighbours (~60), but the effects of the inclusion of the artificial conductive term are evident. We did not find any particular problem with the inclusion of the conductive term in a self-gravitating system.
Figure 27: Softening lengths in the cold collapse test E2 at t=0.8. The solid line is the value of the constant softening length in test E1. All quantities are in code units.
Figure 28: Density (top) and internal energy (bottom) profiles at t=0.8 in the cold collapse E1 (left), E2 (centre) and E3 (right) tests. All particles are plotted.
We also ran an X-SPH test. While the energy trends are essentially identical to those of the previous tests (and are not shown), the energy conservation is slightly worse. Indeed, Monaghan (2002) pointed out that a more complex treatment of the equations of motion should be adopted to achieve perfect energy conservation.
5.9 Isothermal collapse
Introduced by Boss & Bodenheimer (1979) and later revisited by Burkert & Bodenheimer (1993), the test on the isothermal collapse combines hydrodynamics and gravity, and also adds rotation. It develops into a complex interplay of collapse and fragmentation. We ran the test in the version proposed by Bate & Burkert (1997).
A homogeneous sphere of cold gas was modelled by retaining 78 000 particles out of a uniform lattice. The sphere was put in solid-body rotation around the z axis, and the otherwise flat density field was given a small azimuthal m = 2 perturbation (a cos 2φ modulation of the density), as in Bate & Burkert (1997) (see the sketch below).
Figure 29: Comparison between the cold collapse E3 (low-resolution, left) and E4 (high-resolution, right) tests: density (top) and internal energy (bottom) profiles at t=0.8.
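A minimal sketch of the azimuthal m = 2 perturbation, imposed here through the particle masses; the 10 per cent amplitude is the value commonly used for this test (Bate & Burkert 1997) and is assumed, not taken from the expressions above.

```python
import numpy as np

def apply_m2_perturbation(pos, mass, amplitude=0.1):
    """Modulate particle masses as 1 + A*cos(2*phi) about the z axis."""
    phi = np.arctan2(pos[:, 1], pos[:, 0])      # azimuth around the z axis
    return mass * (1.0 + amplitude * np.cos(2.0 * phi))
```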
The test is very challenging and has been used to check the behaviour of many AMR and SPH codes. In particular, Bate & Burkert (1997) used it to study the effects of wrong calibrations of the smoothing and softening lengths in SPH (see Sect. 3.2). We ran the test in four different cases:
- B1 - Lagrangian SPH with constant softening lengths, set to an intentionally high value;
- B2 - Lagrangian SPH with adaptive softening lengths (no limits imposed on the softening length and on h);
- B3 - as B2, but imposing a minimum smoothing and softening length, to avoid artificial clumping (as in Bate & Burkert 1997);
- B4 - as B2, with the X-SPH method.
The high softening value assumed in the B1 test clearly demonstrates how a wrong choice of the softening length can result in catastrophic errors. The evolution of the system was clearly different from the grid solution used by Bate & Burkert (1997) as a proxy of the exact solution of the problem. Because of the excessive softening, some clumps proved to be stable against collapse and a rotating structure formed, resembling a barred spiral galaxy. More realistic choices for the softening may have given better results, but the opposite problem (artificial clumping of unresolved structures) might have arisen for too small a softening length. Indeed, case B2, in which the adaptive algorithm was adopted, gave much better results, although disorderly clumped sub-structures formed.
Imposing a minimum softening length (equal to h, case B3) mitigated the problem (note
however that the evolution of the system seemed to be somewhat faster than
the grid solution), yet this prescription cannot be generalised. A more
physically sound solution may be the introduction of a
pressure floor in unresolved regions, i.e. where the Jeans
mass of the gas is not resolved by the SPH technique. This issue
will be discussed in the forthcoming companion paper (Merlin et al.
2010, in preparation).
Finally, the X-SPH test B4 yielded results that were essentially indistinguishable from those of case B2, apart from a slightly less disordered density field in the regions surrounding the central collapsed structure.
Figure 35 plots the temporal evolution of the maximum value of the density. This plot can be compared with Fig. 5a in Bate & Burkert (1997) and Fig. 12 in Springel (2005). The results for all tests are very similar and agree quite well with their SPH tests of comparable resolution (with the density plateau reached at t = 1.14), except for B1, which clearly evolves faster.
Figure 30: Evolution of the density field in the central region for test B1. Plotted is the central region (see text for details).
Figure 31: The same as in Fig. 30, but for test B2.
Figure 32: The same as in Fig. 30, but for test B3.
Figure 33: The same as in Fig. 30, but for test B4.
Figure 34: Final density field at t=1.29 (in units of the free-fall time) in the central region. Left to right, top to bottom: tests B1, B2, B3, B4. See text for details.
Figure 35: Maximum density in the isothermal collapse tests. B1: red dashed line; B2: green solid line; B3: blue thick dots; B4: black dot-dashed line. See text for details.
Figure 36: Initial (blue) and final (red) smoothing length h of all particles in the four isothermal collapse tests (left to right, top to bottom: B1, B2, B3, B4). The softening length equals h in tests B2, B3 and B4 (see text for details).
For reference, Fig. 36 shows the initial and final values of both the softening length and h (which are equal for B2, B3 and B4) as functions of radius in the four runs.
Figure 37: Comparison between the final density fields for a single-CPU run (left) and a 16-parallel-CPU run (right) of test B2. See the text for details.
Figure 38: Comparison between the final h-radius relation for a single-CPU run (left) and a 16-parallel-CPU run (right) of test B2. See the text for details.
Figure 39: Comparison between the final density fields for a single-CPU run (left) and a 16-parallel-CPU run (right) of test B2, with the tree opening angle reduced to 0.1.
Figure 40: Comparison between the final h-radius relation for a single-CPU run (left) and a 16-parallel-CPU run (right) of the B2 test, with the tree opening angle reduced to 0.1.
5.9.1 Parallel run
Finally, we re-computed the B2 test with 16 parallel CPUs. Figure 37 shows the comparison between the final density field for this run and for the original one on a single CPU. Although the agreement is good and no sign of a multi-processor subdivision of the domain is present, some differences in local morphological features are present; these are more evident in Fig. 38, which shows the comparison of the smoothing length values. While the gross features and the minimum and maximum values are very similar, differences in the local clumping of particles are clearly present.
We ascribe this flaw to the parallel oct-tree scheme. The global tree constructed by N>1 CPUs is slightly different from the one built by a single processor on the same spatial domain: the regions in which two or more different CPUs interact are described by a different cell architecture depending on N (the reason being that the cells always have to be cubic; the problem should not occur if a binary tree, in which cells have adaptive shapes, were used). This causes a slightly different approximation of the gravitational force through the cell-opening criterion, giving in turn a slightly different dynamical evolution.
To prove the validity of this speculation, we ran the same tests again reducing the opening angle to 0.1 instead of the standard 0.8. In this way, the approximation introduced by the adoption of the tree structure should become negligible (at the expense of a much longer computation time). Indeed, looking at Figs. 39 and 40, we notice an improvement in that the density fields are more similar, although still not exactly identical. An ideal run with a vanishing opening angle (i.e., exact particle-particle interactions) should give almost identical distributions of particles.
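For clarity, a minimal sketch of the standard Barnes & Hut (1986) geometric opening criterion being discussed (an illustration of the idea, not EVOL's routine):

```python
import numpy as np

def must_open(cell_size, cell_com, particle_pos, theta=0.8):
    """A cell of side 'cell_size' is opened (its children are examined) when it
    subtends an angle larger than theta as seen from the particle; otherwise
    its aggregate (multipole) contribution is used directly."""
    d = np.linalg.norm(np.asarray(particle_pos) - np.asarray(cell_com))
    return cell_size > theta * d   # smaller theta -> more cells opened, higher accuracy
```

With theta approaching zero every cell is opened down to individual particles, recovering the exact particle-particle sum, which is why the runs with the opening angle reduced to 0.1 agree much more closely across different CPU counts.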
We must point out that this flaw may be of non-negligible importance when dealing with phenomena that directly depend on the local density field, like the star formation and feedback processes. The same simulation run on a different number of CPUs may give slightly different results because of this issue.
Figure 41: Evolution of the density field in the z=0 plane for the collision test C1. Left to right, top to bottom: t=0.05, 0.15, 0.2, 0.25, 0.3, 0.5. Superposed is the velocity field.
Figure 42: Top: energy trends in the collision test C1; black dots: total energy, blue dot-dashed line: potential, red dashed line: kinetic, green solid line: thermal. Bottom: magnification of the total energy conservation E(t)/E(t0).
5.10 Collision of two gas spheres
The collision between two gaseous Plummer spheres is a crucial test of energy conservation. The tests were run in 3-D, with adaptive softening lengths, variable α-viscosity, and artificial thermal conduction.
Each of the two spheres (with unit mass, unit radius, and negligible initial internal energy) was set up by randomly selecting 10 000 particles in such a way that their initial radial density profile resembles the Plummer profile. Their centres lie on the x axis.
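A minimal sketch of Plummer-profile sampling by inverting the cumulative mass fraction u = M(<r)/M_tot, which gives r = a (u^(-2/3) - 1)^(-1/2); scale radius, mass and particle number below are illustrative values.

```python
import numpy as np

def plummer_sphere(n=10000, a=1.0, total_mass=1.0, seed=0):
    """Equal-mass particle realisation of a Plummer density profile."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    r = a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)      # inverse of the cumulative mass
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1)[:, None]  # isotropic directions
    masses = np.full(n, total_mass / n)
    return r[:, None] * dirs, masses
```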
In a first run (C1) we followed a head-on collision, assigning the spheres a relative velocity of 1.5 in the x direction. In a second run (C2) we added a shear component, giving the spheres a relative velocity in the y direction as well.
According to Hernquist (1993), early SPH simulations of the violent head-on collision of two polytropic stars showed poor energy conservation. Modern formulations of SPH (see e.g. Rosswog & Price 2007) can handle the problem with much greater accuracy, strongly reducing the error.
Figures 41 and 43 plot the density and velocity fields at different times, on a slice taken at z=0. In our tests the energy conservation is good (see Figs. 42 and 44), with small maximum errors both in the head-on collision and when the shear component is added.
5.11 Two-component evolution
As a final test, we studied the evolution of a two-component fluid in order to investigate the behaviour of the adaptive smoothing and softening length algorithm in the presence of more than one material. We set up two spheres of the same radius, R=10^3 pc, and the same mass, using for each sphere the particle distribution adopted in the Evrard adiabatic collapse test. One of the two systems was considered to be made of collisionless bodies with negligible pressure, thus mimicking the behaviour of a Dark Matter halo; the other one was instead considered to be made of gas, with an initial temperature of 1000 K. The system was then left free to evolve under the action of self-gravity and hydrodynamical interactions in four different runs with the following settings:
- TC1 - constant softening lengths; null initial velocities;
- TC2 - adaptive softening lengths, null initial velocities;
- TC3 - as TC2, with an initial solid-body rotation of both components (see the sketch below);
- TC4 - as TC3, but with counter-rotating initial tangential velocities (same magnitude).
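A minimal sketch of the solid-body rotation imposed in TC3/TC4; the rotation axis (taken here as z) and the angular velocity are assumptions for illustration.

```python
import numpy as np

def add_solid_body_rotation(pos, omega_z):
    """Tangential velocity v = Omega x r about the z axis for every particle;
    TC4 simply flips the sign of omega_z for one of the two components."""
    vel = np.zeros_like(pos)
    vel[:, 0] = -omega_z * pos[:, 1]     # v_x = -Omega * y
    vel[:, 1] = omega_z * pos[:, 0]      # v_y = +Omega * x
    return vel
```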
Figure 43: Evolution of the density field in the z=0 plane for the collision test C2. Left to right, top to bottom: t=0.03, 0.38, 0.77, 1.16, 1.55, 1.94, 2.72, 3.5, 4.29. Superposed is the velocity field.
Figure 44: Top: energy trends in the collision test C2; black dots: total energy, blue dot-dashed line: potential, red dashed line: kinetic, green solid line: thermal. Bottom: magnification of the total energy conservation E(t)/E(t0).
Figure 45: Two-component test, energy conservation E(t)/E(t0) for the four runs: dashed red line, TC1; blue dot-dashed line, TC2; green dots, TC3; black solid line, TC4.
The free expansion of a system of particles is by itself a demanding
test, and a loss of total energy is a common flaw for Lagrangian codes.
Furthermore, the rotation imposed in the last two runs makes the
problem more complicated. In our tests, a further complication was
given by the dynamical interaction within the central regions of the
system, where the hydrodynamical pressure of the gas struggled against
self-gravity and the (stronger) gravitational action of the collisionless
component. Moreover, when the adaptive softening scheme was used, gas particles had to cope with a large variation of their interaction sphere, due to the presence of Dark Matter particles in the central regions, and to their absence in the outskirts, where in addition gas particles are far away from one another.
Figure 45 plots the energy trends for all four tests. The conservation is very good in all cases, with a small maximum error; quite surprisingly, the worst performance is given by the constant-softening scheme, while the corresponding test with adaptive softening performs better.
Figure 46 shows the evolution of the Dark Matter component in the test TC3, projected onto the xy and xz planes (the TC4 run gives almost indistinguishable plots). The early formation of a disc due to the initial rotational velocity is clear, followed by an expansion which leaves a dense core with a loosely disc-like shape. On the other hand, the gas component expands with an almost radial motion (Fig. 47), but in the central regions the interaction with the collisionless component plays an important role. Indeed, looking at Fig. 48, where a magnification of the velocity field in the central region at late times is shown for the tests TC3 and TC4, the difference between the two cases is clear: in the counter-rotation run the gas in the innermost zone has inverted its sense of motion and co-rotates with the Dark Matter, creating a complex pattern of velocities.
The conservation of angular momentum in the tests TC3 and TC4 is shown in Fig. 49. The largest relative error on the total angular momentum occurs in the counter-rotation test; there, however, the absolute violation is small anyway because of the very low total initial momentum. In all the other cases the conservation is always very satisfactory.
Figure 46: Two-component test, case TC3. Dark Matter particles projected onto the xy (top panels) and xz (bottom panels) planes: left to right, top to bottom, t=0.5, 1.0, 1.5, 2.0, 2.5 Gyr.
Figure 47: Two-component test, case TC3. Gas density and velocity fields in a slice taken at z=0: left to right, top to bottom, t=0.5, 1.0, 1.5, 2.0, 2.5 Gyr.
Figure 48: Two-component test: comparison between the central velocity fields in the cases TC3 (left) and TC4 (right), in a slice taken at z=0, at t=2.5 Gyr.
Figure 49: Two-component test, angular momentum conservation. Top panels: test TC3; bottom panels: test TC4. Shown are the total (solid lines), gas (dashed lines) and Dark Matter (dash-dotted lines) angular momenta as a function of time, L(t), and the corresponding conservation L(t)/L(t0) (same meaning of the symbols). Note that in the TC3 test the gas and Dark Matter lines are superposed because they have identical angular momentum.
6 Conclusions
We have presented the basic features of EVOL, the new release of the Padova parallel N-body code for cosmological simulations of galaxy formation and evolution. In this first paper, the standard gravitational and hydrodynamical algorithms, as well as the parallel architecture and the data structures of the code, were extensively reviewed and discussed. EVOL includes some interesting options such as adaptive softening lengths with self-consistent extra terms to avoid large errors in energy conservation, ∇h correcting terms in the SPH formalism, a variable α-viscosity formulation, and artificial thermal conduction terms to smooth out pressure at contact discontinuities.
We also performed and presented an extended series of standard hydrodynamical tests to check the ability of the code to handle potentially difficult situations. The results are encouraging. Almost all tests gave results in nice agreement with theoretical expectations and with previous calculations made with similar codes, sometimes even showing better performance. In particular, we showed how the inclusion of an artificial thermal conduction term, as suggested by Price (2008), significantly improves the modelling of demanding problems like the Kelvin-Helmholtz instability or the Sedov-Taylor point-like explosion. Furthermore, the adoption of a variable softening length algorithm allows for a higher degree of adaptivity without causing appreciable losses in terms of precision and energy conservation. While some typical flaws of SPH codes are still present (e.g., problems in the conservation of vorticity because of spurious shear viscosity), these new features clearly improve the method and help to solve well-known drawbacks of the SPH algorithm.
It must be pointed out, however, that the new features must be extensively tested, especially in situations of interest for cosmological simulations (i.e., gas dynamics in the presence of a Dark Matter and/or stellar component). Here we have restricted our analysis to standard hydrodynamical problems. Other and new tests will be presented in the companion paper by Merlin et al. (2010, in preparation), dedicated to the inclusion in EVOL of radiative cooling, chemical evolution, star formation, energy feedback, and other non-standard algorithms.
Acknowledgements. The authors thank D. J. Price, S. Borgani, P. A. Thomas, L. Secco, S. Pasetto and G. Tormen for useful discussions and for their help. All simulations were run using the CINECA facilities and/or the MONSTER cluster at the Department of Astronomy (Padova). The pictures have been produced using MATLAB or SPLASH by D. Price (Price 2007).
Appendix A: N-dimensional kernel
The general form for the N-dimensional cubic spline kernel function (N = 1, 2, 3) is
$$W(r,h) = \frac{\sigma_N}{h^N}\begin{cases}1-\frac{3}{2}q^2+\frac{3}{4}q^3 & 0\le q\le 1,\\[4pt] \frac{1}{4}(2-q)^3 & 1<q\le 2,\\[4pt] 0 & q>2,\end{cases}\qquad q=\frac{r}{h},\eqno{\rm (A.1)}$$
where the normalisation constant is
$$\sigma_1=\frac{2}{3},\qquad \sigma_2=\frac{10}{7\pi},\qquad \sigma_3=\frac{1}{\pi}.\eqno{\rm (A.2)}$$
Derivatives are straightforwardly obtained from this expression. The 1-D and 2-D kernels have been used where necessary in some of the hydrodynamical tests presented in Sect. 5.
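A minimal implementation of this kernel, under the assumption that Eqs. (A.1)-(A.2) correspond to the standard Monaghan & Lattanzio (1985) M4 spline with support 2h:

```python
import numpy as np

def cubic_spline_kernel(r, h, ndim=3):
    """Cubic spline kernel W(r, h) in 1, 2 or 3 dimensions."""
    sigma = {1: 2.0 / 3.0, 2: 10.0 / (7.0 * np.pi), 3: 1.0 / np.pi}[ndim]
    q = np.asarray(r, dtype=float) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w / h**ndim
```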
Appendix B: Summary of equations of motion and conservation
Gravitational acceleration: Eq. (B.1).
Hydrodynamical acceleration (for SPH particles only): Eq. (B.2).
Specific internal energy evolution (for SPH particles only): Eq. (B.3).
The auxiliary quantities entering these expressions are defined in Eqs. (B.4)-(B.7).
The smoothing kernel W is given by Eq. (4), whereas the softened gravitational potential is the kernel-softened form discussed in Sect. 2.1. The standard equation of state is that of an ideal gas,
$$P = (\gamma - 1)\,\rho\,u,\eqno{\rm (B.8)}$$
where γ is the adiabatic index, ρ the density and u the specific internal energy.
References
- Agertz, O., Moore, B., Stadel, J., et al. 2007, MNRAS, 380, 963
- Alimi, J.-M., Serna, A., Pastor, C., & Bernabeu, G. 2003, J. Comp. Phys., 192, 157
- Athanassoula, E., Fady, E., Lambert, J. C., & Bosma, A. 2000, MNRAS, 314, 475
- Balsara, D. S. 1995, J. Comp. Phys., 121, 357
- Barnes, J., & Hut, P. 1986, Nature, 324, 446
- Bate, M. R., & Burkert, A. 1997, MNRAS, 288, 1060
- Benz, W. 1990, in Numerical Modelling of Nonlinear Stellar Pulsations: Problems and Prospects, ed. J. R. Buchler, 269
- Boss, A. P., & Bodenheimer, P. 1979, ApJ, 234, 289
- Brookshaw, L. 1986, PASA, 6, 461
- Burkert, A., & Bodenheimer, P. 1993, MNRAS, 264, 798
- Carraro, G., Lia, C., & Chiosi, C. 1998, MNRAS, 297, 1021
- Couchman, H. M. P., Thomas, P. A., & Pearce, F. R. 1995, ApJ, 452, 797
- Dehnen, W. 2001, MNRAS, 324, 273
- Dolag, K., Vazza, F., Brunetti, G., & Tormen, G. 2005, MNRAS, 364, 753
- Dubinski, J. 1996, New Astron., 1, 133
- Einfeldt, B., Roe, P. L., Munz, C. D., & Sjogreen, B. 1991, J. Comp. Phys., 92, 273
- Evrard, A. E. 1988, MNRAS, 235, 911
- Ewald, P. P. 1921, Ann. Phys., 369, 253
- Fryxell, B., Olson, K., Ricker, P., et al. 2000, ApJS, 131, 273
- Gingold, R. A., & Monaghan, J. J. 1977, MNRAS, 181, 375
- Gresho, P. M., & Chan, S. T. 1990, Int. J. Numerical Methods Fluids, 11, 621
- Harfst, S., Theis, C., & Hensler, G. 2006, A&A, 449, 509
- Hernquist, L. 1993, ApJ, 404, 717
- Hernquist, L., & Katz, N. 1989, ApJS, 70, 419
- Hernquist, L., Bouchet, F. R., & Suto, Y. 1991, ApJS, 75, 231
- Hu, X. Y., & Adams, N. A. 2005, J. Comp. Phys., 213, 844
- Lia, C., Portinari, L., & Carraro, G. 2002, MNRAS, 330, 821
- Liska, W., & Wendroff, B. 2003, SIAM J. Scientific Computing, 25, 995
- Lucy, L. B. 1977, AJ, 82, 1013
- McMillan, S. L. W., & Aarseth, S. J. 1993, ApJ, 414, 200
- Merlin, E., & Chiosi, C. 2007, A&A, 473, 733
- Merritt, D. 1996, AJ, 111, 2462
- Monaghan, J. J. 1988, Computer Phys. Comm., 48, 89
- Monaghan, J. J. 1992, ARA&A, 30, 543
- Monaghan, J. J. 1997, J. Comp. Phys., 136, 298
- Monaghan, J. J. 2000, J. Comp. Phys., 159, 290
- Monaghan, J. J. 2002, MNRAS, 335, 843
- Monaghan, J. J. 2006, MNRAS, 365, 199
- Monaghan, J. J., & Gingold, R. A. 1983, J. Comp. Phys., 52, 374
- Monaghan, J. J., & Lattanzio, J. C. 1985, A&A, 149, 135
- Morris, J., & Monaghan, J. J. 1997, J. Comp. Phys., 136, 41
- Nelson, R. P., & Papaloizou, J. C. B. 1994, MNRAS, 270, 1
- Noh, W. F. 1987, J. Comp. Phys., 72, 78
- O'Shea, B. W., Bryan, G., Bordner, J., et al. 2005, in Adaptive Mesh Refinement - Theory and Applications, ed. T. Plewa, T. Linde, & G. V. Weirs (Springer) [arXiv:astro-ph/0403044]
- Price, D. J. 2007, PASA, 24, 159
- Price, D. J. 2008, J. Comp. Phys., 227, 10040
- Price, D. J., & Monaghan, J. J. 2005, MNRAS, 364, 384
- Price, D. J., & Monaghan, J. J. 2007, MNRAS, 374, 1347
- Romeo, A. B. 1998, A&A, 335, 922
- Rosswog, S. 2009, New Astron. Rev., 53, 78
- Rosswog, S., & Price, D. 2007, MNRAS, 379, 915
- Saitoh, T. R., & Makino, J. 2009, ApJ, 697, L99
- Salmon, J. K., & Warren, M. S. 1994, J. Comp. Phys., 111, 136
- Schuessler, I., & Schmitt, D. 1981, A&A, 97, 373
- Serna, A., Alimi, J.-M., & Chieze, J.-P. 1996, ApJ, 461, 884
- Shirokov, A. 2007, unpublished [arXiv:0711.2989]
- Sod, G. A. 1978, J. Comp. Phys., 27, 1
- Springel, V. 2005, MNRAS, 364, 1105
- Springel, V. 2010, MNRAS, 401, 791
- Springel, V., & Hernquist, L. 2002, MNRAS, 333, 649
- Springel, V., Yoshida, N., & White, S. D. M. 2001, New Astron., 6, 79
- Steinmetz, M., & Mueller, E. 1993, A&A, 268, 391
- Thacker, R. J., & Couchman, H. M. P. 2000, ApJ, 545, 728
- Thomas, P. A., & Couchman, H. M. P. 1992, MNRAS, 257, 11
- Wadsley, J. W., Stadel, J., & Quinn, T. 2004, New Astron., 9, 137
- Watkins, S. J., Bhattal, A. S., Francis, N., Turner, J. A., & Whitworth, A. P. 1996, A&AS, 119, 177
- Wetzstein, M., Nelson, A. F., Naab, T., & Burkert, A. 2009, ApJS, 184, 298
- Williams, P. R., Churches, D. K., & Nelson, A. H. 2004, ApJ, 607, 1
- Woodward, P., & Colella, P. 1984, J. Comp. Phys., 54, 115
Footnotes
- The core of the old PD-TSPH code was written during the '90s by C. Chiosi, G. Carraro, C. Lia and C. Dalla Vecchia (Carraro et al. 1998; Lia et al. 2002). Over the years, many researchers added their contribution to the development of the code. In its original release, the PD-TSPH was a basic Tree + SPH code, written in Fortran90, conceptually similar to TREE-SPH by Hernquist & Katz (1989). Schematically, PD-TSPH used an early formulation of SPH (Benz 1990) to solve the equations of motion for the gas component, and the Barnes & Hut (1986) Tree algorithm to compute the gravitational interactions.
- A relativistic formulation of both gravitational and hydrodynamical interactions is possible (for a general summary see e.g. Rosswog 2009), but is generally unessential for problems of cosmological structure formation.
- Throughout the paper, we only use the three-dimensional formulation of kernels. See Appendix A for the 1-D and 2-D forms.
- Recently, Shirokov (2007) pointed out that since particles have unequal mass and hence unequal softening lengths, one should actually compute the pairwise gravitational force and potential by solving a double integral over the particle volumes. Therefore the computation of the interactions with the classic approach is likely not accurate. Given the practical difficulty in evaluating those integrals, they also provide a numerical approximation.
- A clear advantage of using the Lagrangian to derive the equations of motion is that, provided the Lagrangian is appropriately symmetrised, momentum and energy conservation are guaranteed. Note that this derivation closely matches the one described in Sect. 3 to derive the variational formulation of the SPH formalism and the so-called correcting terms.
- Note that in principle W should be defined over the whole space, as in the first SPH calculations by Gingold & Monaghan (1977), where a Gaussian-shaped kernel was adopted. For practical purposes, anyway, its domain is made compact by putting W=0 for distances greater than a fixed multiple of h (the kernel support radius) from the position of the active particle.
- Price & Monaghan (2007) use the mass density in place of the number density to achieve the desired solutions. The formulation in terms of the number density is equivalent and can be advantageous when dealing with particles of different masses. Also note that this is a reformulation of the well-known SPH density summation, see Sect. 3.
- Thus all particles, and not only SPH particles, need to find their neighbours to compute these terms (see Sect. 2.2). While the correcting term described in Sect. 3 is only needed for the SPH algorithm, and therefore only SPH particles concur to compute it (considering only SPH neighbours in turn), the gravitational correcting terms are required for any kind of gravitating particle and must be computed by looping over all neighbouring particles, both SPH and not. The time lost in the neighbour-searching routine for non-gaseous particles can be at least partially balanced by adopting the individual time-stepping scheme (see Sect. 4.2.1). In this case, large softening lengths in low-density regions result in larger individual time-steps for particles belonging to those regions.
- Alternatively, it could be written in a form that is advantageous in case the simulated fluid has boundaries or edges, but that has the disadvantage of not conserving the mass exactly.
- Some authors (e.g. Hu & Adams 2005) proposed a different ``number density'' formulation of SPH, in which the density of particle i is ρi = mi ni, and ni is obtained via Eq. (41). While this can help to resolve sharp discontinuities and to take into account the presence of multi-phase flows, it may also lead to potentially disastrous results if mixed unequal-mass particles are not intended to model density discontinuities but are instead used as mass tracers in a homogeneous field.
- As noted by Schuessler & Schmitt (1981), the gradient of the spline kernel Eq. (4) can lead to unphysical clustering of particles. To prevent this, Monaghan (2000) introduced a small artificial repulsive pressure-like force between particles in close pairs. In EVOL a modified form of the kernel gradient is instead adopted, as suggested by Thomas & Couchman (1992) (Eq. (54), with the usual meaning of the symbols). We point out that this correction, although useful to avoid pairing problems especially when dealing with particles of different mass, implies that the kernel gradient is no longer correctly normalised (potentially violating the conservation properties). As long as the correction is small, this effect can be considered of minor importance.
- It would also be possible to compute the interactions by only considering neighbours within hi. To do so, each particle should return its contribution to the accelerations of its neighbours. But this approach is more complicated with individual timesteps (where one has to compute inactive contributions to active particles) and in a parallel architecture. We therefore decided to adopt the fully symmetric scheme in this release of EVOL.
Copyright ESO 2010