A&A 508, 541-560 (2009)
Volume 508, Number 1, December II 2009
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/200811170
Published online: 01 October 2009
Algorithmic comparisons of decaying, isothermal, supersonic turbulence
S. Kitsionas1 - C. Federrath2,3 - R. S. Klessen1,2 - W. Schmidt4 - D. J. Price5,6 - L. J. Dursi7 - M. Gritschneder8 - S. Walch8,9 - R. Piontek1 - J. Kim10 - A.-K. Jappsen1,7,9 - P. Ciecielag8,11 - M.-M. Mac Low12
1 - Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany
2 - Institut für Theoretische Astrophysik, Universität Heidelberg, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
3 - Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
4 - Lehrstuhl für Astronomie, Institut für Theoretische Physik und Astrophysik, Universität Würzburg, Am Hubland, 97074 Würzburg, Germany
5 - School of Physics, University of Exeter, Stocker Road, EX4 4QL Exeter, UK
6 - CSPA, School of Mathematical Sciences, Monash University, Clayton Vic 3168, Australia
7 - Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8, Canada
8 - Universitäts-Sternwarte München, Scheinerstr. 1, 81679 München, Germany
9 - School of Physics & Astronomy, Cardiff University, 5 The Parade, CF24 3AA Cardiff, UK
10 - Korea Astronomy and Space Science Institute, 61-1 Hwaam-dong, Yuseong-gu, Daejeon, 305-348, Republic of Korea
11 - N. Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw, Poland
12 - American Museum of Natural History, Department of Astrophysics, Central Park West at 79th Street, New York, NY, 10024-5192, USA
Received 16 October 2008 / Accepted 24 September 2009
Abstract
Context. Simulations of astrophysical turbulence have reached such a level of sophistication that quantitative results are now starting to emerge. However, contradictory results have been reported in the literature regarding the performance of the numerical techniques employed for its study and their relevance to the physical systems modelled.
Aims. We aim to characterise the performance of a variety of hydrodynamics codes, including different particle-based and grid-based techniques, in modelling decaying supersonic turbulence. This is the first such large-scale comparison ever conducted.
Methods. We modelled driven, compressible, supersonic, isothermal turbulence at a given rms Mach number, and then let it decay in the absence of gravity, using runs performed with four different grid codes (ENZO, FLASH, TVD, ZEUS) and three different SPH codes (GADGET, PHANTOM, VINE). We additionally analysed two calculations, denoted PHANTOM A and PHANTOM B, using two different implementations of artificial viscosity in PHANTOM.
We analysed the results of our numerical experiments using
volume-averaged quantities like the rms Mach number, volume- and
density-weighted velocity Fourier spectrum functions, and probability
distribution functions of density, velocity, and velocity derivatives.
Results. Our analysis indicates that grid codes tend to be less dissipative than SPH codes, though details of the techniques used can make large differences in both cases. For example, the Morris & Monaghan viscosity implementation for SPH results in less dissipation (PHANTOM B and VINE versus GADGET and PHANTOM A). For grid codes, using a smaller diffusion parameter leads to less dissipation, but results in a larger bottleneck effect (our ENZO versus FLASH runs). As a general result, we find that using a similar number of resolution elements N for each spatial direction means that all codes (both grid-based and particle-based) show encouraging similarity of all statistical quantities for isotropic supersonic turbulence on spatial scales resolved by more than 32 grid cells, while smaller scales are significantly affected by the specific implementation of the algorithm for solving the equations of hydrodynamics. At comparable numerical resolution, the SPH runs were on average about ten times more computationally intensive than the grid runs, although with variations of up to a factor of ten between the different SPH runs and between the different grid runs.
Conclusions. At the resolutions employed here, the ability to
model supersonic to transonic flows is comparable across the various
codes used in this study.
Key words: hydrodynamics - shock waves - methods: numerical - turbulence
1 Introduction
Laboratory and terrestrial fluid dynamics are often described as incompressible flow (e.g., Lesieur 1997); however, astrophysical fluids are usually characterised by highly compressible supersonic turbulent motions (see e.g. Scalo & Elmegreen 2004; Elmegreen & Scalo 2004). For example, the large observed line widths in Galactic and extragalactic molecular clouds and star-forming regions show direct evidence of chaotic velocity fields with magnitudes in excess of the sound speed. This random motion carries enough kinetic energy to counterbalance and sometimes overcompensate for the effects of self-gravity in these clouds (e.g. Blitz et al. 2007; Ballesteros-Paredes et al. 2007). The intricate interplay between supersonic turbulence and self-gravity determines the overall dynamical evolution of these clouds and their observable features, such as their density structure, the star formation rate within them, and their lifetimes (see e.g. McKee & Ostriker 2007; Mac Low & Klessen 2004).
Although turbulence is a universal phenomenon, it remains one of the least understood in nature. Turbulence arises from the nonlinear terms in the Navier-Stokes equation that governs the dynamical behaviour of gases and fluids (Lesieur 1997; Frisch 1995). A self-consistent mathematical formulation does not exist. Thus, analytical research mostly focuses on finding appropriate closure equations that capture the bulk behaviour of the system.
As a first approach, turbulence is characterised by two spatial scales that are connected by a self-similar cascade of kinetic energy that occurs over the so-called inertial range. Energy is injected into the system on some large scale L and dissipated on small scales $\ell_\nu$ that are comparable to the viscous length. For incompressible turbulence, Kolmogorov (1941) described a simple heuristic model based on dimensional analysis that captures the basic behaviour of the flow surprisingly well. He assumed that turbulence driven on a large scale L forms eddies on that scale. These eddies interact to form slightly smaller eddies, transferring some of their energy to the smaller scale. The smaller eddies in turn form even smaller ones, and so on, until energy has cascaded all the way down to the dissipation scale $\ell_\nu$. In order to maintain a steady state, equal amounts of energy must be transferred from each scale in the cascade to the next, and eventually dissipated, at a rate $\dot{E} \simeq \eta\, v_L^3 / L$, where $v_L$ is the typical velocity on scale L and $\eta$ is a constant determined empirically. Kolmogorov (1941) assumes this rate is constant throughout the scales, leading to $v_\ell \propto \ell^{1/3}$, or equivalently $v_k \propto k^{-1/3}$ for wavenumbers $k \propto \ell^{-1}$. The kinetic energy in the wavenumber interval $[k,~k+{\rm d}k]$ is $\sim v_k^2\,{\rm d}k/k$, and consequently the energy spectrum function $E(k) \propto v_k^2/k \propto k^{-5/3}$. This describes the self-similar cascade of turbulent kinetic energy. Most of this energy resides at the top of this cascade near the driving scale, and the spectrum drops off steeply below the dissipation scale $\ell_\nu$. Because of the apparently local nature of the cascade in wavenumber space, the viscosity only determines the behaviour of the energy distribution at the bottom of the cascade below $\ell_\nu$, while the driving only determines the behaviour near the top of the cascade on and above L.
Supersonic flows in highly compressible gas create strong density perturbations. Early attempts to understand turbulence in the interstellar medium (von Weizsäcker 1943; Chandrasekhar 1949; von Weizsäcker 1951) were based on insights drawn from incompressible turbulence. An attempt to analytically derive the density spectrum and resulting gravitational collapse criterion was first made by Chandrasekhar (1951a,b). This work was followed up by several authors, culminating in the work by Sasao (1973) on density fluctuations in self-gravitating media. Larson (1981) qualitatively applied the basic idea of density fluctuations driven by supersonic turbulence to the problem of star formation. Bonazzola et al. (1992) used a re-normalization group technique to examine how the slope of the turbulent velocity spectrum could influence gravitational collapse. This approach was combined with low-resolution numerical models to derive an effective adiabatic index for subsonic compressible turbulence by Panis & Pérault (1998).
In supersonic turbulence, shock waves offer additional possibilities for dissipation. They can transfer energy between widely separated scales, removing the local nature of the turbulent cascade typical of incompressible turbulence. The spectrum may shift only slightly, however, as the power spectrum (Fourier spectrum) of a step function representative of a perfect shock wave is k-2. Boldyrev (2002) has proposed a theory of velocity structure function scaling based on the work of She & Leveque (1994) using the assumption that dissipation in supersonic turbulence primarily occurs in sheet-like shocks, rather than linear filaments at the centres of vortex tubes (see also Boldyrev et al. 2002b,a). Transport properties of supersonic turbulent flows in the astrophysical context have been discussed by de Avillez & Mac Low (2002) and Klessen & Lin (2003), and the fractal dimension of turbulent media by Federrath et al. (2009b).
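The $k^{-2}$ scaling quoted above for the spectrum of a step function is easy to verify numerically. The following sketch (our illustration, not part of the original analysis) fits the spectral slope of a periodic 1-D step:

```python
import numpy as np

# A 1-D "shock": a periodic step function of zero mean.
n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)
step = np.where(x < 0.5, 1.0, -1.0)

# Power spectrum |FFT|^2; for a step, the odd harmonics carry
# power proportional to k^-2 (even harmonics vanish by symmetry).
power = np.abs(np.fft.rfft(step))**2
k = np.arange(power.size)
odd = k[1:50:2]
slope = np.polyfit(np.log(odd), np.log(power[odd]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")  # close to -2
```

The fitted slope deviates from -2 only through the discrete-sampling correction at high k, which is negligible over the fitted range.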
As satisfying analytic models are rare, especially when dealing with compressible and supersonic turbulent flows, special attention is drawn to numerical approaches. A wide range of methods are used to study turbulence, ranging from simulating statistical processes such as random walks (e.g. Metzler & Klafter 2000), remapping models, or certain Hamiltonian systems (Isichenko 1992), to hydrodynamic large-eddy simulations (LES). In LES only the largest spatial scales are resolved directly using a hydrodynamic integrator. For the turbulent dynamics on smaller scales, a so-called subgrid scale (SGS) model is utilised. Among astrophysicists, the most often used SGS model is numerical dissipation, i.e. performing Implicit LES. It is not possible to optimise the use of closure models for astrophysical turbulence through comparisons with laboratory experiments. Therefore, the representation of the SGS behaviour provided by numerical dissipation must be sufficient, and indeed provides a reasonably good approximation (Benzi et al. 2008).
In the current study we focus on comparing different Implicit LES
schemes. Our goal is to assess the applicability of different numerical
schemes to the modelling of supersonic turbulent flows, and to compare
their validity and accuracy in the astrophysical context. To keep this
comparison simple, we focus on purely hydrodynamic turbulence in
isothermal gaseous media in regions with periodic boundaries, and study
the decay of fully developed turbulence. We follow the dissipation of
kinetic energy due to the numerical viscosity intrinsic to any
numerical scheme, and characterise the turbulent velocity field using
volume- and density-weighted velocity power spectra,
and probability distribution functions of density, velocity and
velocity derivatives. We remind the reader that in supersonic turbulence, energy is not only dissipated below the dissipation scale by the action of artificial viscosity on the smallest-scale eddies, but also in shocks. In most of the codes employed here, artificial viscosity is also necessary for the modelling of shocks. The main aim of our comparisons is to characterise the role of artificial viscosity in dissipating energy below the dissipation scale, rather than its use in the modelling of shocks. A discussion of the shock-capturing ability of the codes used in this study is provided in Sect. 2 (for a comprehensive comparison of this kind, see also Tasker et al. 2008). One of the fundamental questions we want to address is at which numerical resolution different numerical schemes are capable of modelling supersonic turbulence adequately.
This is the first such comparative study; there has been no coherent
comparison of the various hydrodynamic codes used in the literature
for the study of supersonic turbulence. In spite of the fact that results from
different codes appear to contradict each other and lead to different
interpretations of the role of turbulence in astrophysics - e.g. the
hydrodynamic simulations of Ballesteros-Paredes et al. (2006) and Padoan et al. (2007) in which different power-law slopes are obtained from the
velocity power spectra leading to different interpretations of the role of
turbulence on cloud fragmentation and the resulting core mass function - it
has not been properly checked whether at least some of these differences are
due to differences in the numerical schemes employed. We perform and analyse here a
first set of low-resolution calculations aiming at extending our
investigations in the future to higher resolution simulations, achieved either
directly, or by using adaptive resolution techniques like Adaptive Mesh Refinement (Schmidt et al. 2009; Kritsuk et al. 2006), or Particle Splitting (Kitsionas & Whitworth 2002), or by using Subgrid Scale Models (Schmidt et al. 2006c,b),
and/or combinations of the above. It should be emphasised that the typical number of resolution elements used here for the SPH calculations is quite large (number of particles N = 215^3) compared to the typical number of particles used in existing studies of supersonic turbulence and cloud fragmentation in the literature (e.g. Ballesteros-Paredes et al. 2006). On the other hand, the number of resolution elements used for the grid codes presented here is rather small (number of grid cells N = 256^3) compared to the current state-of-the-art resolution for such studies (e.g. Padoan et al. 2007; Schmidt et al. 2009; Federrath et al. 2009a; Lemaster & Stone 2009; Kritsuk et al. 2007).
The structure of our current study is as follows: in Sect. 2, we describe the setup of the experiments conducted as well as list the most important features of the codes used. In Sect. 3, we review the statistical measures used in this paper for the analysis of supersonic turbulence. In Sect. 4, we present the initial conditions employed for the decay simulations, while in Sect. 5 we discuss the results of the decay experiments and the comparison of the performance of the various codes. We summarise the computational efficiency of the codes and runs in Sect. 6, and present our conclusions in Sect. 7.
2 Experimental setup and numerical codes
Our aim is to study the decay of supersonic hydrodynamic turbulence using different grid- and particle-based codes.
The turbulent gas distribution is created with GADGET (Springel et al. 2001). We start with a cubic box. Inside this box, 215^3 = 9 938 375 particles were distributed homogeneously, representing a static, uniform, isothermal gas of fixed temperature (with the corresponding isothermal sound speed) and total mass. We impose a turbulent velocity field within our box using the driving scheme of Mac Low (1999), as implemented for GADGET by Jappsen et al. (2005). Turbulence is driven on large scales (at wavenumbers between k = 1 and k = 2), aiming at a prescribed rms Mach number. We drive turbulence for a few code unit times. The code unit for time is arbitrarily chosen to be equal to the free-fall time $t_{\rm ff} = \sqrt{3\pi/(32\,G\rho_0)}$ of the initial homogeneous and static gas distribution with density $\rho_0$.
This does not imply that the gas in our box is or will become self-gravitating. A useful time unit for the comparison of non-self-gravitating turbulent flows is the turbulent crossing time $t_{\rm c}$, defined as the time it takes a typical turbulent fluctuation to cross half of the computational domain, i.e. $t_{\rm c} = (L/2)/v_{\rm rms}$ for a box of side L (defined equivalently to Federrath et al. 2009b; Kritsuk et al. 2007).
In Sect. 4.1, we discuss for how long the turbulence needs to be driven to reach a statistical steady state. After driving has finished, the density and the velocity of the GADGET-particle distributions are interpolated onto a grid with 2563 cells. We use this grid as the initial condition for following the decay with the grid codes. The particle distribution at the end of the driving phase provides the initial conditions to the SPH codes.
We follow the decay of turbulence for about six turbulent crossing times $t_{\rm c}$. The self-gravity of the gas is kept switched off at all times, and the influence of magnetic fields is neglected, i.e. we follow the hydrodynamic decay only. An isothermal equation of state is used. Periodic boundary conditions are adopted. Snapshots were taken at 0.0, 0.06, 0.31, 0.62, 3.1, and at the end of the decay (in units of $t_{\rm c}$).
For the SPH codes, the particle distribution is interpolated onto a
grid at each snapshot. The grid codes naturally provide their density
and velocity fields on grids. From these grids, we then calculate
spatially averaged quantities like the rms Mach number, velocity power
spectra, and probability distribution functions of several quantities
including the density, the velocity, its derivatives, and combinations
of density and velocity.
In this paper, we include turbulence decay experiments using the SPH codes GADGET, PHANTOM (runs: A, B), and VINE, as well as the grid codes ENZO, FLASH, TVD, and ZEUS. In the following paragraphs, the general features of the codes used in this work are listed briefly. For more details on each of the codes, please refer to the references given below. All codes were run in parallel. The parallelisation architectures that the codes were run on, the total number of CPUs used, and the number of CPU hours consumed are listed in Sect. 7 for each of the runs studied here.
Codes that are used to perform simulations of supersonic turbulence are chosen for their performance in the highly compressible regime typical of astrophysical flows. Thus, codes with the ability to capture accurately the sharp discontinuities and high density ratios that result in supersonic turbulence are preferred for this study.
Grid-based codes for supersonic flows are often based on finite volume Riemann-solvers using the Godunov scheme (see e.g. Toro 1997). By their conservative nature, these codes maintain correct shock speeds, and, because of the Riemann-solver approach in calculating fluxes, they maintain very sharp discontinuities across a shock (typically within a few zones). These methods are often implemented in a dimensionally split way (Strang 1968).
A very common such method used for astrophysical flows is the Piecewise Parabolic Method (PPM), described in Colella & Woodward (1984), which is formally third-order accurate in space for smooth flows but switches to linear order near step-function-like shocks or contact discontinuities in order to keep such features sharp. Although not typically crucial, a small amount of artificial viscosity is implemented to suppress oscillations behind shocks. Both ENZO (Norman & Bryan 1999; O'Shea et al. 2004) and FLASH (Dubey et al. 2008; Fryxell et al. 2000), used in this paper, are of this type. ENZO was used here in version 1.0.1, and FLASH in version 2.5. FLASH in particular has been extensively tested against laboratory experiments (Calder et al. 2002) and other codes (Heitmann et al. 2005; Dimonte et al. 2004). For the FLASH run of this work, a PPM diffusion parameter of K=0.1 (Colella & Woodward 1984) was used, whereas for the ENZO run the PPM diffusion parameter was set to K=0.0. This provides an additional test of the effects of artificial shock diffusion on the results of grid codes using the PPM. The effects of varying the PPM diffusion parameter have been investigated before: setting K=0.0 provides less dissipation, but produces stronger post-shock oscillations and a more pronounced bottleneck effect (Federrath et al. 2009a; Kritsuk et al. 2007).
Other methods that attempt to maintain flatness of the solution behind shocks include Total Variation Diminishing (TVD) methods, which impose restrictions on the reconstruction of flow variables to ensure that the total variation of variables strictly diminishes over time. The TVD code used here employs an approximate (second-order-accurate), Roe-type (upwind) Riemann solver and TVD interpolation to maintain sharp shocks and smooth flow behind the shocks. Recipes for building the isothermal MHD code based on the TVD scheme are presented in Kim et al. (1999). For the turbulence comparisons of this project, we have used an isothermal hydrodynamic version of the code.
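The TVD principle can be illustrated with a generic minmod slope limiter (our sketch of the general idea, not the actual Roe-type scheme of Kim et al. 1999): limited slopes revert to zero at discontinuities, so the reconstruction creates no new extrema.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """TVD-limited cell slopes on a periodic 1-D grid."""
    left = u - np.roll(u, 1)     # backward differences
    right = np.roll(u, -1) - u   # forward differences
    return minmod(left, right)

# Smooth (linear) data keeps its slope in the interior; cells touching a
# discontinuity (including the periodic wrap) are clipped to zero slope.
ramp = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
jump = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(limited_slopes(ramp))  # [0, 1, 1, 1, 1, 0]
print(limited_slopes(jump))  # all zeros near the jump
```

Clipping the slope to zero at extrema is exactly what guarantees that the total variation of the reconstructed field does not grow.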
The ZEUS code (Stone & Norman 1992b,a) is a second-order accurate code using the van Leer (1977) monotonic advection algorithm. It resolves shocks using artificial viscosity, and does not include explicit techniques for keeping shocks sharp. A staggered mesh approach is adopted: the velocity is stored at cell interfaces, and density and energy at cell centres. This version of ZEUS is however different from the official version of ZEUS-MP, which was employed for instance by Vernaleo & Reynolds (2006) and Hayes et al. (2006), in that the version used here was parallelised by Piontek.
An entirely different method is employed in Smoothed Particle Hydrodynamics (SPH) codes (e.g. Lucy 1977; Gingold & Monaghan 1977; Monaghan 1988, 1992, 2005). SPH codes do not involve a grid at all, but track fixed-mass packets of fluid in a Lagrangian sense. While this approach to fluid dynamics requires the use of an artificial viscosity to maintain shock structure (e.g. Monaghan & Gingold 1983), it has the advantage for highly compressible flows that resolution automatically increases in high-density regions, as the Lagrangian fluid packets follow the mass flow. Three SPH codes, namely GADGET, PHANTOM (runs: A, B), and VINE, are used in this work. More details of these codes are given below.
GADGET is an MPI-parallel tree-SPH and N-body code designed by Springel et al. (2001). Here we used GADGET version 1. The code uses individual and adaptive timesteps for all particles, combined with a scheme for dynamic tree updates. As a time integrator, it uses a variant of the leapfrog integrator involving an explicit predictor step. The smoothing lengths are derived according to the commonly used M4 kernel (Monaghan & Lattanzio 1985; Springel et al. 2001). In the GADGET run performed here, we set the number of neighbours of each SPH particle to a fixed value. The influence of changing the number of SPH neighbours has been investigated by Attwood et al. (2007), who argued that using a fixed number of neighbours, prohibiting any variation in the neighbour number, may reduce numerical dissipation. Commerçon et al. (2008) also studied the influence of changing the number of SPH neighbours in simulations of gravitational fragmentation. They find a weak dependence of their results on the number of neighbours, indicating that increasing it speeds up gravitational fragmentation slightly. All SPH codes used here employed roughly the same number of SPH neighbours (see below), which is also the typical setup for most SPH calculations reported in the literature. Thus, our comparison of SPH runs with each other should not be systematically affected by our choice of SPH neighbours. However, varying the number of neighbours should be investigated in a detailed systematic study of supersonic turbulence in the future.
PHANTOM is a low-memory, highly efficient SPH code written especially for studying non-self-gravitating problems. The code is made very efficient by using a simple neighbour-finding scheme based on a fixed grid and linked lists of particles. In particular, it calculates the smoothing length h through the relation $h = 1.2\,(m/\rho)^{1/3}$. This corresponds to about 58 SPH neighbours in a uniform density distribution, though it is the $h$-$\rho$ relation (iterated to a tolerance of $10^{-4}$) that provides the deciding criterion, not the neighbour number. The code implements the full ``grad-h'' SPH formulation developed by Price & Monaghan (2004) and Price & Monaghan (2007), whereby the smoothing length and density are mutually dependent and iterated self-consistently, resulting in exact conservation of momentum, energy, and entropy in the SPH equations. Shocks are treated using the Monaghan (1997) formulation of artificial viscosity in SPH, as described in Price (2004), though modified slightly in PHANTOM to allow a more efficient calculation. Timestepping is performed using a kick-drift-kick leapfrog integrator. The standard PHANTOM run is labelled PHANTOM A. We conduct a second run with PHANTOM, labelled PHANTOM B, in which dissipation is reduced away from shocks using the viscosity switch proposed by Morris & Monaghan (1997).
VINE (Wetzstein et al. 2009; Nelson et al. 2009; Gritschneder et al. 2009) is an OpenMP-parallel tree-SPH and N-body code. The code scales linearly up to a high number of processors, and is designed for combined usage of generic CPUs and the special-purpose hardware GRAPE. Time integration is performed here with a drift-kick-drift leapfrog integrator, which allows for individual particle timesteps. The smoothing lengths are derived according to the M4 kernel (Monaghan & Lattanzio 1985). In the VINE simulations, we set the number of neighbours of each SPH particle to a fixed value, comparable to that used in the other SPH runs. Shocks are treated with the time-dependent artificial viscosity prescription introduced by Morris & Monaghan (1997) (i.e. similar to the PHANTOM B run).
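The Morris & Monaghan (1997) switch evolves a per-particle viscosity coefficient $\alpha$ that grows in compressions (shocks) and decays back to a floor value in smooth flow. The following is a minimal sketch of that evolution equation with illustrative parameter values and a simple Euler step of our own choosing; it is not the actual PHANTOM or VINE implementation.

```python
def evolve_alpha(alpha, div_v, h, c_s, dt, alpha_min=0.1, sigma=0.1):
    """One Euler step of the time-dependent viscosity parameter
    (Morris & Monaghan 1997 switch; parameter values are illustrative):
        d(alpha)/dt = -(alpha - alpha_min) / tau + S
    with source S = max(-div v, 0), active in compressions (shocks),
    and decay time tau = h / (sigma * c_s), i.e. a few smoothing-length
    sound-crossing times."""
    tau = h / (sigma * c_s)
    source = max(-div_v, 0.0)
    return alpha + dt * (-(alpha - alpha_min) / tau + source)

# In a strong compression alpha rises well above its floor; away from
# shocks it decays back to alpha_min, reducing dissipation in smooth flow.
alpha = 0.1
for _ in range(100):                  # shocked region: div v < 0
    alpha = min(evolve_alpha(alpha, -5.0, 0.01, 1.0, 0.001), 1.0)
print(f"alpha in shock:   {alpha:.2f}")
for _ in range(2000):                 # smooth region: div v = 0
    alpha = evolve_alpha(alpha, 0.0, 0.01, 1.0, 0.001)
print(f"alpha after decay: {alpha:.2f}")
```

This is why the PHANTOM B and VINE runs dissipate less than runs with a constant viscosity coefficient: $\alpha$ stays near its floor everywhere except in shocks.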
3 Methods for the analysis of statistical measures
3.1 SPH interpolation onto a grid
For interpolating the density and the velocity distribution of the SPH particles onto a grid, we use the generic SPH interpolation formula (e.g. Monaghan 1992)

$A(\mathbf{r}) = \sum_j m_j \frac{A_j}{\rho_j}\, W(|\mathbf{r} - \mathbf{r}_j|,\, h_j),$   (1)

where $m_j$, $\rho_j$, $A_j$, and $h_j$ are the mass, density, interpolated quantity, and smoothing length of particle j, and the M4 kernel

$W(r, h) = \frac{1}{\pi h^3} \begin{cases} 1 - \frac{3}{2} q^2 + \frac{3}{4} q^3, & 0 \le q < 1, \\ \frac{1}{4} (2 - q)^3, & 1 \le q < 2, \\ 0, & q \ge 2, \end{cases} \quad q = r/h,$   (2)

is used for the interpolation. For each grid point, the above summation is over a limited number of neighbouring SPH particles due to the compact support of the kernel function, which vanishes for $q \ge 2$.
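The generic SPH interpolation above can be sketched as follows (a naive sum over all particles per evaluation point, for illustration only; the production codes use neighbour lists and per-particle smoothing lengths):

```python
import numpy as np

def w_cubic(q):
    """Standard M4 cubic spline kernel in 3-D, with q = |r - r_j| / h;
    compact support: W = 0 for q >= 2."""
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / np.pi

def sph_to_point(r, pos, mass, rho, A, h):
    """Generic SPH estimate A(r) = sum_j m_j (A_j / rho_j) W(|r - r_j|, h)."""
    q = np.linalg.norm(pos - r, axis=1) / h
    return np.sum(mass * A / rho * w_cubic(q) / h**3)

# Demo: particles on a regular lattice carrying uniform density; the kernel
# estimate at the box centre should recover that density to a few percent.
g = (np.arange(10) + 0.5) / 10.0
pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T   # 1000 particles, unit box
m = np.full(1000, 1e-3)                                 # total mass 1 -> density 1
rho = np.full(1000, 1.0)
dens = sph_to_point(np.array([0.5, 0.5, 0.5]), pos, m, rho, A=rho, h=0.15)
print(f"interpolated density: {dens:.3f}")  # close to 1
```

Interpolating the density itself (A = rho) reduces the sum to $\sum_j m_j W$, which recovers the uniform density when the kernel is properly normalised.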

3.2 Volume-weighted velocity power spectrum

Figure 1: Time evolution of the rms Mach number during the driving phase. After driving for about 2 turbulent crossing times, the rms Mach number reaches a statistical steady state.
The velocity power spectrum is calculated as follows. For each velocity component we take the Fourier transform of the velocity field $\mathbf{v}(\mathbf{r})$. We denote these Fourier transforms as $\hat{\mathbf{v}}(\mathbf{k})$. Using these definitions, the volume-weighted velocity power spectrum is defined as

$P_v(k) = \sum_{k \le |\mathbf{k}| < k+1} |\hat{\mathbf{v}}(\mathbf{k})|^2,$   (3)

where the sum runs over all wavevectors in the spherical shell $[k,~k\!+\!1]$ of Fourier space. The compressible (longitudinal) part of each Fourier mode is obtained by projecting onto the wavevector,

$\hat{\mathbf{v}}_{\parallel}(\mathbf{k}) = \left(\hat{\mathbf{v}} \cdot \hat{\mathbf{k}}\right) \hat{\mathbf{k}},$   (4)

and the solenoidal (transverse) part as

$\hat{\mathbf{v}}_{\perp}(\mathbf{k}) = \hat{\mathbf{v}} - \hat{\mathbf{v}}_{\parallel}.$   (5)

For each wavenumber k, the corresponding longitudinal and transverse spectra are binned over the same shells $[k,~k\!+\!1]$.
To calculate the volume-weighted dissipation rate, we estimate at each
snapshot the integral over the volume-weighted velocity power spectrum.
This is formally given as
and is estimated numerically. The dissipation rate is then given as the rate of change of this integral as the decay proceeds.
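The shell-binned spectrum of Sect. 3.2, and the consistency of its integral with the mean kinetic energy (Parseval's theorem), can be sketched as follows (the normalisation and function names are our choices, not the paper's code):

```python
import numpy as np

def velocity_power_spectrum(vx, vy, vz):
    """Volume-weighted velocity power spectrum: |FFT(v)|^2 summed over
    spherical shells [k, k+1) in wavenumber space (integer k)."""
    n = vx.shape[0]
    # Normalise so that the spectrum sums to the mean of |v|^2 (Parseval).
    power = sum(np.abs(np.fft.fftn(v))**2 for v in (vx, vy, vz)) / n**6
    kf = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    shells = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    return np.bincount(shells.ravel(), weights=power.ravel())

# Check: the integral of the spectrum equals the mean of |v|^2; tracking
# this integral from snapshot to snapshot gives the dissipation rate.
rng = np.random.default_rng(1)
v = [rng.standard_normal((32, 32, 32)) for _ in range(3)]
spec = velocity_power_spectrum(*v)
mean_v2 = sum(np.mean(c**2) for c in v)
print(spec.sum(), mean_v2)  # nearly equal
```

The longitudinal/solenoidal decomposition of Eqs. (4) and (5) would project each Fourier mode onto its unit wavevector before the same shell binning.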
The sonic scale, $k_{\rm s}$, separates supersonic turbulent flows on large scales (small k) from subsonic turbulent flows on smaller scales (high k). We estimate the sonic scale by requiring that the velocity dispersion on scales smaller than the sonic scale equals the sound speed, i.e. by solving

$\int_{k_{\rm s}}^{\infty} P_v(k)\, {\rm d}k = c_{\rm s}^2$   (7)

implicitly for the sonic wavenumber $k_{\rm s}$.
3.3 Density-weighted velocity power spectrum

For density-weighted velocity power spectra, we substitute the velocity by a density-weighted velocity,

$\mathbf{v} \rightarrow \rho^{1/3}\, \mathbf{v}$   (8)

(cf. Kritsuk et al. 2007), and compute the power spectrum of this weighted field,

$P_{\rho v}(k) = \sum_{k \le |\mathbf{k}| < k+1} \left|\widehat{\rho^{1/3} \mathbf{v}}(\mathbf{k})\right|^2,$   (9)

in analogy to Eq. (3). The density-weighted dissipation rate is computed in analogy to Eq. (6), from the density-weighted velocity power spectrum.
Figure 2: Top: velocity power spectra obtained at four different snapshots during the driving phase (volume-weighted spectra in the left, and density-weighted spectra in the right panels). The spectrum of each snapshot is plotted with a different colour.
3.4 Probability distribution functions

Using all the cells in our grid, we obtain the cumulative distributions, F, of the following quantities: the logarithm of the density, the three velocity components $v_i$ with i = 1, 2, 3, the logarithm of the trace-free rate of strain, $|S^*|$, and the logarithm of the vorticity, $|\nabla \times \mathbf{v}|$. To obtain these distributions, the corresponding quantities are binned linearly. From the cumulative distribution, F, we derive the probability distribution function (PDF) of quantity A at each bin n by computing

${\rm PDF}(A_n) = \frac{F(A_{n+1}) - F(A_n)}{A_{n+1} - A_n}.$   (10)

For the trace-free rate of strain, the spatial derivatives of the velocity components are first calculated as

$v_{i,j} = \frac{\partial v_i}{\partial x_j}.$   (11)

The (symmetric) strain tensor has components $S_{ij} = 0.5\,(v_{i,j} + v_{j,i})$. The rate of strain is then

$|S| = \left( \sum_{i,j} S_{ij} S_{ij} \right)^{1/2}.$   (12)

The trace of the strain tensor, $S_{ii} = \nabla \cdot \mathbf{v}$, measures the compression of a fluid element; subtracting it yields the trace-free strain tensor $S^*_{ij} = S_{ij} - \frac{1}{3}\, \delta_{ij}\, \nabla \cdot \mathbf{v}$. Its norm |S*| gives the rate at which a fluid element is deformed without changing its volume, e.g. by the act of a shear flow.
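The construction of $|S^*|$ from centred differences on a periodic grid can be sketched as follows (the Frobenius norm used here is one possible convention for the strain magnitude):

```python
import numpy as np

def trace_free_strain_norm(vx, vy, vz, dx=1.0):
    """|S*| on a periodic grid: centred differences v_{i,j}, symmetrised
    S_ij = 0.5 (v_{i,j} + v_{j,i}), trace removed, Frobenius norm."""
    v = (vx, vy, vz)
    grad = [[(np.roll(v[i], -1, axis=j) - np.roll(v[i], 1, axis=j)) / (2 * dx)
             for j in range(3)] for i in range(3)]
    S = [[0.5 * (grad[i][j] + grad[j][i]) for j in range(3)] for i in range(3)]
    trace = (S[0][0] + S[1][1] + S[2][2]) / 3.0
    for i in range(3):
        S[i][i] = S[i][i] - trace
    return np.sqrt(sum(S[i][j]**2 for i in range(3) for j in range(3)))

# A pure shear flow vx = y deforms fluid elements without compression:
# in the interior, S_xy = S_yx = 0.5, trace = 0, so |S*| = sqrt(1/2).
n = 16
y = np.broadcast_to(np.arange(n).reshape(1, n, 1), (n, n, n)).astype(float)
s = trace_free_strain_norm(y, np.zeros((n, n, n)), np.zeros((n, n, n)))
print(f"|S*| in pure shear: {s[8, 8, 8]:.3f}")  # sqrt(0.5) ~ 0.707
```

Binning the logarithm of this field over all grid cells then yields the |S*| PDF via Eq. (10).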
4 Driving phase
4.1 Driving time
We simulate driven turbulence using GADGET, following the prescription in Jappsen et al. (2005). The driving phase starts with the initially homogeneous, static density distribution described in Sect. 2, to which the large-scale driving field is applied.
Figure 3: Velocity power spectra of the initial conditions used for the decay experiments (volume-weighted spectrum in the left, and density-weighted spectrum in the right panel). The spectra were compensated with power-law slopes of 2.20 (left panel, volume-weighted case) and 1.67 (right panel, density-weighted case).
Turbulence has been established when the decay experiments start. This is shown in Fig. 2, where we plot the velocity power spectra, calculated as explained in Sects. 3.2 and 3.3 for the volume- and density-weighted spectra, respectively, of four different snapshots taken along the driving phase. In the left panels, volume-weighted spectra are plotted, whereas density-weighted spectra are shown in the right panels. Driving for less than about two turbulent crossing times (black lines) was not sufficient to produce a statistically fully established turbulent flow, as there was not enough time for turbulence (driven on large scales) to cascade down to the smallest spatial scales. However, the turbulence is fully established after about two turbulent crossing times: there is no significant variation of the velocity power spectra when we attempted to drive turbulence for longer times (cf. the red, green and blue lines, corresponding to driving times of 2.5, 3.1 turbulent crossing times, and longer, respectively). We conclude that starting the decay with the gas distribution obtained after about two crossing times of driving is a reasonable choice of initial conditions. Schmidt et al. (2009), Federrath et al. (2009b), and Glover et al. (2009) also conclude that after about two turbulent crossing times, supersonic turbulence has established a statistically invariant state. However, significant statistical fluctuations from snapshot to snapshot remain (Federrath et al. 2009a), which explains the slight changes visible in the velocity power spectra at driving times of 2.5 and 3.1 crossing times and later in Fig. 2. The variations seen in Fig. 2 at these times are at most of the order of the typical snapshot-to-snapshot variations introduced by intermittent fluctuations, i.e., less than 10% (see also Kritsuk et al. 2007, Fig. 1). The initial conditions used for our code comparison therefore constitute a statistically fully established supersonic turbulent density and velocity distribution.
4.2 The result of driving: initial conditions for the decay experiments
In this section, we present the velocity power spectra of the initial conditions used for the decay experiments. These initial conditions have been produced with GADGET using the turbulence driving routine developed by Mac Low (1999) and employed in Jappsen et al. (2005). They represent the state of the system after about two turbulent crossing times of driving (see Sect. 4.1). In the left panel of Fig. 3, we plot the volume-weighted velocity spectrum, with the density-weighted velocity spectrum shown in the right panel. The spectra were compensated with power-law slopes of 2.20 (left panel, volume-weighted case) and 1.67 (right panel, density-weighted case).
4.2.1 Volume-weighted velocity power spectrum of the initial conditions
From the volume-weighted velocity power spectrum computed with GADGET we derive a slope of about 2.2, which is obtained in the intermediate wavenumber range between the injection scale and the dissipation range. Concerning the bottleneck effect (e.g., Dobler et al. 2003; Haugen & Brandenburg 2004; Schmidt et al. 2006a), Porter & Woodward (1994) argued that for grids of size N=256 the bottleneck affects all scales down to a wavenumber well inside the power-law range, while Schmidt et al. (2006a) suggested that the bottleneck peaks close to the dissipation wavenumber. These authors also argued that, in codes showing no bottleneck, numerical dissipation starts acting at wavenumbers smaller than this peak wavenumber. Since our initial conditions do not seem to exhibit a bottleneck, dissipation will certainly have set in before that wavenumber is reached. Since a power law is established over the intermediate wavenumber range quoted above, and since this power law breaks down only at larger wavenumbers, we argue that dissipation did not play a significant role in this range of the spectra of the initial conditions.
Figure 4: Left panel: density-weighted velocity power spectra of the initial conditions using different velocity weights.
Padoan et al. (2007) and Kritsuk et al. (2007) performed high-resolution hydrodynamic simulations of driven turbulence using ENZO at resolutions of 1024^3 grid cells. They obtained a significantly shallower slope of 1.9 for the volume-weighted velocity spectrum. Federrath et al. (2009a) showed that driving turbulence with the FLASH code at resolutions of 512^3 and 1024^3 grid cells results in a slope consistent with those of Padoan et al. (2007) and Kritsuk et al. (2007). In contrast, at the resolution of 256^3 (as used here), a steeper slope of the order of 2.1 was obtained (Federrath et al. 2009a), which is consistent with our result for the volume-weighted spectra. Federrath et al. (2009a) directly demonstrated the steepening of the velocity spectrum with decreasing numerical resolution (cf. their Fig. C.1). However, they also found that the slope of the velocity spectrum is converged to within less than 3% going from 512^3 to 1024^3 in their numerical experiments. Considering the low resolution of the present study, it is therefore not surprising that we find a slope of about 2.2 for the volume-weighted velocity power spectrum. This result, however, can also be taken as an indication that the initial conditions used for our comparison experiments do not strongly depend on the method by which they were produced (i.e., GADGET).
4.2.2 Density-weighted velocity power spectrum of the initial conditions
From the density-weighted velocity power spectrum we obtain a scaling close to the Kolmogorov (1941) scaling, with a slope of about 5/3 within the intermediate wavenumber range. We have used the weights ρ^(1/2) v (see Sect. 3.3) instead of ρ^(1/3) v, which was used by Kritsuk et al. (2007), Kowal & Lazarian (2007), Schmidt et al. (2008), and Federrath et al. (2009a). The ρ^(1/2) v weights correspond to a quantity with direct physical reference to the kinetic energy, since (ρ^(1/2) v)^2 = ρ v^2, while the ρ^(1/3) v weights correspond to a constant kinetic energy dissipation rate within the inertial range (Kritsuk et al. 2007). For GADGET only, we additionally compute energy spectra with the ρ^(1/3) v weights. The comparison of the ρ^(1/3) v to the ρ^(1/2) v weights is shown in Fig. 4 (left panel). We find a steeper slope of about 1.8 for the ρ^(1/3) v weights. The fact that Kritsuk et al. (2007) obtain Kolmogorov-type scaling using the ρ^(1/3) v weights is a consequence of their volume-weighted velocity power spectrum being shallower than ours, with slopes of 1.9 and 2.2, respectively. We find the steeper slope of about 2.2 due to our limited numerical resolution, as discussed in the previous section. Therefore, the fact that our density-weighted spectra show scaling close to Kolmogorov scaling is a result of the rather small numerical resolution adopted in this comparison. Clearly, the slopes of the density-weighted spectra depend not only on the velocity statistics, but also on the convolution of density and velocity statistics.
However, we argue that the density information should be taken into consideration in the statistical analysis of compressible turbulence, as most of the mass ends up in small volumes through shocks. This fact is neglected by statistical measures that take into account volume-weighted velocities only (for instance, some models of the mass distribution of molecular cloud cores and stars are based on the volume-weighted velocity power spectrum, e.g., Padoan & Nordlund 2002).
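The volume- and density-weighted spectra discussed above differ only in the weight applied to the velocity field before the Fourier transform. The following is a minimal sketch, assuming a cubic periodic domain and simple integer-k shell binning; the exact binning and normalisation of Sect. 3.3 are not reproduced here.

```python
import numpy as np

def shell_spectrum(rho, vx, vy, vz, exponent=0.0):
    """Shell-binned power spectrum of rho**exponent * v on a periodic grid.
    exponent=0.0 gives the volume-weighted spectrum, 0.5 the kinetic-energy
    (rho^(1/2) v) weighting used here, and 1/3 the Kritsuk et al. (2007)
    weighting. A simplified stand-in for the procedure of Sect. 3.3."""
    n = rho.shape[0]
    w = rho**exponent
    # sum of the Fourier power of the three weighted velocity components
    power = sum(np.abs(np.fft.fftn(w * v))**2 for v in (vx, vy, vz))
    freqs = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky, kz = np.meshgrid(freqs, freqs, freqs, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.arange(0.5, n // 2, 1.0)              # shells around k = 1, 2, ...
    spectrum, _ = np.histogram(kmag, bins=edges, weights=power)
    k = 0.5 * (edges[:-1] + edges[1:])
    return k, spectrum

# Compensated spectra as in Figs. 3-8: spectrum * k**2.20 (volume-weighted)
# or spectrum * k**1.67 (density-weighted).
```

A field varying only on the largest scale should put essentially all its power into the k = 1 shell, which is a convenient sanity check of the binning.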
4.2.3 The effective SPH resolution
We would like to comment here on the Lagrangian nature of the SPH method. In Fig. 3, there is a rise in power on the smallest scales resolved by the grid, particularly prominent in the density-weighted case (right panel). This is a consequence of the adaptivity in resolution that is intrinsic to the SPH method: as the SPH particles move with the flow, the build-up of high densities is accompanied by an increase in the number of particles (sampling points) per fixed volume element. Therefore, as high densities build up on small scales due to the shocks developed in supersonic turbulence, the SPH particle concentration increases on these scales. In particular, in driven turbulence the effective SPH resolution in high-density regions eventually becomes superior to the resolution initially employed. Hence, the extra power developed on the smallest scales is a result of the interpolation of the SPH particle distribution onto a grid of resolution lower than the effective SPH resolution. When the SPH particle distribution is interpolated onto a larger grid (N^3 = 512^3), this rise in power is no longer observed (right panel of Fig. 4). In other words, because of the finite extent of the grid we used for our comparison experiments (N^3 = 256^3), all SPH information on scales smaller than the grid cell size gets interpolated into the highest-wavenumber bins. This appears as a rise in the power on the smallest scales of our grid.
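The aliasing of sub-grid SPH structure can be illustrated with a minimal particle-to-grid deposit. The cloud-in-cell scheme below is a hypothetical stand-in for the kernel interpolation of Eq. (1), which is not reproduced here; any such assignment lumps structure below the cell size into the finest cells.

```python
import numpy as np

def deposit_cic(pos, mass, n):
    """Cloud-in-cell deposit of particle masses onto an n^3 periodic grid.
    pos: particle positions in [0, 1)^3, shape (n_part, 3); mass: (n_part,)."""
    grid = np.zeros((n, n, n))
    x = pos * n                      # positions in grid units
    i0 = np.floor(x).astype(int)     # lower cell index per dimension
    f = x - i0                       # fractional offset within the cell
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # trilinear weight of this corner of the enclosing cell
                w = ((f[:, 0] if dx else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]) *
                     (f[:, 2] if dz else 1 - f[:, 2]))
                np.add.at(grid, ((i0[:, 0] + dx) % n,
                                 (i0[:, 1] + dy) % n,
                                 (i0[:, 2] + dz) % n), mass * w)
    return grid
```

The deposit conserves total mass exactly, but all density structure within one cell is averaged away, which is the effect described above.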
5 Results of the turbulence decay code comparison
5.1 Volume-weighted and density-weighted velocity power spectra
Figure 5: Time evolution of the volume-weighted velocity power spectra (compensated with power-law slopes of 2.20) of the SPH codes GADGET, PHANTOM (runs: A, B), and VINE (left column) and of the grid codes ENZO, FLASH, TVD, and ZEUS (right column) for the snapshots at t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot.
Figure 6: Same as Fig. 5, but showing the density-weighted velocity power spectra (compensated with power-law slopes of 1.67).
Figure 7: Comparison of the volume-weighted velocity power spectra (compensated with power-law slopes of 2.20) of all codes at different times along the decay: t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot.
Figure 8: Same as Fig. 7, but showing the density-weighted velocity power spectra (compensated with power-law slopes of 1.67).
Figures 5 and 6
present the time evolution of the volume- and density-weighted velocity
power spectra obtained with the various codes employed in this work.
Data from the following snapshots are shown: the initial conditions at t = 0 (black lines), and the snapshots at t = 0.06 (red lines), 0.31 (green lines), 0.62 (blue lines), 3.1 (cyan lines), and the final time (magenta lines). Figures 7 and 8
show the volume- and density-weighted velocity power spectra of each
code plotted on top of each other in a single plot as a function of
time, so that the spectra obtained for each snapshot can be directly
compared across all codes for each time.
The grid codes dissipate the power that the initial conditions carry on small spatial scales (large wavenumbers) faster than the SPH codes. This is a result of the SPH interpolation onto the grid (see Sect. 4.2.3). Due to their Lagrangian nature, the SPH runs (GADGET, PHANTOM A, PHANTOM B, and VINE) maintained power at the largest wavenumbers for a longer time. All grid codes have lost slightly more power at these wavenumbers than the SPH codes immediately after the decay simulations start (i.e., from the first snapshot at t = 0.06). The differences seen at early times on the small scales of the SPH power spectra are a result of the different methods that each of the SPH codes adopts for calculating particle smoothing lengths, the use of different smoothing kernels, and different implementations of artificial viscosity.
The volume-weighted velocity power spectra obtained with the SPH codes and with the grid code ZEUS are quite similar in the power-law range at early times, with power-law slopes of about 2.2. ENZO, FLASH, and TVD have a slightly shallower slope of about 2.1. This slope agrees with the low-resolution models of Federrath et al. (2009a) using 256^3 grid cells. At t = 0.31 and 0.62, ZEUS also develops a slightly shallower slope that roughly agrees with the slopes obtained using the other grid codes. The slopes have dropped to about 1.95 and 1.9 at t = 0.31 and 0.62, respectively. The wavenumber ranges over which these slopes are maintained are slightly smaller for ZEUS than for TVD and FLASH, while ENZO maintains the slopes up to the largest wavenumbers. The SPH codes again have a slope steeper by about 0.1 than the grid codes, and the wavenumber range over which this slope is maintained is comparable to the range obtained using the ZEUS code.
Table 1: Time evolution of the integral over the volume-weighted velocity power spectrum, for all codes/runs.
Table 2: Time evolution of the integral over the density-weighted velocity power spectrum, for all codes/runs.
The density-weighted velocity power spectra are shallower than the volume-weighted spectra for all codes, with slopes of about 1.6 at early times. This much shallower slope is a result of the low resolution of our numerical experiments, as discussed in Sect. 4.2.2. Similar to the results obtained from the volume-weighted spectra, all grid codes dissipate the initial small-scale power faster than the SPH codes, with ZEUS having dissipated most. However, there is an important exception to this result concerning the grid codes: the density-weighted velocity power spectrum produced by the grid code ENZO is almost identical to the power spectra produced with the SPH codes at early times, while FLASH, TVD, and ZEUS have already lost a considerable amount of their small-scale power. The power spectrum obtained with the ZEUS code shows the break into the dissipation range earliest and produces a slope of about 1.65, slightly steeper than all other codes. At later times, all codes produce similar density-weighted power spectra in the power-law range, while the ENZO code develops a clear bottleneck (see, e.g., Dobler et al. 2003; Haugen & Brandenburg 2004; Schmidt et al. 2006a), which manifests itself in excess power at the larger wavenumbers. Since ENZO was run here with the PPM diffusion parameter set to K=0.0 (Colella & Woodward 1984), the bottleneck effect is quite strong (see also Kritsuk et al. 2007; Federrath et al. 2009a). Although the FLASH code uses the same numerical technique as ENZO, it does not show such a pronounced bottleneck effect, because FLASH was used with the recommended PPM diffusion parameter of K=0.1 (Colella & Woodward 1984).
As the decay progresses (t = 3.1 and later), power gets dissipated differently by the various codes at large wavenumbers. However, on large scales, all codes used in the present study show very similar volume- and density-weighted velocity power spectra, with slight variations that can be attributed to statistical fluctuations (Kritsuk et al. 2007; Federrath et al. 2009a). This is an important result: it shows that, although the codes have different dissipation mechanisms acting on small scales, on large spatial scales the results of the decaying turbulence experiments presented here are quite robust. They do not show considerable systematic differences for the codes employed here at the resolutions studied. Moreover, it is important to note that the dissipation ranges are not just different when we compare grid codes with SPH codes, but are also different among the SPH codes and among the grid codes. Thus, we conclude that the dissipation range is strongly dependent on the code being used, while the larger scales are similarly well reproduced by all the hydrodynamic codes employed here.
The overall performance of the codes, as seen through the analysis of their velocity power spectra at the late times t = 3.1 and beyond, shows that the numerical viscosity of grid codes is generally lower than that of SPH codes, with the details of the method used determining the detailed ranking. For example, using K=0.0 in PPM, as done for the ENZO run, yields lower dissipation, although with a stronger bottleneck effect than the K=0.1 value used for the FLASH run. Similarly, we find that the viscosity implementation of Morris & Monaghan (1997) used in the PHANTOM B and VINE runs is superior to that of GADGET and the PHANTOM A run.
Considering the resolution of N^3 = 256^3 cells used in the present study (215^3 SPH particles interpolated to a 256^3 grid), the fact that the velocity power spectra differ at wavenumbers k ≳ 8 implies that one should be cautious with the interpretation of results obtained from power spectra at these wavenumbers. This means that length scales smaller than about 32 grid cells for grid codes (and for SPH codes using a similar number of resolution elements interpolated to a grid of equivalent size) are affected by the individual dissipation mechanisms acting in hydrodynamical codes. In contrast, the results of the various codes are robust for k ≲ 8. This is encouraging, because the results of all the hydrodynamical codes used here agree well in this regime, and one is free to choose a code for modelling supersonic turbulence as long as only results on these well-resolved scales are considered. However, this also means that one needs resolutions of at least 1024^3 grid cells to obtain roughly one full decade in length scale over which a power law could be fitted to turbulent velocity spectra. In practice, this range turns out to be even smaller than one decade in length scale at a resolution of 1024^3 grid cells (Klein et al. 2007; Kritsuk et al. 2007; Federrath et al. 2009a).
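The resolution requirement follows from simple arithmetic. In the sketch below, the lower cutoff k_min = 4 (to stay clear of the box scale) is an illustrative assumption; the 32-cells-per-dissipation-scale limit is the one argued in this section.

```python
import numpy as np

# Back-of-the-envelope estimate of the fittable power-law range: scales
# below ~32 grid cells are dominated by numerical dissipation, so a grid
# of n_cells**3 cells offers a trustworthy range k_min ... n_cells / 32.
def usable_decades(n_cells, k_min=4.0, cells_per_diss_scale=32.0):
    k_max = n_cells / cells_per_diss_scale
    return np.log10(k_max / k_min)

print(round(usable_decades(256), 2))    # 0.3 -> far less than a decade
print(round(usable_decades(1024), 2))   # 0.9 -> still just under a decade
```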
5.2 Kinetic energy dissipation rates
Figure 9: Evolution of the rms Mach number as a function of time in units of the turbulent crossing time.
Tables 1 and 2 list the integrals over the volume- and density-weighted velocity spectra, respectively, for each code and run at each of the snapshots presented here. Up to intermediate times, all SPH codes dissipate volume-weighted power slightly faster than the grid codes, while for the density-weighted power, both SPH and grid codes dissipate kinetic energy at roughly the same rate. At later times, however, ZEUS has dissipated about 15% more power in velocity fluctuations than the other codes, while all other grid codes have still dissipated less than the SPH codes. The density-weighted integral gives more similar results for all codes at all times analysed. At t = 3.1 and at the final snapshot, all codes have dissipated roughly the same amount of volume- and density-weighted power, except for the PHANTOM B run, which retains kinetic energies about 16% larger than all other codes.
5.3 Time evolution of the rms Mach number
The time evolution of the rms Mach number is shown in Fig. 9. The dotted line shows the expected power-law decay rate for supersonic turbulence (Stone et al. 1998; Mac Low et al. 1998; Mac Low 1999), starting at the initial rms Mach number of the decay experiments. Since the kinetic energy of supersonic turbulence decays approximately as E_kin ∝ t^(-1), the rms Mach number is expected to follow

M_rms(t) ∝ t^(-1/2).     (14)

Clearly, the rms turbulent flow remains supersonic (i.e., M_rms > 1) throughout the decay phase followed here.
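The decay exponent can be recovered from Mach-number data by a least-squares fit in log-log space. In this sketch, the snapshot times and the initial Mach number (4.0) are hypothetical placeholders, not values from the runs.

```python
import numpy as np

# Synthetic M_rms data following the t^(-1/2) decay of Eq. (14);
# times and amplitude are illustrative assumptions only.
t = np.array([0.5, 1.0, 2.0, 3.1, 6.0])
mach = 4.0 * t**-0.5

# linear fit in log-log space: log M = slope * log t + log_amp
slope, log_amp = np.polyfit(np.log(t), np.log(mach), 1)
print(round(slope, 3))   # fitted decay exponent, -0.5 for this input
```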
5.4 Time evolution of the sonic scale
Table 3: Time evolution of the sonic scale, as defined by Eq. (7), for all codes/runs.
Table 3 lists the evolution of the sonic scale for all codes/runs and all snapshots. The sonic scale decreases fastest for PHANTOM A, PHANTOM B, and ZEUS, and slowest for ENZO. GADGET, VINE, FLASH, and TVD show quite similar results for the evolution of the sonic scale. The sonic scale does not differ considerably among the various codes at later times. However, during the initial stages of the decay, there are differences of up to 30%. This is partly a result of our computation of the sonic scale (cf. Sect. 3.2). Since the different codes have quite different dissipation properties, the sonic scale is affected accordingly (cf. Sect. 5.1). Also note that the sonic scale is given as an integer. Thus, the fact that, for instance, ENZO maintains a constant tabulated sonic scale over several snapshots does not imply that its sonic scale stays exactly the same over that period; it will also have decreased slightly. However, our grid of wavenumbers is binned such that only integer values of k are permitted, and thus rounding errors introduce uncertainties of about 10% in the values of the sonic scale at early times of the decay.
5.5 Probability distribution functions
5.5.1 Probability distribution functions of the gas density
Figure 10: Comparison of the volume-weighted density PDFs of all codes at different times along the decay: t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot.
Table 4: Time evolution of the mean s0 and standard deviation σs of the logarithmic density s, for all codes/runs.
Figure 10 shows volume-weighted probability distribution functions (PDFs) of the gas density. Each panel shows the comparison of the density PDFs of all codes at t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot, after the data have been interpolated to grids of 256^3 cells. The density PDFs were computed from the logarithmic density s = ln(ρ/ρ0), where ρ0 is the mean density. The PDF p(s) is expected to follow roughly a Gaussian distribution,

p(s) = (2π σs^2)^(-1/2) exp[-(s - s0)^2 / (2 σs^2)],     (15)

where s0 and σs are the mean and standard deviation of s. The density PDFs of all codes show little variation around the peak of the distribution and at the high-density tail, and they are all roughly consistent with log-normal distributions, Eq. (15). The low-density tails show stronger variations. This is because the low-density tail is subject to stronger temporal variations caused by intermittent fluctuations (Kritsuk et al. 2007; Federrath et al. 2009a). In Table 4 we list the values of s0 and σs for each code. Note that for a log-normal volume-weighted density distribution, s0 = -σs^2/2. The means and standard deviations of the PDFs are similar for all codes and vary only by about 10%, except for our ZEUS run at t = 0.62, which has a mean value |s0| about 28% larger than the average over all runs at that time. This appears slightly too high a variation to be attributed to temporal fluctuations. The difference of the density PDF around its peak obtained with the ZEUS run at this time is also visible in Fig. 10 (bottom-left panel). However, at later times, the ZEUS density PDFs are almost identical to the ones obtained with the other codes.
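The relation s0 = -σs^2/2 is simply mass conservation for a log-normal density PDF, which a quick Monte-Carlo check makes explicit; the width σs used below is an arbitrary illustrative value.

```python
import numpy as np

# Monte-Carlo check of s0 = -sigma_s**2 / 2 for a log-normal density PDF:
# if s = ln(rho/rho_0) is Gaussian with that mean, the volume-weighted
# average of rho/rho_0 must come out as 1 (mass conservation).
rng = np.random.default_rng(42)
sigma_s = 1.5                        # arbitrary illustrative width
s0 = -sigma_s**2 / 2
s = rng.normal(s0, sigma_s, size=2_000_000)
print(np.exp(s).mean())              # approximately 1
```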
Figure 11: Volume-weighted density PDFs at t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot.
One would expect that SPH codes can resolve high-density regions better
than grid codes. This is because the effective SPH resolution increases
with increasing density (cf. Sect. 4.2.3).
On the other hand, low-density gas will become less resolved as
particles move towards higher density regions and away from low-density
regions. However, the density PDFs of grid codes and SPH codes agree
very well after interpolation of the particle data to grids. This may
be caused by our interpolation procedure. We therefore would like to
test the density PDF obtained from the SPH particle density directly
without interpolation to a grid. The density PDF of SPH particles is
naturally mass-weighted. Therefore, in order to obtain the
volume-weighted density PDF directly from the particles we must weight
the contribution of each particle into a density bin by the inverse of
its density (for equal-mass particles). Note that, in general, volume-weighted PDFs, pv, are related to mass-weighted PDFs, pm, by pv = (ρ0/ρ) pm (e.g., Ostriker et al. 2001; Li et al. 2003).
This conversion is necessary as particles and grid cells do not sample
the computational volume in the same manner, i.e. the particles
are concentrated in high-density regions, whereas the grid cells sample
the volume homogeneously. The same applies for the distribution of
Lagrangian tracer particles used in grid simulations (Federrath et al. 2008a; Price & Federrath 2009). Therefore, we must compensate the particle PDF for the fact that most particles are located in high-density regions.
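For equal-mass particles, this re-weighting amounts to giving each particle a weight proportional to 1/ρ (its volume share) when histogramming. A minimal sketch, where the bin edges and the log-normal test parameters are illustrative choices:

```python
import numpy as np

def volume_weighted_pdf(rho, edges):
    """Volume-weighted PDF of s = ln(rho) from equal-mass particle
    densities: each particle contributes with weight 1/rho, which
    realises p_v = (rho_0/rho) p_m up to the overall normalisation."""
    hist, _ = np.histogram(np.log(rho), bins=edges, weights=1.0 / rho)
    return hist / (hist.sum() * np.diff(edges))   # normalise to unit area

# Mass-weighted sampling (what equal-mass SPH particles provide) of a
# log-normal density field; mean and width here are illustrative only.
rng = np.random.default_rng(0)
rho = np.exp(rng.normal(0.5, 1.0, size=100_000))
edges = np.linspace(-6.0, 6.0, 61)
pdf = volume_weighted_pdf(rho, edges)
```

For a log-normal field, the volume-weighted distribution of s is shifted by -σs^2 relative to the mass-weighted one, so the peak of `pdf` should sit near s = -0.5 for these parameters.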
Figure 12: Same as Fig. 10, but showing the PDFs of the trace-free rate of strain, as defined in Eq. (13).
Figure 11 shows the density PDF of the GADGET
run as obtained both after interpolation to a grid (black line), and
directly obtained from the SPH particle density (red line).
Additionally, we show the SPH density PDF obtained from the VINE run for comparison. The SPH density PDFs for GADGET and VINE were transformed into volume-weighted PDFs using pv = (ρ0/ρ) pm to allow for a direct comparison of the grid and the SPH density PDFs.
The grid-interpolated and the SPH density PDFs show no significant
differences: the grid-interpolated PDF exhibits less scatter in the
low-density regime than the SPH particle PDF, indicating that the
grid-interpolated PDF samples the low-density regime slightly better
than the particle-based PDF. On the other hand, the particle-based
density PDF shows better sampling of the high-density tail and extends
to slightly higher densities and lower probability densities. This is
to be expected because the effective resolution is larger for the
particle-based distribution at high densities, and SPH density peaks
are smoothed to slightly smaller densities by the interpolation
technique, Eq. (1).
This analysis offers an additional illustration of the fact that the
adaptivity of the SPH method is suppressed when the SPH information is
interpolated onto a grid of resolution smaller than the effective SPH
resolution (cf. Sect. 4.2.3).
It is an encouraging result that all codes are able to reproduce the general form of the density PDF quite well. This is important, because the turbulent density PDF is an essential ingredient to many models of star formation: to understand the mass distribution of cores in molecular clouds and stars (Hennebelle & Chabrier 2009,2008; Padoan & Nordlund 2002), the star formation rate (Padoan & Nordlund 2009; Krumholz & McKee 2005), the star formation efficiency (Elmegreen 2008), and the Kennicutt-Schmidt relation on galactic scales (Elmegreen 2002; Tassis 2007; Kravtsov 2003).
5.5.2 Probability distribution functions of the rate of strain
Figure 12 presents the trace-free rate of strain PDFs, computed as explained in Sect. 3.4. Each panel shows the comparison of the PDFs of all codes at t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot. Within the first crossing time (top row and bottom-left panel of Fig. 12), all SPH codes and ZEUS have PDFs that are slightly narrower than those of the remaining grid codes. At later times (bottom-middle and bottom-right panels of Fig. 12), all codes appear to have distributions with similar widths, but with the PDFs of the SPH codes and the ZEUS code peaking at almost half the |S*| value of the other grid codes. In particular, GADGET, PHANTOM A, PHANTOM B, VINE, and ZEUS peak at the lower |S*| values, while ENZO and FLASH peak at about twice those values; TVD appears to peak in between. At the final snapshot, the velocity power spectra of the codes are maintained up to k-values that can be ordered as follows (from higher to lower values): ENZO, FLASH, TVD, ZEUS, VINE, PHANTOM B, GADGET, PHANTOM A (cf. the bottom-right panels of Figs. 7 and 8 in Sect. 5.1). This is also the ordering of the |S*| values of the peaks of the rate of strain PDFs at this time (bottom-right panel of Fig. 12).
5.5.3 Probability distribution functions of the vorticity
Figure 13: Same as Fig. 10, but showing the PDFs of the vorticity.
In Fig. 13, we present the vorticity PDFs. The vorticity was computed as explained in Sect. 3.4. Each panel shows the comparison of the vorticity PDFs of all codes at t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot. The vorticity and the trace-free rate of strain PDFs show a similar behaviour: within the first crossing time (top row and bottom-left panel of Fig. 13), the SPH codes and ZEUS show narrower distributions than the remaining grid codes. At later times (bottom-middle and bottom-right panels of Fig. 13), the vorticity distributions of the SPH codes and ZEUS peak at almost half the vorticity value of the other grid codes. Again, the peaks of PHANTOM B, VINE, TVD, and ZEUS are bracketed by the peaks of the remaining codes, with GADGET and PHANTOM A peaking at the lowest values of the vorticity, and ENZO and FLASH peaking at the highest vorticity.
Since the vorticity is related to the ability of the codes to model turbulent eddies, these comparisons indicate that SPH codes exhaust this ability earlier than grid codes, most likely because of their excess viscosity acting on the smallest of these eddies and erasing them. As in the case of the rate of strain above, the ordering of the vorticity values of the peaks of the vorticity PDFs is the same as the ordering with which the codes maintain their velocity power spectra in the high-k regime (cf. the bottom-right panels of Figs. 7 and 8, and the bottom-right panel of Fig. 12).
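On a uniform periodic grid, the vorticity can be discretised with central differences. This is one standard stencil for curl(v); whether Sect. 3.4 uses exactly this discretisation is not assumed here.

```python
import numpy as np

def vorticity(vx, vy, vz, dx=1.0):
    """Vorticity w = curl(v) on a periodic grid (axes 0, 1, 2 = x, y, z)
    via second-order central differences with periodic wrap-around."""
    def d(f, axis):
        # central difference: (f[i+1] - f[i-1]) / (2 dx), periodic
        return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)
    wx = d(vz, 1) - d(vy, 2)   # dvz/dy - dvy/dz
    wy = d(vx, 2) - d(vz, 0)   # dvx/dz - dvz/dx
    wz = d(vy, 0) - d(vx, 1)   # dvy/dx - dvx/dy
    return wx, wy, wz
```

A shear flow vx = sin(2πy/L) is a convenient check: only the z-component of the vorticity is non-zero, and it matches the discrete derivative of the sine exactly.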
Figure 14: Same as Fig. 10, but showing the PDFs of the z-component of the velocity.
Table 5: Computational efficiency of all codes/runs.
5.5.4 Probability distribution functions of the velocity
Figure 14 shows the PDFs of the z-component of the velocity. Each panel shows the comparison of the PDFs of all codes at t = 0.0, 0.06, 0.31, 0.62, 3.1, and the final snapshot.
For all these snapshots, the velocity PDFs are very similar for all
codes. Apart from statistical fluctuations showing up in the wings of
the distributions, the velocity PDFs are roughly Gaussian
distributions. Like the standard deviation of the density PDFs
(cf. Fig. 10),
the standard deviation of the velocity PDFs decreases with time, which
simply reflects the turbulence decay (cf. Fig. 9).
6 Computational efficiency of the codes
Table 5 provides a summary of the computational efficiency of our codes/runs for the present setup. We remind the reader that in the current study all SPH codes used 215^3 resolution elements, while the grid codes used 256^3 resolution elements. We compensated for this discrepancy in the number of resolution elements, and roughly for the different CPU clock rates, in the last row of Table 5. However, the various runs were performed on different parallel machines throughout the world, some with, others without, optimisations. Furthermore, different supercomputers use different hardware solutions for the parallelisation. Thus, the given numbers should only be taken as rough estimates that may be accurate to within factors of a few. However, we emphasise that we have performed both the GADGET and the ENZO run on exactly the same supercomputing platform with the same optimisations, such that we can compare the performance of these two runs directly.
The fastest of all runs was performed with the TVD grid code. It is roughly ten times faster than our ENZO run, about 15 times faster than FLASH, and about 27 times faster than the version of ZEUS used here. However, the official ZEUS-MP version is expected to be faster and to scale better. Due to its specific design and implementation, PHANTOM is the fastest of all the SPH codes employed, but still roughly 50% slower than the slowest grid code (our version of ZEUS). GADGET and VINE are about 16 and about 30 times slower than PHANTOM, respectively. However, it must be remembered that the extra cost in SPH reflects the fact that resolution elements are placed to follow the mass, and thus preferentially to resolve high-density regions. Thus, additional information is calculated on small scales. This information, however, does not enter this comparison, as the particles are interpolated onto a fixed grid. The extra cost for the SPH runs is similar to what can occur with grid-based Adaptive Mesh Refinement (AMR) (e.g., Berger & Colella 1989), because of the additional overhead to store and iterate over the AMR hierarchy (see, e.g., Schmidt et al. 2009). However, a quantitative analysis of the performance of AMR versus SPH is beyond the scope of this paper and should be discussed elsewhere.
The fact that GADGET and VINE are more than one order of magnitude slower than any of the grid codes makes it computationally expensive to study supersonic turbulence at resolutions higher than about 256^3 SPH particles. This may partly explain why no SPH calculations of supersonic turbulence have so far used more than 512^3 particles. The latter was only achieved with the PHANTOM code in Price & Federrath (2009) and in the KITP code comparison project.
7 Conclusions
In this paper we report the comparison of the performance of four grid-based and three particle-based hydrodynamic codes on the modelling of supersonic turbulence decay. In particular, we have studied the decay of compressible, supersonic, isothermal turbulence in the absence of gravity within a periodic box, using simulations with resolutions of 256^3 grid cells for the grid codes and 215^3 particles for the SPH codes. We have simulated driven turbulence with the SPH code GADGET. The SPH particle distribution at the end of the driving has been interpolated onto a grid that provides the initial conditions for the grid codes employed, namely ENZO, FLASH, TVD, and ZEUS. We have also followed the decay of turbulence using several implementations of the SPH method, namely GADGET, PHANTOM (runs: A, B), and VINE, where PHANTOM B used the Morris & Monaghan (1997) viscosity switch, while PHANTOM A was run without the switch.
The turbulence decay was followed for about six turbulent crossing
times. During the whole decay phase considered here, the turbulent flow
stays in the supersonic regime. The turbulent energy dissipation was
measured for all codes. For times greater than about 3 crossing times
(when any initial transient phase due to small variations in the
initial conditions has disappeared, and all codes have developed their
individual dissipation signatures) a comparison of volume- and
density-weighted velocity spectra indicates that the numerical
viscosity of grid codes is generally lower than that of SPH codes, with
details of the method like the order of the code contributing
secondarily.
We show that the differences between ENZO and FLASH are due to our choice of different PPM diffusion parameters for the two codes: in this study ENZO was used with a PPM diffusion parameter of K=0.0, while FLASH was used with the PPM diffusion parameter set to K=0.1, as recommended by Colella & Woodward (1984). Switching off PPM diffusion completely (as in our ENZO run) results in less dissipation, but produces a stronger bottleneck effect (see also Kritsuk et al. 2007; Federrath et al. 2009a). Use of the Morris & Monaghan (1997) viscosity implementation for SPH provides less dissipation, as observed in our PHANTOM B and VINE runs in comparison with the GADGET and PHANTOM A runs. In general, the viscosity acts differently for the different grid and SPH codes at wavenumbers k ≳ 8 at the resolutions studied here, as shown by our analysis of the velocity power spectra in Sect. 5.1. However, all codes produced velocity spectra that are in good agreement for k ≲ 8.
Using Fourier spectra we also showed that the additional information that the SPH method can offer in high-density regions and/or on small scales will be suppressed if it is not interpolated onto a high enough resolution grid, as discussed in Sects. 4.2.3 and 5.1.
The trace-free rate of strain and the vorticity PDFs confirm the
ordering of the runs according to their dissipation given above. The
density PDFs are very similar for all the runs performed in the present
study. The means and standard deviations of the logarithmic density
varied by less than 10% for all codes at all times analysed, with one exception (the mean logarithmic density obtained in our ZEUS run at t = 0.62 varied by about 30%, which is also seen in its density PDF). For the SPH code GADGET
we have shown that the density PDF obtained from the SPH particle
distribution samples the high density tail slightly better, while our
results indicate that using a grid, the low-density tail is slightly
better sampled. However, the overall shape is very close to a
log-normal distribution in density, and its mean and standard deviation
are quite robust for all codes employed in the present study.
Our results demonstrate that different codes have different dissipation mechanisms affecting spatial scales smaller than about 32 grid cells (k ≳ 8 at the resolution studied). However, our code comparison also shows that SPH and grid codes give similar results for an equivalent number of resolution elements N in each direction in space on larger scales (k ≲ 8),
though with the SPH runs being about ten times more computationally
expensive than the grid runs on average. Careful choice of numerical
algorithm can extend this scaling range slightly, indicating that grid
codes tend to show a slightly longer scaling range than SPH codes
(cf. Figs. 7 and 8).
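The compensated spectra of Figs. 7 and 8 amount to dividing P(k) by a reference power law (slopes 2.20 for the volume-weighted and 1.67 for the density-weighted case) so that the scaling range appears flat; equivalently, one can fit the slope directly over a chosen wavenumber interval. A minimal sketch of such a fit (the synthetic spectrum and the fit ranges are illustrative only):

```python
import numpy as np

def fit_spectral_slope(k, pk, kmin, kmax):
    """Least-squares power-law slope of P(k) over kmin <= k <= kmax."""
    sel = (k >= kmin) & (k <= kmax) & (pk > 0)
    slope, _ = np.polyfit(np.log10(k[sel]), np.log10(pk[sel]), 1)
    return slope

# Synthetic spectrum with a known scaling-range slope of -2.2,
# steepened at high k to mimic numerical dissipation.
k = np.arange(1, 129, dtype=float)
pk = k**-2.2 * np.exp(-(k / 80.0)**2)

slope_scaling = fit_spectral_slope(k, pk, 4, 20)       # barely damped range
slope_dissipative = fit_spectral_slope(k, pk, 40, 120)  # strongly damped range
```

Measured over too small a scaling range, or too close to the dissipation scales, the fitted slope comes out systematically steeper, which is the resolution effect discussed below.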
However, at the numerical resolutions employed in the present study, all slopes inferred from the volume- and density-weighted spectra are too steep compared with higher-resolution data from the literature. The performance of these codes in reproducing turbulence scaling relations is thus determined by resolution rather than by the specific properties of the individual hydrodynamical codes. This must be tested in a future comparison employing at least one order of magnitude more resolution elements for both grid and SPH codes.
Acknowledgements. The authors would like to thank Jens Niemeyer for many interesting and stimulating discussions, especially during the setup phase of this project. We thank the anonymous referee for a balanced and detailed report. S.K. kindly acknowledges support by an EU Commission "Marie Curie Intra-European (Individual) Fellowship" of the 6th Framework Programme with contract number MEIF-CT-2004-011226. S.K. also acknowledges financial assistance by the EU Commission Research Training Network "Constellation" of the 6th Framework Programme. C.F. acknowledges financial support from the International Max Planck Research School for Astronomy and Cosmic Physics (IMPRS-A) and the Heidelberg Graduate School of Fundamental Physics (HGSFP). The HGSFP is funded by the Excellence Initiative of the German Research Foundation DFG GSC 129/1. The ENZO and GADGET simulations used computational resources from the HLRBII project grants h0972 and pr32lo at the Leibniz Rechenzentrum Garching and from a project grant by CASPUR, Italy. C.F. and R.S.K. are grateful for subsidies from the DFG SFB 439 Galaxies in the Early Universe. R.S.K. and C.F. acknowledge financial support from the German Bundesministerium für Bildung und Forschung via the ASTRONET project STAR FORMAT (grant 05A09VHA) and from the Deutsche Forschungsgemeinschaft (DFG) under grants no. KL 1358/1, KL 1358/4, and KL 1359/5. R.S.K. furthermore acknowledges subsidies from a Frontier grant of Heidelberg University sponsored by the German Excellence Initiative and support from the Landesstiftung Baden-Württemberg via their programme International Collaboration II. D.J.P. is supported by a Royal Society University Research Fellowship (UK). The PDF figures were partly produced using SPLASH (Price 2007). M.G. and S.W. acknowledge support by the DFG cluster of excellence "Origin and Structure of the Universe".
All VINE and part of the FLASH calculations were performed on an SGI Altix 3700 Bx2 supercomputer that was funded by the DFG cluster of excellence "Origin and Structure of the Universe". The simulations performed by J.K. utilized a high-performance cluster that was built with funding from the Korea Astronomy and Space Science Institute (KASI) and the Korea Science and Engineering Foundation through the Astrophysical Research Center for the Structure and Evolution of Cosmos (ARCSEC). The work of J.K. was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2009-0062863). A.-K.J. acknowledges support by the Human Resources and Mobility Programme of the European Community under the contract MEIF-CT-2006-039569. M.M.M.L. acknowledges partial support for his work from NASA Origins of Solar Systems grant NNX07AI74G. The software used in this work was in part developed by the DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago.
References
- Attwood, R. E., Goodwin, S. P., & Whitworth, A. P. 2007, A&A, 464, 447
- Ballesteros-Paredes, J., Gazol, A., Kim, J., et al. 2006, ApJ, 637, 384
- Ballesteros-Paredes, J., Klessen, R. S., Mac Low, M.-M., et al. 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, & K. Keil, 63
- Beetz, C., Schwarz, C., Dreher, J., et al. 2008, Phys. Lett. A, 372, 3037
- Benzi, R., Biferale, L., Fisher, R. T., et al. 2008, Phys. Rev. Lett., 100, 234503
- Berger, M. J., & Colella, P. 1989, J. Comput. Phys., 82, 64
- Blitz, L., Fukui, Y., Kawamura, A., et al. 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, & K. Keil, 81
- Boldyrev, S. 2002, ApJ, 569, 841
- Boldyrev, S., Nordlund, Å., & Padoan, P. 2002a, ApJ, 573, 678
- Boldyrev, S., Nordlund, Å., & Padoan, P. 2002b, Phys. Rev. Lett., 89, 031102
- Bonazzola, S., Perault, M., Puget, J. L., et al. 1992, J. Fluid Mech., 245, 1
- Calder, A. C., Fryxell, B., Plewa, T., et al. 2002, ApJS, 143, 201
- Chandrasekhar, S. 1949, ApJ, 110, 329
- Chandrasekhar, S. 1951a, R. Soc. London Proc. Ser. A, 210, 18
- Chandrasekhar, S. 1951b, R. Soc. London Proc. Ser. A, 210, 26
- Colella, P., & Woodward, P. R. 1984, J. Comput. Phys., 54, 174
- Commerçon, B., Hennebelle, P., Audit, E., Chabrier, G., & Teyssier, R. 2008, A&A, 482, 371
- de Avillez, M. A., & Mac Low, M.-M. 2002, ApJ, 581, 1047
- Dimonte, G., Youngs, D. L., Dimits, A., et al. 2004, Phys. Fluids, 16, 1668
- Dobler, W., Haugen, N. E. L., Yousef, T. A., et al. 2003, Phys. Rev. E, 68, 026304
- Dubey, A., Fisher, R., Graziani, C., et al. 2008, in Numerical Modeling of Space Plasma Flows, ed. N. V. Pogorelov, E. Audit, & G. P. Zank, ASP Conf. Ser., 385, 145
- Elmegreen, B. G. 2002, ApJ, 577, 206
- Elmegreen, B. G. 2008, ApJ, 672, 1006
- Elmegreen, B. G., & Scalo, J. 2004, ARA&A, 42, 211
- Federrath, C., Glover, S. C. O., Klessen, R. S., et al. 2008a, Phys. Scr. T, 132, 014025
- Federrath, C., Klessen, R. S., & Schmidt, W. 2008b, ApJ, 688, L79
- Federrath, C., Duval, J., Klessen, R., Schmidt, W., & Mac Low, M.-M. 2009a, A&A, submitted [arXiv:0905.1060]
- Federrath, C., Klessen, R. S., & Schmidt, W. 2009b, ApJ, 692, 364
- Frisch, U. 1995, Turbulence (Cambridge Univ. Press)
- Fryxell, B., Olson, K., Ricker, P., et al. 2000, ApJS, 131, 273
- Gingold, R. A., & Monaghan, J. J. 1977, MNRAS, 181, 375
- Glover, S. C. O., & Mac Low, M.-M. 2007, ApJ, 659, 1317
- Glover, S. C. O., Federrath, C., Mac Low, M.-M., et al. 2009, MNRAS, in press
- Gritschneder, M., Naab, T., Walch, S., Burkert, A., & Heitsch, F. 2009, ApJ, 694, L26
- Haugen, N. E., & Brandenburg, A. 2004, Phys. Rev. E, 70, 026405
- Hayes, J. C., Norman, M. L., Fiedler, R. A., et al. 2006, ApJS, 165, 188
- Heitmann, K., Ricker, P. M., Warren, M. S., et al. 2005, ApJS, 160, 28
- Heitsch, F., Mac Low, M.-M., & Klessen, R. S. 2001, ApJ, 547, 280
- Hennebelle, P., & Chabrier, G. 2008, ApJ, 684, 395
- Hennebelle, P., & Chabrier, G. 2009, ApJ, 702, 1428
- Isichenko, M. B. 1992, Rev. Mod. Phys., 64, 961
- Jappsen, A.-K., Klessen, R. S., Larson, R. B., Li, Y., & Mac Low, M.-M. 2005, A&A, 435, 611
- Kim, J., Ryu, D., Jones, T. W., et al. 1999, ApJ, 514, 506
- Kitsionas, S., & Whitworth, A. P. 2002, MNRAS, 330, 129
- Klein, R. I., Inutsuka, S.-I., Padoan, P., et al. 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, & K. Keil, 99
- Klessen, R. S. 2000a, ApJ, 535, 869
- Klessen, R. S., & Lin, D. N. 2003, Phys. Rev. E, 67, 046311
- Klessen, R. S., Heitsch, F., & Mac Low, M.-M. 2000b, ApJ, 535, 887
- Kolmogorov, A. N. 1941, Dokl. Akad. Nauk SSSR, 32, 16
- Kowal, G., & Lazarian, A. 2007, ApJ, 666, L69
- Kravtsov, A. V. 2003, ApJ, 590, L1
- Kritsuk, A. G., Norman, M. L., & Padoan, P. 2006, ApJ, 638, L25
- Kritsuk, A. G., Norman, M. L., Padoan, P., et al. 2007, ApJ, 665, 416
- Krumholz, M. R., & McKee, C. F. 2005, ApJ, 630, 250
- Lemaster, M. N., & Stone, J. M. 2008, ApJ, 682, L97
- Lemaster, M. N., & Stone, J. M. 2009, ApJ, 691, 1092
- Lesieur, M. 1997, Turbulence in Fluids (Kluwer)
- Li, Y., Klessen, R. S., & Mac Low, M.-M. 2003, ApJ, 592, 975
- Lucy, L. B. 1977, AJ, 82, 1013
- Mac Low, M.-M. 1999, ApJ, 524, 169
- Mac Low, M.-M., & Klessen, R. S. 2004, Rev. Mod. Phys., 76, 125
- Mac Low, M.-M., Klessen, R. S., Burkert, A., et al. 1998, Phys. Rev. Lett., 80, 2754
- McKee, C. F., & Ostriker, E. C. 2007, ARA&A, 45, 565
- Metzler, R., & Klafter, J. 2000, Phys. Rep., 339, 1
- Monaghan, J. J. 1988, Comp. Phys. Commun., 48, 89
- Monaghan, J. J. 1992, ARA&A, 30, 543
- Monaghan, J. J. 1997, J. Comput. Phys., 136, 298
- Monaghan, J. J. 2005, Rep. Progr. Phys., 68, 1703
- Monaghan, J. J., & Gingold, R. A. 1983, J. Comput. Phys., 52, 374
- Monaghan, J. J., & Lattanzio, J. C. 1985, A&A, 149, 135
- Morris, J., & Monaghan, J. 1997, J. Comput. Phys., 136, 41
- Nelson, A. F., Wetzstein, M., & Naab, T. 2009, ApJS, 184, 326
- Nordlund, Å., & Padoan, P. 1999, in Interstellar Turbulence, ed. J. Franco, & A. Carraminana, 218
- Norman, M. L., & Bryan, G. L. 1999, in Numerical Astrophysics, ed. S. M. Miyama, K. Tomisaka, & T. Hanawa, Astrophys. Space Sci. Libr., 240, 19
- O'Shea, B. W., Bryan, G., Bordner, J., et al. 2004 [arXiv:astro-ph/0403044]
- Ostriker, E. C., Gammie, C. F., & Stone, J. M. 1999, ApJ, 513, 259
- Ostriker, E. C., Stone, J. M., & Gammie, C. F. 2001, ApJ, 546, 980
- Padoan, P., & Nordlund, Å. 2002, ApJ, 576, 870
- Padoan, P., & Nordlund, Å. 2009, ApJ, submitted [arXiv:0907.0248]
- Padoan, P., Nordlund, Å., & Jones, B. J. T. 1997, MNRAS, 288, 145
- Padoan, P., Jimenez, R., Nordlund, Å., et al. 2004, Phys. Rev. Lett., 92, 191102
- Padoan, P., Nordlund, Å., Kritsuk, A. G., Norman, M. L., & Li, P. S. 2007, ApJ, 661, 972
- Panis, J.-F., & Pérault, M. 1998, Phys. Fluids, 10, 3111
- Porter, D. H., & Woodward, P. R. 1994, ApJS, 93, 309
- Price, D. J. 2004, Ph.D. Thesis (University of Cambridge)
- Price, D. J., & Monaghan, J. J. 2004, MNRAS, 348, 139
- Price, D. J. 2007, Publ. Astron. Soc. Aust., 24, 159
- Price, D. J., & Monaghan, J. J. 2007, MNRAS, 374, 1347
- Price, D. J., & Federrath, C. 2009, MNRAS, submitted
- Sasao, T. 1973, PASJ, 25, 1
- Scalo, J., & Elmegreen, B. G. 2004, ARA&A, 42, 275
- Schmidt, W., Hillebrandt, W., & Niemeyer, J. C. 2006a, Computers and Fluids, 35, 353
- Schmidt, W., Niemeyer, J. C., & Hillebrandt, W. 2006b, A&A, 450, 265
- Schmidt, W., Niemeyer, J. C., Hillebrandt, W., et al. 2006c, A&A, 450, 283
- Schmidt, W., Federrath, C., & Klessen, R. 2008, Phys. Rev. Lett., 101, 194505
- Schmidt, W., Federrath, C., Hupp, M., Kern, S., & Niemeyer, J. C. 2009, A&A, 494, 127
- She, Z.-S., & Leveque, E. 1994, Phys. Rev. Lett., 72, 336
- Springel, V., Yoshida, N., & White, S. D. M. 2001, New Astron., 6, 79
- Stone, J. M., & Norman, M. L. 1992a, ApJS, 80, 753
- Stone, J. M., & Norman, M. L. 1992b, ApJS, 80, 791
- Stone, J. M., Ostriker, E. C., & Gammie, C. F. 1998, ApJ, 508, L99
- Strang, G. 1968, SIAM J. Numer. Anal., 5, 506
- Tasker, E. J., Brunino, R., Mitchell, N. L., et al. 2008, MNRAS, 390, 1267
- Tassis, K. 2007, MNRAS, 382, 1317
- Toro, E. F. 1997, Riemann Solvers and Numerical Methods for Fluid Dynamics (Springer)
- van Leer, B. 1977, J. Comput. Phys., 23, 276
- Vázquez-Semadeni, E. 1994, ApJ, 423, 681
- Vázquez-Semadeni, E., Ballesteros-Paredes, J., & Klessen, R. S. 2003, ApJ, 585, L131
- Vernaleo, J. C., & Reynolds, C. S. 2006, ApJ, 645, 83
- von Weizsäcker, C. F. 1943, ZAp, 22, 319
- von Weizsäcker, C. F. 1951, ApJ, 114, 165
- Wetzstein, M., Nelson, A. F., Naab, T., et al. 2009, ApJS, 184, 298
Footnotes
- Current address: Hellenic-American Educational Foundation, Psychiko College, Stefanou Delta 15, GR-15452 P. Psychiko, Greece.
- Volume-weighted velocity power spectra are often referred to as kinetic energy spectra for incompressible turbulence (e.g., Frisch 1995). However, for compressible turbulence the kinetic energy is proportional to the density-weighted velocity power spectrum.
- A limited comparison of two codes was presented for self-gravitating turbulence in Klessen et al. (2000) and Heitsch et al. (2001).
- We use the term particle code as the generic antonym of grid code. In general, a particle code is a numerical scheme that uses sampling points that are not fixed in space but rather move, resembling in this respect the properties of fluid particles. In particular, all particle codes used here are different implementations of the Smoothed Particle Hydrodynamics (SPH) technique first introduced by Gingold & Monaghan (1977) and Lucy (1977).
- Distances are measured with respect to the (0, 0, 0) cell, i.e. the central cell.
- Note that the total number of sampling points for the SPH runs (215³ particles) was smaller than the number of sampling points employed for our grids (256³ grid cells).
- http://kitpstarformation07.wikispaces.com/Star+Formation+Test+Problems
All Tables
- Table 1: Time evolution of … in units of …, for all codes/runs.
- Table 2: Time evolution of … in units of …, for all codes/runs.
- Table 3: Time evolution of the sonic scale … as defined by Eq. (7) for all codes/runs.
- Table 4: Time evolution of the mean s0 and standard deviation … of the logarithmic density for all codes/runs.
- Table 5: Computational efficiency of all codes/runs.
All Figures
![]() |
Figure 1:
Time evolution of the rms Mach number during the driving phase. After driving for about 2 turbulent crossing times
|
Open with DEXTER | |
In the text |
![]() |
Figure 2:
Top: velocity power spectra obtained at four different
snapshots during the driving phase (volume-weighted spectra on the
left, and density-weighted spectra on the right panels). The spectrum
of each snapshot is plotted with a different colour (
|
Open with DEXTER | |
In the text |
![]() |
Figure 3: Velocity power spectra of the initial conditions used for the decay experiments (volume-weighted spectrum on the left, and density-weighted spectrum on the right panel). The spectra were compensated with power-law slopes of 2.20 ( left panel - volume-weighted case) and 1.67 ( right panel - density-weighted case). |
Open with DEXTER | |
In the text |
![]() |
Figure 4:
Left panel: density-weighted velocity power spectra of the initial conditions using different velocity weights:
|
Open with DEXTER | |
In the text |
![]() |
Figure 5:
Time evolution of the volume-weighted velocity power spectra (compensated with power-law slopes of 2.20) of the SPH codes GADGET, PHANTOM (runs: A, B), and VINE ( left column) and of the grid codes ENZO, FLASH, TVD, and ZEUS ( right column) for the following snapshots: t=0.0, 0.06, 0.31, 0.62, 3.1, and
|
Open with DEXTER | |
In the text |
![]() |
Figure 6: Same as Fig. 5, but the density-weighted velocity power spectra (compensated with power-law slopes of 1.67) are shown. |
Open with DEXTER | |
In the text |
![]() |
Figure 7:
Comparison of the volume-weighted velocity power spectra (compensated with power-law slopes of 2.20) of all codes at
different times along the decay: t = 0.0, 0.06, 0.31, 0.62, 3.1, and
|
Open with DEXTER | |
In the text |
![]() |
Figure 8: Same as Fig. 7, but the density-weighted velocity power spectra (compensated with power-law slopes of 1.67) are shown. |
Open with DEXTER | |
In the text |
![]() |
Figure 9:
Evolution of the rms Mach number as a function of time in units of the turbulent crossing time
|
Open with DEXTER | |
In the text |
![]() |
Figure 10:
Comparison of the volume-weighted density PDFs of all codes at different times along the decay: t = 0.0, 0.06, 0.31, 0.62, 3.1, and
|
Open with DEXTER | |
In the text |
![]() |
Figure 11:
Volume-weighted density PDFs at t = 0.0, 0.06, 0.31, 0.62, 3.1, and
|
Open with DEXTER | |
In the text |
![]() |
Figure 12: Same as Fig. 10, but the PDFs of the trace-free rate of strain are shown, as defined in Eq. (13). |
Open with DEXTER | |
In the text |
![]() |
Figure 13: Same as Fig. 10, but the PDFs of the vorticity are shown. |
Open with DEXTER | |
In the text |
![]() |
Figure 14:
Same as Fig. 10, but the PDFs of the z-component of the velocity ( |
Open with DEXTER | |
In the text |
Copyright ESO 2009