U. Ziegler
Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany
Received 29 November 2004 / Accepted 21 January 2005
Abstract
I present a new version of the NIRVANA code capable of simulating
multi-scale self-gravitational magnetohydrodynamics problems in three
space dimensions employing the technique of adaptive mesh refinement.
The building blocks of NIRVANA are (i) a fully conservative,
divergence-free Godunov-type central scheme for the solution of the
equations of magnetohydrodynamics; (ii) a block-structured mesh refinement
algorithm which automatically adds and removes elementary grid blocks
whenever necessary to achieve adequate resolution;
and (iii) an adaptive mesh Poisson solver based on the multigrid philosophy
which incorporates the so-called elliptic matching condition to keep the
gradient of the gravitational potential continuous at fine/coarse
mesh interfaces. In this paper I give an overview of the basic numerical
ideas behind NIRVANA
and apply the code to the problem of protostellar core collapse and
fragmentation.
Key words: magnetohydrodynamics (MHD) - stars: formation - methods: numerical - ISM: magnetic fields
Analysis of observational data provides strong evidence that stars are preferentially formed in binary or even higher-order multiple systems (Reipurth & Zinnecker 1993; Bodenheimer et al. 2000; Tokovinin & Smekhov 2002; Patience et al. 2002). From theory, the most plausible explanation for this is the fragmentation of an unstable cloud core into smaller condensations (pre-stellar cores) during gravitational collapse. Fragmentation is usually studied through numerical simulations since the underlying equations are too complex to solve by analytical techniques. However, for several reasons, numerical investigations require highly sophisticated codes. First, the problem of fragmentation is intrinsically three-dimensional. Second, it involves a large dynamic range in length scales which must be well resolved numerically to follow the collapse and to obtain useful solutions at all. Codes based on fixed-resolution schemes are ruled out because they would need tremendous computer resources not available today. Moreover, such methods would be very inefficient since regions which actually need no high numerical resolution would nevertheless be fully resolved. Third, different physical effects are involved, described by different types of mathematical equations requiring different numerical methods. As an example, the conservation laws of ideal hydrodynamics are of hyperbolic type whereas the Poisson equation for the gravitational potential is elliptic.
An issue of particular interest in the fragmentation process is the likely role of magnetic fields. This is because there is striking evidence that dense cloud cores are partly supported against collapse - besides by internal turbulent motions and ordered rotation - by magnetic fields (Crutcher 1999; Caselli et al. 2002). To study the fragmentation of magnetized clouds, the full nonlinear equations of gravitomagnetohydrodynamics (GMHD) must be solved in a multi-scale fashion.
Solutions of the G(M)HD equations for the collapse problem are based either on smoothed particle (magneto)hydrodynamics (SP(M)H) or on finite-difference/finite-volume schemes with adaptive mesh refinement (AMR), each in combination with a solver for the Poisson equation. SPH is a Lagrangian approach in which "smoothed particles'' carrying fluid properties are integrated forward in time subject to mutual gravitational attraction. Usually, the number of particles in an SPH simulation is fixed, providing variable spatial resolution through the movement of the particles themselves while keeping the mass in a "smoothing sphere'' constant (Monaghan 1992). Variable mass resolution can be obtained to some extent by a new technique called particle splitting (Kitsionas & Whitworth 2002) in which a particle is subdivided into a number of child particles having correspondingly lower masses.
In order to account for the large scale variations typical of collapse scenarios, grid-based codes make use of AMR - the counterpart to particle splitting in SPH. Several approaches have been employed here. Boss (2002), for instance, applied a spherical code in which only the radial coordinate is adaptive. This is a useful procedure when structures form near the coordinate origin, but it is less suited if such structures form further away from the center. Another approach which has been used by some workers is the nested-grid method in Cartesian geometry (see e.g. Burkert & Bodenheimer 1993). In this method a sequence of finer-spaced, smaller grids is hierarchically nested and placed around the coordinate origin from the beginning of the simulation. As in an r-adaptive spherical grid code, a nested-grid scheme may underresolve structures not captured by the finest level of refinement. In a more sophisticated variant, as used by Truelove et al. (1997), the mesh is fully adaptive, adding and removing finer grid units locally according to specific refinement criteria during the course of the simulation.
Most numerical calculations of protostellar fragmentation have so far focused on non-magnetic configurations (Truelove et al. 1998; Boss et al. 2000; Bate et al. 2002a,b; Bate et al. 2003; Klein et al. 2003; Goodwin et al. 2004a,b). Only recently has the magnetic regime begun to be explored. Balsara & Burkert (2001) were the first to use a fully adaptive GMHD scheme to study an exemplary collapse of a magnetized cloud core. Boss (2002, 2004) investigated the possible role of magnetic fields during collapse in a more systematic way, however, neglecting the important effect of magnetic braking in the governing equations. Machida et al. (2004) constructed a fragmentation model based on a nested-grid GMHD code. Finally, Hosking & Whitworth (2004a,b) applied a recently developed two-fluid SPMH method to the problem of protostellar core collapse in the presence of a magnetic field including the effect of ambipolar diffusion. The major pitfall I see with this approach, however, is the severe violation of the divergence constraint for the magnetic field of up to 100% maximum relative error. Such large errors produce unphysical forces leading to wrong dynamics and generate wrong field topologies.
In this paper I present a new version of the NIRVANA code - a fully-conservative, divergence-free grid-based code for the equations of GMHD able to treat multi-scale problems employing AMR techniques. Section 2 will give an overview of the basic numerical algorithms implemented in the NIRVANA code. First results of an application of the code to the fragmentation problem in non-magnetic and magnetic cloud cores are then presented in Sect. 3 of this paper.
The algorithms described in the following aim to find numerical solutions
to the equations of ideal magnetohydrodynamics including the effects
of self-gravity:
\[
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\vec{v}) = 0, \tag{1}
\]
\[
\frac{\partial(\rho\vec{v})}{\partial t} + \nabla\cdot\left[\rho\vec{v}\vec{v} + \left(p + \frac{B^2}{2\mu}\right)\mathsf{I} - \frac{\vec{B}\vec{B}}{\mu}\right] = -\rho\nabla\Phi, \tag{2}
\]
\[
\frac{\partial e}{\partial t} + \nabla\cdot\left[\left(e + p + \frac{B^2}{2\mu}\right)\vec{v} - \frac{(\vec{v}\cdot\vec{B})\,\vec{B}}{\mu}\right] = -\rho\vec{v}\cdot\nabla\Phi, \tag{3}
\]
\[
\frac{\partial\vec{B}}{\partial t} = \nabla\times(\vec{v}\times\vec{B}), \tag{4}
\]
\[
p = (\gamma - 1)\left(e - \frac{\rho v^2}{2} - \frac{B^2}{2\mu}\right), \tag{5}
\]
\[
\Delta\Phi = 4\pi G\rho, \tag{6}
\]
where $\rho$ denotes the mass density, $\vec{v}$ the velocity, $e$ the total energy density, $\vec{B}$ the magnetic field, $p$ the gas pressure, $\Phi$ the gravitational potential, $\mu$ the magnetic permeability and $\gamma$ the adiabatic index.
The first building block of the NIRVANA code is the numerical method for the solution of the ideal MHD equations on a fixed-resolution Cartesian grid. Here, I present a compact overview of the basic ideas behind the MHD scheme. More details can be found in Ziegler (2004).
Space discretization.
Equations (1)-(3), excluding the gravity source terms but including
the Lorentz force, are solved with the help of the Godunov-type central scheme
for conservation laws developed by
Kurganov et al. (2001). The scheme has been slightly modified for the MHD
case and combined with constraint transport, which solves the induction
equation for a divergence-free evolution of the magnetic field.
I apply a semi-discrete
approach in which the equations are first discretized in space
in flux-conservation form, leaving the time discretization open.
The 3D Cartesian grid is indexed by $[ijk]$ with constant cell widths
$\Delta x$, $\Delta y$ and $\Delta z$ in the $x$-, $y$- and $z$-directions.
A numerical cell spans the range
$[x_{i-1/2},x_{i+1/2}]\times[y_{j-1/2},y_{j+1/2}]\times[z_{k-1/2},z_{k+1/2}]$,
i.e. the cell center is located at $(x_i,y_j,z_k)$.
Denoting $\bar{u}_{ijk}$ as the numerical approximation to the
cell-averaged solution vector $u$,
the second-order version of the scheme reads:
\[
\frac{\mathrm{d}\bar{u}_{ijk}}{\mathrm{d}t} = -\frac{F_{i+1/2,jk} - F_{i-1/2,jk}}{\Delta x} - \frac{G_{i,j+1/2,k} - G_{i,j-1/2,k}}{\Delta y} - \frac{H_{ij,k+1/2} - H_{ij,k-1/2}}{\Delta z}, \tag{7}
\]
with the numerical flux in the x-direction
\[
F_{i+1/2,jk} = \frac{a^+_{i+1/2,jk}\, f\!\left(u^{\mathrm{E}}_{ijk}\right) - a^-_{i+1/2,jk}\, f\!\left(u^{\mathrm{W}}_{i+1,jk}\right)}{a^+_{i+1/2,jk} - a^-_{i+1/2,jk}} + \frac{a^+_{i+1/2,jk}\, a^-_{i+1/2,jk}}{a^+_{i+1/2,jk} - a^-_{i+1/2,jk}} \left(u^{\mathrm{W}}_{i+1,jk} - u^{\mathrm{E}}_{ijk}\right),
\]
and analogous expressions for $G$ and $H$ in the y- and z-directions. Here, $u^{\mathrm{E}}_{ijk}$ and $u^{\mathrm{W}}_{ijk}$ denote the values of a limited, piecewise-linear reconstruction of $\bar{u}$ at the east and west faces of cell $[ijk]$, $f$ is the physical flux in the x-direction, and $a^+_{i+1/2,jk} \geq 0 \geq a^-_{i+1/2,jk}$ are estimates of the maximum and minimum local signal speeds at the interface based on the fast magnetosonic speed of the two reconstructed states.
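To make the structure of the semi-discrete scheme concrete, the following Python sketch evaluates the Kurganov-type numerical flux and right-hand side for a scalar 1D conservation law. It is only an illustration of Eq. (7) and the flux formula above, not NIRVANA's implementation (which handles the full MHD system); all function names here are my own.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter for the piecewise-linear (second-order) reconstruction."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def knp_flux(u, f, dfdu):
    """Kurganov-type numerical flux at interfaces i+1/2 for u_t + f(u)_x = 0."""
    du = np.zeros_like(u)
    du[1:-1] = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    uE = (u + 0.5 * du)[:-1]   # east-face value of cell i   (left state at i+1/2)
    uW = (u - 0.5 * du)[1:]    # west-face value of cell i+1 (right state at i+1/2)
    ap = np.maximum(0.0, np.maximum(dfdu(uE), dfdu(uW)))   # a^+ >= 0
    am = np.minimum(0.0, np.minimum(dfdu(uE), dfdu(uW)))   # a^- <= 0
    d = np.where(ap - am > 0.0, ap - am, 1.0)              # guard against 0/0
    return (ap * f(uE) - am * f(uW) + ap * am * (uW - uE)) / d

def dudt(u, dx, f, dfdu):
    """Semi-discrete right-hand side du_i/dt = -(F_{i+1/2} - F_{i-1/2})/dx."""
    F = knp_flux(u, f, dfdu)
    rhs = np.zeros_like(u)
    rhs[1:-1] = -(F[1:] - F[:-1]) / dx
    return rhs
```

For Burgers' equation, for instance, one would pass f = lambda u: 0.5*u**2 and dfdu = lambda u: u, and integrate dudt in time with a 2-step Runge-Kutta method as used in NIRVANA.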
The induction Eq. (4) is solved separately with the constraint
transport (CT) method (see e.g. Evans & Hawley 1988; Tóth 2000).
Utilizing the staggered collocation of the magnetic field components
on cell faces (see Fig. 1)
renders the numerical divergence of the curl operator exactly zero.
CT thus ensures that an initially divergence-free magnetic field evolves
divergence-free
up to machine accuracy, i.e. $(\nabla\cdot\vec{b})_{ijk} = 0$,
where $b_{x,i-1/2,jk}$, $b_{y,i,j-1/2,k}$ and $b_{z,ij,k-1/2}$
are numerical approximations to the face-averaged
magnetic field components.
As before, a semi-discrete ansatz is adopted. The scheme reads
\[
\frac{\mathrm{d}b_{x,i-1/2,jk}}{\mathrm{d}t} = -\frac{\bar{E}_{z,i-1/2,j+1/2,k} - \bar{E}_{z,i-1/2,j-1/2,k}}{\Delta y} + \frac{\bar{E}_{y,i-1/2,j,k+1/2} - \bar{E}_{y,i-1/2,j,k-1/2}}{\Delta z}, \tag{8}
\]
\[
\frac{\mathrm{d}b_{y,i,j-1/2,k}}{\mathrm{d}t} = -\frac{\bar{E}_{x,i,j-1/2,k+1/2} - \bar{E}_{x,i,j-1/2,k-1/2}}{\Delta z} + \frac{\bar{E}_{z,i+1/2,j-1/2,k} - \bar{E}_{z,i-1/2,j-1/2,k}}{\Delta x}, \tag{9}
\]
\[
\frac{\mathrm{d}b_{z,ij,k-1/2}}{\mathrm{d}t} = -\frac{\bar{E}_{y,i+1/2,j,k-1/2} - \bar{E}_{y,i-1/2,j,k-1/2}}{\Delta x} + \frac{\bar{E}_{x,i,j+1/2,k-1/2} - \bar{E}_{x,i,j-1/2,k-1/2}}{\Delta y}. \tag{10}
\]
The edge-centered numerical electric field components $\bar{E}_x$, $\bar{E}_y$ and $\bar{E}_z$ (see Fig. 1 for their collocation) are constructed in the spirit of the central scheme by averaging the corresponding components of the induction flux, $\vec{E} = -\vec{v}\times\vec{B}$, from the adjacent cell faces and adding dissipative correction terms proportional to the local signal speeds $a^\pm$; the explicit expressions are given in Ziegler (2004).
Figure 1: Left: collocation of the MHD variables in a cell [ijk]. Right: locations of the electric field components and components of the electric field flux tensor.
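The key property of the CT update (8)-(10) - that the staggered curl has identically vanishing divergence - can be demonstrated in a few lines. The following 2D reduction (a sketch with array layouts of my own choosing, not NIRVANA code) shows that the discrete divergence is conserved to round-off no matter how the edge-centered electric field is computed:

```python
import numpy as np

def ct_update_2d(bx, by, Ez, dx, dy, dt):
    """One constraint transport step for face-centered fields in 2D.

    bx : x-component on x-faces, shape (nx+1, ny)
    by : y-component on y-faces, shape (nx, ny+1)
    Ez : electric field on cell corners (edges in 3D), shape (nx+1, ny+1)
    In 2D the induction equation reduces to
        d(bx)/dt = -dEz/dy,   d(by)/dt = +dEz/dx.
    """
    bx_new = bx - dt / dy * (Ez[:, 1:] - Ez[:, :-1])
    by_new = by + dt / dx * (Ez[1:, :] - Ez[:-1, :])
    return bx_new, by_new

def div_b(bx, by, dx, dy):
    """Cell-centered numerical divergence on the staggered mesh."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy
```

Evaluating div_b before and after ct_update_2d gives identical results up to machine precision for arbitrary Ez, which is exactly the statement $(\nabla\cdot\vec{b})_{ijk}=0$ made above.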
The second building block of the NIRVANA code is its AMR facility. The basic ideas of the algorithm are reviewed here, and the reader is referred to Ziegler (2005) for more details.
Refinement principle. AMR in the NIRVANA code, as in many other codes (e.g. AMRVAC, see Keppens et al. 2003), is based on a block-structured ansatz as described in the fundamental work of Berger & Oliger (1984) and Berger & Colella (1989). In this ansatz, grid blocks of finer resolution and variable size are overlaid on their respective parent grids in a recursive procedure until the required resolution is reached. Major differences to the original approach come about because in NIRVANA (i) blocks have a fixed size of 4 cells per coordinate direction; (ii) the base level itself consists of such elementary blocks, i.e. there is no single base grid (in contrast to previous code versions); (iii) the parent-to-child block refinement ratio is always two; and (iv) elementary blocks are clustered temporarily into larger grid objects (called superblocks, see below) on which the numerical integration of the equations takes place, followed by a remap step. The set of blocks builds an oct-tree data structure similar to the PARAMESH package of MacNeice et al. (2000) used in the FLASH code. Each block with given resolution belongs to a refinement level l, starting with l=0 for the base level up to some maximum level L specified by the user. Technically, a block has to be understood as a logical grid unit storing not only the variables (without ghost zones for boundary conditions) and cell positions but also all necessary information about its mesh environment, realized by a set of pointers to its parent block, block neighbors and child blocks.
Refinement criteria.
Mesh refinement is controlled according to refinement criteria.
On the base level the criterion is based on evaluating normalized
gradients in all variables. If any of these exceeds a prescribed
threshold, the mesh is locally refined by inserting a new block.
On higher levels l>0, refinement is controlled by comparing the known
solutions on levels l and l-1. Apart from these standard refinement
criteria, Truelove et al. (1997) demonstrated that in self-gravitational
flows it is an absolute necessity to sufficiently resolve the local Jeans
length $\lambda_{\mathrm{J}} = c_{\mathrm{s}}\sqrt{\pi/(G\rho)}$:
to avoid artificial fragmentation the cell width must satisfy
$\Delta x \leq \lambda_{\mathrm{J}}/4$, which is imposed as an
additional refinement criterion. A sketch of this additional check
is given below.
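A cell-flagging check in the spirit of the Truelove criterion might look as follows (a minimal sketch in SI units; the function name and the default of four cells per Jeans length are choices made for this example):

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def needs_jeans_refinement(rho, cs, dx, n_cells=4):
    """Flag cells violating the Truelove criterion dx <= lambda_J / n_cells,
    with the local Jeans length lambda_J = cs * sqrt(pi / (G * rho))."""
    lam_jeans = cs * np.sqrt(np.pi / (G * rho))
    return dx > lam_jeans / n_cells
```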
Block initialisation. Variables on a newly generated block are initialised by interpolation from the underlying coarser level. In doing so, the procedure of limited reconstruction is used as in the central-constraint transport MHD scheme. Conservation properties of the MHD scheme are maintained this way. In particular, the magnetic field is reconstructed in a divergence-free manner as described in Ziegler (2005).
Block clustering and time integration.
The equations are not solved on individual blocks but on superblocks.
Superblocks are larger rectangular grids which temporarily
coexist with the blocks.
Each refinement level made up of generic blocks is mapped onto a
set of superblocks representing that refinement level for time integration.
I call this mechanism block clustering.
The number of superblocks associated with a refinement level depends
on the degree of fragmentation of the grid structure. In generating
superblocks, generic blocks are clustered such that the resulting superblock
has the maximal possible extent in the x-direction.
Ghost zones needed by the solver to take up boundary conditions are
automatically added. After a time step is complete, the advanced solution
on superblocks is transferred back to the generic blocks and the
superblocks are destroyed. Notice that when the grid structure changes
the superblock distribution changes as well.
The purpose of block clustering is to increase efficiency: the use
of small blocks allows very flexible mesh adaptation, i.e. regions which are
refined but actually need no refinement are rare.
On the other hand, solving the equations on a multitude
of small blocks would incur an overwhelming amount of overhead
due to the large number of interfaces. Block clustering combines
mesh flexibility with performance because, in practice, the overhead
of the clustering algorithm itself
is much smaller than the interface overhead it avoids.
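The clustering idea can be conveyed by a simplified sketch that merges elementary blocks into runs which are maximal in the x-direction; the real algorithm (Ziegler 2005) additionally handles the y- and z-extents, ghost zones and the block tree, none of which is modeled here:

```python
def cluster_blocks(block_indices):
    """Cluster elementary blocks into superblocks maximal in x.

    block_indices : set of (ix, iy, iz) block coordinates on one level.
    Returns a list of ((ix0, ix1), iy, iz) runs, each standing for a
    superblock spanning blocks ix0..ix1 at fixed (iy, iz).
    """
    superblocks = []
    remaining = set(block_indices)
    for ix, iy, iz in sorted(block_indices):
        if (ix, iy, iz) not in remaining:
            continue  # already swallowed by an earlier run
        ix1 = ix
        while (ix1 + 1, iy, iz) in remaining:  # extend the run in +x
            ix1 += 1
        for i in range(ix, ix1 + 1):
            remaining.remove((i, iy, iz))
        superblocks.append(((ix, ix1), iy, iz))
    return superblocks
```

Each returned run would then be turned into one superblock grid with ghost zones attached before integration.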
For problems involving self-gravity
each refinement level
is integrated with the same time-step dictated by the CFL-related
minimum time-step over the grid hierarchy.
Consistency. Several mechanisms in an AMR simulation ensure that the solution remains consistent on the block-structured grid hierarchy. First, to restore conservation, fixup steps in the hydrodynamical fluxes and in the electric field at fine-coarse grid interfaces are necessary. More precisely, the numerical flux at coarse cell faces matching a coarse-fine interface has to be replaced by the area-weighted average of the corresponding fine cell numerical fluxes. Similarly, the numerical electric field at coarse cell edges matching a coarse-fine interface must be replaced by the corresponding fine cell numerical electric fields to ensure solenoidality of the magnetic field. The use of a multi-stage time integration scheme demands that the fixup steps are carried out after each substep. Here, in the 2-step Runge-Kutta method, fixup steps are carried out after the predictor step and after the corrector step. Moreover, synchronization of boundary values along superblock interfaces is required after the predictor step to avoid time mismatching of the solution.
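The flux fixup itself is conceptually simple; a minimal sketch for refinement ratio two (my own illustration, not NIRVANA code):

```python
import numpy as np

def fixup_coarse_flux(F_fine):
    """Conservative replacement for the flux density through a coarse face.

    F_fine : numerical fluxes on the 2x2 fine faces covering the coarse
             face (refinement ratio 2), shape (2, 2).
    The area-weighted average makes the coarse-side update consistent
    with the sum of the fine-side updates, restoring exact conservation.
    """
    return 0.25 * np.sum(F_fine)  # each fine face covers 1/4 of the coarse area
```

An analogous edge-based fixup with length weights applies to the electric field so that $\nabla\cdot\vec{B}=0$ is preserved across the interface.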
The third building block of the NIRVANA code is the adaptive mesh
solver for the Poisson Eq. (6), $\Delta\Phi = 4\pi G\rho$.
Figure 2: Principle of the AMR V-cycle. |
In the AMR multigrid approach
the Poisson equation is solved simultaneously on all grid levels
rather than level by level. Consequently,
in a V-cycle iteration step, relaxation takes place on the full grid hierarchy.
The iteration is stopped when the residual
$r = 4\pi G\rho - \Delta_h\Phi$, with $\Delta_h$ the discrete Laplacian,
becomes small enough, i.e. when the norm of $r$ drops below a prescribed
fraction of the norm of the source term.
The procedure is as follows (see also Fig. 2).
For reasons of clarity, grid indices are dropped.
The V-cycle starts on the finest level $l=L$.
First, the current potential $\Phi$ is saved for a later
update when going up the V-cycle. Then,
one Gauß-Seidel relaxation step is performed
for the correction $\delta\Phi$,
i.e. on the error equation $\Delta_h\,\delta\Phi = r$; the residual is
subsequently restricted to the next coarser level, where the procedure is
repeated down to the base level. Going back up the V-cycle, the coarse-level
corrections are prolongated and added to the saved fine-level potentials,
each followed by a further relaxation step.
Note that, of course, synchronization work is
needed during a V-cycle along superblock interfaces to achieve global
convergence. For instance, internal boundary values of the correction
$\delta\Phi$ must be exchanged among adjacent superblocks
since level l is relaxed successively,
superblock by superblock. Note also that, without AMR,
the method reduces to a pure SOR scheme. That is why SOR is used as the relaxation scheme on the base level.
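For orientation, the following sketch implements a plain multigrid V-cycle for the 1D model problem $\phi'' = s$ on a grid of $2^k+1$ points, with Gauß-Seidel smoothing of the correction equation. It is a uniform-grid toy without superblocks or elliptic matching, intended only to make the relax/restrict/prolongate pattern of Fig. 2 explicit:

```python
import numpy as np

def gauss_seidel(phi, src, h, sweeps=1):
    """Gauss-Seidel relaxation for the 1D model problem phi'' = src."""
    for _ in range(sweeps):
        for i in range(1, phi.size - 1):
            phi[i] = 0.5 * (phi[i - 1] + phi[i + 1] - h * h * src[i])
    return phi

def residual(phi, src, h):
    """r = src - A.phi, kept zero on the (Dirichlet) boundary points."""
    r = np.zeros_like(phi)
    r[1:-1] = src[1:-1] - (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / h**2
    return r

def v_cycle(phi, src, h):
    """One V-cycle: smooth, restrict the residual, recurse for the
    coarse-grid correction, prolongate, update, smooth again."""
    phi = gauss_seidel(phi, src, h)
    if phi.size > 3:  # coarsen down to a trivial grid
        r = residual(phi, src, h)
        ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2.0 * h)
        e = np.zeros_like(phi)
        e[::2] = ec                          # coincident coarse points
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])   # linear interpolation in between
        phi += e
    return gauss_seidel(phi, src, h)
```

For the Poisson Eq. (6) the source term is $s = 4\pi G\rho$; in NIRVANA the relaxation additionally sweeps superblock by superblock with the boundary synchronization described above.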
Boundary conditions for $\Phi$ on the base level are of Dirichlet type and
are obtained from a multipole expansion of the density distribution,
\[
\Phi(\vec{x}) = -G\left[\frac{M}{r} + \frac{\vec{d}\cdot\vec{x}}{r^3} + \frac{1}{2}\sum_{ij}\frac{Q_{ij}\,x_i x_j}{r^5}\right], \qquad r = |\vec{x}|,
\]
with the monopole, dipole and quadrupole moments
\[
M = \int\rho\,\mathrm{d}V, \qquad \vec{d} = \int\rho\,\vec{x}'\,\mathrm{d}V, \qquad Q_{ij} = \int\rho\left(3x'_i x'_j - r'^2\delta_{ij}\right)\mathrm{d}V.
\]
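A direct transcription of this boundary treatment (a sketch; the array layout and function name are assumptions of this example) accumulates the moments of the density distribution and evaluates the expansion at a boundary point:

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def multipole_boundary_potential(rho, pos, dV, x_b):
    """Dirichlet boundary value of the potential from a multipole
    expansion of the density up to quadrupole order.

    rho : cell densities, shape (N,)
    pos : cell centers relative to the expansion center, shape (N, 3)
    dV  : cell volume
    x_b : boundary point, shape (3,)
    """
    M = np.sum(rho) * dV                           # monopole (total mass)
    d = np.sum(rho[:, None] * pos, axis=0) * dV    # dipole moment
    r2 = np.sum(pos**2, axis=1)
    Q = np.zeros((3, 3))                           # traceless quadrupole tensor
    for i in range(3):
        for j in range(3):
            Q[i, j] = np.sum(rho * (3.0 * pos[:, i] * pos[:, j]
                                    - (i == j) * r2)) * dV
    r = np.linalg.norm(x_b)
    return -G * (M / r
                 + np.dot(d, x_b) / r**3
                 + 0.5 * (x_b @ Q @ x_b) / r**5)
```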
As "tests'' of the GMHD scheme I consider a suite of collapse simulations of an initially uniform solar-mass cloud subject to various conditions, namely, with and without rigid rotation of the cloud, with and without attached azimuthal mode perturbation, with and without magnetic field and, finally, with different gas equation of states. In all simulations the computational domain is a cubic box with length four times the cloud radius and is spanned by 643 grid cells. From the beginning there is a first permanent refinement level in a spherical region covering the full cloud i.e. the cloud radius is effectively resolved by 32 grid cells. It may also be of interest for the reader to learn that the following adaptive mesh calculations do not require a supercomputer but were performed on a Linux PC with computing times of roughly 1 day for the isothermal simulations and up to 5 days for the barotropic simulations.
In one class of simulations a quasi-isothermal equation of state
is considered. Quasi-isothermal in this context means that the energy equation
is retained but an adiabatic index $\gamma$ close to unity is adopted.
Figure 3: Non-rotating collapse: grey-scale representation of the density structure.
Figure 3 illustrates the solution at the time when the density has reached
its maximum value in this run.
Figure 4 shows a cut of the density along the x-axis.
The horizontal line in Fig. 4 indicates the analytic solution
for the collapsing portion not yet reached by the rarefaction wave at the given
time. The extent of the plateau is computed using the formula for the
rarefaction front radius as a function of density taken from
Truelove et al. (1998, hereafter T98).
The maximum density fits very well whereas there are somewhat larger
deviations from the analytic result at the edges of the plateau.
About 45% of the mass is within the plateau region.
As expected from theory,
the radial velocity grows, to a good approximation, linearly from the cloud
center to the rarefaction front and then falls off in the surrounding medium.
When comparing the NIRVANA code result with the T98 result there is one
obvious difference, namely that the T98 solution possesses
a time delay also with respect to the analytic solution: Fig. 5 of T98
shows a snapshot at a later time whose maximum density
is even lower than in the state presented in Fig. 3.
As reported in T98, this time delay might be mainly due to the periodic
boundary conditions mimicking neighboring clouds. These image clouds act
as inhibitors of collapse
because of their attractive force on the real cloud. Here, Dirichlet
boundary conditions for the gravitational potential are
used which are computed from a multipole expansion of the density distribution.
This is an excellent approximation for the almost spherically symmetric
configuration and explains the very insignificant time shift relative to the
analytic solution.
Figure 4: Non-rotating collapse: cut of the density along the x-axis; the horizontal line indicates the analytic solution (see text).
The second case, a rotating uniform cloud, is again taken from T98
but was originally described in Norman et al. (1980).
A solar-mass cloud of uniform density and given radius, with
temperature T=5 K, is initially set in uniform rotation with constant
angular velocity. The initial state is characterized by the energy ratios
$\alpha$ (thermal energy to gravitational energy)
and $\beta$ (rotational energy to gravitational energy).
The main purpose of this collapse problem is to check the quality of angular momentum conservation by the numerical method. As pointed out by Norman et al. (1980), good numerical advection of angular momentum is necessary in order to obtain the correct answer to the problem, namely, the production of a disk which in the isothermal case evolves towards a singular state. Poor advection of angular momentum, on the other hand, may generate a ring-like structure due to artificial transport effects.
As in T98, the collapse is followed over roughly 6 decades of density
increase. The result is shown in Fig. 5, representing the
density in logarithmic scale in the x-y plane (upper panel)
and x-z plane (lower panel) at the end of the run.
The solution is altogether comparable to the
findings of T98 of a rotating disk structure. There is no visible tendency
to build rings, indicating that the NIRVANA code, albeit solving for the
linear momentum, can handle angular momentum advection in an acceptable manner.
Figure 5: Rotating collapse: grey-scale representation of the density structure in the x-y plane (top) and x-z plane (bottom).
Figure 6: Isothermal collapse: snapshots at different evolutionary stages showing the density structure, velocity field (left panel only) and block distribution (middle and right panels). Evolution times in units of the free-fall time are: t=1.008, 1.0647, 1.0657.
Figure 7: Isothermal collapse with vertical magnetic field. Evolution times in units of the free-fall time are: t=1.145, 1.2479, 1.2482.
Next, I consider a cloud with an azimuthal m=2 mode perturbation, i.e. the initial density is modulated as $\rho = \rho_0\left[1 + A\cos(2\varphi)\right]$, where $\varphi$ is the azimuthal angle about the rotation axis and A the perturbation amplitude.
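For concreteness, seeding such a perturbation on a Cartesian grid could be done as follows (a minimal sketch; the amplitude is kept as a free parameter):

```python
import numpy as np

def seed_m2_perturbation(rho0, x, y, amp):
    """Apply an azimuthal m=2 density perturbation about the z-axis:
    rho = rho0 * (1 + amp * cos(2*phi)), with phi the azimuthal angle."""
    phi = np.arctan2(y, x)
    return rho0 * (1.0 + amp * np.cos(2.0 * phi))
```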
Figure 6 shows a sequence of snapshots illustrating the result of
this collapse problem after the density has increased by 2.2 decades,
6.3 decades and 9.43 decades, respectively. The last value corresponds
to a time t=1.0657 in units of the free-fall time (cf. Fig. 6).
Fourteen levels
of refinement have been introduced at this latest stage so that the finest
resolution is equivalent to a
1 048 576³ uniform grid simulation.
The overdense regions of the cloud seeded by the initial perturbation
stimulate binary fragmentation.
By t=1.0647 two density maxima have clearly been
produced (Fig. 6, middle panel) which appear as elongated structures.
These high-density regions are connected by a bar and have rotated in a
clockwise sense around the cloud center by a little less than 90 degrees
measured from their initial orientation in the x-z plane. At this stage
each density peak is resolved by grid blocks of refinement level 9,
the bar in between by grid blocks of refinement level 8.
In the subsequent evolution each of the fragments approaches a singular
filamentary state consistent with the analytical predictions of Inutsuka &
Miyama (1997) for inviscid filament formation under the assumption of
an isothermal gas equation of state. Figure 6 (right panel), presenting
a close-up view (zooming factor 4000) around the upper filament
at the latest stage, apparently
gives support to such a scenario.
The situation is similar, though not identical, if a vertical magnetic field
is present. Figure 7 illustrates the corresponding result for the
magnetic case adopting an initial field strength which
corresponds to a mass-to-flux ratio
twice the critical value, i.e.
the magnetic field is supercritical; the critical mass-to-flux ratio is
adopted from Hosking & Whitworth (2004a,b, hereafter HW).
Figure 8: Barotropic collapse without magnetic field: snapshots of the density structure and velocity illustrating the evolution at times t=1.0855, 1.112, 1.119 in units of the free-fall time (from left to right).
The other class of simulations applies
a barotropic equation of state, i.e. a pressure law which behaves
isothermally below a critical density $\rho_{\mathrm{crit}}$ and stiffens
towards adiabatic above it.
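A commonly used realization of such a law is sketched below (an illustration of the generic form only; the specific coefficients of the runs follow HW and are not reproduced here):

```python
def barotropic_pressure(rho, cs_iso, rho_crit, gamma=5.0 / 3.0):
    """Barotropic pressure law: p ~ rho (isothermal) for rho << rho_crit,
    smoothly steepening to p ~ rho**gamma (adiabatic) for rho >> rho_crit."""
    return cs_iso**2 * rho * (1.0 + (rho / rho_crit) ** (gamma - 1.0))
```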
For the following collapse problems the same initial conditions as in the isothermal perturbed case are adopted, i.e. the configuration is identical to that used in the SPH simulations of HW. Therefore, a direct comparison of SPH and AMR-GMHD results is possible. The main difference between the two models lies in the treatment of the cloud environment. In SPH there are no particles outside the cloud whereas in the grid case the density is set to 1/50 of the initial cloud density to avoid vacuum conditions. Note, by the way, that due to the choice of equation of state the cloud surface is not in pressure equilibrium with the surroundings. This is in contrast to the quasi-isothermal calculations in which exact pressure balance was initially achieved. The resulting outwards directed acceleration is, however, relatively small and is a transient effect which plays no role in the late collapse phase.
I start with a simulation without magnetic field. As long as the system
remains nearly isothermal, runaway collapse occurs. Later on, when the density
approaches the critical density,
the collapse phase gradually turns into an accretion phase where scales
no longer change drastically.
Figure 8 presents a sequence of snapshots illustrating the structure of
the system at such later evolutionary stages.
The density, in logarithmic scale and coded in black-white,
is shown in the x-y coordinate plane
overlain by the velocity field. Only a subset
of the entire domain is visualized by zooming in with a factor 50 around the
coordinate origin. At time t=1.0855 in units of the free-fall time
a filamentary structure has developed
(Fig. 8, left panel).
At this time the adiabatic regime has clearly been reached, expressed by
the fact
that only one filament has appeared instead of two as in the quasi-isothermal
case starting from the same initial conditions. This is because the overdense
regions generated early during the collapse by the imposed azimuthal
perturbation have fallen towards the center and merged.
In the non-magnetic SPH model
of HW the same observation has been made (see their Fig. 2, top left).
Figure 9: Barotropic collapse without magnetic field: density structure in the coordinate planes (shifted for the x-z and y-z planes) and block distribution in the x-y coordinate plane at the end of the run.
Figure 10: Barotropic collapse with magnetic field: resulting structure for the weaker, supercritical initial field (left) and the stronger, near-critical initial field (right).
The situation is quite different if a magnetic field is present. Here, I
consider only the case of a vertical magnetic field of uniform strength,
i.e. the magnetic field is oriented along the rotation axis.
The initial magnetic field strength is chosen as in the isothermal case,
with a mass-to-flux ratio
twice the critical mass-to-flux ratio, so that the
magnetic field is supercritical from the beginning.
The final outcome
is illustrated in the color-coded representation of Fig. 10
(left panel),
similar to Fig. 9.
In the very beginning of the collapse the overdense regions initiated
by the initial perturbation again fall towards the center and merge.
In contrast to the non-magnetic case, however, there is a rebound leading
again to two separate overdense objects. These objects are the seed
for binary formation. Indeed, a binary system is
formed. The oblate cores
are connected by a less dense bar. Each core is surrounded by a thin
disk-like structure with a gap tied up by the bar. As expected, the magnetic
field is dragged with the infall and builds an hour-glass morphology around
each of the density cores. The magnetic field lines in Fig. 10
show a perceptible amount of twist
emanating from a flow field which rotates differentially along the vertical
direction.
For a stronger magnetic field, with a mass-to-flux ratio close to the
critical value, the situation again changes drastically.
The final solution
is illustrated in Fig. 10 (right panel). In this case only one
core has formed, embedded in an extended disk.
The magnetic field near the core again has an hour-glass structure but
is significantly less twisted than in the previous case. This can be
explained by efficient magnetic braking, which
removes angular momentum from the collapsing material
and which tends to equalize differential rotation. The effect gets more
pronounced as the magnetic field gets stronger.
I have presented a powerful new version of the NIRVANA code suitable for the simulation of multi-scale gravitomagnetohydrodynamics problems in three space dimensions. A state-of-the-art Godunov-type central scheme for divergence-free MHD has been combined with a multigrid-type Poisson solver, both operating within an adaptive mesh refinement framework. This new code has then been applied to the gravitational collapse of a solar-mass uniform cloud subject to different gas equations of state and various initial conditions: isothermal and barotropic, non-magnetic and magnetic, non-rotating and rotating, with and without a binary perturbation. In particular, it has been demonstrated that the code is able to robustly model the magnetohydrodynamical collapse and the related issue of fragmentation - a problem of high complexity which is only beginning to be explored in more depth. It has been shown that in the models with a barotropic equation of state and an initial m=2 mode perturbation, fragmentation is controlled by magnetic fields. Without a magnetic field the final outcome is a single core surrounded by a ring-like structure. In the case of a strong vertical field with mass-to-flux ratio close to the critical value, again a single core is formed but embedded in an extended disk. For a weaker field with mass-to-flux ratio twice the critical one, a binary system connected by a bar is produced.
The results presented in this paper are very encouraging and give strong motivation for further studies in this research field. To gain more insight into the fragmentation process during protostellar core collapse, the effects of ambipolar diffusion and probably also radiation transport should be included. It would also be very fruitful to compare such AMR-GMHD models with models based on SPH.