A&A, Volume 606, October 2017
Article Number: A78
Number of pages: 30
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201630208
Published online: 16 October 2017
SPHYNX: an accurate density-based SPH method for astrophysical applications
1 Departement Physik, Universität Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
e-mail: ruben.cabezon@unibas.ch
2 Scientific Computing Core, sciCORE, Universität Basel, Klingelbergstrasse 61, 4056 Basel, Switzerland
3 Departament de Física, Universitat Politècnica de Catalunya, EEBE, Eduard Maristany 10–14 mòdul C2, 08019 Barcelona, Spain
e-mail: domingo.garcia@upc.edu; joana.figueira@upc.edu
4 Institut d’Estudis Espacials de Catalunya, Gran Capità 2–4, 08034 Barcelona, Spain
Received: 7 December 2016
Accepted: 30 June 2017
Aims. Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovae, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique, as well as some new ones that we present here.
Methods. SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code.
Results. A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping that often prevents the growth of hydrodynamic instabilities in regular SPH codes.
Conclusions. On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions, the results were similar to, or in some particular cases better than, those obtained with other SPH schemes, such as GADGET-2, PSPH, or the recent density-independent (DISPH) and conservative reproducing kernel (CRKSPH) techniques.
Key words: methods: numerical / hydrodynamics / instabilities
© ESO, 2017
1. Introduction
Many interesting problems in astrophysics involve the evolution of fluids and plasmas coupled with complex physics. For example, in core-collapse supernovae, magnetohydrodynamics meets general relativity, nuclear processes, and radiation transport. Other scenarios, such as neutron star mergers, Type Ia supernovae, and planet or star formation, face similar challenges in terms of complexity. Besides that, these phenomena often have a strong dependence on the dimensionality and must be studied in three dimensions. This requires accurate numerical tools, which translates to rather sophisticated hydrodynamic codes. Because of its adaptability to complex geometries and good conservation properties, the Smoothed Particle Hydrodynamics (SPH) method is a popular alternative to grid-based codes in the astrophysics community. SPH is a fully Lagrangian method, born forty years ago (Lucy 1977; Gingold & Monaghan 1977), that has since undergone sustained development (Monaghan 1992, 2005; Rosswog 2015a; Springel 2010b; Price 2012). Recent years have witnessed a wide range of improvements especially aimed at reducing the numerical errors inherent to the technique. These errors are known as E0 errors (Read et al. 2010) and mainly appear due to the conversion of the integrals, representing locally averaged magnitudes of the fluid, into finite summations. The simplest and most naive way to get rid of them would be to work closer to the continuum limit, which implies working with a number of particles and neighbors as large as possible (ideally, N → ∞ and Nnb → ∞, see Zhu et al. 2015). Unfortunately, this is not feasible in common applications of the technique because the total number of particles is limited by the available computing power (both speed and storage). Moreover, the number of neighbors of a given particle cannot be arbitrarily increased without suffering pairing instability (Schuessler & Schmitt 1981). This is a numerical instability that acts as an attractive force appearing at scales slightly shorter than the smoothing length h, provoking artificial particle clumping and effectively decreasing the quality of the discretization, which eventually leads to unrealistic results.
In order to reduce the E0 errors, another more practical possibility has been explored in recent years: finding interpolating functions that are less prone to particle clustering than the widely used M4 or cubic-spline kernel (Monaghan & Lattanzio 1985). Among the various candidates, the most used (pairing-resistant) kernels come either from an extension of the Mn family to higher-order polynomials (Schoenberg 1946) or from the Wendland functions (Wendland 1995). In particular, the Wendland family is especially suited to cope with pairing instability (Dehnen & Aly 2012). Another possibility is the sinc family of kernels (Cabezón et al. 2008), which are functions of the type S(x) = C(n)(sin x/x)^n, and add the capability of dynamically modifying their shape simply by changing the exponent n. That adaptability of the sinc kernels makes the SPH technique even more flexible and can be used, in particular, to prevent particle clustering, as shown in Sect. 2.1.
Historically, the growth of subsonic hydrodynamical instabilities has been problematic for SPH simulations, which tend to damp them significantly. The Rayleigh-Taylor (RT) instability is a ubiquitous phenomenon which serves as a paradigmatic example. It appears wherever a cold and dense fluid sits on top of a hot and diluted one in the presence of gravity (or any inertial force, by virtue of the principle of equivalence). The entropy inversion leads to the rapid overturn of the fluid layers. In the real world, the overturn is triggered by small perturbations at the separation layer between the light and dense fluids. The RT instability is one of the most important agents driving the thermonuclear explosion of a white dwarf, which gives rise to Type Ia supernova (SNIa) explosions. Its correct numerical description is also crucial to understanding the structure of supernova remnants (SNR) and to modeling core collapse supernova (CCSN) explosions. Additionally, the RT instability is also the source of other interesting phenomena such as the KH instability or turbulence. The numerical simulation of the Rayleigh-Taylor instability using SPH has traditionally been a drawback for the technique, especially for low-amplitude initial perturbations in the presence of a weak gravitational force. At present, for a similar level of resolution, the best SPH codes cannot yet compete with state-of-the-art grid-based methods. For example, finite-volume/difference Godunov methods such as ATHENA and PLUTO, AMR codes such as FLASH, the Meshless Finite Mass (MFM) and Meshless Finite Volume (MFV) methods, and especially the moving-mesh methods based on Voronoi tessellations, as in the code AREPO, provide a good approach to the RT instability. This problem is partially overcome by using a large number of neighbors and by adding an artificial heat diffusion term to the energy equation, as in the PSPH proposal by Saitoh & Makino (2013) and Hopkins (2013). However, these problems still persist when either the size of the initial perturbation or the gravity value is reduced (Valdarnini 2012), meaning that they are a symptom of another source of numerical error in SPH named tensile instability. This is an artificial surface tension that appears at contact discontinuities because of an insufficient smoothness of the pressure between both sides of the discontinuity (Monaghan 2000). As a consequence, the integration of the momentum equation gives incorrect results. An excess of that tension provokes the damping of fluid instabilities, especially those with short wavelengths. Several techniques have been proposed to treat this problem, such as averaging the pressure by means of the interpolating kernel itself, as in the PSPH scheme (Hopkins 2015), binding the volume element estimation to pressure, as in the density-independent scheme (DISPH; Hopkins 2013; Saitoh & Makino 2013, 2016), or adding an artificial diffusion of heat to the energy equation, which helps to balance the pressure across the discontinuity (Price & Monaghan 2007). They have paved the road that has led the SPH technique to a new standard within the last few years, and have helped to overcome this long-standing inconvenience. In particular, it has been proven fundamental to increase the accuracy of the gradient estimation across contact discontinuities by reducing its numerical noise. To achieve that, García-Senz et al. (2012) used an integral scheme to calculate spatial derivatives, which proved to be especially efficient at handling fluid instabilities.
The validity of that approach was assessed in subsequent works by Cabezón et al. (2012) and Rosswog (2015a). In this last case, the integral approach to the derivatives (IAD) was used to extend the SPH scheme to the special-relativistic regime (see also Rosswog 2015b). See also the recent work of Valdarnini (2016), where the efficiency of the IAD scheme to reduce the E0 errors is studied in detail.
Finally, a recent breakthrough in SPH was the emergence of the concept of generalized volume elements (Ritchie & Thomas 2001; Saitoh & Makino 2013; Hopkins 2013). In these works, it was shown that a clever choice of the volume element (VE) can reduce the tensile instability, leading to a better description of hydrodynamic instabilities. In this paper, we present a novel estimator for the VE that preserves the normalization of the kernel. We show in this work that having a good partition of unity is also connected to the tensile-instability problem. The calculations of the growth of the Kelvin-Helmholtz and Rayleigh-Taylor instabilities using these new VE are encouraging.
In this work, we also introduce the hydrodynamics code SPHYNX, which gathers together the latest advances in the SPH technique, including the new ones presented here. SPHYNX has already been used in production runs simulating Type Ia and core-collapse supernovae and is publicly accessible1.
The organization of this paper is as follows. In Sect. 2 we review the main properties of the sinc kernels as well as the integral approach to the derivatives, which are at the heart of SPHYNX. Section 3 is devoted to the choice of the optimal volume element and to the update of the smoothing length h and of the kernel index n. Section 4 describes the structure of the hydrodynamics code SPHYNX: momentum and energy equations and included physics. Sections 5 and 6 are devoted to describing and analyzing a variety of tests carried out in two and three dimensions, respectively. Finally, we present our main conclusions and prospects for the future in Sect. 7.
2. Interpolating kernels and gradients evaluation
2.1. The Sinc kernels
Interpolating kernels in SPH must fulfill several basic requirements in order to be considered suitable interpolators. Some of these features are automatically satisfied if the functional form of the kernel is spherically symmetric and approaches the Dirac-δ in the continuum limit. A functional representation of the Dirac-δ is (Couch 1997)

δ(x) = lim_{h → 0} sin(πx/(2h))/(πx). (1)

Writing q = x/h and sinc(u) ≡ sin(u)/u, the function above becomes

δ(x) = lim_{h → 0} (1/(2h)) sinc(πq/2), (2)

where the magnitude sinc(u) is a widely known function used in signal analysis. Nevertheless, the expression given by Eq. (2) does not have compact support, limiting its practical applications as SPH interpolation kernel. Moreover, for x/h > 2, the sinc function oscillates and eventually produces negative values. An obvious solution to this problem is to restrict the domain to | x/h | ≤ 2, but in that case the first derivative of this function does not go to zero at the limits of the interval. To fix that, we define the W_n^S kernel set as:

W_n^S(q, h) = (B_n/h^d) S_n(πq/2), 0 ≤ q ≤ 2, (3)

where S_n(·) = sinc^n(·), q = | r | /h, and d is the spatial dimension. B_n is a normalization constant, whereas n is a real number with n ≥ 2 to guarantee the nullity of the first derivative at q = 2. We generically refer to the W_n^S set as sinc kernels; see Cabezón et al. (2008; where they were called harmonic kernels) and García-Senz et al. (2014). These interpolators fulfill the regular desirable characteristics for SPH interpolators. Namely, tending to a δ-function in the continuum limit, having compact support, and being spherically symmetric. But they have a number of interesting additional features: a) they are connected by definition to the δ-function; b) they add flexibility to the SPH calculations as the kernel profile can be changed by varying the kernel index n; c) working with a high index (n ≥ 5) not only ensures that high-order derivatives are well behaved, but also overcomes particle clustering (see Sect. 2.1); and d) they tend to fulfill separability when the index n increases (see Appendix A).
Unfortunately, there is no general analytic expression for the normalization constant B_n, but this small problem can be circumvented by fitting B_n, numerically calculated for a series of values of n. A fitting formula for B_n (Eq. (4)), valid in the range 3 ≤ n ≤ 12, was provided in García-Senz et al. (2014); the values of the coefficients b0, b1, b2, and b3 as a function of the dimensionality are given in Table 1.
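As an illustration of how such a kernel can be evaluated in practice, the following Python sketch implements the sinc kernel of Eq. (3) as reconstructed above; it is not code from SPHYNX. Rather than relying on the fitting formula of Eq. (4), which is not reproduced here, the normalization constant B_n is obtained by direct numerical quadrature over the kernel support (function names and the quadrature approach are our own choices).

```python
import numpy as np
from scipy.integrate import quad

def sinc_kernel_shape(q, n):
    """Un-normalized sinc kernel profile, S_n(pi*q/2), for q = |r|/h in [0, 2]."""
    q = np.asarray(q, dtype=float)
    # np.sinc(x) = sin(pi*x)/(pi*x), hence np.sinc(q/2) = sin(pi*q/2)/(pi*q/2)
    s = np.where(q < 2.0, np.sinc(q / 2.0), 0.0)
    return s ** float(n)

def sinc_normalization(n, dim=3):
    """B_n obtained by direct quadrature (h = 1) instead of the fit of Eq. (4)."""
    if dim == 1:
        integral, _ = quad(lambda q: sinc_kernel_shape(q, n), 0.0, 2.0)
        integral *= 2.0                                    # both sides of the origin
    elif dim == 2:
        integral, _ = quad(lambda q: 2.0 * np.pi * q * sinc_kernel_shape(q, n), 0.0, 2.0)
    else:
        integral, _ = quad(lambda q: 4.0 * np.pi * q**2 * sinc_kernel_shape(q, n), 0.0, 2.0)
    return 1.0 / integral

def sinc_kernel(r, h, n, dim=3):
    """W(|r|, h, n) = (B_n / h^dim) * S_n(pi*q/2), with q = |r|/h."""
    q = np.asarray(r, dtype=float) / h
    return sinc_normalization(n, dim) / h**dim * sinc_kernel_shape(q, n)

# Example: kernel value at q = 1 for the default index n = 5 in 3D
print(sinc_kernel(1.0, 1.0, 5.0))
```

In a production code the normalization would of course be tabulated or fitted once, as the paper does, rather than re-integrated at every call.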
In the limit of large n, the sinc kernels also display an interesting feature: separability, which has not been sufficiently emphasized in the extant literature concerning SPH. Standard kernels used in SPH are spherically symmetric functions which naturally lead to a second-order accurate scheme. Interestingly, the majority of these kernels do not fulfill the identity:

W^{3D}(x, y, z, h) = W^{1D}(x, h) · W^{1D}(y, h) · W^{1D}(z, h). (5)

This property guarantees the consistency of simulations involving planar symmetry, which should render identical results when calculated in 1D, 2D, or 3D, if the resolution and the interpolating kernel are the same in all three cases. Exploring in detail the applications of this property is beyond the scope of this paper. Nevertheless, we added a short discussion in Appendix A.
Additionally, some particular values of the kernel index n can mimic the profile of the most notable kernels used in SPH. In Table 2 we show the correspondence between the sinc kernels and both the Mn and Cm families of interpolators. The value of n shown in Table 2 was obtained by minimizing the metric distance between W1 and W2, the interpolators to be compared2. It is remarkable that n ≃ 5 is able to approach both the quintic M6 B-spline and the C4 Wendland kernel. Thus, the choice n = 5 seems a suitable default value when the number of neighbors is moderate, nb ≃ 60–120 in 3D (see, e.g., Figs. 4 and 5 of Rosswog 2015a). For nb ≥ 120 it may be advisable to raise the index of the sinc kernel to avoid the pairing instability. Interestingly, the choice n = 6.315 provides a very similar profile to that of the Wendland C6 kernel. Nevertheless, similarity in real space is not the only characteristic to take into account. As proved by Dehnen & Aly (2012), having a positive Fourier transform for a large range of modes is of utmost importance when dealing with pairing instability (see Sect. 2.1).
It is also worth noting that the sinc family forms a continuous set, making it possible to locally change the index n during the runtime of the simulation, so that one can, for example, raise the value of n in the presence of sharp density gradients (see Sect. 3.2).
Finally, the evaluation of a sinc function is indeed computationally more costly than that of the regular spline kernels. Nevertheless, it is very easy to circumvent this problem by interpolating the value of the sinc function from a relatively small pre-calculated table. In practical applications, though, the choice of the kernel has a negligible impact on the computational burden, being completely masked by efficient cache usage and other sections of the code, such as the neighbor search or the gravity calculation.
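A minimal sketch of the tabulation strategy mentioned above follows; the table size, class name, and linear interpolation are our own choices, not SPHYNX internals.

```python
import numpy as np

class KernelTable:
    """Kernel profile pre-tabulated on a uniform grid in q = |r|/h and linearly
    interpolated at runtime (illustrative only)."""
    def __init__(self, shape_func, n, n_entries=10_000, q_max=2.0):
        self.q = np.linspace(0.0, q_max, n_entries)
        self.w = shape_func(self.q, n)      # e.g. the sinc_kernel_shape sketched above

    def __call__(self, q):
        return np.interp(q, self.q, self.w, right=0.0)

# Usage with the sinc profile sketched earlier:
# table = KernelTable(sinc_kernel_shape, n=5.0)
# print(table(np.array([0.0, 0.5, 1.0, 1.9, 2.5])))
```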
Although SPHYNX uses the sinc kernels by default, it also incorporates the Wendland C2, C4, and C6 kernels, which can easily be selected by the user whenever necessary. On another note, the impact of the Cm kernels has recently been studied by other authors (Dehnen & Aly 2012; Rosswog 2015a), so we focus here almost exclusively on the sinc set. Nevertheless, a comparison between the performance of the C6 and sinc interpolators is provided in Sect. 5.5, where the evolution of the Gresho-Chan vortex is simulated and discussed.
Table 2. Values of the exponent n of the sinc kernels that provide the best match (i.e., minimum metric distance) to the profile of known interpolators, such as members of the B-spline (Mn) and Wendland (Cm) families.
Convergence to the continuum limit is only achieved when h → 0. According to Zhu et al. (2015), it can only be reached using both a large number of particles N and a large number of neighbors nb. The value of N chiefly depends on the available computer resources and on algorithmic details. However, working with a large number of neighbors is not only expensive but may be totally impractical owing to the tendency of the mass particles to gather in clusters when nb is high, a phenomenon known as pairing instability. A leap forward in alleviating this problem was the proposal by Dehnen & Aly (2012) of using Wendland functions (Wendland 1995) as interpolators, which are much less sensitive to this instability than the low-order members of the Mn B-spline family. Another possible option to avoid particle clustering is to choose a sinc kernel with a large enough exponent (García-Senz et al. 2014).
Fig. 1. Fourier transform of the spline kernels B4 and B6 (black lines), the Wendland kernels C4 and C6 (light blue lines), and the sinc kernels S5, S7, and S10. Only the positive part of the transform is shown.
As mentioned above, having a Fourier transform with positive modes is necessary for a kernel to be stable. In Fig. 1 we show the Fourier transform of several B-spline, Wendland, and sinc kernels. As expected, the Wendland kernels show a positive Fourier transform even at very large modes. The sinc family, on the other hand, systematically becomes negative only at higher and higher modes as the exponent n increases, and the size of the negative regions decreases very quickly. Therefore, particle clustering can be totally suppressed, in practice, simply by raising n. It is also worth noting that, in order to improve the performance of the Wendland kernels, it is advisable to considerably increase the number of neighbors, up to nb ≃ 400 in 3D (Valdarnini 2016), which substantially increases the computational burden of actual simulations3.
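The stability argument above can be checked numerically. The sketch below computes the radial Fourier transform of a spherically symmetric kernel in 3D with a simple trapezoidal rule and reports where it first turns negative; the grid sizes and function names are our own assumptions, and the example reuses the sinc_kernel sketch from above.

```python
import numpy as np

def radial_fourier_transform_3d(w_of_r, k_values, r_max=2.0, n_r=4096):
    """3D Fourier transform of a spherically symmetric kernel w(r):
    W(k) = (4*pi/k) * integral_0^rmax w(r) * r * sin(k*r) dr (trapezoidal rule)."""
    r = np.linspace(1e-8, r_max, n_r)
    w = w_of_r(r)
    return np.array([4.0 * np.pi / k * np.trapz(w * r * np.sin(k * r), r)
                     for k in k_values])

# Example (h = 1, n = 5, 3D), using the sinc_kernel sketch from Sect. 2.1:
# k = np.linspace(0.5, 60.0, 400)
# wk = radial_fourier_transform_3d(lambda r: sinc_kernel(r, 1.0, 5.0), k)
# negative = k[wk < 0]
# print("first negative mode:", negative[0] if negative.size else "none in range")
```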
We tested the impact of the kernel and the number of neighbors on the emergence of pairing instability in a dynamical simulation. Results can be found in Sect. 5.1.
2.2. The integral approach to derivatives
In SPH, gradients are usually calculated simply by applying the nabla operator to the kernel function (Gingold & Monaghan 1977). Nevertheless, the same procedure is not applied to calculating higher-order derivatives, such as the Laplace operator, needed, for example, to evaluate the heat transport contribution to the energy equation. For these cases, an integral approach is preferred (Brookshaw 1985) because it is less noisy and provides excellent results. Working in that direction, an integral approach to calculating first derivatives, called IAD, was recently introduced by García-Senz et al. (2012). The IAD approach provides a more accurate estimation of the first derivative, thus improving the overall quality of the simulations. This was demonstrated through the study of several test cases in 1D, 2D, and 3D by Cabezón et al. (2012), Rosswog (2015a), and Valdarnini (2016), hence we refer the reader to these papers for technical details of the method. In García-Senz et al. (2012) it was shown that a restriction of the full integral approach (called IAD0) leads to a conservative formulation of the SPH equations in the same way as the standard scheme4 does. This, however, is not without cost. Some accuracy is lost in the gradient calculation, which is exact for linear functions when IAD is used. In his work, Rosswog (2015a) evaluated this accuracy loss in several tests, and the outcome was that even if IAD0 was used, the accuracy loss, in comparison with IAD, was much smaller than the accuracy gain when comparing with standard gradient estimators. Furthermore, Valdarnini (2016) presents a series of tests that show the effectiveness of the IAD0 method at removing sampling errors in sub-sonic flows. Additionally, the SPH equations linked to the IAD0 scheme are formally similar to those of the standard method provided that the kernel gradient is computed as follows (for the sake of clarity, from now on we drop the dependence of the kernel on the particle inter-distance, rb−ra),

∇_a W_ab(h_a) ≃ A_ab(h_a), (6)

where,

A_i,ab = ∑_{j=1}^{d} c_ij,a (x_j,b − x_j,a) W_ab(h_a), (7)

c_ij,a being the coefficients of the inverse of the matrix τ_a for particle a, whose elements are,

τ_ij,a = ∑_b V_b (x_i,b − x_i,a)(x_j,b − x_j,a) W_ab(h_a). (8)

In García-Senz et al. (2012), it is shown that for the Gaussian kernel the IAD0 scheme is equivalent to the gradient estimation with the analytical derivative of the kernel function, this being a particular case of the integral approach. It was also proven that IAD0, in fact, exactly reproduces the gradient of linear functions provided that,

⟨ Δr ⟩_a ≡ ∑_b V_b (r_b − r_a) W_ab(h_a) = 0. (9)

Equation (9) is, in general, fulfilled only approximately, and this is the main difference between IAD and IAD0 accuracies. The best gradient estimation is therefore attained by IAD0 only when ⟨ Δr ⟩ ≃ 0. In this respect, the initial model should be built so that Eq. (9) is approached as much as possible. As we see in the following section, it turns out that getting a small ⟨ Δr ⟩ is closely related to the choice of an adequate volume element for the integration. An analytical proof showing that an improvement in the partition of unity translates into a better estimation of gradients within the IAD0 paradigm is provided in Appendix B.
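The following Python sketch shows how the IAD0 kernel gradient of Eqs. (6)-(8), as reconstructed above, can be assembled for a single particle; the data layout (position, volume, and neighbor arrays) is an assumption for illustration and not the SPHYNX implementation.

```python
import numpy as np

def iad0_kernel_gradients(pos, vol, W_ab, a, neighbors):
    """IAD0 kernel-gradient estimate for particle `a` (cf. Eqs. (6)-(8)).
    pos       : (N, d) particle positions
    vol       : (N,)   volume elements V_b
    W_ab      : (nb,)  kernel values W_ab(h_a, n_a), one per neighbor
    neighbors : (nb,)  indices b of the neighbors of particle a
    Returns an (nb, d) array whose row b is A_ab, replacing grad W_ab."""
    dx = pos[neighbors] - pos[a]                           # separations r_b - r_a
    # tau_ij = sum_b V_b (x_i,b - x_i,a)(x_j,b - x_j,a) W_ab   (Eq. (8))
    tau = np.einsum('b,bi,bj->ij', vol[neighbors] * W_ab, dx, dx)
    c = np.linalg.inv(tau)                                 # coefficients c_ij,a (Eq. (7))
    # A_ab,i = sum_j c_ij,a (x_j,b - x_j,a) W_ab              (Eqs. (6)-(7))
    return (dx @ c.T) * W_ab[:, None]

# Example with a few random 2D particles (illustrative data only):
rng = np.random.default_rng(0)
pos = rng.random((20, 2))
vol = np.full(20, 1.0 / 20)
neigh = np.arange(1, 20)
W = np.exp(-4.0 * np.sum((pos[neigh] - pos[0])**2, axis=1))  # stand-in kernel values
print(iad0_kernel_gradients(pos, vol, W, 0, neigh).shape)
```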
A collateral, albeit positive, feature of IAD0 is that it hinders the emergence of the pairing instability. This is shown in Sect. 5.1.
3. Preconditioning the calculation
3.1. The choice of the volume element
A recent breakthrough in SPH has been the development of a general method to assign the volume element to a particle (Saitoh & Makino 2013; Hopkins 2013, 2015; Rosswog 2015a). In SPH, the volume element of particle a has traditionally been Va = ma/ρa but, as noted in the seminal proposal by Ritchie & Thomas (2001), other options could be more convenient for handling specific problems. SPHYNX makes use of the scheme developed by Hopkins (2013), where an estimator Xa is introduced so that the particle volume is

V_a = X_a / ∑_b X_b W_ab. (10)

The density of the particle is then calculated as ρa = ma/Va. Current choices for the estimator are Xa = 1, Xa = ma, and Xa = Pa^k (k ≤ 1), where Pa is the pressure. Taking Xa = ma leads to the common choice Va = ma/ρa. Actually, the same volume element is obtained with Xa = 1 when the mass of the particles is the same. On the other hand, the choice Xa = Pa^k helps to suppress the surface tension that appears at contact discontinuities.

Here we want to discuss another option for Xa, not considered in the aforementioned papers, which could also be of value for SPH users. We choose

X_a = (m_a/ρ_a)^p, (11)

and put it into Eq. (10):

V_a = (m_a/ρ_a)^p / ∑_b (m_b/ρ_b)^p W_ab, (12)

which for p = 1 becomes,

V_a = (m_a/ρ_a) / ∑_b (m_b/ρ_b) W_ab, (13)

where the summation underneath is simply the kernel normalization condition for the standard volume choice ma/ρa. Therefore, the volume element Va given by Eq. (13) comes after re-normalizing ma/ρa. We note that if ∑_b (m_b/ρ_b) W_ab = 1, then Eq. (13) leads to the identity ρa = ρa, as expected. Furthermore, one can take Eq. (12) as a recursive equation, which, jointly with ρa = ma/Va, allows us to find the optimal ρa leading to an almost perfect kernel normalization after several iterations. That is, starting from an initial guess of the density, for example ρa = ∑_b mb Wab, the value of Va is computed in an explicit way using Eq. (12). This gives a new density ρa = ma/Va to be used in the next integration step, and so on. The consistency and robustness of this procedure has been checked with the tests described in Sects. 5 and 6, and in Appendix C.
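A compact sketch of the recursion defined by Eqs. (10)-(12) is given below. In SPHYNX one such explicit update is performed per integration step; the loop here simply illustrates the fixed-point character of the procedure. The pair-list data layout is an assumption for illustration.

```python
import numpy as np

def renormalized_density(mass, rho_guess, pairs, W_pairs, p=1.0, n_updates=5):
    """Explicit update of Eqs. (10)-(12): X_a = (m_a/rho_a)^p,
    V_a = X_a / sum_b X_b W_ab, and rho_a = m_a / V_a.
    pairs   : (M, 2) interacting index pairs (a, b), including the self pair a == b
    W_pairs : (M,)   kernel values W_ab for each pair"""
    rho = rho_guess.copy()
    a, b = pairs[:, 0], pairs[:, 1]
    for _ in range(n_updates):
        X = (mass / rho) ** p
        denom = np.zeros_like(rho)
        np.add.at(denom, a, X[b] * W_pairs)   # sum_b X_b W_ab for every particle a
        V = X / denom
        rho = mass / V
    return rho, V
```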
The impact of changing the volume element is highlighted in the hydrostatic square test described by Saitoh & Makino (2013), which we reproduce in Sect. 5.2.
3.2. Choosing smoothing length and kernel exponent: equalization
The use of the sinc kernels allows us to work with a continuous, adaptive kernel index n(r,t). Interestingly, the smoothing length h(r,t) and the kernel index n(r,t) can be calculated jointly with the density estimation. Because of the large number of variables involved, ρa, ha, na, Va, and Ωa, this point deserves some discussion.
Firstly, the volume estimator Xa is updated only when the global iteration has finished. Therefore, Xa is handled explicitly so that the coupling with other variables during the current iteration is avoided. Secondly, self-consistent values of ha, na, ρa, and Ωa are simultaneously calculated with a Newton-Raphson iterative scheme, that we named equalization (because it equalizes the resolution at the post-shock tail of shock waves) and is described in García-Senz et al. (2014). We summarize it here for the sake of completeness:
1. Choose a trial value of ha, as well as the baseline kernel index n = n0, at the beginning of the integration step.

2. Calculate the density of each particle and its logarithmic average over the neighbors, ⟨ρ⟩a.

3. Evaluate λa, the ratio between ρa and ⟨ρ⟩a, which is taken as a local indicator of linearity and is quantified with Eq. (14).

4. Use λa to assign a new kernel index na according to Eq. (15), where Δn is the maximum allowed jump from n0, λc ≃ 1 is a scaling parameter, and f(ξ) is given by Eq. (16).

5. Solve Eq. (15) jointly with mass conservation, Eq. (17), where the constant is set at the beginning of the simulation jointly with n0, Δn, and λc.

After this process, we obtain the values of ha, na, ρa, and Ωa. Obviously, setting Δn = 0 turns the equalization off and the evolution is computed with na = n0. Typical values for the simulations presented in the following sections are n0 = 5, 0 ≤ Δn ≤ 5, and λc = 0.5.
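Equations (14)-(17) are not reproduced here, so the sketch below only illustrates the kind of Newton-Raphson update involved in step 5, assuming the common SPH closure ρ(h)·h^d = const as a stand-in for the actual mass-conservation constraint of Eq. (17), and a fixed kernel index; everything in it is an assumption for illustration.

```python
def update_smoothing_length(h, k0, rho_and_drho, d=3, tol=1e-6, max_iter=50):
    """Newton-Raphson update of the smoothing length of one particle under the
    assumed closure rho(h) * h**d = k0, where k0 is a constant fixed at the
    beginning of the simulation. `rho_and_drho(h)` must return the SPH density
    and its derivative with respect to h for the current particle."""
    for _ in range(max_iter):
        rho, drho_dh = rho_and_drho(h)
        f = rho * h**d - k0
        dfdh = drho_dh * h**d + d * rho * h**(d - 1)
        h_new = h - f / dfdh
        if abs(h_new - h) < tol * h:
            return h_new, rho
        h = h_new
    return h, rho
```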
4. The hydrodynamics code SPHYNX
SPHYNX gathers all the advances on the SPH technique that have been presented in Sects. 2 and 3 and represents our flagship code for astrophysical applications. In the following, we present the most relevant details on its implementation and itemize the mathematical expressions of the SPH equations implemented in the Cartesian version of the code. The precise form of the equations as well as the notation follows the discussion made in the preceding sections. They are similar to those presented in Rosswog (2015b).
4.1. Work-flow and formulation
First of all, we specify the initial number of neighbors nb and kernel index variables: n0 (baseline value), Δn (maximum allowed index jump), and λc (scaling parameter). This is followed by the choice of the volume estimator Xa. Then, the preconditioning stage starts with the self-consistent calculation of the volume element Va, smoothing-length ha, kernel index na, density ρa, and grad-h/n term Ωa (see Sect. 3.2).
• Volume element (Eqs. (18) and (19)):

V_a = X_a / ∑_b X_b W_ab(h_a, n_a), with X_a = (m_a/ρ_a)^p.

From the experiments presented below, we saw that taking p ≃ 1 leads to the best results in most cases, but it has the disadvantage of overshooting density interpolations at contact discontinuities when p > 0.7. We overcome this problem by using an SPH-averaged value of the VE estimator, making it robust and allowing us to safely raise the exponent to p = 1 (see Sect. 5.2 for more details).
• Density equation:

ρ_a = m_a/V_a, (20)

which we calculate jointly with Ωa, including both contributions, the grad-h and grad-n terms of Eq. (21) (García-Senz et al. 2014).

Once the preconditioning is completed, the hydrodynamic equations are evaluated.
• Momentum equation (Eq. (22)).

• Energy equation (Eq. (23)).
4.2. The artificial viscosity
Regarding the inclusion of physics, SPHYNX incorporates, by default, an artificial viscosity (AV) algorithm to handle shocks, as well as routines for the calculation of the equation of state (EOS) and the gravitational force. Heat transport by conductive and diffusive means is also included as a basic code unit. More specific routines dealing with nuclear or chemical reactions, ionization, or more complex transport schemes can be modularly added to the code (see, e.g., the application of SPHYNX to simulate core-collapse supernovae, including a spectral neutrino treatment, in Perego et al. 2016). In the following, we explain the AV algorithm in some detail.
Fig. 2. Impact of changing the sinc kernel index n and the number of neighbors nb on the pairing instability during the relaxation towards equilibrium of a perturbed homogeneous system. The X-axis shows the minimum interparticle distance normalized to its initial value; the Y-axis shows the corresponding percentage of particles at different times. We show the results for n = 3 (red), n = 5 (green), n = 7 (blue), and n = 10 (pink). Continuous lines are for t = 0.01, dashed lines for t = 0.19, and big dots for t = 1.79. The continuous light-blue line for nb = 100 was calculated using the Wendland C3 interpolator and shows the results at t = 0.19.
First of all, we take the viscosity Πab as in Monaghan (1997):

Π_ab = −(α/2) v^sig_ab w_ab / ρ̄_ab, (25)

where v^sig_ab is an estimate of the signal velocity between particles a, b and wab = vab·rab/ | rab |. It is common to take ρ̄_ab = (ρ_a + ρ_b)/2, but in that case the volume element is implicitly assumed to be ma/ρ̄ab. Thus, the VE of particle a is not an unequivocally defined magnitude because it also depends on the density of the neighbor particle b. A more compatible option between AV and VE, which can be generalized to other VE, is obtained using a viscosity that is not divided by the average density (Eq. (27)), which leads to the viscous acceleration of Eq. (26), where fa, fb are the Balsara limiters (Balsara 1995):

f_a = | ∇·v |_a / ( | ∇·v |_a + | ∇×v |_a + 10^{-4} c_a/h_a ). (28)

Expression (26) can be written in the symmetric form of Eq. (29), with Va = ma/ρa, Vb = mb/ρb. Unlike Πab, the magnitude entering Eq. (27) is not divided by the density, and the resulting viscous force is symmetric with respect to any pair of particles a, b. We note that the volume elements are now unequivocally defined when the mass of the particles is the same. The Balsara limiters work more efficiently if they are averaged as in Eq. (30), because it gives a lower AV than the arithmetic average in regions with strong shear flows. By default, SPHYNX sets these limiters according to Eq. (31), which retains a residual viscosity to damp the smallest numerical fluctuations. Equation (31) works remarkably well in regions where strong shocks and instabilities cohabit, such as in the triple-point test described below. In the case of strong shocks with no shear, Eq. (30) could be replaced by the standard arithmetic mean if necessary.

Following Springel (2010b), we used a constant α = 4/3 so that Πab remains close to the classical SPH artificial viscosity introduced by Monaghan & Gingold (1983). The contribution of the AV to the energy equation (Eq. (23)) is given by Eq. (32). Additional information on the physics included in SPHYNX, such as gravity, heat transport, or alternative energy equations, is deferred to Appendix D.
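A schematic implementation of the standard ingredients named above (the Monaghan 1997 signal-velocity viscosity and the Balsara 1995 limiter) is sketched below. The signal-velocity expression v_sig = c_a + c_b − 3 w_ab and the small floor term in the limiter are the common textbook choices and are assumptions here; they are not necessarily the exact Eqs. (26)-(32) used by SPHYNX.

```python
import numpy as np

ALPHA = 4.0 / 3.0   # constant AV coefficient quoted in the text (Springel 2010b)

def balsara_limiter(div_v, curl_v, c, h, eps=1e-4):
    """Balsara (1995) shear limiter: close to 1 in pure compression, close to 0
    in pure shear (standard form, assumed here)."""
    return abs(div_v) / (abs(div_v) + abs(curl_v) + eps * c / h)

def pair_viscosity(r_ab, v_ab, rho_a, rho_b, c_a, c_b):
    """Signal-velocity artificial viscosity for an approaching pair, in the
    Monaghan (1997) form assumed for Eq. (25):
    Pi_ab = -(alpha/2) * v_sig * w_ab / rho_mean, with
    v_sig = c_a + c_b - 3 w_ab (common choice, an assumption here);
    Pi_ab = 0 for receding pairs."""
    w_ab = np.dot(v_ab, r_ab) / np.linalg.norm(r_ab)
    if w_ab >= 0.0:
        return 0.0
    v_sig = c_a + c_b - 3.0 * w_ab
    return -0.5 * ALPHA * v_sig * w_ab / (0.5 * (rho_a + rho_b))

# Example: head-on approach of two particles along x
print(pair_viscosity(np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                     rho_a=1.0, rho_b=1.0, c_a=1.0, c_b=1.0))
```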
Fig. 3. Same as Fig. 2 but using IAD0 instead of the standard gradient estimator. In this case, the particle distributions peak at higher q/q0 ratios compared to those in Fig. 2, showing more resistance to particle clumping.
5. Two-dimensional tests
In this section we present the outcome of applying SPHYNX to several two-dimensional (2D) tests that have traditionally been problematic for the SPH technique. We show that the sinc kernels help against the appearance of the pairing instability, that the new VE provide a better treatment of discontinuities than the often-used m/ρ value, and that, when applied jointly with IAD0, there is an overall improvement in the shock treatment in the Sedov and wall-shock tests, and also in the development of subsonic instabilities; in particular, the challenging cases of Kelvin-Helmholtz with a density jump of 8 and Rayleigh-Taylor with a very weak gravitational field (g = −0.1). We also found a convergence rate in the Gresho-Chan test closer to that of Eulerian codes. Finally, we prove that our implementation suppresses the tensile instability, enabling mixing in tests such as the wind-cloud interaction and the triple-point shock test.
Our results below suggest that improving volume conservation produces better results than using volume elements that do not ensure volume equipartition. At first glance, it also renders equalization useless because the advantages of using variable adaptive kernel indices are apparently obscured by the new VE. Nevertheless, equalization and volume elements work on different bases. As demonstrated in García-Senz et al. (2014), the use of a variable kernel index improves the interpolation of non-linear functions, even when they are estimated using integrals instead of finite summations. It is therefore expected that the equalization will be useful when both the number of particles, N, and neighbors, nb, are high enough so that the error in interpolations is dominated by non-linear effects rather than by the evaluation of integrals as finite summations (see also Sect. 6.2). For a moderate amount of neighbors the use of the new VE given by Eq. (12) makes equalization unnecessary. Thus, unless explicitly stated, many hydrodynamic tests presented in this work have been carried out taking Δn = 0.
5.1. Pairing instability
Here we show a numerical experiment, similar to that described in Zhu et al. (2015), used to study the relationship between the kernel index and the pairing instability. We set a sample of particles in a regular 2D lattice leading to a homogeneous density distribution. Then, we seed a small random fluctuation of the internal energy, taking the system out of mechanical equilibrium. Afterwards, we add a frictional dissipative force proportional to the velocity in the momentum equation, so that the system returns to equilibrium after a few sound-crossing times. The final equilibrium distribution is sensitive to the number of neighbors and to the exponent of the sinc function. This is summarized in Fig. 2, which depicts the distribution of the minimum inter-particle distance (q), normalized to its initial value (q0), as a function of the pair (n, nb) at several elapsed times. As we can see, a careful choice of the exponent n keeps the distribution of normalized particle distances closer to 1, making particle clustering difficult, thus avoiding the pairing instability even at large nb. For example, n = 3 keeps q/q0 close to 1 when the number of neighbors is small (top-left panel of Fig. 2), even at large times, while it completely fails when nb increases (top-right panel). Nevertheless, increasing the exponent n of the sinc kernel suppresses the pairing instability again. This numerical experiment points to n = 3, 5, 7, and n ≥ 10 as indicative values to handle nb ≃ 15, 25, 50, and 100 neighbors, respectively (in 2D). For equivalent numbers in 3D, for example nb ≃ 60–120, currently used in SPH calculations, a sinc kernel with n ≃ 4–6 will be sufficient to suppress clustering in many applications (see also Fig. 3 in García-Senz et al. 2014). We provide a comparison between Sn and the Wendland C3 kernel in the bottom-right panel of Fig. 2. As we can see, the C3 kernel works better than S10, but the difference is small.
Fig. 4. The unmixed isobaric two-fluid test at times t = 0, t = 0.4, t = 1, and t = 2 (columns from left to right), calculated with different volume elements. The initial model is that of two nested squares with a density contrast of four (the one with higher density being the inner yellow square). The first and second rows use particles with identical mass but an unevenly spaced grid, while the opposite applies to the two last rows. The first row depicts the evolution with the volume estimator Xa = (ma/ρa)^p.
Fig. 5. Impact of Xa = (ma/ρa)^p on both the volume normalization, ∑_b Vb Wab = 1 (left), and the ⟨ Δr ⟩ = 0 condition (right), for different values of the exponent. The figure depicts the profile of these magnitudes along a 1D cut taken around y = 0.5 in the hydrostatic system shown in Fig. 4. We notice how the jump at the contact discontinuity is largely reduced in both conditions when p ≃ 1.
The previous test was done using the standard gradient estimator. Figure 3 depicts the results for the same numerical experiment, but carried out with IAD0. The comparison between Figs. 2 (standard gradient estimator) and 3 (IAD0) suggests that the integral scheme also helps with the pairing problem, as it keeps the particle distribution closer to q/q0 = 1.
5.2. Static square test
We consider a system of two nested squares of fluid with different densities, with a contrast of four (Eq. (33)), but identical pressure. The system is isobaric with P = 2.5 and the EOS is that of a perfect gas with γ = 5/3.
To carry out the simulations we use Eqs. (22) and (23) from Sect. 4, with constant kernel index n = 5, nb = 40 neighbors, and a variety of volume elements. We tried two different initial settings: using particles with the same mass (and an unevenly spaced grid), as well as particles with unequal mass spread on a uniform lattice. The outcome of the simulations at times t = 0, 0.4, 1, and 2 is depicted in Fig. 4. For equal-mass particles and Xa = (ma/ρa)^p (with p = 0.9), the code was able to keep the hydrostatic equilibrium during several crossing times (first row). The evolution calculated using Xa = 1, Xa = ma, and Xa = Pa^k (k = 0.5) (second row) led to the same wrong results in much shorter times. Rows 3 and 4 in Fig. 4 summarize the evolution using a homogeneous lattice of particles with different mass. Hydrostatic equilibrium is very well preserved using Xa = 1, Xa = Pa^k, and Xa = (ma/ρa)^0.9 (third row), while Xa = ma (fourth row) does not give a satisfactory result.
Fig. 6. Results for the 2D Sedov test at t = 0.1. S1 corresponds to p = 0 (standard VE) and Δn = 0 (no equalization). S2 shows the effect of the VE with p = 0.7.
In Fig. 5 we show the magnitudes ∑_b Vb Wab (left panel) and ⟨ Δr ⟩ = ∑_b Vb (rb−ra) Wab (right panel) along a 1D cut taken around y = 0.5, from x = 0.1 to x = 0.5 (center of the system), in the hydrostatic system described above after 20 integration steps. As can be seen in the left panel, choosing Xa = (ma/ρa)^p with p = 1 gives the best results, leading to an almost perfect volume normalization, while the case with p = 0 (equivalent to Xa = 1) shows a large oscillation around the contact discontinuity. The case with p = 0.9 is also quite satisfactory, although a small oscillation is still present. The right panel of Fig. 5 suggests that an adequate choice of the volume element may improve even further the accuracy of the IAD0 method to estimate gradients because, as we can see, a value of p ≃ 1 helps to keep ⟨ Δr ⟩ ≃ 0, which is the necessary condition for IAD0 to exactly reproduce the gradient of linear functions.
Table 3. Settings of the calculated models in the Sedov test.
Unfortunately, taking p = 1 has the unwanted side effect that the density tends to overshoot close to the contact discontinuity. The overshooting in density drops the value of the estimator Xa = ma/ρa, so that in the next integration step the density undergoes a further overshooting. Occasionally, the feedback between ρa and Xa might raise the density to unrealistic values after several time steps. This problem can be avoided by reducing the value of the exponent p, but in that case the adequate optimal value becomes problem dependent. For example, p = 0.9 works well in the hydrostatic test depicted in Fig. 4, but it produces worse results for the Sedov test described later in Sect. 5. For that test p ≃ 0.6–0.7 was a better choice. A way to circumvent the overshooting problem is to consider

X_a = ⟨ m/ρ ⟩_a^p, (34)

where ⟨ m/ρ ⟩a is the SPH average of the standard volume element. Although less efficient than X = (m/ρ)^p, this simple recipe enhances the robustness of the scheme, so that the exponent of the estimator can be safely raised to p ≃ 1, independently of the problem to be simulated. This procedure is especially well suited to handle contact discontinuities hosting large density contrasts, and it has been checked in the blob and Evrard tests described in Sects. 5.6 and 6.3. A more quantitative discussion on the convergence rate of the estimator Xa as a function of p and the density contrast is given in Appendix C.
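A short sketch of the smoothed estimator of Eq. (34) follows. The SPH average ⟨m/ρ⟩a is written here as a kernel-weighted interpolation of the standard volume element, which is our reading of the text rather than the literal SPHYNX expression; the pair-list layout is also an assumption.

```python
import numpy as np

def averaged_volume_estimator(mass, rho, vol, pairs, W_pairs, p=1.0):
    """Smoothed estimator of Eq. (34), X_a = <m/rho>_a^p, with the assumed average
    <m/rho>_a = sum_b V_b (m_b/rho_b) W_ab.
    pairs   : (M, 2) interacting index pairs (a, b), including the self pair
    W_pairs : (M,)   kernel values W_ab"""
    a, b = pairs[:, 0], pairs[:, 1]
    avg = np.zeros_like(rho)
    np.add.at(avg, a, vol[b] * (mass[b] / rho[b]) * W_pairs)
    return avg ** p
```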
5.3. Shocks
5.3.1. Point explosion
The study of the propagation of a point-like explosion in more than one dimension is very useful for checking hydrodynamics codes. We have carried out several simulations with SPHYNX to explore the impact of different combinations of volume element and level of equalization on handling shock waves. The explosion was triggered by depositing a considerable amount of energy following a Gaussian profile of characteristic width σ = 0.05 located at the center of a uniform lattice of size [0,1] × [0,1]. The initial density profile is ρ(r,0) = 1 and the total injected energy is E = 1. All simulations used N = 256 × 256 particles and the initial number of neighbors was set to nb = 50. The different parameters used in the simulations are summarized in Table 3.
Figure 6 shows the results of the Sedov test for several combinations of volume elements Va and equalization Δn at t = 0.1. Models S1, S2, S3, and S4 were calculated using Xa = (ma/ρa)^p and different values for the exponent p. All models are quite satisfactory, but model S2, calculated with p = 0.7 and Δn = 0, leads to the best results. The density peak is closer to the analytic value and the pressure profile close to the origin behaves well. Model S4, calculated with p = 1, also provides good results, while models S1 and S3, with the choice p = 0 (standard VE), lead to lower peak values and oscillating pressure tails that depart from the analytic result.
The lower panels of Fig. 6 depict the profile of the moduli | ⟨ Δr ⟩ | a = | ∑ bVb (rb−ra)W(ha,na) | (left panel) and that of the normalization condition ∑ bVbW(ha,na) (right panel). These results are encouraging because they strongly suggest that the new volume elements not only enhance volume conservation (lower right panel of Fig. 6) but also make gradient estimation more accurate, because Eq. (9) is better fulfilled, ensuring that the IAD0 approximation is, in fact, close to the full IAD in terms of accuracy (lower left panel of Fig. 6). Again models S2 and S4 lead to the best results whereas S1 and S3 show worse volume conservation and lower fulfillment of Eq. (9).
It has been suggested that other choices of VE could be of interest for handling specific problems. In particular, it has been claimed that the pressure-based estimator Xa = Pa^k (k < 1) could be well suited to handling contact discontinuities (Hopkins 2013). We carried out one simulation using that estimator, with the recommended value k = 0.05 (Rosswog 2015a). We found that the results with the pressure-based estimator were not as good as those of models S2 and S4 of Table 3. In Fig. 7 we compare the results of using the Xa = Pa^k estimator with those of S1 and S2. In the right panel it is clear that the volume normalization is not as well preserved as in model S2, with values similar to those obtained using the standard VE (model S1). Moreover, it leads to a lower peak density value, and a spurious precursor shock is clearly seen ahead of the shock in the left panel. Although small, such a precursor shock is a numerical artifact which was also reported by other authors (Rosswog 2015a). For that reason, hereafter we have limited the choice of volume elements to those given by Eq. (12)5.
Fig. 7. Results for the 2D Sedov test at t = 0.1 taking Xa = P^k, where P is the pressure and k = 0.05 (black dashed line), compared to models S1 (red) and S2 (green). We show here the radial profiles of the density (left panel) and the equipartition condition (right panel). The solid black line is the analytic solution for the density profile. We note in the left panel that there is a small precursor bump in density for the calculation with Xa = P^k, which is absent for models S1 and S2.
5.3.2. The wall-shock test
Along with the point explosion (Sedov test), the so called wall-shock (or Noh) test is also used to check the performance of multidimensional hydrodynamics codes. Unlike the Sedov test, the Noh experiment is an implosion towards a geometrical center. The interaction of the converging inflow of gas with the stagnated material at the central volume provokes the formation of a shock-wave moving outwards. The Noh problem has an analytic solution (Noh 1987) to compare with. It is, however, not an easy test owing to the large jump in density across the shock front, 16 in 2D, 64 in 3D, for γ = 5/3. SPH codes traditionally had difficulties in handling this test because: a) many particles are needed to correctly reproduce the shock region (especially in 3D); and b) close to the center the internal energy shows a pronounced spike and, consequently, the density abruptly drops to keep the pressure constant. In fact, both problems are connected to the use of the AV and it is not clear to what extent a change in the SPH formalism may alleviate these shortcomings (Brookshaw 2003).
A sample of 256² equal-mass particles was distributed in a bcc lattice with a circular perimeter. The radius of the initial configuration is R = 1 and the density is homogeneous with ρ = 1 (except at the outer edge of the system). A radial velocity vr = −1 was imparted to all particles at t = 0, so that the system implodes. The evolution was tracked with SPHYNX until t = 0.5, when a clear steady shock moving outwards is formed. The results, calculated with n = 5, p = 0, and p = 0.7 for the VE, and nb = 50, have been compared to both the analytic predictions and the output of GADGET-2 for the same initial model.
Table 4 and Figs. 8 and 9 summarize initial parameters of each simulated model and the main results of the simulations. The density profiles at t = 0.5 for the three calculated models, WHS1, WHS2, WHSG are shown on the leftmost panel of Fig. 8. A sharp shock, moving towards the right, is clearly seen at r ≃ 0.17. The density jump across the shock front is Δρ ≃ 16, hence close to the analytic value. Close to the center of the configuration there is a pronounced dip in density caused, as expected, by the use of the AV. The SPHYNX calculation with p = 0.7 (model WHS2 with green dots) gives slightly better results at the shock position than WHS1 and WHSG because the density profile is steeper and the maximum density is closer to the analytic value of 16. We note that all three calculations do show density oscillations at the shock front and that the GADGET-2 simulation is more blurred. The color-maps of density for models WHSG and WHS2 are also shown in the central and right panels of Fig. 8. As we can see, the shock region is sharp and well defined in the SPHYNX calculation whereas the simulation with GADGET-2 shows stronger oscillations close to the shock location and an inhomogeneous particle distribution close to the origin.
Additionally, model WHS2 has both a better volume normalization than WHS1 (i.e., closer to 1), as shown in Fig. 9 (left panel) and a better behavior of | Δr | (i.e., closer to 0) at the shock location (right panel).
Table 4. List of the calculated models in the wall-shock tests.
5.4. Fluid instabilities
Instabilities play a central role in hydrodynamics because they are usually connected to shear mixing and turbulence. In the cosmos, instabilities are especially important because the large Reynolds numbers involved in astrophysical processes make these systems prone to turbulence. Particle- and grid-based codes have been applied with different levels of success to simulate the evolution of hydrodynamical instabilities such as Kelvin-Helmholtz (KH) or Rayleigh-Taylor (RT) instabilities. Here we re-visit them with SPHYNX.
Fig. 8. Two-dimensional wall-shock test at time t = 0.5. Left: density profiles for the models described in Table 4: WHS1 (red), WHS2 (green), and WHSG (blue), in dots. The analytic result is the solid black line. The central panel depicts the particle distribution, with density as color, at t = 0.5 as obtained with GADGET-2, whereas the last panel is that of model WHS2. We note how WHS2 achieves lower particle disorder in the central region and smaller density oscillations at the shock position than GADGET-2.
Fig. 9. Two-dimensional wall-shock test. Left: volume normalization at t = 0.5 for models WHS1 (red) and WHS2 (blue). Right: same but for the condition | ⟨ Δr ⟩ | = 0 among mass points within the kernel range of 2h. WHS2 (with the new VE) fulfills both conditions better than WHS1 (with the standard VE).
5.4.1. Kelvin-Helmholtz
Grid-based codes are, for the most part, almost free of numerical viscosity, leading to a good match between simulations and the analytic predictions during the linear phase of growth of the KH instability (Junk et al. 2010). On the other hand, SPH codes are Galilean invariant and do not suffer from numerical diffusion, but intrinsically have more numerical viscosity. It has been pointed out that SPH may not be appropriate for handling fluid instabilities across contact discontinuities with large density jumps (Agertz et al. 2007). Although there have been proposals to enhance the modeling of these systems (Price 2008), the SPH calculation of shear flows with large density contrast is still an open question.
Fig. 10. Density color-map depicting the evolution of the Kelvin-Helmholtz instability for models KH1 (p = 0, upper panels) and KH2 (p = 0.7, lower panels) of Table 5. Each snapshot gives the density color-map at times t = 1.5 (left) and t = 2.5 (right).
We ran two sets of models simulating the growth of the KH instability around the boundary layer separating two flows with moderate and large density contrasts, respectively. Each set was in turn calculated using two values for the volume elements, namely p = 0 and p = 0.7 in Eq. (12). The initial setting was the same as in McNally et al. (2012), where three stratified fluid layers inside a 2D box of size [0,1] × [0,1] were considered. The fluid layers span y ≤ y1, y1 ≤ y ≤ y2, and y ≥ y2, with densities ρ1, ρ2, and ρ3, respectively. The adopted values for y1, y2, ρ1, ρ2, and ρ3, as well as the number of particles used in the simulation, are shown in Table 5. Periodic boundary conditions were implemented at all sides of the box.
Table 5. Settings of the calculated models in the 2D Kelvin-Helmholtz tests.
A velocity vx = 0.5 is given to the central strip, whereas the rest of the box moves in the opposite direction with vx = −0.5. Prior to the calculations, the density and vx distributions were smoothed following the method described in McNally et al. (2012; their Eqs. (1) and (3)). Thus, the growth rate of the instability obtained with SPHYNX can be compared to the templates obtained by McNally and collaborators using the PENCIL Code6, a state-of-the-art hydrodynamics code of Eulerian type (Brandenburg & Dobler 2002). The pressure is set to P = 2.5 everywhere, with γ = 5/3. The fluid layer is in almost vertical equilibrium except for a small seeded perturbation in vy given by,

v_y(x) = w_0 sin(4πx), (35)

with w0 = 0.01. The velocity perturbation has therefore a wavelength λ = 0.5, so that the box hosts two complete waveforms.
Models KH1 and KH2 in Table 5 were calculated assuming a density contrast of two, whereas an initial jump of 8 was imposed on models KH3 and KH4. To build the initial models, a sample of 256 × 256 particles was first spread in a squared lattice and then stretched in the vertical direction so that the ensuing density profile adapts to that of Eq. (1) in McNally et al. (2012). This setting naturally leads to a density contrast of two for equal-mass particles and provides the necessary smoothness around the separation layers between the fluids. To build the high-density-contrast models, the initial lattice was stretched further and the mass of the particles was then made proportional to the density of the stretched grid, so that a smooth profile with a density contrast of 8 was achieved.
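For readers who want to reproduce the setup, the sketch below illustrates the two ingredients of the initial conditions described above: the single-mode vy perturbation of Eq. (35) and a smoothed shear profile. The layer positions and the tanh ramp are placeholders of our own; the actual runs follow the exponential smoothing of McNally et al. (2012), which is not reproduced here.

```python
import numpy as np

def kh_velocity_perturbation(x, w0=0.01, wavelength=0.5):
    """Seed v_y perturbation of wavelength 0.5, so that the unit box hosts two
    complete waveforms (sinusoidal form of Eq. (35) as reconstructed above)."""
    return w0 * np.sin(2.0 * np.pi * x / wavelength)

def smoothed_shear_velocity(y, y1=0.25, y2=0.75, v_strip=0.5, v_out=-0.5, delta=0.025):
    """Illustrative smooth v_x profile: v_strip inside the central layer and v_out
    outside, blended with tanh ramps of width delta (a stand-in for the McNally
    et al. 2012 smoothing; y1, y2, and delta are assumed values)."""
    ramp = 0.5 * (np.tanh((y - y1) / delta) - np.tanh((y - y2) / delta))
    return v_out + (v_strip - v_out) * ramp

# Example: perturbation and shear profile at a few sample points
print(kh_velocity_perturbation(np.linspace(0.0, 1.0, 5)))
print(smoothed_shear_velocity(np.array([0.1, 0.5, 0.9])))
```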
Fig. 11. Amplitude mode growth of models KH1 (p = 0) and KH2 (p = 0.7) of Table 5. The solid red line is the reference solution (McNally et al. 2012) obtained with the PENCIL code (Brandenburg & Dobler 2002), while the solid green and blue lines were obtained with SPHYNX for two different values of the p parameter. The dashed lines correspond to calculations performed with the NDSPMHD code (Price 2012) with a resolution of 512² and the kernels B4 (pink) and B6 (light blue). See McNally et al. (2012) for further details. We also show the L1 errors with respect to the reference values for each calculation.
The Balsara limiter fa given by Eqs. (28), (30), and (31) was applied to all models to reduce the shear viscosity. For these initial conditions, previous simulations with the traditional formulation of SPH predict the growth of the KH instability only in the low-density-contrast case (Agertz et al. 2007; Junk et al. 2010). When the density contrast rises well above 2, it is necessary either to add an artificial heat flux to keep isobaricity (Price 2008) or to redefine the volume elements (Saitoh & Makino 2013; Hopkins 2013). In particular, it was shown that the formulation which uses volume elements linked to pressure is able to reproduce the KH instability across high density jumps. Also, codes which directly use a smoothed pressure in the SPH equations, such as the PSPH formulation described in the Appendix of Hopkins (2015), can handle high density contrasts. Here we show that the IAD approach combined with the new volume elements, which preserve kernel normalization, is also able to successfully simulate the KH instability with high density jumps.
Fig. 12. Kernel normalization of models KH1 (left) and KH2 (right) of Table 5 at t = 2.5. The dispersion of ∑_b Vb Wab around unity is shown for two options of the exponent of the volume estimator, Xa = (ma/ρa)^p: p = 0 (left) and p = 0.7 (right). Solid red lines mark the maximum and minimum values as a visual aid.
Figure 10 shows a color-map of the density evolution of models KH1 and KH2 at times t = 1.5 and 2.5. We see that there is a growth of the instability, qualitatively similar to the results obtained using other novel formulations of the technique, such as the PSPH scheme with ≃200 neighbors (see for example Fig. 19 in Hopkins 2015). The results of SPHYNX are also very similar to those obtained with the recent CRKSPH formulation (see for example the upper row of Fig. 17 in Frontiere et al. 2017). Model KH2, calculated using the new volume elements, appears to evolve slightly faster than KH1, computed with the standard choice, VE = m/ρ, showing more evolved structures in the non-linear stage. Unlike the PSPH scheme, our method does not estimate the pressure by kernel smoothing, nor does it incorporate any artificial flux of heat to smooth the pressure.
Figure 11 depicts the mode-amplitude evolution of vy, calculated by taking the Fourier transform (FT) of the vy field. The FT was calculated with the expressions given in McNally et al. (2012), so that a comparison with the amplitude growth calculated with the PENCIL code (continuous red line) and the NDSPMHD code described in Price (2012) (dashed lines) is straightforward. The results are encouraging, as SPHYNX reproduces the KH growth closer to the reference simulation of the PENCIL code than NDSPMHD does, the latter even having a factor of 2 higher resolution than our simulations with SPHYNX (see also Fig. 7 in McNally et al. 2012, and comments therein). Our results are close to those obtained with the PSPH scheme at the same resolution. In Fig. 11 we also show the L1 errors of each model with respect to the reference values, calculated as L1 = (1/N) ∑_i | A_r,i − A_s,i |, where Ar and As stand for the reference and simulated amplitudes, respectively. These results are an indication of the importance of using an accurate gradient evaluation, this being the most relevant difference between SPHYNX and NDSPMHD.
Model KH2, calculated with p = 0.7, exhibits better volume normalization than KH1, computed with p = 0, as shown in Fig. 12. That figure depicts the value of the normalization condition ∑ bVbWab as a function of the density of the particles. For both runs, most of the particles cluster around the expected value of 1, but the summation falls below one in the low-density region and exceeds it in the high-density region. Nevertheless, the dispersion around the correct value is considerably lower in the case with p = 0.7, showing its greater capability to achieve equipartition in disordered particle distributions.
Figure 13 summarizes the simulations with an initial density ratio ρ2/ρ1 = 8. The upper and lower panels were calculated with p = 0 and p = 0.7 in Eq. (12), respectively. The snapshots correspond to times t = 1.5 and 2.5. First of all, we see that despite the large jump in density, the instability is able to develop in both cases. These results, therefore, suggest that the use of the IAD scheme to calculate gradients improves by itself the quality of the simulations of hydrodynamic instabilities. Although the instability evolves slightly faster when p = 0.7, and shows more structure, it is probably affected by the noise introduced by the VE estimator (m/ρ)p when the mass of the particles is not constant. As in the preceding tests, the model calculated with p = 0.7 has the best kernel normalization properties (not shown in the figures).
5.4.2. The Rayleigh-Taylor instability
In García-Senz et al. (2012) it was shown that the IAD0 scheme is able to simulate the gross features of the growth of the RT instability for initial perturbations as low as Δvy = 0.01 in the velocity field. In this section we present clear proof that SPHYNX is able to cope with the growth of the RT instability also when the gravitational force is small.
Our initial model is similar to that described in Springel (2010a), where the numerical experiment takes place in a box of size [0.5 × 1.5]. The lower and upper halves of the box are filled with equal-mass particles. The particle distribution is then arranged in the vertical direction so that ρu = 2 (upper region) and ρd = 1 (lower region).
Fig. 13 Density color-map depicting the evolution of the Kelvin-Helmholtz instability for models KH3 (p = 0, first row) and KH4 (p = 0.7, second row) of Table 5, corresponding to a density jump of a factor 8, at times t = 1.5, 2.5. We notice that, despite the much larger jump in density, the KH instability is still able to develop in both cases.
The contact discontinuity around the interface was smoothed using
ρ(y) = ρd + (ρu−ρd)/[1 + exp(−(y−y0)/Δy)], (36)
with Δy = 0.0083 and y0 = 0.75. The integration of the hydrostatic equilibrium equation dP/dy = gρ, where g is the gravitational acceleration, with an ideal EOS with γ = 1.4 gives the pressure profile in the vertical direction,
P(y) = P0 + gρ(y)(y−y0), (37)
with P0 = 2.5. We used 256² particles and an initial number of neighbor particles nb = 50. The boundary conditions are periodic at the left and right sides of the box but reflective on the bottom and the top. The main features of the models used in this test are summarized in Table 6.
Fig. 14 Density color-map depicting the evolution of the Rayleigh-Taylor instability for models RT1 (p = 0, first row) and RT2 (p = 0.7, second row) of Table 6 at times t = 1.5, 2.5, 3.5, 5.0, 6.5, and 7.5, respectively.
The fluid was initially at rest everywhere, except for a small, single-mode perturbation applied to the vertical velocity field,
vy(x,y) = w0 [1−cos(4πx)] [1−cos(4πy/3)], (38)
where w0 = 0.0025. In Fig. 14 we show the density color-map of the evolution of the instability when g = −0.5 is adopted. As we can see, the instability grows at a good rate and it enters the non-linear regime at t > 3.5. The contact discontinuity between both fluids is well marked, but not totally sharp, owing both to the smooth initial conditions and to the spread of the physical magnitudes over the characteristic smoothing length h. During the non-linear stage, the calculation with p = 0.7 (model RT2 in Table 6) displays more structure than model RT1, calculated with p = 0. At t = 7.5 the vertical extension of the unstable region is nearly the same, but model RT2 shows a richer structure inside the mushroom-like instability.
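For concreteness, the initial condition as reconstructed in Eqs. (36)–(38) can be coded in a few lines; the functional forms below follow our reading of the setup and may differ in detail from the actual SPHYNX initialization routine.

```python
import numpy as np

# Sketch of the Rayleigh-Taylor initial condition (Eqs. (36)-(38) as
# reconstructed above); illustrative only.
rho_d, rho_u = 1.0, 2.0          # lower / upper densities
y0, dy = 0.75, 0.0083            # interface position and smoothing width
P0, g, gamma = 2.5, -0.5, 1.4    # base pressure, gravity, adiabatic index
w0 = 0.0025                      # perturbation amplitude

def density(y):
    """Smoothed density ramp across the contact discontinuity (Eq. (36))."""
    return rho_d + (rho_u - rho_d) / (1.0 + np.exp(-(y - y0) / dy))

def pressure(y):
    """Hydrostatic pressure profile (Eq. (37))."""
    return P0 + g * density(y) * (y - y0)

def v_y(x, y):
    """Single-mode velocity perturbation (Eq. (38)) confined to the box."""
    return w0 * (1.0 - np.cos(4.0 * np.pi * x)) * (1.0 - np.cos(4.0 * np.pi * y / 3.0))

y = np.linspace(0.0, 1.5, 7)
print(density(y), pressure(y), v_y(0.25, y))
```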
Settings of the calculated models in the 2D Rayleigh-Taylor tests.
The impact caused by the choice of VE is much more marked when a smaller gravitational force, g = −0.1, is adopted. This was the case reported in Springel (2010b) concerning the simulations of the RT instability with the AREPO code. As far as we know, there are no satisfactory calculations of this scenario using the SPH technique. The combination of a small initial perturbation plus a tiny gravitational force makes the problem difficult because the E0 errors and the tensile instability conspire to totally suppress the growth of the instability (Valdarnini 2012). The upper panels in Fig. 15 depict the evolution of the instability as simulated by SPHYNX taking standard VE (i.e., p = 0 in Eq. (13)). As we can see, the instability is able to grow, although the growth-rate is slow and there is a clear lack of structure during the non-linear stage. The lower row of panels in the same figure depicts the evolution when the VE are calculated with p = 0.7. In this case there is a clear boost in the development of the instability, as it grows faster and shows a richer structure in the non-linear regime.
Quantitative results for this test can be extracted from the evolution of the points located at the tip/bottom of the bubbles/spikes. Additionally, the numerically inferred terminal velocity of the vertex of the bubble can be compared with the analytical results by Goncharov (2002). The results are summarized in Fig. 16, where the left panel shows the evolution of the farthest point reached by the lighter and heavier fluids (bubble and spike, respectively), and the right panel shows the evolution of the velocity of the bubble. As we can see, the evolution of the bubble/spike sample is faster when the VE are calculated with p = 0.7 and p = 1 (in this last case the averaged estimator, Eq. (34), was used). We note that at t ≃ 15, the tip/bottom of the bubble/spike are close to colliding with the box limits. Consequently, the rising velocity of the bubble was computed in the interval 0 ≤ t ≤ 15. As we can see, the terminal velocity of the bubble matches well the analytical estimation by Goncharov (2002),
v∞ = √[2 At g / ((1+At) Cg k)], (39)
where At = (ρ2−ρ1)/(ρ2 + ρ1) = 1/3 is the Atwood number, k = 4π is the wave-number of the applied single perturbation, and Cg = 3 in two dimensions. The terminal velocity calculated using Eq. (39) (solid pink horizontal line in Fig. 16) is close to those obtained with SPHYNX.
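As a quick plausibility check, Eq. (39) can be evaluated for the reduced-gravity models of this subsection (|g| = 0.1, At = 1/3, k = 4π, Cg = 3 in 2D); the numerical value printed below is our own evaluation, not a number quoted in the text.

```python
import numpy as np

# Evaluation of Eq. (39) for the reduced-gravity RT models; plausibility check only.
At, g, k, Cg = 1.0 / 3.0, 0.1, 4.0 * np.pi, 3.0
v_terminal = np.sqrt(2.0 * At * g / ((1.0 + At) * Cg * k))
print(v_terminal)   # ~0.036 in code units
```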
Fig. 15 Same as Fig. 14 but for models RT3 (p = 0, first row) and RT4 (p = 0.7, second row) of Table 6 (with a very reduced gravity field of g = −0.1). Times are t = 5.0, 7.5, 10.0, 12.5, 15.0, and 17.5, respectively.
Fig. 16 Rayleigh-Taylor test in 2D for models RT3 (p = 0, red lines), RT4 (p = 0.7, green lines), and RT5 (p = 1, blue lines). In the left panel we show the time evolution of the tips of the bubble (solid lines) and the spikes (dashed). From these it is clear that the inclusion of the new VE enhances the growth of the instability in the linear stage. Once the non-linear stage is reached all models grow at similar rates. It is worth noting here that simulations without IAD0 and standard VE (not shown) failed to develop the RT instability. In the panel on the right, we show the time evolution of the bubble velocity. It can be seen how a plateau is reached by all three models, which is closer to the expected terminal velocity (pink solid line) for the models using the new volume elements.
Fig. 17 Gresho-Chan vortex test in 2D. Radial profiles of the tangential velocity for several kernel choices and different numbers of neighbors, at time t = 1. The interpolators are the sinc (n = 5, 6.315, 7) and the Wendland C6. The theoretical velocity profile is also shown (black-dashed line).
Fig. 18 Gresho-Chan vortex test in 2D. Left: value of the magnitude L1, at time t = 1, calculated with the sinc kernels as a function of the kernel index n and number of neighbors nb (with constant total number of particles N = 256²). Right: convergence rate of sincs and Wendland C6 kernels as a function of the number of neighbors.
Although our results are not as good as those obtained with the code AREPO for the same initial conditions (see Fig. 35 in Springel 2010b, for a comparison), they are encouraging, as they suggest that the simulation of the RT instability in a tiny gravitational field is also affordable for SPH codes when an E0-suppressing technique like IAD0 is used jointly with equipartition-preserving volume elements.
5.5. The Gresho-Chan vortex
The simulation of a stationary vortex that is in stable equilibrium is a very demanding test for any hydrodynamics code. This is a fundamental test for the accuracy of the numerical scheme, the preservation of symmetry, and the conservation of angular momentum. The Gresho-Chan vortex has been especially problematic for SPH, which, in spite of having excellent angular momentum conservation, shows very poor convergence (Springel 2010b) due to numerical deviations from the initial conditions that degrade the stability of the vortex within short timescales. This is especially relevant in the simulation of self-gravitating disks, where the centrifugal force and pressure gradient should be (up to a certain extent) balanced. Therefore, numerical noise can trigger deviations from this equilibrium configuration that lead, for example, to an artificial fragmentation of the disk.
Several recent works (Dehnen & Aly 2012; Zhu et al. 2015; Rosswog 2015a) have managed to keep the vortex in steady state, with low dispersion, for more than one orbital period. In his work, Rosswog (2015a) shows that the use of high-order kernels is crucial to obtaining accurate results and that, in combination with IAD0 and an improved artificial viscosity with a noise-dissipation trigger, it leads to a roughly linear convergence rate.
In this test, we use the common initial conditions for the azimuthal velocity and pressure profile:
vφ(ψ) = v0 ψ for 0 ≤ ψ < 1, v0(2−ψ) for 1 ≤ ψ < 2, and 0 for ψ ≥ 2, (40)
P(ψ) = P0 + ψ²/2 for 0 ≤ ψ < 1, P0 + ψ²/2 + 4(1−ψ+ln ψ) for 1 ≤ ψ < 2, and P0 + 4 ln 2 − 2 for ψ ≥ 2, (41)
where ψ = r/R1, R1 = 0.2, v0 = 1, P0 = 5, for an ideal gas with γ = 5/3. This setting corresponds to a low Mach number test with ℳ ≃ 0.3. We use N = 256² particles evenly distributed in a square lattice of size [0,1] × [0,1] so that the density is uniform with value ρ = 1. The artificial viscosity parameter in Eq. (25) is set to the constant value α = 4/3 and the Balsara coefficients are estimated with Eqs. (28), (30), and (31). We note that because the vortex is in steady conditions, the divergence of the velocity should vanish everywhere and fa ≃ 0. As in other simulations, however, we allow a low level of AV to control the numerical noise and to reduce the dispersion of the particles. For this test, the particular choice of the volume element is not relevant because the particle distribution is homogeneous due to the constant density, hence the kernel is properly normalized at all times, independently of the exponent. Therefore, we take the nominal value of p = 0.7 in Eq. (12) in all calculated models.
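The piecewise profiles written above can be evaluated directly; the following sketch uses our reconstruction of Eqs. (40) and (41) and is intended only as a cross-check of the initial condition, not as the SPHYNX setup code.

```python
import numpy as np

# Gresho-Chan vortex profiles as reconstructed in Eqs. (40)-(41)
# (psi = r/R1, R1 = 0.2, v0 = 1, P0 = 5); illustrative sketch.
R1, v0, P0 = 0.2, 1.0, 5.0

def v_phi(r):
    psi = r / R1
    return v0 * np.where(psi < 1.0, psi, np.where(psi < 2.0, 2.0 - psi, 0.0))

def pressure(r):
    psi = r / R1
    inner = P0 + 0.5 * psi**2
    middle = P0 + 0.5 * psi**2 + 4.0 * (1.0 - psi + np.log(np.maximum(psi, 1e-12)))
    outer = P0 + 4.0 * np.log(2.0) - 2.0
    return np.where(psi < 1.0, inner, np.where(psi < 2.0, middle, outer))

r = np.linspace(0.01, 0.6, 7)
print(v_phi(r), pressure(r))
```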
We carried out several simulations of the vortex evolution for different choices of the kernel index n and initial number of neighbors, nb(2D). We also calculated models with the Wendland C6 kernel which, according to Dehnen & Aly (2012) and Rosswog (2015a), is especially suited to handle this test. The quality of the simulations and the convergence rate were checked with the parameter L1 = (1/N) ∑a | vt,a−vs,a |, where vt and vs stand for the theoretical and the simulated values of the velocity, respectively, and the summation runs over all particles.
In Fig. 17, we show the profile of the tangential velocity at time t = 1 (roughly the elapsed time needed by the vortex to complete one turn) for three kernel choices and different numbers of neighbors (nb). Using a low nb (≃ 30, in 2D) the best results were obtained with the sinc (n = 5), whereas the choice n = 6.315 or the Wendland C6 leads to qualitatively similar results. The situation is reversed when nb = 50, where the C6 interpolator gives a sharper velocity profile and the lowest L1 value. However, the sinc (n = 6.315) also shows low dispersion profiles and L1 values. Increasing the number of neighbors to nb = 70 (≃ 500 in 3D) reduces the dispersion, as expected. The C6 provides the lowest L1 value here, closely followed by the sinc (n = 7).
Figure 18 summarizes the functional dependence of the estimator L1 with respect to the index n (left panel) and the number of neighbors nb (right panel). Looking at the left panel we see that for nb ≃ 20 the sinc (n = 4) (similar to the M5 and C2 kernels) gives the best results. Increasing the number of neighbors moves the minimum value of L1 to the right (higher n values). At nb ≃ 30, the index n = 5 becomes the optimal choice. Setting the initial number of neighbors to 70 and 80 demands larger kernel indices (n ≃ 7–8) at the position of minimum L1. The right panel of Fig. 18 explores the convergence rate of the Gresho-Chan vortex test with respect to the number of neighbors at a constant total particle number N = 256². When nb ≤ 30 it seems that the low-order interpolators with n ≤ 5 give the lowest L1 values. Nevertheless, above nb ≥ 40 it is necessary to raise the index of the sinc kernel, or use the Wendland C6, to achieve convergence. The C6 gives the lowest L1 values when nb ≥ 40. The curve referred to as nmix (dotted black line) is a fit which takes advantage of the optimal n index at each nb. Both the nmix profile and the C6 curve can be used to estimate the convergence rate in this test. These profiles show a rapid initial decline, followed by a more gentle reduction of the convergence rate.
Fig. 19 Gresho-Chan vortex test in 2D. Convergence rate of L1 as a function of the equivalent 1D number of particles, N1D, at time t = 1, for different sinc kernels.
Fig. 20 Wind-cloud interaction: density color-map depicting the evolution of the collision and cloud fragmentation at three fiducial times t/tKH = 0.5, 2, and 3. Model WG refers to the calculation using GADGET-2. Models W0 and W1 were calculated with SPHYNX using the standard and the new VE, respectively.
Figure 19 shows the convergence rate of L1 at t = 1 when the one-dimensional (1D) equivalent number of particles, N1D, is varied in the range 50 ≤ N1D ≤ 400. In this study we have only considered the sinc kernels, but with several exponents. The convergence rate is explored using the following (n, nb) pairs: (5, 30), (6.315, 50), (8, 80), and (10, 100). As can be seen, the high-order sincs with n = 8 and n = 10 achieve a linear convergence of L1, provided that a sufficient number of neighbors is taken. Moreover, the convergence rate is larger than those reported in Springel (2010b) or in Dehnen & Aly (2012), and closer to the value obtained with Eulerian codes (Springel 2010a); see also Valdarnini (2016), where the ability of the IAD0 scheme to reduce sampling errors is proved in a similar subsonic test, but with a Mach number as low as ℳ = 0.02.
5.6. Interaction of a supersonic wind with a cold cloud of gas
Popularly known as the “blob” test (Agertz et al. 2007), this problem has challenged SPH codes for a long time. The basic setting of this test gathers many pieces of physics, such as strong shocks and mixing due to the KH and RT instabilities in a multi-phase medium with a large density contrast. The initial configuration consists of a dense spherical cloud of cold gas embedded in a hotter ambient medium. The cloud is initially at rest while the ambient background (the wind) moves supersonically. The wind-cloud interaction generates a bow shock and, in a short time, the cloud is fragmented and mixed with the background owing to the combined effect of ablation and the KH and RT instabilities.
Modern grid-based codes are able to handle this scenario and all of them agree in the gross features of the evolution. Nevertheless, simulations using the SPH technique have historically had difficulty disrupting the cloud because of the poor development of the hydrodynamic instabilities that drive the erosion of the cloud (Agertz et al. 2007). As a matter of fact, the challenge introduced by this problem has led to interesting advances in the SPH technique in recent years, such as the pressure (PSPH) formulation of the SPH equations by Saitoh & Makino (2013) and Hopkins (2013), and the conservative reproducing kernel method, CRKSPH, by Frontiere et al. (2017). As shown in these works, it is feasible to suppress the tensile instability acting at the contact discontinuity between the two fluids, leading to a better agreement between SPH and grid-based codes.
Here we show that the new volume elements given by Eq. (12) also suppress the tensile instability at the contact discontinuity. The code SPHYNX is thus able to cope with the blob test without leaving the density-formulation of SPH. Our initial setting is similar to that described in Agertz et al. (2007), although restricted to two dimensions, where the dense cloud is no longer a sphere but a circle (a cylinder in 3D). The wind is simulated with Nw = 256² particles spread following a glass-like stable (previously relaxed) distribution in a box of size [1, 1/4]. The cloud is reproduced with NC = 68² particles, spread in a regular square lattice trimmed to a circle of radius RC = 1/40, and centered at coordinates (1/8, 1/8). All particles have the same mass. The features of the wind and cloud at t = 0 are ρw = 1, Pw = 1, uw = 3/2, vw = 2.7 and ρc = 10, Pc = 1, uc = 3/20, vc = 0, respectively, where the wind velocity vw is supersonic with Mach number ℳ = 2.7. As the particles in the cloud are fully ordered, we add a small random initial radial velocity.
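The geometry and thermodynamic state of this setup are easy to reproduce. The following sketch only illustrates the numbers quoted above, with random particle placement standing in for the relaxed glass and the trimmed lattice actually used.

```python
import numpy as np
rng = np.random.default_rng(0)

# Illustrative 2D wind-cloud (blob) geometry; not the SPHYNX particle setup.
N_wind, N_cloud = 256**2, 68**2
box = np.array([1.0, 0.25])
R_c, center = 1.0 / 40.0, np.array([0.125, 0.125])

wind = rng.random((N_wind, 2)) * box                # stand-in for the glass
theta = 2.0 * np.pi * rng.random(N_cloud)
radius = R_c * np.sqrt(rng.random(N_cloud))         # stand-in for the lattice
cloud = center + np.column_stack((radius * np.cos(theta), radius * np.sin(theta)))

rho_w, P_w, u_w, v_w = 1.0, 1.0, 1.5, 2.7           # wind: supersonic, Mach 2.7
rho_c, P_c, u_c, v_c = 10.0, 1.0, 0.15, 0.0         # cloud: dense, initially at rest
print(wind.shape, cloud.shape)
```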
Fig. 21 Wind-cloud interaction: evolution of the surviving fraction of the cloud for the three calculated models. Even though a small remnant of the cloud is still present at t/tKH ≃ 4, the calculation W1 (SPHYNX with the new VE, that is, p = 1) leads to the best results. The other two calculations (SPHYNX with standard VE, i.e. p = 0, and GADGET-2 with standard VE and without IAD0) are less efficient at diluting the cloud, as 40–60% of it is still present even at t/tKH > 4.
Values of the parameter L1 for different magnitudes.
We carried out two calculations of the blob test using SPHYNX, with the standard VE (model W0) and with the new VE (model W1), as well as one with GADGET-2 (model WG). Because of the large density contrast between the wind and the cloud, model W1 was calculated with the averaged estimator X = (⟨ m/ρ ⟩)p and p = 1.
In Fig. 20 we show several snapshots of the simulations at normalized times (t/tKH = 0.5, 2, 3), where tKH is the characteristic KH time, as defined in Agertz et al. (2007). As we can see, the fragmentation and mixing of the cloud is only attained in model W1, calculated with p = 1. Models W0 and WG give worse results, but we note that even in that case SPHYNX with p = 0 yields a slightly larger fragmentation of the cloud than GADGET-2 (see Fig. 21). Our results are also in good agreement with those obtained by Frontiere et al. (2017) using the CRKSPH method (see their Fig. 21). The strong suppression of the tensile instability in the W1 calculation is what allows the fragmentation of the cloud. This is clearly visible in the detailed density snapshots shown in Fig. 22. It is therefore evident that the long-standing tensile instability problem in SPH is tied to kernel normalization. Having a good partition of unity considerably reduces the strength of that instability.
Fig. 22 Wind-cloud interaction: impact of the VE choice on the tensile instability. Near the contact discontinuity the standard VE, m/ρ, produces the segregation of the two fluids, preventing mixing (left). Choosing Va = (⟨ m/ρ ⟩ a)p/ ∑ b(⟨ m/ρ ⟩ b)pWab, where ⟨ m/ρ ⟩ is the SPH average of m/ρ and p = 1, suppresses the tensile instability, leading to mixing of the fluids and the subsequent bubble fragmentation (right).
Fig. 23 Color-map of the mass density in the triple-point shock test. Each row shows a snapshot at a different time, while each column corresponds to a different VE. The column on the left was calculated with Xa = (⟨ m/ρ ⟩ a)p and p = 1, while the one on the right corresponds to Xa = 1 (i.e., standard VE).
The evolution of the surviving fraction of the cloud as a function of time is depicted in Fig. 21. The expected outcome is that the surviving fraction of the cloud reduces to 0 in about t/tKH ~ 2.5, as obtained by grid codes (see, e.g., Fig. 25 of Hopkins 2015). As we can see, the W1 calculation leads to almost complete destruction of the cloud in t/tKH ≃ 4, while in the other two calculations hardly 50% of the cloud is mixed. Nevertheless, unlike in the grid-based calculations, the cloud is not totally mixed with the ambient gas, as ≃ 10% of it remains unmixed in the W1 simulation. The difference may arise from the different dimensionality of the calculations, because in 2D the cloud is a cylinder whereas in 3D the shape of the blob is spherical. However, a calculation of the same model in 3D with 3 × 10⁶ particles led to similar results. Actually, the complete dissolution of the cloud is only attainable if some heat-transport is included in the scheme (Hopkins 2015).
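For completeness, a simple way to bookkeep the surviving cloud fraction is to count the mass that is still simultaneously dense and cold. The density and internal-energy thresholds in the sketch below are illustrative choices, not necessarily the criterion used for Fig. 21.

```python
import numpy as np

def cloud_fraction(rho, u, rho_cloud0=10.0, u_ambient0=1.5, f_rho=0.64, f_u=0.9):
    """Fraction of the original cloud particles that are still 'cloud-like'.

    A particle counts as cloud material while it is both denser than
    f_rho * rho_cloud0 and colder than f_u * u_ambient0.  Thresholds are
    illustrative assumptions.
    """
    is_cloud = (rho > f_rho * rho_cloud0) & (u < f_u * u_ambient0)
    return np.mean(is_cloud)

# Hypothetical particle data for the cloud subset:
rho = np.array([9.0, 7.5, 3.0, 1.2])
u = np.array([0.2, 0.4, 1.0, 1.6])
print(cloud_fraction(rho, u))   # 0.5 for this toy sample
```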
5.7. Shock plus vorticity: the triple-point shock test
The similarities among the results obtained by SPHYNX and the CRKSPH method, as found in the KH and the blob tests above, are even more evident when considering the so-called triple-point shock test. This is a shock-tube-like setting where three materials with different densities and pressures are put into contact through discontinuity lines. The first region contains a high-density, high-pressure material, located in the vertical band on the left of the first snapshot in Fig. 23 (see also the sketch by Frontiere et al. 2017, shown in their Fig. 24). That region is in contact with two low-pressure horizontal bands, the upper band being considerably less dense than the lower band. The particular point where the three discontinuity lines intersect is the triple point. The vertical high-pressure band on the left launches a shock through the lower-pressure horizontal bands. Because of the pronounced difference in the speed of sound, the shock moves faster in the upper low-density, low-pressure region. As a consequence, a shear is induced around the line separating both low-pressure materials, and soon the KH instability develops, rolling up the interface between these two regions. At late times, the reflected shocks at the boundaries of the box also induce the Richtmyer-Meshkov instability when they cross the interfaces between the different materials.
Fig. 24 Results for the 3D-Sedov test for models S5, S6, S7, S8 of Table 3 and GADGET-2. We show here the radial profiles of density (upper-left), normalized pressure (upper-right), and the fulfillment of the conditions | ⟨ Δr ⟩ | = 0 (bottom-left) and ∑ bVbWab = 1 (bottom-right). Solid black lines are the analytic solutions for density and normalized pressure profiles.
Our main aim is to simulate the triple-point shock scenario and to compare the results obtained with SPHYNX to those given by the CRKSPH method. A comparison among the performances of CRKSPH, the standard SPH formulation and the ReALE (Reconnecting arbitrary Lagrangian Eulerian) method by Loubère et al. (2010) is provided in the work by Frontiere et al. (2017) and we refer the reader to that paper. Their results show that for the triple-point shock scenario the CRKSPH scheme gives the best results.
The setting of the initial model, particle number in each region, and EOS is exactly the same as described in Frontiere et al. (2017). The evolution of the system with SPHYNX and two choices for the VE, p = 0 and p = 1 in Eq. (34), was tracked until t = 8. We show in Fig. 23 the density color-maps for both cases at times t = 1, 3, 5, 7. As we can see, the results with SPHYNX and p = 1 are similar to those obtained with the state-of-the-art CRKSPH scheme. The main features of the shock wave are well simulated for both the p = 0 and p = 1 cases, but the amount of structure seen in the calculation with p = 1 at t = 5 and t = 7 is larger than that of the standard VE choice. It is worth noting that our recipe for implementing the artificial viscosity is relatively basic and there is still some room for improvement in this area, such as, for instance, the use of time-dependent AV coefficients (Cullen & Dehnen 2010; Rosswog 2015a).
6. Three-dimensional tests
In this section we present three test cases where SPHYNX is applied to three-dimensional (3D) scenarios. We show here that the combined use of the new VE, the sinc kernels, and IAD0 leads to better results when trying to obtain a homogeneous density field, decreasing discretization errors. In the following, we re-do the Sedov test, now in 3D, where the new VE stand out by better handling shock-waves and the equalization enhances the outcome in simulations with a high number of neighbors. Finally, the collapse of an isothermal cloud tests the implementation of SPHYNX coupled with the gravity evaluation. The results show that the new VE enhance the kernel normalization, and consequently the overall interpolations, leading to a better shock treatment when compared with standard implementations.
6.1. Approaching a uniform scalar field
The initial models used in many SPH simulations are often built so that the density and/or the pressure are as homogeneous as possible. These systems should ideally remain in equilibrium during many sound-crossing times. Although very common, arranging particles in ordered lattices is not the best option because the particles move off the lattice and introduce noise. In this section we discuss the ability of SPHYNX to build a uniform density/pressure field, which is obtained after relaxing a pseudo-ordered initial distribution of particles. Several features of the resulting glass-like structure are discussed as a function of the VE choice. To obtain a homogeneous density profile we follow a procedure similar to that explained in Rosswog (2015a).
We first spread N = 128 000 particles in an ordered 3D bcc lattice. Afterwards, their positions are displaced at random in each direction, with a maximum amplitude of 40% of the smoothing length. Then, we allow the system to evolve keeping the internal energy constant, at u0 = 1, to regain equilibrium. During the relaxation, the velocities are set to zero but the particles are displaced with the simple recipe Δr = 10⁻³ h (ac/ | ac |), where h is the smoothing length and ac stands for the acceleration vector. The value of h was the same for all particles, so that the number of neighbors is nb ≃ 100 with low dispersion. During the evolution, the L1 error estimators were monitored for several magnitudes, such as density, partition of unity, and ⟨ Δr ⟩. These are defined as the mean absolute deviations of each magnitude from its expected value, in analogy with the L1 estimator used in the previous sections. We note that as P = (γ−1)u0ρ, with γ = 5/3, the error estimator L1(ρ) also provides the dispersion in pressure.
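For reference, the displacement recipe quoted above can be sketched as follows; the SPH acceleration routine is not reproduced here and random accelerations are used as placeholders.

```python
import numpy as np

def relax_step(pos, accel, h, eps=1.0e-3):
    """One relaxation displacement: shift each particle by eps*h along the
    direction of its acceleration, following the recipe quoted in the text."""
    norm = np.linalg.norm(accel, axis=1, keepdims=True)
    unit = np.where(norm > 0.0, accel / np.maximum(norm, 1e-30), 0.0)
    return pos + eps * h[:, None] * unit

# Toy usage with random accelerations (placeholders for the SPH accelerations):
rng = np.random.default_rng(1)
pos = rng.random((1000, 3))
acc = rng.normal(size=(1000, 3))
h = np.full(1000, 0.05)
pos = relax_step(pos, acc, h)
print(pos.shape)
```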
The L1 values after 800 iterations are summarized in Table 7. All settings of the VE lead to stable glass-like structures with small departures from homogeneity (L1(ρ) ≤ 10⁻³). Model D, calculated with Xa = (⟨ ma/ρa ⟩)p and p = 1, displays the lowest L1(ρ) values. As expected, the calculation with Xa = (ma/ρa)0.7 leads to the best partition of unity but, paradoxically, is the most inhomogeneous of the sample. Model C, calculated with a negative exponent p = −0.7, is very similar to model A, computed with the standard VE choice (p = 0.0). Because the internal energy was kept constant during the relaxation, a negative p value in Eq. (11) is equivalent to taking a VE linked to pressure. We therefore conclude that SPHYNX is able to handle homogeneous systems in pressure equilibrium for a wide range of the parameter p. For these systems, the best choice of VE is that given by Eq. (34) with p ≃ 1.
6.2. Point explosion
The description of an explosion in 3D space is usually a challenge for any hydrodynamics code. The lower resolution (in comparison with 2D calculations) makes it difficult to capture the density, pressure, and velocity at the peak of the blast and also to correctly reproduce the values of these variables in the post-shock tail. We have investigated the ability of our code to cope with a point-like explosion in 3D and compared the results with those obtained using GADGET-2.
We have run two sets of models labeled as { S5,S6,S7,S8 } and { S9,S10,S11,S12 } in Table 3. The first set consists of low-resolution calculations whereas the second set of simulations has not only more resolution, but also uses more neighbors to make the interpolations.
Figure 24 summarizes the results for the set of low-resolution models. As in the 2D simulations, models S6 and S8 in Table 3, with the new volume elements, lead to the best results. Both models give a higher density peak and a better post-shock evolution of density and pressure. They also exhibit an enhanced volume normalization, and | Δr | behaves better across the wave front. The model calculated with p = 0 and Δn = 5 (S7) is the third-best case, followed by model S5 (standard VE setting p = 0, Δn = 0). Figure 24 also includes the density and pressure profiles (light blue lines) calculated using GADGET-2 for the same initial setting. Its density profile shows a post-shock tail evolution similar to that of model S6, but with a lower peak value, and its pressure profile in the post-shock tail diverges considerably from the analytical curve.
Fig. 26 Radial profiles of the sinc kernel index, n, and density for model S12 of Table 3. We note how the scheme increases the kernel index to improve the interpolations in the complicated regions of low or fast-changing density.
Fig. 27 Upper panels: density (left) and radial velocity profiles (right) for the collapse of an initially isothermal sphere of gas at t = 0.8. The solid black line is the 1D-PPM solution. We show the SPHYNX calculations with p = 0 (red) and p = 1 (green) in Eq. (34). Calculations using GADGET-2 are in blue. Lower panels: profiles of | ⟨ Δr ⟩ | (left) and ∑ bVbWab (right) for p = 0 (red) and p = 1 (green).
Figure 25 is the same as Fig. 24 but for models { S9, S10, S11, S12 }, calculated with a larger number of particles and taking twice as many neighbors in the summations. Therefore, these models are characterized by both a higher resolution and a less noisy estimation of gradients. According to our calculations, model S12, computed using p = 0.7, Δn = 5, is the one closest to the analytic expectations. The density peak of the blast was the highest and at the correct position, while the values of density and pressure close to the origin behave well. Furthermore, model S12 also has the best volume normalization and the minimum | Δr | at the position of the shock front. The profile of the sinc kernel index n of model S12 is shown in Fig. 26. We can see that the highest value of the exponent (n ≃ 9.8) is reached at x = 0.3, behind the density peak. These results are closely followed by model S10 (p = 0.7 and no equalization), and, in decreasing quality, by S11 (p = 0 with equalization) and S9 (p = 0 and no equalization).
The results above suggest that: 1) the use of the new volume elements is beneficial when handling shock-waves; 2) when using these VE, the equalization algorithm (i.e., considering n(r,t), variable kernel exponents) is only able to enhance the results when the number of neighbors is high (as in the cases depicted in Fig. 25).
6.3. Collapse of an isothermal cloud
A relevant test in Astrophysics is the numerical study of the gravitational collapse of a gaseous configuration, also known as the Evrard test (Evrard 1988). This test includes gravity and has been used many times to check hydrodynamics codes (see for example Springel 2010a and references therein).
We used here the initial configuration described in Springel (2010a), consisting of a sphere of gas with unit radius and mass. The density profile at t = 0 is
ρ(r) = M/(2πR²r), (45)
where R = 1 and M = 1. The initial configuration is assumed isothermal with u0 = 0.05 per unit mass, obeying an ideal EOS with γ = 5/3. All particles are initially at rest. For this choice the internal energy is much lower than the gravitational energy (assuming G = 1) and the system collapses.
To build the initial model we have conveniently stretched a uniform grid with N = 40³ particles so that the profile given by Eq. (45) is obtained. This procedure reproduces well the 1D density profile, except near the surface layers. We have tracked the collapse of the sphere for two different values of the parameter p in both Eq. (11) and in its smoothed counterpart, Eq. (34). The results at time t = 0.8 are compared in Fig. 27 not only with an accurate 1D-PPM calculation (Steinmetz & Mueller 1993), but also with the results obtained using GADGET-2 for the same initial model.
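One common way to obtain the 1/r profile of Eq. (45) from a uniform lattice is a radial stretch r → R(r/R)^{3/2}, which turns M(<r) ∝ r³ into M(<r) ∝ r². The snippet below sketches this mapping; it reflects our reading of the "conveniently stretched" grid rather than the exact SPHYNX setup routine.

```python
import numpy as np

# Stretch a uniform particle lattice so that the particle density follows
# rho(r) = M / (2*pi*R**2*r) (Eq. (45)).  Illustrative sketch only.
R, n = 1.0, 40
axis = (np.arange(n) + 0.5) / n * 2.0 - 1.0          # uniform grid in [-1, 1]
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
pos = np.column_stack((x.ravel(), y.ravel(), z.ravel()))
r = np.linalg.norm(pos, axis=1)
inside = (r > 0) & (r <= R)
pos, r = pos[inside], r[inside]
pos *= ((r / R) ** 0.5)[:, None]                     # r_new = R*(r/R)**1.5
print(pos.shape)
```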
The evolutions of the models calculated using Eq. (11) with p = 0.7 and Eq. (34) with p = 1 were rather similar, so we describe the collapse of the configuration only for the latter case. The upper panels of Fig. 27 depict the density and radial velocity profiles at t = 0.8. We see only minor differences among the three calculated models. The density profiles are almost indistinguishable. Nevertheless, a close inspection of the region around the shock front (see the zoomed region in Fig. 27) reveals that the model calculated with p = 1 in Eq. (34) matches the exact solution slightly better than the p = 0 case and the GADGET-2 calculation. The profile of the radial velocity is similar in all three calculations. The evolutions calculated with GADGET-2 and with SPHYNX with p = 0 were almost identical, and some minor differences may be due to the details of the implementation of the AV and gravity.
The new volume elements provide a better kernel normalization at the center and front locations (bottom-right panel in Fig. 27). Nevertheless, unlike in the point explosion tests, we have not seen a clear enhancement of | Δr | (Eq. (9)) when p ≃ 1 is used to compute the volume elements, although, overall, the profile is smoother (bottom-left panel).
7. Discussion and conclusions
In this paper, we present a new density-based SPH code, named SPHYNX, and test it in a series of traditionally problematic simulations for SPH codes in 2D and 3D. In particular, we have been able to perform a Rayleigh-Taylor simulation in a weak gravitational field (g = −0.1). Additionally, the shock-blob interaction test proved that SPHYNX can efficiently suppress the tensile instability that prevents the rise of hydrodynamical instabilities and mixing in many scenarios simulated with SPH. Moreover, the outcome of other tests, such as the hydrostatic square, Kelvin-Helmholtz instability, Gresho-Chan vortex, Sedov explosion, Noh wall-shock, Evrard collapse, and triple-point shock, proves that our implementation provides results competitive with other state-of-the-art calculations. For these problems, SPHYNX produces better results than many of the extant density-based SPH codes, being qualitatively similar to those obtained with the recently developed CRKSPH scheme (Frontiere et al. 2017). However, unlike the CRKSPH method, our approach ensures angular momentum conservation from the outset.
To achieve this, SPHYNX benefits from recent advances in the field and gathers together the latest methodologies to perform numerical simulations of astrophysical scenarios via the smoothed particle hydrodynamics technique. These methodologies include, as a novelty, a new generalized volume element estimator and a consistent update of the smoothing length and the sharpness of the interpolating kernel along with the particle density. Additionally, it includes an integral approach to calculating gradients and a pairing-resistant family of interpolators. These features are summarized and discussed in the following.
The choice of non-standard volume elements to approximate the Euler integral equations as finite summations has a significant impact on the simulations. Following the works by Saitoh & Makino (2013) and Hopkins (2013), who generalized the VE so that they are not necessarily the trivial m/ρ choice, we postulate a new volume element which enhances the normalization of the kernel. As discussed in Sect. 3.1, the VE assigned to a particle is Va = Xa/ ∑ bXbWab, where Xa = (ma/ρa)p is the weighting estimator of the kernel and 0 ≤ p ≤ 1 is a parameter chosen by the user. The value p = 0 reduces the VE to 1/ ∑ bWab, which is the standard VE when the mass of the particles is the same. For p = 1, we have Va = (ma/ρa)/ ∑ b(mb/ρb)Wab, which is simply the re-normalized traditional volume element. As expected, a better kernel normalization (between a factor 2 and a factor 5) is obtained when these VE are used. A negative feature of the proposed VE is their tendency to overshoot the density estimation in the presence of sharp gradients when p ≃ 1. Actually, that is the fundamental reason for not taking p = 1 in the estimator Xa = (ma/ρa)p. The optimal value of p depends on the particular problem at hand, but the range 0 ≤ p ≤ 0.7, explored in this work, seems to be safe. Nevertheless, a more robust implementation that allows taking p = 1 is to consider Xa = (⟨ ma/ρa ⟩)p, where ⟨ . ⟩ is the SPH average of the magnitude. Although this last procedure requires the computation of the averages ⟨ ma/ρa ⟩, it is the recommended default choice because of its robustness and ability to keep track of strong shocks and instabilities in the presence of sharp density gradients.
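The volume-element recipe itself is only a few lines of code. The sketch below assumes a precomputed dense kernel matrix (a real implementation would use a neighbor list) and illustrates both Va = Xa/∑bXbWab and the partition-of-unity check ∑bVbWab; the toy kernel and particle data are placeholders.

```python
import numpy as np

def volume_elements(m, rho, W, p=0.7):
    """Generalized volume elements V_a = X_a / sum_b X_b W_ab, X_a = (m_a/rho_a)**p.

    W is a dense (N, N) array with W[a, b] = W_ab (each row including b = a).
    """
    X = (m / rho) ** p
    return X / (W @ X)

# Toy example: 5 particles with a crude Gaussian stand-in kernel matrix.
x = np.linspace(0.0, 1.0, 5)
h = 0.3
W = np.exp(-((x[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(np.pi))
m = np.full(5, 0.2)
rho = W @ m                              # standard SPH density estimate
V = volume_elements(m, rho, W, p=0.7)
print(V, (W * V[None, :]).sum(axis=1))   # second array ~ partition of unity
```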
Another important feature is the dynamical choice of the interpolating kernel function. A large body of calculations carried out with SPH in the past made use of the M4 cubic-spline function to perform interpolations. The M4 polynomial has, however, a serious drawback: it is prone to the pairing-instability when the number of neighbors increases (e.g., exceeding nb ≃ 60 in 3D calculations which use the M4 kernel). This is clearly a limitation, because in practical applications it is advisable to take as many neighbors as possible to reduce the E0 errors in the SPH equations. A growing number of kernel candidates have been proposed during the last decade to alleviate this problem. For example, one option is to consider the natural extension of the Mn family to higher polynomial degrees, such as the quartic (M5) or the quintic (M6) kernels. More recently, a different family of interpolators has been proposed based on the Wendland functions, as discussed in Dehnen & Aly (2012), which show strong resistance to the pairing-instability. A third family of interpolators, called the sinc (harmonic-like) kernels, was introduced by Cabezón et al. (2008); these are also implemented in SPHYNX. As mentioned in Sect. 2.1, the definition of the sinc kernels is directly linked to that of the Dirac-δ function. Unlike the Mn family, which is discrete in the index n ∈ ℤ⁺, the sinc kernels do form a continuous family, which depends on a leading exponent n ∈ ℝ⁺. Actually, the Mn family could be considered as a subset of the sinc family (García-Senz et al. 2014). Using the sinc family of kernels endows the SPH technique with a flexible engine, as the shape of the kernel can be dynamically changed, in a continuous way, during run-time. This feature can be used, for example, to suppress the pairing instability (see Sect. 5.1) or to equalize the resolution behind a shock-wave (as shown in Fig. 26).
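A minimal numerical sketch may help fix ideas about the sinc family. It assumes the profile [sin(πq/2)/(πq/2)]^n on the compact support 0 ≤ q = r/h ≤ 2, with the normalization constant obtained numerically; the exact normalization and implementation details of Cabezón et al. (2008) and SPHYNX may differ.

```python
import numpy as np

def sinc_kernel_1d(q, n):
    """Un-normalized sinc kernel profile [sin(pi*q/2)/(pi*q/2)]**n on 0 <= q <= 2."""
    q = np.asarray(q, dtype=float)
    x = np.pi * q / 2.0
    safe_x = np.where(x > 0.0, x, 1.0)
    profile = np.where(q > 0.0, (np.sin(x) / safe_x) ** n, 1.0)
    return np.where(q < 2.0, profile, 0.0)

def sinc_normalization_2d(n, m=4000):
    """Dimensionless 2D constant B_n so that W(r,h) = B_n/h**2 * f(r/h)
    integrates to unity (midpoint rule on 0 <= q <= 2)."""
    q = (np.arange(m) + 0.5) * (2.0 / m)
    integral = np.sum(sinc_kernel_1d(q, n) * 2.0 * np.pi * q) * (2.0 / m)
    return 1.0 / integral

# The exponent n can vary continuously, e.g. the values used in the vortex test:
for n in (5.0, 6.315, 7.0):
    print(n, sinc_normalization_2d(n))
```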
Additionally, SPHYNX estimates gradients with an integral approximation (IAD0) which is more accurate than the traditional procedure based on the analytic derivative of the kernel function, and which reduces the E0 errors caused by the particle sampling of the fluid. We fully confirm in this work the importance of this new approach, especially for handling hydrodynamic instabilities, in agreement with previous publications (García-Senz et al. 2012; Cabezón et al. 2012; Rosswog 2015a,b; Valdarnini 2016).
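As an illustration of the integral approach, the following sketch implements a dense-neighbor, 2D version of the IAD0 gradient as we read it from García-Senz et al. (2012): a moment matrix built from kernel-weighted position differences is inverted to obtain the gradient coefficients. The kernel matrix, volume elements, and test field below are toy placeholders, not SPHYNX internals.

```python
import numpy as np

def iad0_gradient(pos, f, V, W):
    """Schematic IAD0 gradient estimate (dense, all-pairs version).

    For each particle a, the moment matrix
        tau_ij = sum_b V_b (x_b - x_a)_i (x_b - x_a)_j W_ab
    is inverted, and the gradient is
        grad_i f_a = sum_b V_b (f_b - f_a) [tau^{-1} (x_b - x_a)]_i W_ab.
    """
    N, d = pos.shape
    grad = np.zeros((N, d))
    for a in range(N):
        dx = pos - pos[a]                      # (N, d) position differences
        w = V * W[a]                           # (N,)  volume-weighted kernel values
        tau = (dx * w[:, None]).T @ dx         # (d, d) moment matrix
        coeff = np.linalg.solve(tau, dx.T)     # (d, N): tau^{-1} (x_b - x_a)
        grad[a] = coeff @ ((f - f[a]) * w)
    return grad

# Toy check on a linear field f = 2x + 3y, whose gradient should be (2, 3):
rng = np.random.default_rng(2)
pos = rng.random((200, 2))
h = 0.2
r2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
W = np.exp(-r2 / h**2)                         # crude Gaussian stand-in kernel
V = np.full(200, 1.0 / 200)
f = 2.0 * pos[:, 0] + 3.0 * pos[:, 1]
print(iad0_gradient(pos, f, V, W)[:3])
```

For a linear field the estimate is exact by construction, since the moment matrix cancels against the weighted sum of position differences regardless of the particle arrangement.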
SPHYNX has been validated with several standard tests in two and three dimensions, ranging from strong shocks and subsonic fluid instabilities in boxes, to larger systems where the gravitational force takes over. From the analysis of these test cases we summarize the following conclusions.
The use of the Integral Approach to calculate gradients, along with the traditional volume elements, Va = ma/ρa (p = 0 in Eq. (12)), and a sinc kernel with n = 5, improves the simulation of hydrodynamic instabilities subject to small initial perturbations with respect to standard SPH. The quantitative amplitude growth-rate of the KH instability is closer to the correct growth-rate (as computed with state-of-the-art Eulerian codes) than current density-based SPH codes (with smaller L1 errors by a factor 1.5–4), being similar to the results of the modern PSPH formulation. It is also able to reproduce the KH instability in stratified fluids with high density contrasts (ρ2/ρ1 ≃ 8). In the case of the RT instability, the scheme is also able to cope with small perturbations (w0 = 0.0025) and tiny gravity values (g = −0.1), although in the latter case the non-linear evolution scarcely shows structure. In shocks, the results are similar to those provided by the standard method in identical conditions.
When the new VE are switched on, there is, in general, an increase in the quality of the simulations. We have monitored the volume normalization condition ∑ bVbWab = 1 in all calculated models and, without exception, it is better fulfilled (usually in the range of 10–20% closer to unity) with the new volume elements. This change has an impact on the overall evolution, considerably improving the results of the simulations. A paradigmatic case is the RT instability, where the use of the new VE leads to an increase of the growth-rate of the instability and to a richer evolution in the linear stage, even for the low-gravity simulation. In shock-waves, the front of the blast becomes steeper and the density peak is 10–25% higher, even in 3D. Regarding the Sedov test, the post-shock evolution of density and pressure is 5–10% closer to the analytic expectations. It is also worth noting that the new VE also improve the condition | Δr | = 0 which, according to Eq. (9), is a necessary condition to exactly compute the gradient of linear functions when the IAD0 scheme is used.
During the course of the simulations we did not see any sign of pairing instability, even when working with ≃ 50 neighbors in the 2D tests. In any case, to avoid the instability it is enough to raise the exponent of the sinc kernel above the adopted default value n = 5. We stress that, unlike other recent SPH schemes, the simulations of the KH and RT instabilities were carried out without including any artificial flux of heat or any other procedure to smooth the pressure.
Among the several improvements left for future work, we plan to improve the calculation of gravity by including a better treatment of the gravitational softening on short distances. The best way to do that is to include the gravity into the discretized SPH Lagrangian as described in Price & Monaghan (2007) and Springel (2010b). Also, the implementation and validation of switches to ensure that the AV is only added in regions where there are shocks (Cullen & Dehnen 2010), as well as noise triggers to control the velocity in subsonic flows (Rosswog 2015a) could be done with moderate effort. A more ambitious goal would be to directly calculate the volume elements solving implicitly the equation ∑ bVbWab = 1 on each particle of the system. Even though the strong coupling between particles renders any implicit calculation computationally expensive, it will probably solve the density overshooting problem seen in our explicit approach.
This is not the only way to compare the kernels as one could, for instance, compare the logarithms of the functions. Alternatively, the minimization of the metric distance d between the first derivative of the kernels can be also of interest. All these procedures lead to only slight variations of the kernel exponents with respect to those given in Table 2.
We note that the standard Va = ma/ρa is a particular case of Eq. (12) when p = 0 and all particles have the same mass. Therefore, we can use the same expression to explore the impact of using both the standard and the new VE.
Available at pencil-code.nordita.org
Nevertheless, the energy Eq. (23) also conserves entropy if the smoothing length is self-consistently calculated with density using Eq. (17).
Acknowledgments
The authors want to thank S. Rosswog, M. Liebendörfer, and R. Käppeli for useful discussions and comments. We also thank M. Steinmetz for providing the PPM calculations of the Evrard test and C. McNally for giving us the reference model describing the growth-rate of the KH instability. This work has been supported by the Swiss Platform for Advanced Scientific Computing (PASC) project DIAPHANE and by the European Research Council (FP7) under ERC Advanced Grant Agreement No. 321263 – FISH. (R. Cabezón). It has also been supported by the MINECO-FEDER Spanish projects AYA2013-42762-P, AYA2014-59084-P and by the AGAUR (D. García-Senz and J. Figueira). The authors acknowledge the support of sciCORE (http://scicore.unibas.ch/) scientific computing core facility at University of Basel, where some of the calculations were performed.
References
- Agertz, O., Moore, B., Stadel, J., et al. 2007, MNRAS, 380, 963
- Balsara, D. S. 1995, J. Comput. Phys., 121, 357
- Brandenburg, A., & Dobler, W. 2002, Comput. Phys. Comm., 147, 471
- Brookshaw, L. 1985, PASA, 6, 207
- Brookshaw, L. 2003, ANZIAM, 44, 114
- Cabezón, R. M., García-Senz, D., & Relaño, A. 2008, J. Comput. Phys., 227, 8523
- Cabezón, R. M., García-Senz, D., & Escartín, J. A. 2012, A&A, 545, A112
- Couch, L. 1997, Digital and Analog Communication Systems, 5th edn. (Prentice Hall)
- Cullen, L., & Dehnen, W. 2010, MNRAS, 408, 669
- Dehnen, W., & Aly, H. 2012, MNRAS, 425, 1068
- Evrard, A. E. 1988, MNRAS, 235, 911
- Frontiere, N., Raskin, C. D., & Owen, J. M. 2017, J. Comput. Phys., 332, 160
- Fulk, D. A., & Quinn, D. W. 1996, J. Comput. Phys., 126, 165
- García-Senz, D., Cabezón, R. M., & Escartín, J. A. 2012, A&A, 538, A9
- García-Senz, D., Cabezón, R. M., Escartín, J. A., & Ebinger, K. 2014, A&A, 570, A14
- García-Senz, D., Cabezón, R. M., Domínguez, I., & Thielemann, F. K. 2016, ApJ, 819, 132
- Gingold, R. A., & Monaghan, J. J. 1977, MNRAS, 181, 375
- Goncharov, V. N. 2002, Phys. Rev. Lett., 88, 134502
- Hernquist, L., & Katz, N. 1989, ApJS, 70, 419
- Hopkins, P. F. 2013, MNRAS, 428, 2840
- Hopkins, P. F. 2015, MNRAS, 450, 53
- Junk, V., Walch, S., Heitsch, F., et al. 2010, MNRAS, 407, 1933
- Loubère, R., Braeunig, J.-P., & Ghidaglia, J.-M. 2010, ArXiv e-prints [arXiv:1010.4208]
- Lucy, L. B. 1977, AJ, 82, 1013
- McNally, C. P., Lyra, W., & Passy, J.-C. 2012, ApJS, 201, 18
- Monaghan, J. J. 1992, ARA&A, 30, 543
- Monaghan, J. J. 1997, J. Comput. Phys., 136, 298
- Monaghan, J. J. 2000, J. Comput. Phys., 159, 290
- Monaghan, J. J. 2005, Rep. Prog. Phys., 68, 1703
- Monaghan, J. J., & Gingold, R. A. 1983, J. Comput. Phys., 52, 374
- Monaghan, J. J., & Lattanzio, J. C. 1985, A&A, 149, 135
- Noh, W. F. 1987, J. Comput. Phys., 72, 78
- Perego, A., Cabezón, R. M., & Käppeli, R. 2016, ApJS, 223, 22
- Price, D. J. 2008, J. Comput. Phys., 227, 10040
- Price, D. J. 2012, J. Comput. Phys., 231, 759
- Price, D. J., & Monaghan, J. J. 2007, MNRAS, 374, 1347
- Read, J. I., Hayfield, T., & Agertz, O. 2010, MNRAS, 405, 1513
- Ritchie, B. W., & Thomas, P. A. 2001, MNRAS, 323, 743
- Rosswog, S. 2015a, MNRAS, 448, 3628
- Rosswog, S. 2015b, Liv. Rev. Comput. Astrophys., 1, 1
- Saitoh, T. R., & Makino, J. 2013, ApJ, 768, 44
- Saitoh, T. R., & Makino, J. 2016, ApJ, 823, 144
- Schoenberg, I. J. 1946, Q. Appl. Math, IV, 1, 54
- Schuessler, I., & Schmitt, D. 1981, A&A, 97, 373
- Springel, V. 2005, MNRAS, 364, 1105
- Springel, V. 2010a, MNRAS, 401, 791
- Springel, V. 2010b, ARA&A, 48, 391
- Springel, V., & Hernquist, L. 2002, MNRAS, 333, 649
- Steinmetz, M., & Mueller, E. 1993, A&A, 268, 391
- Valdarnini, R. 2012, A&A, 546, A45
- Valdarnini, R. 2016, ApJ, 831, 103
- Wendland, H. 1995, Adv. Comput. Math., 4, 389
- Zhu, Q., Hernquist, L., & Li, Y. 2015, ApJ, 800, 6
Appendix A: Kernel separability
Fig. A.1 Value of Sn, for indices n = 3, n = 6, and n = 12 (left), and Wendland kernels C2 and C6 (right) in 2D (thick solid lines). We show with dots the direct product of the corresponding one-dimensional kernels.
Spherically symmetric kernels sacrifice tensorial features in order to preserve the second-order accuracy. The notable exception is the Gaussian kernel, which is both symmetric and separable in each axis direction. But, unlike the Gaussian function, the compact-supported spherically symmetric kernels are not separable, hence neither are the sinc kernels (Sn) used in this work. Nevertheless, the Sn family approaches separability as n → ∞. This property is shown in Fig. A.1 (left panel), which depicts the profiles of the two-dimensional kernels (solid lines) and those of the direct product of the corresponding one-dimensional kernels (dotted lines) for n = 3, n = 6, and n = 12, in green, pink, and black, respectively. As we can see, the dispersion in the kernel products becomes narrower as the kernel index n increases. As a comparison, we show the same profiles for some Wendland kernels in Fig. A.1 (right panel). In this case, it is clear that the Wendland kernels are not separable, as the product of the unidimensional components departs from the direct multidimensional calculation. Furthermore, the result does not improve as we increase the order of the Wendland kernel. The study of the consequences of this property is beyond the scope of this paper, but it is undoubtedly worth exploring in future work.
Appendix B: A better partition of unity improves the estimation of the gradient of a linear function
We consider the following integral in 1D:
I0 = ∫_{−∞}^{+∞} x W(x,h) dx. (B.1)
Because the integrand is an odd function, I0 is trivially equal to zero. Nevertheless, approaching I0 with finite summations does not necessarily ensure I0 = 0. We know that a necessary and sufficient condition to exactly reproduce the gradient of a linear function in SPH is that I0 = ∑ bVbxbWab = 0 (Eq. (9)), where Vb is the volume element of particle b.
Integrating I0 by parts,
I0 = [x G(x,h)]_{−∞}^{+∞} − ∫_{−∞}^{+∞} G(x,h) dx, (B.2)
where G(x,h) is the primitive integral of W(x,h): G(x,h) = ∫W(x,h) dx.
We now consider a very simple kernel W(x,h) whose primitive G(x,h) can be obtained analytically:
W(x,h) = (C/h) e^{−|x|/h}. (B.3)
This kernel (Fulk & Quinn 1996) is spherically symmetric but currently it is not the preferred one because it is not a flat-top kernel. The normalization constant in 1D is C = 0.5. Nevertheless, it is an adequate kernel for this proof, as it admits an analytical primitive,
G(x,h) = −sgn(x) C e^{−|x|/h}. (B.4)
The first term of the RHS of Eq. (B.2) is zero, because G(x,h) vanishes very quickly when x/h increases. For spherically symmetric kernels I0 = 0, thus the integral giving the second term should also be zero. The integral I0 can be estimated as I0 = I1 + I2, where
I1 = ∫_{−∞}^{0} x W(x,h) dx, (B.5)
I2 = ∫_{0}^{+∞} x W(x,h) dx, (B.6)
with W(x,h) given by Eq. (B.3). The analytical calculation of these integrals gives I1 = −Ch, I2 = +Ch, and I0 = I1 + I2 = 0, which is obvious because the integrand of Eq. (B.1) is an odd function. Things are different when we approach these integrals by finite summations:
I0 ≃ ∑ bVbxbWab. (B.7)
On the other hand, if the Vb are well chosen, they should fulfill the equipartition condition:
∑ bVbWab = 1. (B.8)
Now we make the reasonable ansatz that improving Vb is actually enhancing each term inside the summation in Eq. (B.8). Thus, each term approaches its continuum counterpart,
VbWab ≃ ∫_{Δxb} W(x,h) dx, (B.9)
where Δxb is the length interval associated with particle b, and, according to Eq. (B.7), I0 vanishes (or, in any case, it becomes very small) even when it is approached using summations. We note that this proof is strictly valid only for exponential kernels. Nevertheless, the main conclusion also holds (at least qualitatively) for other centrally-peaked, spherically-symmetric interpolating functions. The positive feedback between IAD0 and the VE is also supported by many of the test cases presented in this work.
Appendix C: Partition of the unity: convergence rate of the estimator X = (m/ρ)p
We discuss here a simple 2D static test which explores the effect of changing the exponent p of Xa = (ma/ρa)p on the normalization of the kernel. We considered a shock-tube filled with a two-density fluid separated by a contact discontinuity. The jump of the density across the interface is estimated for several values of 0 ≤ p ≤ 1. The SPH particles were arranged in an ordered grid according to the density value at both sides of the discontinuity. The contact between both fluids was not smoothed. We have considered two density ratios, ρ1/ρ2 = 2 and ρ1/ρ2 = 8, and calculated the magnitude ∑ bVbWab for different values of p. The results are depicted in Fig. C.1, which shows the maximum relative error in the kernel normalization | ∑ bVbWab−1 | as a function of p, the density contrast, and the particular procedure used to estimate Xa, either Xa = (ma/ρa)p (solid red lines) or Xa = (⟨ ma/ρa ⟩)p (green dashed lines), where ⟨ . ⟩ is the SPH average. The smoothing length was adjusted so that there were nb = 50 neighbors contributing to the summations.
Fig. C.1 Static shock-tube problem described in Appendix C: maximum relative error in the partition of unity as a function of the exponent p used to infer the particle volume. The red solid line is for X = (m/ρ)p and the green dashed line is for X = (⟨ m/ρ ⟩)p.
For not too large density ratios, raising the exponent p leads to a linear improvement in kernel normalization. The convergence rate is slower when Xa = ( ⟨ ma/ρa ⟩ )p, as expected. Things are different for large density ratios where the convergence rate follows a parabolic line with a local minimum around p ≃ 0.7 (uppermost red line in Fig. C.1). Above that value it seems that any increase of p makes the convergence worse. The reason is that, in static configurations with sharp contact discontinuities, the density under/over-shoots at the two sides of the discontinuity. Taking both a large density ratio and a high p value induces a catastrophic feedback between the overshooting and Xa. Such behavior is due to the explicit nature of the implementation of the volume elements in SPH and would probably be overcome using an implicit approach. A simple recipe to circumvent the problem is to take Xa = ( ⟨ ma/ρa ⟩ )p, which leads to a slower, albeit safer, convergence rate (uppermost green line in Fig. C.1).
Appendix D: Included physics and alternative formulations of the energy equation
Most astrophysical applications need to calculate gravity, and to that end SPHYNX incorporates an octal-tree structure (Hernquist & Katz 1989) with several levels of particle clustering. When the particle inter-distance rab is shorter than ha + hb, we apply a simple smoothing to the gravitational force (Eq. (D.1)). This softening usually gives satisfactory results, but it is not fully compatible with the Euler-Lagrange derivation of the SPH equations (Price & Monaghan 2007).
SPHYNX also incorporates a thermal conduction equation compatible with the IAD formulation. That equation was described and checked in Cabezón et al. (2012) and is reproduced here for completeness,

(D.2)

where d is the dimension of the space and the tilde symbol denotes the arithmetic average of the magnitude.
In high-density plasmas with finite temperature, which characterize compact objects such as white dwarfs and neutron stars, it is often preferable to compute the temperature directly instead of the internal energy. SPHYNX can switch the energy equation, Eq. (23), to the temperature equation,

(D.3)

where C_v is the specific heat. This formulation of the temperature equation also leads to almost perfect energy conservation. The internal energy can be obtained through the EOS when necessary. Equation (D.3) was recently used to calculate explosion models of Type Ia supernovae with the SPHYNX code (García-Senz et al. 2016).
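The idea behind switching variables is easiest to see in the calorically perfect limit u = C_v T with constant C_v, where the temperature update is simply a rescaled energy update. The general Eq. (D.3), valid for a tabulated EOS with C_v = C_v(ρ, T), contains additional terms and is not reproduced here; the following lines sketch only that limiting case.

```python
# Limiting case only (assumption): calorically perfect gas, u = Cv*T with constant Cv,
# so that dT/dt = (du/dt)/Cv.  The general temperature equation used for realistic EOS
# (Eq. (D.3)) is more involved.
def step_temperature(T, dudt, cv, dt):
    """Advance T from the rate of change of the specific internal energy."""
    return T + dt * dudt / cv

def internal_energy(T, cv):
    """Recover u from the assumed EOS u = Cv*T, only when it is needed."""
    return cv * T
```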
Also, the energy equation may be substituted without much effort by an entropy equation which ensures perfect entropy conservation in purely adiabatic flows (Springel & Hernquist 2002). An isentropic flow is characterized by

(D.4)

where A_a is a constant determined by the initial conditions of the flow. The entropy equation was incorporated as the default in the GADGET-2 code (Springel 2005), yielding excellent results for an ideal EOS. However, it has to be adapted to more complex and realistic EOS, for which the adiabatic index γ can be time-dependent and is not straightforward to determine.
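For an ideal gas with constant γ, the entropic function of Springel & Hernquist (2002) is A = P/ρ^γ: it stays constant in adiabatic flow, pressure and internal energy follow directly from A and ρ, and only irreversible (shock) heating raises A. A minimal sketch of that bookkeeping, with illustrative names that are not SPHYNX variables, could read:

```python
# Entropy formulation for an ideal gas with constant gamma (Springel & Hernquist 2002).
# As noted in the text, for a realistic EOS with a time-dependent gamma this simple
# bookkeeping no longer applies.
GAMMA = 5.0 / 3.0

def pressure_from_entropy(A, rho, gamma=GAMMA):
    """P = A * rho**gamma, with A the entropic function."""
    return A * rho**gamma

def energy_from_entropy(A, rho, gamma=GAMMA):
    """Specific internal energy u = A * rho**(gamma - 1) / (gamma - 1)."""
    return A * rho**(gamma - 1.0) / (gamma - 1.0)

def update_entropy(A, rho, dudt_visc, dt, gamma=GAMMA):
    """Only irreversible (viscous) heating changes A; adiabatic compression does not."""
    return A + dt * (gamma - 1.0) / rho**(gamma - 1.0) * dudt_visc
```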
All Tables
Values of the exponent n of the sinc kernels that provide the best match (i.e., minimum metric distance) to the profile of known interpolators, such as members of the B-spline (M_n) and Wendland (C_m) families.
All Figures
Fig. 1 Fourier transform of the spline kernels B4 and B6 (black lines), the Wendland kernels C4 and C6 (light blue lines), and the sinc kernels S5, S7, and S10. Only the positive part of the transform is shown.

Fig. 2 Impact of changing the sinc kernel index n and the number of neighbors nb on the pairing instability during the relaxation towards equilibrium of a perturbed homogeneous system. The X-axis shows the minimum interparticle distance normalized to its initial value; the Y-axis shows the corresponding percentage of particles at different times. We show the results for n = 3 (red), n = 5 (green), n = 7 (blue), and n = 10 (pink). Solid lines are for t = 0.01, dashed lines for t = 0.19, and big dots for t = 1.79. The light blue solid line for nb = 100 was calculated using the Wendland C3 interpolator and shows the results at t = 0.19.

Fig. 3 Same as Fig. 2 but using IAD0 instead of the standard gradient estimator. In this case, particle distributions peak at higher q/q0 ratios than in Fig. 2, showing more resistance to particle clumping.

Fig. 4 The unmixed isobaric two-fluid test at times t = 0, t = 0.4, t = 1, and t = 2 (columns from left to right) calculated with different volume elements. The initial model consists of two nested squares with a density contrast of four (the higher-density one being the inner yellow square). The first and second rows use particles with identical mass on an unevenly spaced grid, while the opposite applies to the last two rows. The first row depicts the evolution with the volume estimator

Fig. 5 Impact of X_a = (m_a/ρ_a)^p on both the volume normalization, ∑_b V_b W_ab = 1 (left), and the ⟨Δr⟩ = 0 condition (right), for different values of the exponent. The figure depicts the profile of these magnitudes along a 1D cut taken around y = 0.5 in the hydrostatic system shown in Fig. 4. We note how the jump at the contact discontinuity is largely reduced in both conditions when p ≃ 1.

Fig. 6 Results for the 2D Sedov test at t = 0.1. S1 corresponds to p = 0 (standard VE) and Δn = 0 (no equalization). S2 shows the effect of the VE with p = 0.7 in

Fig. 7 Results for the 2D Sedov test at t = 0.1 taking X_a = P^k, where P is the pressure and k = 0.05 (black dashed line), compared to models S1 (red) and S2 (green). We show the radial profiles of the density (left panel) and the equipartition condition (right panel). The solid black line is the analytic solution for the density profile. We note in the left panel a small precursor bump in density for the calculation with X_a = P^k, which is absent for models S1 and S2.

Fig. 8 Two-dimensional wall-shock test at time t = 0.5. Left: density profiles for the models described in Table 4: WHS1 (red), WHS2 (green), and WHSG (blue), in dots. The analytic result is the solid black line. The central panel depicts the particle distribution, colored by density, at t = 0.5 as obtained with GADGET-2, whereas the right panel is that of model WHS2. We note how WHS2 achieves lower particle disorder in the central region and smaller density oscillations at the shock position than GADGET-2.

Fig. 9 Two-dimensional wall-shock test. Left: volume normalization at t = 0.5 for models WHS1 (red) and WHS2 (blue). Right: same but for the condition |⟨Δr⟩| = 0 among mass points within the kernel range of 2h. WHS2 (with the new VE) fulfills both conditions better than WHS1 (with the standard VE).

Fig. 10 Density color-map depicting the evolution of the Kelvin-Helmholtz instability for models KH1 (p = 0, upper panels) and KH2 (p = 0.7, lower panels) of Table 5. Snapshots are shown at times t = 1.5 (left) and t = 2.5 (right).

Fig. 11 Amplitude mode growth of models KH1 (p = 0) and KH2 (p = 0.7) of Table 5. The solid red line is the reference solution (McNally et al. 2012) obtained with the PENCIL code (Brandenburg & Dobler 2002), while the solid green and blue lines were obtained with SPHYNX for the two values of the p parameter. The dashed lines correspond to calculations performed with the NDSPMHD code (Price 2012) with a resolution of 512² and kernels B4 (pink) and B6 (light blue). See McNally et al. (2012) for further details. We also show the L1 errors with respect to the reference values for each calculation.

Fig. 12 Kernel normalization of models KH1 (left) and KH2 (right) of Table 5 at t = 2.5. The dispersion of ∑_b V_b W_ab around unity is shown for two choices of the exponent in the volume estimator X_a = (m_a/ρ_a)^p: p = 0 (left) and p = 0.7 (right). Solid red lines mark the maximum and minimum values as a visual aid.

Fig. 13 Density color-map depicting the evolution of the Kelvin-Helmholtz instability for models KH3 (p = 0, first row) and KH4 (p = 0.7, second row) of Table 5, corresponding to a density jump of a factor of 8, at times t = 1.5 and 2.5. We note that, despite the much larger jump in density, the KH instability is still able to develop in both cases.

Fig. 14 Density color-map depicting the evolution of the Rayleigh-Taylor instability for models RT1 (p = 0, first row) and RT2 (p = 0.7, second row) of Table 6 at times t = 1.5, 2.5, 3.5, 5.0, 6.5, and 7.5, respectively. The estimator

Fig. 15 Same as Fig. 14 but for models RT3 (p = 0, first row) and RT4 (p = 0.7, second row) of Table 6 (with a very reduced gravity field of g = −0.1). Times are t = 5.0, 7.5, 10.0, 12.5, 15.0, and 17.5, respectively.

Fig. 16 Rayleigh-Taylor test in 2D for models RT3 (p = 0, red lines), RT4 (p = 0.7, green lines), and RT5 (p = 1, blue lines). The left panel shows the time evolution of the tips of the bubble (solid lines) and the spikes (dashed lines), from which it is clear that the inclusion of the new VE enhances the growth of the instability in the linear stage. Once the non-linear stage is reached, all models grow at similar rates. It is worth noting that simulations without IAD0 and with the standard VE (not shown) failed to develop the RT instability. The right panel shows the time evolution of the bubble velocity: all three models reach a plateau, which is closer to the expected terminal velocity (pink solid line) for the models using the new volume elements.

Fig. 17 Gresho-Chan vortex test in 2D. Radial profiles of the tangential velocity at time t = 1 for several kernel choices and different numbers of neighbors. The interpolators are the sincs with n = 5, 6.315, and 7, and the Wendland C6. The theoretical velocity profile is also shown (black dashed line).

Fig. 18 Gresho-Chan vortex test in 2D. Left: value of the magnitude L1 at time t = 1, calculated with the sinc kernels as a function of the kernel index n and the number of neighbors nb (with a constant total number of particles N = 256²). Right: convergence rate of the sinc and Wendland C6 kernels as a function of the number of neighbors.

Fig. 19 Gresho-Chan vortex test in 2D. Convergence rate of L1 as a function of the equivalent 1D number of particles, N_1d, at time t = 1, for different sinc kernels.

Fig. 20 Wind-cloud interaction: density color-map depicting the evolution of the collision and cloud fragmentation at three fiducial times, t/t_KH = 0.5, 2, and 3. Model WG refers to the calculation using GADGET-2. Models W0 and W1 were calculated with SPHYNX using the standard and the new VE, respectively.

Fig. 21 Wind-cloud interaction: evolution of the surviving fraction of the cloud for the three calculated models. Even though a small remnant of the cloud is still present at t/t_KH ≃ 4, calculation W1 (SPHYNX with the new VE, that is, p = 1) leads to the best results. The other two calculations (SPHYNX with the standard VE, i.e., p = 0, and GADGET-2 with the standard VE and without IAD0) are less efficient at diluting the cloud, as 40-60% of it is still present even at t/t_KH > 4.

Fig. 22 Wind-cloud interaction: impact of the VE choice on the tensile instability. Near the contact discontinuity, the standard VE, m/ρ, produces a segregation of the two fluids that prevents mixing (left). Choosing V_a = (⟨m/ρ⟩_a)^p / ∑_b (⟨m/ρ⟩_b)^p W_ab, where ⟨m/ρ⟩ is the SPH average of m/ρ and p = 1, suppresses the tensile instability, leading to mixing of the fluids and the subsequent bubble fragmentation (right).

Fig. 23 Color-map of the mass density in the triple-point shock test. Each row shows a snapshot at a different time, while each column corresponds to a different VE. The left column was calculated with X_a = (⟨m/ρ⟩_a)^p and p = 1, while the right one corresponds to X_a = 1 (i.e., the standard VE).

Fig. 24 Results for the 3D Sedov test for models S5, S6, S7, and S8 of Table 3 and for GADGET-2. We show the radial profiles of density (upper left), normalized pressure (upper right), and the fulfillment of the conditions |⟨Δr⟩| = 0 (bottom left) and ∑_b V_b W_ab = 1 (bottom right). Solid black lines are the analytic solutions for the density and normalized pressure profiles.

Fig. 25 Same as Fig. 24 but for models S9, S10, S11, and S12 of Table 3, with nb = 220.

Fig. 26 Radial profiles of the sinc kernel index, n, and of the density for model S12 of Table 3. We note how the scheme increases the kernel index to improve the interpolations in complicated regions of low or rapidly changing density.

Fig. 27 Upper panels: density (left) and radial velocity (right) profiles for the collapse of an initially isothermal sphere of gas at t = 0.8. The solid black line is the 1D-PPM solution. We show the SPHYNX calculations with p = 0 (red) and p = 1 (green) in Eq. (34). Calculations using GADGET-2 are in blue. Lower panels: profiles of |⟨Δr⟩| (left) and ∑_b V_b W_ab (right) for p = 0 (red) and p = 1 (green).

Fig. A.1 Value of S_n for indices n = 3, n = 6, and n = 12 (left), and of the Wendland kernels C2 and C6 (right) in 2D (thick solid lines). We show in dots the direct product

Fig. C.1 Static shock-tube problem described in Appendix C: maximum relative error in the partition of unity as a function of the exponent p used to infer the particle volume. The red solid line is for X = (m/ρ)^p and the green dashed line is for X = (⟨m/ρ⟩)^p.