A&A, Volume 606, October 2017
Article Number: A78
Number of page(s): 30
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201630208
Published online: 16 October 2017
SPHYNX: an accurate density-based SPH method for astrophysical applications
^{1} Departement Physik, Universität Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
email: ruben.cabezon@unibas.ch
^{2} Scientific Computing Core, sciCORE, Universität Basel, Klingelbergstrasse 61, 4056 Basel, Switzerland
^{3} Departament de Física, Universitat Politècnica de Catalunya, EEBE, Eduard Maristany 10–14 mòdul C2, 08019 Barcelona, Spain
email: domingo.garcia@upc.edu; joana.figueira@upc.edu
^{4} Institut d’Estudis Espacials de Catalunya, Gran Capità 2–4, 08034 Barcelona, Spain
Received: 7 December 2016
Accepted: 30 June 2017
Aims. Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovae, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim, we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique, as well as new ones presented here.
Methods. SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating gradients; the use of a flexible family of interpolators, called sinc kernels, which suppress the pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code.
Results. A novel computational hydrodynamics code oriented to astrophysical applications is described, discussed, and validated in the following pages. The resulting code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping that often prevents the growth of hydrodynamic instabilities in regular SPH codes.
Conclusions. On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions, the results were similar to, or in some cases better than, those obtained with other SPH schemes such as GADGET-2 and PSPH, or with the recent density-independent (DISPH) and conservative reproducing kernel (CRKSPH) formulations.
Key words: methods: numerical / hydrodynamics / instabilities
© ESO, 2017
1. Introduction
Many interesting problems in astrophysics involve the evolution of fluids and plasmas coupled with complex physics. For example, in core collapse supernovae, magnetohydrodynamics meets general relativity, nuclear processes, and radiation transport. Other scenarios, such as neutron star mergers, Type Ia supernovae, and planet or star formation, face similar challenges in terms of complexity. Besides that, these phenomena often have a strong dependence on dimensionality and must be studied in three dimensions. This requires accurate numerical tools, which translates into rather sophisticated hydrodynamic codes. Because of its adaptability to complex geometries and good conservation properties, the Smoothed Particle Hydrodynamics (SPH) method is a popular alternative to grid-based codes in the astrophysics community. SPH is a fully Lagrangian method, born forty years ago (Lucy 1977; Gingold & Monaghan 1977), that has since undergone sustained development (Monaghan 1992, 2005; Rosswog 2015a; Springel 2010b; Price 2012). Recent years have witnessed a wide range of improvements, especially aimed at reducing the numerical errors inherent to the technique. These errors are known as E_{0} errors (Read et al. 2010) and mainly appear owing to the conversion of the integrals, representing local-averaged magnitudes of the fluid, into finite summations. The simplest and most naive way to get rid of them would be to work closer to the continuum limit, which implies working with as large a number of particles and neighbors as possible (ideally, N → ∞ and N_{nb} → ∞; see Zhu et al. 2015). Unfortunately, this is not feasible in common applications of the technique because the total number of particles is limited by the available computing power (both speed and storage). Moreover, the number of neighbors of a given particle cannot be arbitrarily increased without suffering the pairing instability (Schuessler & Schmitt 1981).
This numerical instability acts as an attractive force at scales slightly shorter than the smoothing length h, provoking artificial particle clumping that effectively degrades the quality of the discretization and eventually leads to unrealistic results.
In order to reduce the E_{0} errors, another more practical possibility has been studied during recent years: finding interpolating functions that are less prone to particle clustering than the widely used M_{4} or cubic-spline kernel (Monaghan & Lattanzio 1985). Among the various candidates, the most used (pairing-resistant) kernels come either from an extension of the M_{n} family to higher-order polynomials (Schoenberg 1946) or from the Wendland functions (Wendland 1995). In particular, the Wendland family is especially suited to cope with the pairing instability (Dehnen & Aly 2012). Another possibility is the sinc family of kernels (Cabezón et al. 2008), which are functions of the type S(x) = C(n) (sin x/x)^{n} and add the capability of dynamically modifying their shape simply by changing the exponent n. That adaptability of the sinc kernels makes the SPH technique even more flexible and can be used, in particular, to prevent particle clustering, as shown in Sect. 2.1.
Historically, the growth of subsonic hydrodynamical instabilities has been problematic for SPH simulations, which damp them significantly. The Rayleigh-Taylor (RT) instability is a ubiquitous phenomenon that serves as a paradigmatic example. It appears wherever a cold, dense fluid lies on top of a hot, diluted one in the presence of gravity (or any inertial force, by virtue of the principle of equivalence). The entropy inversion leads to the rapid overturn of the fluid layers. In the real world, the overturn is triggered by small perturbations at the separation layer between the light and dense fluids. The RT instability is one of the most important agents driving the thermonuclear explosion of a white dwarf, which gives rise to Type Ia supernova (SNIa) explosions. Its correct numerical description is also crucial to understanding the structure of supernova remnants (SNR) and to modeling core collapse supernova (CCSN) explosions. Additionally, the RT instability is the source of other interesting phenomena, such as the Kelvin-Helmholtz (KH) instability or turbulence. The numerical simulation of the Rayleigh-Taylor instability has traditionally been a weak point of the SPH technique, especially for low-amplitude initial perturbations in the presence of a weak gravitational force. At present, for a similar level of resolution, the best SPH codes cannot yet compete with state-of-the-art grid-based methods. For example, finite-volume/finite-difference Godunov codes such as ATHENA and PLUTO, AMR codes such as FLASH, the Meshless Finite Mass (MFM) and Meshless Finite Volume (MFV) methods, and especially the moving-mesh methods based on Voronoi tessellations, as in the code AREPO, provide a good approach to the RT instability. In SPH, this problem is partially overcome by using a large number of neighbors and by adding an artificial heat diffusion term to the energy equation, as in the PSPH proposals by Saitoh & Makino (2013) and Hopkins (2013).
However, these problems still persist when either the size of the initial perturbation or the value of gravity is reduced (Valdarnini 2012), meaning that they are a symptom of another source of numerical error in SPH known as the tensile instability. This is an artificial surface tension that appears at contact discontinuities because of an insufficient smoothness of the pressure between both sides of the discontinuity (Monaghan 2000). As a consequence, the integration of the momentum equation gives incorrect results. An excess of that tension provokes the damping of fluid instabilities, especially those with short wavelengths. Several techniques have been proposed to treat this problem, such as averaging the pressure by means of the interpolating kernel itself (the PSPH scheme; Hopkins 2015), linking the volume element estimation to pressure (the density-independent scheme, DISPH; Hopkins 2013; Saitoh & Makino 2013, 2016), or adding an artificial diffusion of heat to the energy equation, which helps to balance the pressure across the discontinuity (Price & Monaghan 2007). These techniques have paved the road that led the SPH technique to a new standard within the last few years, and have helped to overcome this long-standing inconvenience. In particular, it has been proved fundamental to increase the accuracy of the gradient estimation across contact discontinuities by reducing its numerical noise. To achieve that, García-Senz et al. (2012) used an integral scheme to calculate spatial derivatives, which proved to be especially efficient at handling fluid instabilities. The validity of that approach was assessed in subsequent works by Cabezón et al. (2012) and Rosswog (2015a). In the latter case, the integral approach to the derivatives (IAD) was used to extend the SPH scheme to the special-relativistic regime (see also Rosswog 2015b). See also the recent work of Valdarnini (2016), where the efficiency of the IAD scheme in reducing the E_{0} errors is studied in detail.
Finally, a recent breakthrough in SPH was the emergence of the concept of generalized volume elements (Ritchie & Thomas 2001; Saitoh & Makino 2013; Hopkins 2013). In these works, it was shown that a clever choice of the volume element (VE) can reduce the tensile instability, leading to a better description of hydrodynamic instabilities. In this paper, we present a novel estimator of the VE that preserves the normalization of the kernel. We show that having a good partition of unity is also connected to the tensile-instability problem. The calculations of the growth of the Kelvin-Helmholtz and Rayleigh-Taylor instabilities using these new VE are encouraging.
In this work, we also introduce the hydrodynamics code SPHYNX, which gathers the latest advances in the SPH technique, including the new ones presented here. SPHYNX has already been used in production runs simulating Type Ia and core collapse supernovae and is publicly accessible^{1}.
The organization of this paper is as follows. In Sect. 2 we review the main properties of the sinc kernels as well as the integral approach to the derivatives, which are at the heart of SPHYNX. Section 3 is devoted to the choice of the optimal volume element and to the update of the smoothing length h and of the kernel index n. Section 4 describes the structure of the hydrodynamics code SPHYNX: the momentum and energy equations and the included physics. Sections 5 and 6 are devoted to describing and analyzing a variety of tests carried out in two and three dimensions, respectively. Finally, we present our main conclusions and prospects for the future in Sect. 7.
2. Interpolating kernels and gradients evaluation
2.1. The Sinc kernels
Interpolating kernels in SPH must fulfill several basic requirements in order to be considered suitable interpolators. Some of these features are automatically satisfied if the functional form of the kernel is spherically symmetric and approaches the Dirac-δ in the continuum limit. A functional representation of the Dirac-δ is (Couch 1997)

δ(x) = lim_{h→0} (1/2h) sin(πx/2h)/(πx/2h). (1)

Writing sinc(x) ≡ sin(x)/x, the function above becomes

δ(x) = lim_{h→0} (1/2h) sinc(πx/2h), (2)

where sinc is a widely known function used in signal analysis. Nevertheless, the expression given by Eq. (2) does not have compact support, limiting its practical application as an SPH interpolation kernel. Moreover, for x/h > 2, the sinc function oscillates and eventually produces negative values. An obvious solution to this problem is to restrict the domain to |x/h| ≤ 2, but in that case the first derivative of this function does not go to zero at the limits of the interval. To fix that, we define the kernel set as:

W_n(q,h) = (B_n/h^d) S_n(πq/2) for 0 ≤ q ≤ 2, and 0 otherwise, (3)

where S_{n}(·) = sinc^{n}(·), q = |r|/h, and d is the spatial dimension. B_{n} is a normalization constant, whereas n is a real number with n ≥ 2 to guarantee the nullity of the first derivative at q = 2. We generically refer to the set as sinc kernels; see Cabezón et al. (2008; where they were called harmonic kernels) and García-Senz et al. (2014). These interpolators fulfill the usual desirable characteristics of SPH interpolators, namely, tending to a δ-function in the continuum limit, having compact support, and being spherically symmetric. But they have a number of interesting additional features: a) they are connected by definition to the δ-function; b) they add flexibility to the SPH calculations, as the kernel profile can be changed by varying the kernel index n; c) working with a high index (n ≥ 5) not only ensures that high-order derivatives are well behaved, but also overcomes particle clustering (see Sect. 2.1); and d) they tend to fulfill separability when the index n increases (see Appendix A).
Unfortunately, there is no general analytic expression for the normalization constant B_{n}, but this small problem can be circumvented by fitting values of B_{n} calculated numerically for a series of n. A fitting formula for B_{n}, valid in the range 3 ≤ n ≤ 12, was provided in García-Senz et al. (2014), and we reproduce it here for the sake of completeness (Eq. (4)); the values of the coefficients b_{0}, b_{1}, b_{2}, and b_{3} as a function of the dimensionality are provided in Table 1.
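Because B_n has no closed form, it can also be obtained directly by imposing kernel normalization. The sketch below is our own illustration, not SPHYNX source code; it assumes the definition of Eq. (3), with sinc(x) = sin(x)/x and support q ∈ [0, 2), and computes B_n by radial quadrature:

```python
import numpy as np

def sinc_kernel_shape(q, n):
    """Un-normalized sinc kernel sinc(pi*q/2)**n with compact support q in [0, 2).

    np.sinc(x) = sin(pi*x)/(pi*x), hence np.sinc(q/2) = sin(pi*q/2)/(pi*q/2).
    """
    q = np.asarray(q, dtype=float)
    core = np.where(q < 2.0, np.sinc(q / 2.0), 0.0)  # positive on the support
    return core ** n

def normalization_constant(n, d=3, nq=20000):
    """B_n such that W = (B_n/h^d) sinc^n(pi*q/2) integrates to one in d dimensions."""
    q = np.linspace(0.0, 2.0, nq)
    w = sinc_kernel_shape(q, n)
    surface = {1: 2.0, 2: 2.0 * np.pi, 3: 4.0 * np.pi}[d]  # unit-sphere surface
    integrand = w * q ** (d - 1)
    # trapezoidal rule for the radial integral int_0^2 w(q) q^(d-1) dq
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q))
    return 1.0 / (surface * integral)
```

With h = 1 the normalized kernel is B_n sinc^n(πq/2); raising n narrows the central bump and increases B_n, which is the behavior exploited later for pairing resistance.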
In the limit of large n, the sinc kernels also display an interesting feature: separability, which has not been sufficiently emphasized in the extant literature concerning SPH. Standard kernels used in SPH are spherically symmetric functions which naturally lead to a second-order accurate scheme. Interestingly, the majority of these kernels do not fulfill the separability identity (Eq. (5)), whereby a multidimensional kernel factorizes into a product of one-dimensional kernels along each coordinate. This property guarantees the consistency of simulations involving planar symmetry, which should render identical results when calculated in 1D, 2D, or 3D, provided the resolution and the interpolating kernel are the same in all three cases. Exploring the applications of this property in detail is beyond the scope of this paper; nevertheless, we add a short discussion in Appendix A.
Additionally, some particular values of the kernel index n can mimic the profiles of the most notable kernels used in SPH. In Table 2 we show the correspondence between the sinc kernels and both the M_{n} and C_{m} families of interpolators. The value of n shown in Table 2 was obtained by minimizing a metric distance between the two interpolators, W_{1} and W_{2}, to be compared^{2}. It is remarkable that n ≃ 5 is able to approach both the quintic M_{6} B-spline and the C_{4} Wendland kernel. Thus, the choice n = 5 seems a suitable default value when the number of neighbors is moderate, n_{b} ≃ 60–120 in 3D (see, e.g., Figs. 4 and 5 of Rosswog 2015a). For n_{b} ≥ 120 it may be advisable to raise the index of the sinc kernel to avoid the pairing instability. Interestingly, the choice n = 6.315 provides a profile very similar to that of the Wendland C_{6} kernel. Nevertheless, similarity in real space is not the only characteristic to take into account. As proved by Dehnen & Aly (2012), having a positive Fourier transform over a large range of modes is of utmost importance when dealing with the pairing instability (see Sect. 2.1).
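A kernel correspondence of this kind can be reproduced, up to the precise metric used, by a brute-force search. The snippet below is our illustration: it assumes a plain L2 distance between 1D profiles on [0, 2] (not necessarily the paper's exact metric) and matches a sinc kernel against the cubic-spline M_4:

```python
import numpy as np

def m4_1d(q):
    """Cubic-spline (M4) kernel in 1D, normalization sigma = 2/3, support q < 2."""
    q = np.abs(np.asarray(q, dtype=float))
    f = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / 3.0) * f

def sinc_1d(q, n, nq=4001):
    """sinc kernel in 1D, normalized numerically over its support [-2, 2]."""
    def shape(x):
        base = np.where(x < 2.0, np.sinc(x / 2.0), 0.0)
        return base ** n
    xg = np.linspace(0.0, 2.0, nq)
    wg = shape(xg)
    norm = 2.0 * np.sum(0.5 * (wg[1:] + wg[:-1]) * np.diff(xg))  # even function
    return shape(np.abs(np.asarray(q, dtype=float))) / norm

def best_matching_index(target, n_grid, nq=2001):
    """Grid search for the sinc exponent minimizing the L2 distance to `target`."""
    q = np.linspace(0.0, 2.0, nq)
    dq = q[1] - q[0]
    ref = target(q)
    dists = np.array([np.sqrt(np.sum((sinc_1d(q, n) - ref) ** 2) * dq)
                      for n in n_grid])
    i = int(np.argmin(dists))
    return float(n_grid[i]), float(dists[i])
```

For the cubic spline the search lands near n ≈ 3, consistent with the harmonic-kernel correspondence mentioned above; the exact value depends on the metric chosen.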
It is also worth noting that the sinc family forms a continuous set, making it possible to locally change the index n during the runtime of the simulation, so that one can, for example, raise the value of n in the presence of sharp density gradients (see Sect. 3.2).
Finally, the evaluation of a sinc function is indeed computationally more costly than that of the regular spline kernels. Nevertheless, it is very easy to circumvent this problem by interpolating the value of the sinc function from a relatively small precalculated table. In practical applications, the choice of the kernel has a negligible impact on the computational burden, being completely masked by efficient cache usage and by other sections of the code, such as the neighbor search or the gravity calculation.
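The table-lookup trick can be sketched as follows (our illustration; the table size and the use of linear interpolation are arbitrary choices, not SPHYNX internals):

```python
import numpy as np

N_TABLE = 2000
Q_TABLE = np.linspace(0.0, 2.0, N_TABLE)      # kernel support
SINC_TABLE = np.sinc(Q_TABLE / 2.0)           # sin(pi q/2)/(pi q/2), built once

def sinc_pow_lookup(q, n):
    """Approximate sinc(pi q/2)**n by linear interpolation in the precomputed table.

    Values outside the support (q >= 2) are filled with zero.
    """
    base = np.interp(q, Q_TABLE, SINC_TABLE, right=0.0)
    return base ** n
```

With 2000 entries the interpolation error of the base function is of order 10^-7, far below typical SPH discretization errors, while avoiding repeated trigonometric evaluations in the innermost loops.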
Although SPHYNX uses the sinc kernels by default, it also incorporates the Wendland C_{2}, C_{4}, and C_{6} kernels, which can easily be selected by the user whenever necessary. On another note, the impact of the C_{m} kernels has recently been studied by other authors (Dehnen & Aly 2012; Rosswog 2015a), so we focus here almost exclusively on the sinc set. Nevertheless, a comparison between the performance of the C_{6} and sinc interpolators is provided in Sect. 5.5, where the evolution of the Gresho-Chan vortex is simulated and discussed.
Table 2. Values of the exponent n of the sinc kernels that provide the best match (i.e., minimum metric distance) to the profiles of known interpolators, such as members of the B-spline (M_{n}) and Wendland (C_{m}) families.
Convergence to the continuum limit is only achieved when h → 0. According to Zhu et al. (2015), it can only be reached using both a large number of particles N and a large number of neighbors n_{b}. The value of N chiefly depends on the available computer resources and on algorithmic details. However, working with a large number of neighbors is not only expensive, but may be totally impractical owing to the tendency of the particles to gather in clusters when n_{b} is high, a phenomenon known as pairing instability. A leap forward in alleviating this problem was the proposal by Dehnen & Aly (2012) of using Wendland functions (Wendland 1995) as interpolators, which are much less sensitive to this instability than the low-order members of the M_{n} B-spline family. Another possible option to avoid particle clustering is to choose a sinc kernel with a large enough exponent (García-Senz et al. 2014).
Fig. 1
Fourier transforms of the spline kernels B_{4} and B_{6} (black lines), the Wendland kernels C_{4} and C_{6} (light blue lines), and the sinc kernels S_{5}, S_{7}, and S_{10}. Only the positive part of the transform is shown.
As mentioned above, having a Fourier transform with positive modes is necessary for a kernel to be stable. In Fig. 1 we show the Fourier transforms of several B-spline, Wendland, and sinc kernels. As expected, the Wendland kernels show a positive Fourier transform up to very large modes. The sinc family, on the other hand, systematically becomes negative at high modes, but the negative regions shrink very quickly as the exponent n increases. Therefore, particle clustering can, in practice, be totally suppressed simply by raising n. It is also worth noting that, in order to improve the performance of the Wendland kernels, it is advisable to considerably increase the number of neighbors, up to n_{b} ≃ 400 in 3D (Valdarnini 2016), which substantially increases the computational burden of actual simulations^{3}.
We tested the impact of the kernel and the number of neighbors on the emergence of pairing instability in a dynamical simulation. Results can be found in Sect. 5.1.
2.2. The integral approach to derivatives
In SPH, gradients are usually calculated simply by applying the nabla operator to the kernel function (Gingold & Monaghan 1977). Nevertheless, the same procedure is not applied to higher-order derivatives, such as the Laplace operator, needed, for example, to evaluate the heat transport contribution to the energy equation. For these cases, an integral approach is preferred (Brookshaw 1985) because it is less noisy and provides excellent results. Working in that direction, an integral approach to calculating first derivatives, called IAD, was introduced by García-Senz et al. (2012). The IAD approach provides a more accurate estimation of the first derivative, thus improving the overall quality of the simulations. This was demonstrated through the study of several test cases in 1D, 2D, and 3D by Cabezón et al. (2012), Rosswog (2015a), and Valdarnini (2016), hence we refer the reader to these papers for the technical details of the method. In García-Senz et al. (2012) it was shown that a restriction of the full integral approach (called IAD_{0}) leads to a conservative formulation of the SPH equations in the same way as the standard scheme^{4} does. This, however, is not without cost: some accuracy is lost in the gradient calculation, which is exact for linear functions when the full IAD is used. Rosswog (2015a) evaluated this accuracy loss in several tests; the outcome was that, even when IAD_{0} was used, the loss of accuracy with respect to IAD was much smaller than the gain with respect to standard gradient estimators. Furthermore, Valdarnini (2016) presents a series of tests that show the effectiveness of the IAD_{0} method at removing sampling errors in subsonic flows.
Additionally, the SPH equations linked to the IAD_{0} scheme are formally similar to those of the standard method, provided that the kernel gradient is computed as follows (for the sake of clarity, from now on we drop the dependence of the kernel on the particle interdistance, r_{b}−r_{a}):

∇_a W_{ab} → A_{ab}, (6)

where the components of A_{ab} are

A_{i,ab} = ∑_{j=1}^{d} c_{ij,a} (x_{j,b} − x_{j,a}) W_{ab}, (7)

c_{ij,a} being the coefficients of the inverse of the matrix τ for particle a, whose elements are

τ_{ij,a} = ∑_b V_b (x_{i,b} − x_{i,a})(x_{j,b} − x_{j,a}) W_{ab}. (8)

In García-Senz et al. (2012), it is shown that for the Gaussian kernel the IAD_{0} scheme is equivalent to the gradient estimation with the analytical derivative of the kernel function, this being a particular case of the integral approach. It was also proven that IAD_{0}, in fact, exactly reproduces the gradient of linear functions provided that

⟨Δr⟩_a = ∑_b V_b (r_b − r_a) W_{ab} ≃ 0. (9)

Equation (9) is, in general, only approximately fulfilled, and this is the main difference between the IAD and IAD_{0} accuracies. The best gradient estimation is therefore attained by IAD_{0} only when ⟨Δr⟩ ≃ 0. In this respect, the initial model should be built so that Eq. (9) is approached as closely as possible. As we see in the following section, it turns out that obtaining a small ⟨Δr⟩ is closely related to the choice of an adequate volume element. An analytical proof that an improvement in the partition of unity translates into a better estimation of gradients within the IAD_{0} paradigm is provided in Appendix B.
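As a concrete illustration of this machinery, the sketch below (our own minimal version; it uses an arbitrary truncated-Gaussian test kernel rather than a sinc kernel) builds the matrix τ, inverts it, and recovers the gradient of a linear field:

```python
import numpy as np

def iad0_gradient(pos, f, a, h, V):
    """Integral-approach estimate of grad f at particle a.

    pos : (N, d) positions, f : (N,) field values, V : (N,) volume elements.
    Any positive kernel works here, since the matrix c_{ij,a} compensates
    for the kernel shape.
    """
    dr = pos - pos[a]                               # x_{j,b} - x_{j,a}
    r = np.linalg.norm(dr, axis=1)
    W = np.where(r < 2.0 * h, np.exp(-(r / h) ** 2), 0.0)
    W[a] = 0.0                                      # self term carries dr = 0 anyway
    tau = np.einsum('b,bi,bj,b->ij', V, dr, dr, W)  # matrix tau_{ij,a}
    c = np.linalg.inv(tau)                          # c_{ij,a} = (tau^-1)_{ij}
    A = dr @ c.T                                    # sum_j c_ij (x_{j,b} - x_{j,a})
    # gradient: sum_b (f_b - f_a) A_{i,ab} W_ab V_b
    return np.einsum('b,b,bi->i', (f - f[a]) * V, W, A)
```

In this difference form (f_b − f_a) the estimate reproduces linear fields to machine precision for any particle distribution with an invertible τ; in the conservative SPH equations the f_a term is dropped, which is precisely where the condition of Eq. (9) enters.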
A collateral, albeit positive, feature of IAD_{0} is that it hinders the emergence of the pairing instability. This is shown in Sect. 5.1.
3. Preconditioning the calculation
3.1. The choice of the volume element
A recent breakthrough in SPH has been the development of a general method to assign the volume element of a particle (Saitoh & Makino 2013; Hopkins 2013, 2015; Rosswog 2015a). In SPH, the volume element of particle a has traditionally been V_{a} = m_{a}/ρ_{a} but, as noted in the seminal proposal by Ritchie & Thomas (2001), other options could be more convenient for handling specific problems. SPHYNX makes use of the scheme developed by Hopkins (2013), where an estimator X_{a} is introduced so that the particle volume is

V_a = X_a / ∑_b X_b W_{ab}. (10)

The density of the particle is then calculated as ρ_{a} = m_{a}/V_{a}. Current choices for the estimator are X_{a} = 1, m_{a}, and P_{a}^{k} (k ≤ 1), where P_{a} is the pressure. Taking X_{a} = m_{a} leads to the common choice V_{a} = m_{a}/ρ_{a}. Actually, the same volume element is obtained with X_{a} = 1 when the mass of all particles is the same. On the other hand, the choice X_{a} = P_{a}^{k} helps to suppress the surface tension that appears at contact discontinuities.
Here we want to discuss another option for X_{a}, not considered in the aforementioned papers, which could also be of value for SPH users. We choose

X_a = (m_a/ρ_a)^p, (11)

and put it into Eq. (10):

V_a = (m_a/ρ_a)^p / ∑_b (m_b/ρ_b)^p W_{ab}, (12)

which for p = 1 becomes

V_a = (m_a/ρ_a) / ∑_b (m_b/ρ_b) W_{ab}, (13)

where the summation in the denominator is simply the kernel normalization condition for the standard volume choice m_{a}/ρ_{a}. Therefore, the volume element V_{a} given by Eq. (13) comes from renormalizing m_{a}/ρ_{a}. We note that if ∑_b (m_b/ρ_b) W_{ab} = 1, then Eq. (13) leads to the identity ρ_{a} = ρ_{a}, as expected. Furthermore, one can take Eq. (12) as a recursive equation which, jointly with ρ_{a} = m_{a}/V_{a}, allows one to find the optimal ρ_{a}, leading to an almost perfect kernel normalization after several iterations. That is, starting from an initial guess of the density, for example ρ_{a} = ∑_{b} m_{b} W_{ab}, the value of V_{a} is computed in an explicit way using Eq. (12). This gives a new density ρ_{a} = m_{a}/V_{a}, to be used in the next integration step, and so on. The consistency and robustness of this procedure have been checked with the tests described in Sects. 5 and 6, and in Appendix C.
The impact of changing the volume element is highlighted in the hydrostatic square test described by Saitoh & Makino (2013), which we reproduce in Sect. 5.2.
3.2. Choosing smoothing length and kernel exponent: equalization
The use of the sinc kernels allows us to work with a continuous adaptive index n(r,t). Interestingly, the smoothing length h(r,t) and kernel index n(r,t) can be calculated jointly with the density estimation. Because of the large number of variables involved, ρ_{a}, h_{a}, n_{a}, V_{a}, and Ω_{a}, this point deserves some discussion.
Firstly, the volume estimator X_{a} is updated only when the global iteration has finished. Therefore, X_{a} is handled explicitly, so that coupling with other variables during the current iteration is avoided. Secondly, self-consistent values of h_{a}, n_{a}, ρ_{a}, and Ω_{a} are calculated simultaneously with a Newton-Raphson iterative scheme that we named equalization (because it equalizes the resolution at the post-shock tail of shock waves), described in García-Senz et al. (2014). We summarize it here for the sake of completeness:

1. Choose a trial value of h_{a}, as well as the baseline kernel index n = n_{0}, at the beginning of the integration step.
2. Calculate the density of each particle and its logarithmic average over the neighbors, ⟨ρ_{a}⟩.
3. Evaluate λ_{a}, the ratio between ρ_{a} and ⟨ρ_{a}⟩, which is taken as a local indicator of linearity and is quantified with Eq. (14).
4. Use λ_{a} to assign a new kernel index n_{a} according to Eq. (15), where Δn is the maximum allowed jump from n_{0}, λ_{c} ≃ 1 is a scaling parameter, and f(ξ) is given by Eq. (16).
5. Solve Eq. (15) jointly with the mass-conservation relation (Eq. (17)), whose constant is set at the beginning of the simulation jointly with n_{0}, Δn, and λ_{c}.
After this process, we obtain the values of h_{a}, n_{a}, ρ_{a}, and Ω_{a}. Obviously, setting Δn = 0 turns the equalization off and the evolution is computed with n_{a} = n_{0}. Typical values for the simulations presented in the following sections are n_{0} = 5, 0 ≤ Δn ≤ 5, and λ_{c} = 0.5.
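The index-assignment step can be sketched as follows. This is a heavily simplified illustration: the ramp function f below is an assumed smooth step standing in for Eq. (16), whose exact form is given in García-Senz et al. (2014), and the deviation measure is likewise our own placeholder.

```python
def equalized_index(rho, rho_log_avg, n0=5.0, dn=2.0, lam_c=0.5):
    """Assign a kernel index n_a from a local linearity indicator.

    rho_log_avg is the logarithmic average of the density over the neighbors.
    NOTE: both the deviation measure and the ramp f are assumed placeholders
    for Eqs. (14)-(16) of the text, chosen only to reproduce the qualitative
    behavior: n stays at n0 in smooth flow and rises toward n0 + dn in sharp
    density gradients.
    """
    lam = abs(rho / rho_log_avg - 1.0) / lam_c  # departure from local linearity
    f = lam / (1.0 + lam)                       # smooth ramp, bounded in [0, 1)
    return n0 + dn * f
```

The bounded ramp guarantees n_{0} ≤ n_{a} < n_{0} + Δn, so the equalization can only sharpen the kernel locally and never drops below the baseline index.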
4. The hydrodynamics code SPHYNX
SPHYNX gathers all the advances in the SPH technique that were presented in Sects. 2 and 3 and represents our flagship code for astrophysical applications. In the following, we present the most relevant details of its implementation and itemize the mathematical expressions of the SPH equations implemented in the Cartesian version of the code. The precise form of the equations, as well as the notation, follows the discussion in the preceding sections. They are similar to those presented in Rosswog (2015b).
4.1. Workflow and formulation
First of all, we specify the initial number of neighbors n_{b} and the kernel index variables: n_{0} (baseline value), Δn (maximum allowed index jump), and λ_{c} (scaling parameter). This is followed by the choice of the volume estimator X_{a}. Then, the preconditioning stage starts with the self-consistent calculation of the volume element V_{a}, smoothing length h_{a}, kernel index n_{a}, density ρ_{a}, and grad-h/grad-n term Ω_{a} (see Sect. 3.2).

• Volume element:
V_a = X_a / ∑_b X_b W_{ab}(h_a), (18)
with the estimator X_{a} given by Eq. (19). For the volume elements we use X_{a} = (m_{a}/ρ_{a})^{p}. From the experiments presented below, we saw that taking p ≃ 1 leads to the best results in most cases, but it has the disadvantage of overshooting density interpolations at contact discontinuities when p > 0.7. We overcome this problem by using an SPH-averaged value of the VE estimator, making it robust and allowing us to safely raise the exponent to p = 1 (see Sect. 5.2 for more details).
• Density equation:
ρ_a = m_a / V_a, (20)
which we calculate jointly with Ω_{a}, including both the grad-h and grad-n contributions (Eq. (21); García-Senz et al. 2014).
Once the preconditioning is completed, the hydrodynamic equations are evaluated.
• Momentum equation (Eq. (22)).
• Energy equation (Eq. (23)).
4.2. The artificial viscosity
Regarding the included physics, SPHYNX incorporates by default an artificial viscosity (AV) algorithm to handle shocks, as well as routines for the calculation of the equation of state (EOS) and the gravitational force. Heat transport by conductive and diffusive means is also included as a basic code unit. More specific routines dealing with nuclear or chemical reactions, ionization, or more complex transport schemes can be added to the code in a modular way (see, e.g., the application of SPHYNX to simulating core collapse supernovae, including a spectral neutrino treatment, in Perego et al. 2016). In the following, we explain the AV algorithm in some detail.
Fig. 2
Impact of changing the sinc kernel index n and the number of neighbors n_{b} on the pairing instability during the relaxation towards equilibrium of a perturbed homogeneous system. The X-axis shows the minimum interparticle distance normalized to its initial value; the Y-axis shows the corresponding percentage of particles at different times. We show the results for n = 3 (red), n = 5 (green), n = 7 (blue), and n = 10 (pink). Solid lines are for t = 0.01, dashed lines for t = 0.19, and big dots for t = 1.79. The light blue solid line for n_{b} = 100 was calculated using the Wendland C_{3} interpolator and shows the results at t = 0.19.
First of all, we take the viscosity Π_{ab} as in Monaghan (1997):

Π_ab = −(α/2) v^{sig}_{ab} w_{ab} / ρ̄_{ab}, (25)

where v^{sig}_{ab} is an estimate of the signal velocity between particles a and b, and w_{ab} = v_{ab}·r_{ab}/|r_{ab}|. It is common to take ρ̄_{ab} = (ρ_{a} + ρ_{b})/2, but in that case the volume element is implicitly assumed to be m_{a}/ρ̄_{ab}. Thus, the VE of particle a is not an unequivocally defined magnitude, because it also depends on the density of the neighbor particle b. A more compatible option between AV and VE, which can be generalized to other VE, leads to the viscous acceleration of Eq. (26), with the symmetrized kernel gradient of Eq. (27), f_{a} and f_{b} being the Balsara limiters (Balsara 1995):

f_a = |∇·v|_a / (|∇·v|_a + |∇×v|_a + 10^{-4} c_a/h_a). (28)

Expression (26) can be written in the form of Eq. (29), with V_{a} = m_{a}/ρ_{a} and V_{b} = m_{b}/ρ_{b}. Unlike Π_{ab}, this magnitude is not divided by the density, and the resulting viscous force is symmetric with respect to any pair of particles a, b. We note that the volume elements are now unequivocally defined when the mass of the particles is the same. The Balsara limiters work more efficiently if they are averaged as in Eq. (30), because this gives a lower AV than the arithmetic average in regions with strong shear flows. By default, SPHYNX sets a floor value for these limiters (Eq. (31)), which retains a residual viscosity to damp the smallest numerical fluctuations. Equation (31) works remarkably well in regions where strong shocks and instabilities cohabit, such as in the Triple Point test described below. In the case of strong shocks with no shear, Eq. (30) could be replaced by the standard arithmetic mean if necessary.
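The limiter logic above can be sketched as follows. This is our illustration: the 10^-4 c/h term is the customary noise-prevention floor of the Balsara switch, and the harmonic mean is used here as a stand-in for the averaging of Eq. (30), chosen because it is never larger than the arithmetic mean:

```python
def balsara_limiter(div_v, curl_v, c, h, eps=1e-4):
    """Balsara (1995) shear switch: close to 1 in pure compression, close to 0
    in pure shear; eps*c/h prevents a zero denominator in quiescent flow."""
    return abs(div_v) / (abs(div_v) + abs(curl_v) + eps * c / h)

def averaged_limiter(fa, fb):
    """Harmonic-type average of the pair limiters (a stand-in for Eq. (30));
    it never exceeds the arithmetic mean, hence a lower AV in shear regions."""
    if fa + fb == 0.0:
        return 0.0
    return 2.0 * fa * fb / (fa + fb)
```

In a pure shock (large |∇·v|, negligible |∇×v|) the limiter stays near unity and the full AV acts; in a shear layer it drops toward zero, and the pairwise averaging further reduces the AV when only one of the two particles sits in the shear.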
Following Springel (2010b), we used a constant α = 4/3, so that Π_{ab} remains close to the classical SPH artificial viscosity introduced by Monaghan & Gingold (1983). The contribution of the AV to the energy equation (Eq. (23)) is then given by Eq. (32). Additional information on the physics included in SPHYNX, such as gravity, heat transport, or alternative energy equations, is deferred to Appendix D.
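The pairwise viscosity described above can be sketched as follows. This is an illustrative reconstruction, not SPHYNX source code: the harmonic averaging of the limiters follows Eq. (30), while the residual floor value (0.05) is an assumption of the sketch.

```python
import numpy as np

def av_term(c_a, c_b, v_ab, r_ab, rho_a, rho_b, f_a, f_b,
            alpha=4.0/3.0, floor=0.05):
    """Signal-velocity artificial viscosity in the spirit of Eq. (25),
    with a harmonic-mean Balsara limiter and an assumed residual floor."""
    w_ab = np.dot(v_ab, r_ab) / np.linalg.norm(r_ab)   # projected relative velocity
    if w_ab >= 0.0:
        return 0.0                                     # receding pair: no viscosity
    v_sig = c_a + c_b - 3.0 * w_ab                     # signal-velocity estimate
    f_ab = max(2.0 * f_a * f_b / (f_a + f_b + 1e-30), floor)
    rho_mean = 0.5 * (rho_a + rho_b)
    return -0.5 * alpha * v_sig * w_ab * f_ab / rho_mean

# Head-on approach with unit sound speeds, unit densities, and open limiters:
pi_ab = av_term(1.0, 1.0, np.array([-1.0, 0.0]), np.array([1.0, 0.0]),
                1.0, 1.0, 1.0, 1.0)
```

For this head-on pair, w_{ab} = −1, v^{sig} = 5, and the term evaluates to (α/2)·5 = 10/3; for receding pairs it vanishes, as it should.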
Fig. 3
Same as Fig. 2 but using IAD_{0} instead of the standard gradient estimator. In this case, particle distributions peak at higher q/q_{0} ratios compared to those in Fig. 2, showing more resistance to particle clumping. 
5. Twodimensional tests
In this section we present the outcome of applying SPHYNX to several two-dimensional (2D) tests that have traditionally been problematic for the SPH technique. We show that the sinc kernels help against the appearance of the pairing instability, that the new VE provide a better treatment of discontinuities than the often-used m/ρ value, and that, when applied jointly with IAD_{0}, there is an overall improvement in the treatment of shocks in the Sedov and wall-shock tests, and also in the development of subsonic instabilities; in particular, the challenging cases of Kelvin-Helmholtz with a density jump of 8 and Rayleigh-Taylor with a very weak gravitational field (g = −0.1). We also found convergence rates in the Gresho-Chan test closer to those of Eulerian codes. Finally, we show that our implementation suppresses the tensile instability, enabling mixing in tests such as the wind-cloud interaction and the triple-point shock test.
Our results below suggest that improving volume conservation produces better results than using volume elements that do not ensure volume equipartition. At first glance, it also renders equalization useless, because the advantages of using variable adaptive kernel indices are apparently obscured by the new VE. Nevertheless, equalization and volume elements work on different bases. As demonstrated in García-Senz et al. (2014), the use of a variable kernel index improves the interpolation of nonlinear functions, even when they are estimated using integrals instead of finite summations. It is therefore expected that equalization will be useful when both the number of particles, N, and of neighbors, n_{b}, are high enough that the error in the interpolations is dominated by nonlinear effects rather than by the evaluation of integrals as finite summations (see also Sect. 6.2). For a moderate number of neighbors, the use of the new VE given by Eq. (12) makes equalization unnecessary. Thus, unless explicitly stated, the hydrodynamic tests presented in this work have been carried out taking Δn = 0.
5.1. Pairing instability
Here we show a numerical experiment, similar to that described in Zhu et al. (2015), used to study the relationship between the kernel index and the pairing instability. We set a sample of particles in a regular 2D lattice, leading to a homogeneous density distribution. Then we seed a small random fluctuation of the internal energy, taking the system out of mechanical equilibrium. Afterwards, we add a frictional dissipative force proportional to the velocity in the momentum equation, so that the system returns to equilibrium after a few sound-crossing times. The final equilibrium distribution is sensitive to the number of neighbors and to the exponent of the sinc function. This is summarized in Fig. 2, which depicts the distribution of the minimum interparticle distance (q), normalized to its initial value (q_{0}), as a function of the pair (n,n_{b}) at several elapsed times. As we can see, a careful choice of the exponent n keeps the distribution of normalized particle distances closer to 1, hindering particle clustering and thus avoiding the pairing instability even at large n_{b}. For example, n = 3 keeps q/q_{0} close to 1 when the number of neighbors is small (top-left panel of Fig. 2), even at large times, while it completely fails when n_{b} increases (top-right panel). Nevertheless, increasing the exponent n of the sinc kernel suppresses the pairing instability again. This numerical experiment points to n = 3, 5, 7, and n ≥ 10 as indicative values to handle n_{b} ≃ 15, 25, 50, and 100 neighbors, respectively (in 2D). For equivalent numbers in 3D, for example n_{b} ≃ 60–120, currently used in SPH calculations, a sinc kernel with n ≃ 4–6 will be sufficient to suppress clustering in many applications (see also Fig. 3 in García-Senz et al. 2014). We provide a comparison between S_{n} and the Wendland C_{3} in the bottom-right panel of Fig. 2. As we can see, the C_{3} kernel works better than S_{10}, but the difference is small.
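For reference, the sinc kernel family can be written down and normalized numerically. The following is an illustrative 2D sketch with h = 1 and a simple quadrature for the normalization constant, not the SPHYNX implementation:

```python
import numpy as np

def sinc_kernel_2d(q, n):
    """Sinc kernel S_n(q) proportional to [sin(pi*q/2)/(pi*q/2)]^n for q = r/h < 2,
    normalized numerically so that its 2D integral equals 1 (h = 1)."""
    q = np.atleast_1d(np.asarray(q, dtype=float))
    x = np.where(q > 0.0, 0.5 * np.pi * q, 1e-12)      # avoid 0/0 at the origin
    shape = np.where(q < 2.0, (np.sin(x) / x) ** n, 0.0)
    # normalization constant B_n from a simple quadrature over the disk q < 2
    qs = np.linspace(1e-6, 2.0, 4000)
    xs = 0.5 * np.pi * qs
    dq = qs[1] - qs[0]
    B_n = 1.0 / np.sum((np.sin(xs) / xs) ** n * 2.0 * np.pi * qs * dq)
    return B_n * shape

w3 = sinc_kernel_2d(0.0, 3)[0]     # central value for n = 3
w10 = sinc_kernel_2d(0.0, 10)[0]   # higher n: more centrally peaked kernel
```

After normalization, the central value grows with n, which illustrates why larger indices keep particles from clustering at large n_{b}.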
Fig. 4
The unmixed isobaric two-fluid test at times t = 0, t = 0.4, t = 1, and t = 2 (columns from left to right) calculated with different volume elements. The initial model is that of two nested squares with a density contrast of four (the one with higher density being the inner yellow square). The first and second rows use particles with identical mass on an unevenly spaced grid, while the opposite applies to the last two rows. The first row depicts the evolution with the volume estimator X_{a} = (m_{a}/ρ_{a})^{p} (p = 0.9). The second row is the same, but using either X_{a} = 1, X_{a} = m_{a}, or X_{a} = P_{a}^{k} (k = 0.5) (all three giving similar results). The evolution shown in the third row was calculated using either X_{a} = 1, X_{a} = P_{a}^{k}, or X_{a} = (m_{a}/ρ_{a})^{0.9}, while the outcome using X_{a} = m_{a} is shown in the fourth row. 
Fig. 5
Impact of X_{a} = (m_{a}/ρ_{a})^{p} on both the volume normalization, ∑ _{b}V_{b}W_{ab} = 1 (left), and the ⟨ Δr ⟩ = 0 condition (right) for different values of the exponent. The figure depicts the profile of these magnitudes along a 1D cut taken around y = 0.5 in the hydrostatic system shown in Fig. 4. We note how the jump at the contact discontinuity is largely reduced in both conditions when p ≃ 1. 
The previous test was done using the standard gradient estimator. Figure 3 depicts the results for the same numerical experiment, but carried out with IAD_{0}. The comparison between Figs. 2 (standard gradient estimator) and 3 (IAD_{0}) suggests that the integral scheme also helps with the pairing problem, as it keeps the particle distribution closer to q/q_{0} = 1.
5.2. Static square test
We consider a system of two fluids with different density but identical pressure (Eq. (33)). The system is isobaric with P = 2.5, and the EOS is that of a perfect gas with γ = 5/3.
To carry out the simulations we use Eqs. (22) and (23) from Sect. 4, with a constant kernel index n = 5, n_{b} = 40 neighbors, and a variety of volume elements. We tried two different initial settings: particles with the same mass on an unevenly spaced grid, and particles with unequal mass spread on a uniform lattice. The outcome of the simulations at times t = 0, 0.4, 1, and 2 is depicted in Fig. 4. For equal-mass particles and X_{a} = (m_{a}/ρ_{a})^{p} (with p = 0.9), the code was able to keep the hydrostatic equilibrium during several crossing times (first row). The evolutions calculated using X_{a} = 1, X_{a} = m_{a}, and X_{a} = P_{a}^{k} (k = 0.5) (second row) led to the same wrong results in much shorter times. Rows 3 and 4 in Fig. 4 summarize the evolution using a homogeneous lattice of particles with different mass. Hydrostatic equilibrium is very well preserved using X_{a} = 1, X_{a} = P_{a}^{k}, and X_{a} = (m_{a}/ρ_{a})^{0.9} (third row), while X_{a} = m_{a} (fourth row) does not give a satisfactory result.
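The effect of the density-based estimator on the partition of unity can be illustrated with a small 1D toy model: equal-mass particles across a density jump, a cubic-spline kernel, and a single self-consistency pass. This is a sketch of the idea behind Eq. (12) with hypothetical parameters, not the actual test setup:

```python
import numpy as np

def w_cubic_1d(q, h):
    """1D cubic-spline kernel with support 2h."""
    s = 2.0 / (3.0 * h)
    return np.where(q < 1.0, s * (1.0 - 1.5*q**2 + 0.75*q**3),
                    np.where(q < 2.0, 0.25 * s * (2.0 - q)**3, 0.0))

# equal-mass particles: spacing 0.01 on the left, 0.02 on the right (density jump 2)
m, h = 1.0, 0.06
x = np.concatenate([np.arange(-1.0, 0.0, 0.01), np.arange(0.0, 1.0, 0.02)])
W = w_cubic_1d(np.abs(x[:, None] - x[None, :]) / h, h)
rho = m * W.sum(axis=1)                    # standard summation density

def partition_of_unity(p):
    """sum_b V_b W_ab with generalized VE, V_b = X_b / sum_c X_c W_bc."""
    X = (m / rho)**p                       # X_a = (m_a/rho_a)^p; p = 0 -> standard VE
    V = X / (W @ X)
    return W @ V

interior = np.abs(x) < 0.7                 # exclude the open-boundary edges
err0 = np.abs(partition_of_unity(0.0)[interior] - 1.0).mean()
err9 = np.abs(partition_of_unity(0.9)[interior] - 1.0).mean()
```

In this toy the mean deviation from ∑ _{b}V_{b}W_{ab} = 1 shrinks when p ≃ 1 is used, mirroring the behavior seen in Fig. 5.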
Fig. 6
Results for the 2D Sedov test at t = 0.1. S_{1} corresponds to p = 0 (standard VE) and Δn = 0 (no equalization). S_{2} shows the effect of the VE with p = 0.7 in X_{a} = (m_{a}/ρ_{a})^{p} and no equalization, while S_{3} corresponds to p = 0 and equalization with Δn = 5. The profile S_{4} shows the outcome when p = 1.0 and Δn = 0. We show here the radial profiles of density (upper left), normalized pressure (upper right), and the fulfillment of the conditions  ⟨ Δr ⟩  = 0 (bottom left) and ∑ _{b}V_{b}W_{ab} = 1 (bottom right). 
In Fig. 5 we show the magnitudes ∑ _{b}V_{b}W_{ab} (left panel) and ⟨ Δr ⟩ = ∑ _{b}V_{b} (r_{b}−r_{a})W_{ab} (right panel) along a 1D cut taken around y = 0.5, from x = 0.1 to x = 0.5 (center of the system), in the hydrostatic system described above after 20 integration steps. As can be seen in the left panel, choosing X_{a} = (m_{a}/ρ_{a})^{p} (p = 1) gives the best results, leading to an almost perfect volume normalization, while the case with p = 0 (equivalent to X_{a} = 1) shows a large oscillation around the contact discontinuity. The case with p = 0.9 is also quite satisfactory, although a small oscillation is still present. The right panel in Fig. 5 suggests that an adequate choice of the volume element may improve even further the accuracy of the IAD_{0} method to estimate gradients because, as we can see, a value of p ≃ 1 helps to keep ⟨ Δr ⟩ ≃ 0, which is the necessary condition for IAD_{0} to exactly reproduce the gradient of linear functions.
Settings of the calculated models in the Sedov test.
Unfortunately, taking p = 1 has the unwanted side effect that the density tends to overshoot close to the contact discontinuity. The overshooting in density drops the value of the estimator X_{a} = m_{a}/ρ_{a}, so that in the next integration step the density undergoes a further overshooting. Occasionally, the feedback between ρ_{a} and X_{a} might raise the density to unrealistic values after several time steps. This problem can be avoided by reducing the value of the exponent p, but in that case the optimal value becomes problem dependent. For example, p = 0.9 works well in the hydrostatic test depicted in Fig. 4, but it produces worse results in the Sedov test described in Sect. 5.3.1, for which p ≃ 0.6–0.7 was a better choice. A way to circumvent the overshooting problem is to consider X_{a} = ⟨ m/ρ ⟩ _{a}^{p}, (34) where ⟨ m/ρ ⟩ _{a} is the SPH average of the standard volume element. Although less efficient than X = (m/ρ)^{p}, this simple recipe enhances the robustness of the scheme, so that the exponent of the estimator can be safely raised to p ≃ 1 independently of the problem to be simulated. This procedure is especially well suited to handle contact discontinuities hosting large density contrasts, and it has been checked in the blob and Evrard tests described in Sects. 5.6 and 6.3. A more quantitative discussion of the convergence rate of the estimator X_{a} as a function of p and the density contrast is given in Appendix C.
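The smoothing step behind this recipe can be illustrated in isolation. The toy below (a hypothetical 1D jump in m/ρ on a uniform particle line) only shows that the SPH-averaged m/ρ field varies far more gently across a discontinuity than the raw field, which is what breaks the overshoot feedback; it is not the SPHYNX implementation:

```python
import numpy as np

def w_cubic_1d(q, h):
    """1D cubic-spline kernel with support 2h."""
    s = 2.0 / (3.0 * h)
    return np.where(q < 1.0, s * (1.0 - 1.5*q**2 + 0.75*q**3),
                    np.where(q < 2.0, 0.25 * s * (2.0 - q)**3, 0.0))

h = 0.06
x = np.arange(-1.0, 1.0, 0.01)
vol_raw = np.where(x < 0.0, 0.01, 0.02)        # raw m/rho: factor-2 jump at x = 0
W = w_cubic_1d(np.abs(x[:, None] - x[None, :]) / h, h)
# SPH average <m/rho>_a = sum_b (m_b/rho_b) V_b W_ab, with V_b = m_b/rho_b
vol_smooth = W @ (vol_raw * vol_raw)
step_raw = np.abs(np.diff(vol_raw)).max()      # largest particle-to-particle jump
step_smooth = np.abs(np.diff(vol_smooth)).max()
```

The averaged field spreads the jump over the kernel support, so the estimator reacts smoothly instead of amplifying the overshoot step by step.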
5.3. Shocks
5.3.1. Point explosion
The study of the propagation of a point-like explosion in more than one dimension is very useful for checking hydrodynamics codes. We have carried out several simulations with SPHYNX to explore the impact of different combinations of volume elements and levels of equalization on the handling of shock waves. The explosion was triggered by depositing a considerable amount of energy, distributed as a Gaussian of characteristic width σ = 0.05, at the center of a uniform lattice of size [0,1] × [0,1]. The initial density profile is ρ(r,0) = 1 and the total injected energy is E = 1. All simulations used N = 256 × 256 particles, and the initial number of neighbors was set to n_{b} = 50. The different parameters used in the simulations are summarized in Table 3.
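The energy deposition step can be sketched as follows; the exact Gaussian weighting used by SPHYNX is not reproduced here, so the exp(−r²/σ²) form and the coarser lattice below are assumptions for illustration:

```python
import numpy as np

n = 128                                  # coarser than the 256^2 used in the text
grid = (np.arange(n) + 0.5) / n          # cell-centered lattice coordinates in [0,1]
xs, ys = np.meshgrid(grid, grid)
r2 = (xs - 0.5)**2 + (ys - 0.5)**2       # squared distance to the explosion center
sigma, e_total = 0.05, 1.0
w = np.exp(-r2 / sigma**2)               # assumed Gaussian weighting
e_part = e_total * w / w.sum()           # per-particle energy; sums to e_total
```

Normalizing by the sum of the weights guarantees that exactly E = 1 is injected regardless of resolution.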
Figure 6 shows the results of the Sedov test for several combinations of volume elements V_{a} and equalization Δn at t = 0.1. Models S_{1}, S_{2}, S_{3}, and S_{4} were calculated using X_{a} = (m_{a}/ρ_{a})^{p} with different values of the exponent p. All models are quite satisfactory, but model S_{2}, calculated with p = 0.7 and Δn = 0, leads to the best results: the density peak is closer to the analytic value and the pressure profile close to the origin behaves well. Model S_{4}, calculated with p = 1, also provides good results, while models S_{1} and S_{3}, with the choice p = 0 (standard VE), lead to lower peak values and oscillating pressure tails that depart from the analytic result.
The lower panels of Fig. 6 depict the profiles of the modulus  ⟨ Δr ⟩  _{a} =  ∑ _{b}V_{b} (r_{b}−r_{a})W_{ab}(h_{a},n_{a})  (left panel) and of the normalization condition ∑ _{b}V_{b}W_{ab}(h_{a},n_{a}) (right panel). These results are encouraging because they strongly suggest that the new volume elements not only enhance volume conservation (lower-right panel of Fig. 6) but also make the gradient estimation more accurate: Eq. (9) is better fulfilled, ensuring that the IAD_{0} approximation is in fact close to the full IAD in terms of accuracy (lower-left panel of Fig. 6). Again, models S_{2} and S_{4} lead to the best results, whereas S_{1} and S_{3} show worse volume conservation and a poorer fulfillment of Eq. (9).
It has been suggested that other choices of VE could be of interest for handling specific problems. In particular, it has been claimed that the pressure-based estimator X_{a} = P_{a}^{k} (k < 1) could be well suited to handling contact discontinuities (Hopkins 2013). We carried out one simulation using that estimator with the recommended value k = 0.05 (Rosswog 2015a). We found that the results with the pressure-based estimator were not as good as those of models S_{2} and S_{4} of Table 3. In Fig. 7 we compare the results obtained with the pressure-based estimator with those of models S_{1} and S_{2}. The right panel shows that the volume normalization is not as well preserved as in model S_{2}, with values similar to those obtained with the standard VE (model S_{1}). Moreover, the pressure-based estimator leads to a lower peak density value, and a spurious precursor shock is clearly seen ahead of the main shock in the left panel. Although small, such a precursor shock is a numerical artifact that was also reported by other authors (Rosswog 2015a). For that reason, hereafter we have limited the choice of volume elements to those given by Eq. (12)^{5}.
Fig. 7
Results for the 2D Sedov test at t = 0.1 taking X_{a} = P^{k}, where P is the pressure and k = 0.05 (black dashed line), compared to models S_{1} (red) and S_{2} (green). We show here the radial profiles of the density (left panel) and the equipartition condition (right panel). The solid black line is the analytic solution for the density profile. We note in the left panel a small precursor bump in density for the calculation with X_{a} = P^{k}, which is absent in models S_{1} and S_{2}. 
5.3.2. The wallshock test
Along with the point explosion (Sedov test), the so-called wall-shock (or Noh) test is also used to check the performance of multidimensional hydrodynamics codes. Unlike the Sedov test, the Noh experiment is an implosion towards a geometrical center. The interaction of the converging inflow of gas with the stagnated material in the central volume provokes the formation of a shock wave moving outwards. The Noh problem has an analytic solution (Noh 1987) to compare with. It is, however, not an easy test owing to the large jump in density across the shock front: 16 in 2D and 64 in 3D for γ = 5/3. SPH codes have traditionally had difficulties in handling this test because: a) many particles are needed to correctly reproduce the shock region (especially in 3D); and b) close to the center the internal energy shows a pronounced spike and, consequently, the density abruptly drops to keep the pressure constant. In fact, both problems are connected to the use of the AV, and it is not clear to what extent a change in the SPH formalism may alleviate these shortcomings (Brookshaw 2003).
A sample of 256^{2} equal-mass particles was distributed in a bcc lattice with a circular perimeter. The radius of the initial configuration is R = 1 and the density is homogeneous with ρ = 1 (except at the outer edge of the system). A radial velocity v_{r} = −1 was imparted to all particles at t = 0, so that the system implodes. The evolution was tracked with SPHYNX until t = 0.5, when a clear steady shock moving outwards has formed. The results, calculated with n = 5, p = 0 and p = 0.7 for the VE, and n_{b} = 50, were compared both to the analytic predictions and to the output of GADGET-2 for the same initial model.
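The initial velocity field of this test (unit inflow toward the center) can be set up as in the following sketch, which uses a random disk of points as a stand-in for the bcc arrangement of the actual run:

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
pts = pts[(pts**2).sum(axis=1) <= 1.0]        # keep points inside the unit disk, R = 1
r = np.sqrt((pts**2).sum(axis=1))
safe_r = np.maximum(r, 1e-12)                 # guard against a particle at the origin
vel = -pts / safe_r[:, None]                  # v_r = -1: unit vector toward the center
v_rad = (vel * pts).sum(axis=1) / safe_r      # radial component; should equal -1
```

Every particle then carries unit speed directed at the center, reproducing the v_{r} = −1 initial condition.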
Table 4 and Figs. 8 and 9 summarize the initial parameters of each simulated model and the main results of the simulations. The density profiles at t = 0.5 for the three calculated models, WHS_{1}, WHS_{2}, and WHS_{G}, are shown in the leftmost panel of Fig. 8. A sharp shock, moving towards the right, is clearly seen at r ≃ 0.17. The density jump across the shock front is Δρ ≃ 16, close to the analytic value. Close to the center of the configuration there is a pronounced dip in density caused, as expected, by the use of the AV. The SPHYNX calculation with p = 0.7 (model WHS_{2}, green dots) gives slightly better results at the shock position than WHS_{1} and WHS_{G} because the density profile is steeper and the maximum density is closer to the analytic value of 16. We note that all three calculations show density oscillations at the shock front and that the GADGET-2 simulation is more blurred. The colormaps of density for models WHS_{G} and WHS_{2} are shown in the central and right panels of Fig. 8. As we can see, the shock region is sharp and well defined in the SPHYNX calculation, whereas the simulation with GADGET-2 shows stronger oscillations close to the shock location and an inhomogeneous particle distribution close to the origin.
Additionally, model WHS_{2} has a better volume normalization than WHS_{1} (i.e., closer to 1), as shown in the left panel of Fig. 9, and a better behavior of  ⟨ Δr ⟩  (i.e., closer to 0) at the shock location (right panel).
List of the calculated models in the wallshock tests.
5.4. Fluid instabilities
Instabilities play a central role in hydrodynamics because they are usually connected to shear mixing and turbulence. In the cosmos, instabilities are especially important because the large Reynolds numbers involved in astrophysical processes make these systems prone to turbulence. Particle and gridbased codes have been applied with different levels of success to simulate the evolution of hydrodynamical instabilities such as KelvinHelmholtz (KH) or RayleighTaylor (RT) instabilities. Here we revisit them with SPHYNX.
Fig. 8
Two-dimensional wall-shock test at time t = 0.5. Left: density profiles (dots) for the models described in Table 4: WHS_{1} (red), WHS_{2} (green), and WHS_{G} (blue). The analytic result is the solid black line. The central panel depicts the particle distribution, with density as color, at t = 0.5 as obtained with GADGET-2, whereas the last panel is that of model WHS_{2}. We note how WHS_{2} achieves lower particle disorder in the central region and smaller density oscillations at the shock position than GADGET-2. 
Fig. 9
Twodimensional wallshock test. Left: volume normalization at t = 0.5 for model WHS_{1} (red) and WHS_{2} (blue). Right: same but for the condition,  ⟨ Δr ⟩  = 0, among mass points within the kernel range of 2h. WHS_{2} (with new VE) fulfills both conditions better than WHS_{1} (with the standard VE). 
5.4.1. KelvinHelmholtz
Grid-based codes are, for the most part, almost free of numerical viscosity, leading to a good match between simulations and the analytic predictions during the linear phase of growth of the KH instability (Junk et al. 2010). On the other hand, SPH codes are Galilean invariant and do not suffer from numerical diffusion, but they intrinsically have more numerical viscosity. It has been pointed out that SPH may not be appropriate for handling fluid instabilities across contact discontinuities with large density jumps (Agertz et al. 2007). Although there have been proposals to enhance the modeling of these systems (Price 2008), the SPH calculation of shear flows with large density contrasts is still an open question.
Fig. 10
Density colormap depicting the evolution of the Kelvin-Helmholtz instability for models KH_{1} (p = 0, upper panels) and KH_{2} (p = 0.7, lower panels) of Table 5. Each snapshot gives the density colormap at times t = 1.5 (left) and t = 2.5 (right). 
We ran two sets of models simulating the growth of the KH instability around the boundary layer separating two flows, with moderate and large density contrasts, respectively. Each set was in turn calculated using two choices of volume elements, namely p = 0 and p = 0.7 in Eq. (12). The initial setting was the same as in McNally et al. (2012), where three stratified fluid layers inside a 2D box of size [0,1] × [0,1] were considered. The fluid layers span y ≤ y_{1}, y_{1} < y ≤ y_{2}, and y ≥ y_{2}, with densities ρ_{1}, ρ_{2}, and ρ_{3}, respectively. The adopted values of y_{1}, y_{2}, ρ_{1}, ρ_{2}, and ρ_{3}, as well as the number of particles used in the simulations, are given in Table 5. Periodic boundary conditions were implemented at all sides of the box.
Settings of the calculated models in the 2D KelvinHelmholtz tests.
A velocity v_{x} = 0.5 is given to the central strip, whereas the rest of the box moves in the opposite direction with v_{x} = −0.5. Prior to the calculations, the density and v_{x} distributions were smoothed following the method described in McNally et al. (2012; their Eqs. (1) and (3)). Thus, the growth rate of the instability obtained with SPHYNX can be compared to the templates obtained by McNally and collaborators using the PENCIL Code^{6}, a state-of-the-art hydrodynamics code of Eulerian type (Brandenburg & Dobler 2002). The pressure is set to P = 2.5 everywhere, with γ = 5/3. The fluid layer is in almost vertical equilibrium except for a small seeded perturbation in v_{y}, v_{y}(x) = w_{0} sin(4πx), (35) with w_{0} = 0.01. The velocity perturbation therefore has a wavelength λ = 0.5, so that the box hosts two complete waveforms.
Models KH_{1} and KH_{2} in Table 5 were calculated assuming a density contrast of two, whereas an initial jump of 8 was imposed on models KH_{3} and KH_{4}. To build the initial models, a sample of 256 × 256 particles was first spread on a square lattice and then stretched in the vertical direction so that the ensuing density profile adapts to that of Eq. (1) in McNally et al. (2012). This setting naturally leads to a density contrast of two for equal-mass particles and provides the necessary smoothness around the separation layers between the fluids. To build the high-density contrast models, the initial lattice was stretched in the same way, and the mass of the particles was then made proportional to the density of the stretched grid, so that a smooth profile with a density contrast of 8 was achieved.
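The stretched-lattice construction can be sketched as follows: rows of equal-mass particles are placed at equal increments of the cumulative mass of the target density profile, so that the local row spacing is inversely proportional to the density. The tanh-smoothed profile below is a hypothetical stand-in for Eq. (1) of McNally et al. (2012):

```python
import numpy as np

# target profile: a dense central strip (rho2) between two light layers (rho1)
rho1, rho2, y1, y2, dy = 1.0, 2.0, 0.25, 0.75, 0.025
y = np.linspace(0.0, 1.0, 20001)
rho = rho1 + 0.5*(rho2 - rho1)*(np.tanh((y - y1)/dy) - np.tanh((y - y2)/dy))

# cumulative mass M(y); equal-mass rows sit at equal increments of M
M = np.concatenate([[0.0], np.cumsum(0.5*(rho[1:] + rho[:-1])*np.diff(y))])
n_rows = 256
targets = (np.arange(n_rows) + 0.5) * M[-1] / n_rows
rows = np.interp(targets, M, y)          # stretched row positions (inverse of M)
spacing = np.diff(rows)                  # ~ m_row/rho(y): smaller where rho is larger
```

The resulting spacing in the dense strip is half that of the light layers, reproducing the desired contrast of two with equal-mass rows.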
Fig. 11
Amplitude mode growth of models KH_{1} (p = 0) and KH_{2} (p = 0.7) of Table 5. The solid red line is the reference solution (McNally et al. 2012) obtained with the PENCIL code (Brandenburg & Dobler 2002), while the solid green and blue lines were obtained with SPHYNX for two different values of the p parameter. The dashed lines correspond to calculations performed with the NDSPMHD code (Price 2012) with a resolution of 512^{2} and kernel B_{4} (pink) and B_{6} (light blue). See McNally et al. (2012) for further details. We also show the L_{1} errors with respect to the reference values for each calculation. 
The Balsara limiter f_{a} given by Eqs. (28), (30), and (31) was applied to all models to reduce the shear viscosity. For these initial conditions, previous simulations with the traditional formulation of SPH predict the growth of the KH instability only in the low-density contrast case (Agertz et al. 2007; Junk et al. 2010). When the density contrast rises well above 2, it is necessary either to add an artificial heat flux to keep isobaricity (Price 2008) or to redefine the volume elements (Saitoh & Makino 2013; Hopkins 2013). In particular, it was shown that formulations using volume elements linked to pressure are able to reproduce the KH instability across high density jumps. Codes which directly use a smoothed pressure in the SPH equations, such as the PSPH formulation described in the Appendix of Hopkins (2015), can also handle high density contrasts. Here we show that the IAD approach combined with the new volume elements, which preserve kernel normalization, is also able to successfully simulate the KH instability with high density jumps.
Fig. 12
Kernel normalization of models KH_{1} (left) and KH_{2} (right) of Table 5 at t = 2.5. The dispersion of ∑ _{b}V_{b}W_{ab} around unity is shown for two options of the exponent of the volume estimator, X_{a} = (m_{a}/ρ_{a})^{p}: p = 0 (left) and p = 0.7 (right). Solid red lines mark the maximum and minimum values as a visual aid. 
Figure 10 shows a density colormap of the evolution of models KH_{1} and KH_{2} at times t = 1.5 and 2.5. We see a growth of the instability qualitatively similar to the results obtained with other novel formulations of the technique, such as the PSPH scheme with ≃200 neighbors (see for example Fig. 19 in Hopkins 2015). The results of SPHYNX are also very similar to those obtained with the recent CRKSPH formulation (see for example the upper row of Fig. 17 in Frontiere et al. 2017). Model KH_{2}, calculated using the new volume elements, appears to evolve slightly faster than KH_{1}, computed with the standard choice VE = m/ρ, showing more evolved structures in the nonlinear stage. Unlike the PSPH scheme, our method does not estimate the pressure by kernel smoothing, nor does it incorporate any artificial flux of heat to smooth the pressure.
Figure 11 depicts the mode-amplitude evolution of v_{y}, calculated by taking the Fourier transform (FT) of the v_{y} field. The FT was calculated with the expressions given in McNally et al. (2012), so that a comparison with the amplitude growth calculated with the PENCIL code (solid red line) and the NDSPMHD code described in Price (2012; dashed lines) is straightforward. The results are encouraging, as SPHYNX reproduces the KH growth closer to the reference PENCIL code simulation than NDSPMHD does, even though the latter has a factor 2 higher resolution than our simulations with SPHYNX (see also Fig. 7 in McNally et al. 2012, and comments therein). Our results are close to those obtained with the PSPH scheme at the same resolution. In Fig. 11 we also show the L_{1} errors of each model with respect to the reference values, calculated as L_{1} = (1/N) ∑ _{i}A^{s}_{i}−A^{r}_{i}, where A^{r} and A^{s} stand for the reference and simulated amplitudes, respectively. These results are an indication of the importance of using an accurate gradient evaluation, this being the most relevant difference between SPHYNX and NDSPMHD.
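The diagnostics used in this comparison can be written compactly. The generic sine/cosine projection below and the 1/N-normalized L_{1} error are assumed, simplified forms for illustration; the exact spatial weighting of McNally et al. (2012) is more elaborate:

```python
import numpy as np

def mode_amplitude(x, vy, k=4.0*np.pi):
    """Amplitude of the k-mode of v_y via sine/cosine projections."""
    s = np.mean(vy * np.sin(k * x))
    c = np.mean(vy * np.cos(k * x))
    return 2.0 * np.hypot(s, c)

def l1_error(a_sim, a_ref):
    """L1 = (1/N) sum_i |A_i^s - A_i^r| between two amplitude histories."""
    a_sim, a_ref = np.asarray(a_sim, float), np.asarray(a_ref, float)
    return float(np.mean(np.abs(a_sim - a_ref)))

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
amp = mode_amplitude(x, 0.01 * np.sin(4.0*np.pi*x))   # recovers the seeded amplitude
```

Applied to the seeded perturbation itself, the projection returns the imposed amplitude w_{0} = 0.01, which is a convenient sanity check before measuring growth rates.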
Model KH_{2}, calculated with p = 0.7, exhibits better volume normalization than KH_{1}, computed with p = 0, as shown in Fig. 12. That figure depicts the value of the normalization condition ∑ _{b}V_{b}W_{ab} as a function of the density of the particles. For both runs, most of the particles cluster around the expected value of 1, but the summation falls below one in the lowdensity region and exceeds it in the highdensity region. Nevertheless, the dispersion around the correct value is considerably lower in the case with p = 0.7, showing its greater capability to achieve equipartition in disordered particle distributions.
Figure 13 summarizes the simulations with an initial density ratio ρ_{2}/ρ_{1} = 8. The upper and lower panels were calculated with p = 0 and p = 0.7 in Eq. (12), respectively. The snapshots correspond to times t = 1.5 and 2.5. First of all, we see that, despite the large jump in density, the instability is able to develop in both cases. These results therefore suggest that the use of the IAD scheme to calculate gradients by itself improves the quality of simulations of hydrodynamic instabilities. The instability evolves slightly faster when p = 0.7 and shows more structure, although it is probably affected by the noise introduced by the VE estimator (m/ρ)^{p} when the mass of the particles is not constant. As in the preceding tests, the model calculated with p = 0.7 has the best kernel normalization properties (not shown in the figures).
5.4.2. The RayleighTaylor instability
In García-Senz et al. (2012) it was shown that the IAD_{0} scheme is able to simulate the gross features of the growth of the RT instability for initial perturbations as low as Δv_{y} = 0.01 in the velocity field. In this section we show that SPHYNX is able to cope with the growth of the RT instability also when the gravitational force is small.
Our initial model is similar to that described in Springel (2010a), where the numerical experiment takes place in a box of size 0.5 × 1.5. The lower and upper halves of the box are filled with equal-mass particles. The particle distribution is then arranged in the vertical direction so that ρ_{u} = 2 (upper region) and ρ_{d} = 1 (lower region).
Fig. 13
Density colormap depicting the evolution of the Kelvin-Helmholtz instability for models KH_{3} (p = 0, first row) and KH_{4} (p = 0.7, second row) of Table 5, corresponding to a density jump of a factor of 8, at times t = 1.5 and 2.5. We note that, despite the much larger jump in density, the KH instability is still able to develop in both cases. 
The contact discontinuity around the interface was smoothed using ρ(y) = ρ_{d} + (ρ_{u}−ρ_{d})/[1 + exp(−(y−y_{0})/Δy)], (36) with Δy = 0.0083 and y_{0} = 0.75. The integration of the hydrostatic equilibrium equation dP/dy = gρ, where g is the gravitational acceleration, with an ideal EOS with γ = 1.4, gives the pressure profile in the vertical direction (Eq. (37)), with P_{0} = 2.5. We used 256^{2} particles and an initial number of neighbor particles n_{b} = 50. The boundary conditions are periodic on the left and right sides of the box, but reflective at the bottom and top. The main features of the models used in this test are summarized in Table 6.
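The hydrostatic initialization can be checked with a direct numerical integration of dP/dy = gρ(y). The sigmoid below is an assumed form of the smoothing, built with the Δy, y_{0}, P_{0}, and g = −0.5 quoted in the text, and the trapezoidal integration is simply a convenient sketch:

```python
import numpy as np

rho_d, rho_u, y0, dy, g, P0 = 1.0, 2.0, 0.75, 0.0083, -0.5, 2.5
y = np.linspace(0.0, 1.5, 3001)
rho = rho_d + (rho_u - rho_d) / (1.0 + np.exp(-(y - y0) / dy))

# cumulative trapezoidal integral of g*rho, anchored so that P(y0) = P0
seg = 0.5 * (rho[1:] + rho[:-1]) * np.diff(y) * g
cum = np.concatenate([[0.0], np.cumsum(seg)])
i0 = int(np.argmin(np.abs(y - y0)))
P = P0 + cum - cum[i0]
```

With g < 0 the pressure decreases monotonically with height and equals P_{0} at the interface, as required for equilibrium.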
Fig. 14
Density colormap depicting the evolution of the Rayleigh-Taylor instability for models RT_{1} (p = 0, first row) and RT_{2} (p = 0.7, second row) of Table 6 at times t = 1.5, 2.5, 3.5, 5.0, 6.5, and 7.5, respectively. The estimator X_{a} = (m_{a}/ρ_{a})^{p} was used to compute the volume elements. 
The fluid was initially at rest everywhere, except for a small, single-mode perturbation applied to the vertical velocity field (Eq. (38)), with w_{0} = 0.0025. In Fig. 14 we show the density colormap of the evolution of the instability when g = −0.5 is adopted. As we can see, the instability grows at a good rate and enters the nonlinear regime at t > 3.5. The contact discontinuity between both fluids is well marked, but not totally sharp, owing both to the smooth initial conditions and to the spread of the physical magnitudes over the characteristic smoothing length h. During the nonlinear stage, the calculation with p = 0.7 (model RT_{2} in Table 6) displays more structure than that of model RT_{1}, calculated with p = 0. At t = 7.5 the vertical extension of the unstable region is nearly the same, but model RT_{2} shows a richer structure inside the mushroom-like instability.
Settings of the calculated models in the 2D RayleighTaylor tests.
The impact of the choice of VE is much more marked when a smaller gravitational force, g = −0.1, is adopted. This was the case reported in Springel (2010b) concerning simulations of the RT instability with the AREPO code. As far as we know, there are no satisfactory calculations of this same scenario using the SPH technique. The combination of a small initial perturbation and a tiny gravitational force makes the problem difficult because the E_{0} errors and the tensile instability conspire to totally suppress the growth of the instability (Valdarnini 2012). The upper panels in Fig. 15 depict the evolution of the instability as simulated by SPHYNX taking the standard VE (i.e., p = 0 in Eq. (13)). As we can see, the instability is able to grow, although the growth rate is slow and there is a clear lack of structure during the nonlinear stage. The lower row of panels in the same figure depicts the evolution when the VE are calculated with p = 0.7. In this case there is a clear boost in the development of the instability, as it grows faster and shows a richer structure in the nonlinear regime.
Quantitative results for this test can be extracted from the evolution of the points located at the tip/bottom of the bubbles/spikes. Additionally, the numerically inferred terminal velocity of the vertex of the bubble can be compared with the analytical results of Goncharov (2002). The results are summarized in Fig. 16, where the left panel shows the bubble/spike evolution of the farthest point reached by the lighter and heavier fluids (bubble and spike, respectively), and the right panel shows the evolution of the velocity of the bubble. As we can see, the evolution of the bubble/spike sample is faster when the VE are calculated with p = 0.7 and p = 1 (in this last case the averaged estimator, Eq. (34), was used). We note that at t ≃ 15 the tip/bottom of the bubble/spike are close to colliding with the box limits. Consequently, the rising velocity of the bubble was computed in the interval 0 ≤ t ≤ 15. As we can see, the terminal velocity of the bubble matches well the analytical estimation by Goncharov (2002), v_{∞} = [2A_{t}g/((1 + A_{t})C_{g}k)]^{1/2}, (39) where A_{t} = (ρ_{2}−ρ_{1})/(ρ_{2} + ρ_{1}) = 1/3 is the Atwood number, k = 4π is the wavenumber of the applied single-mode perturbation, and C_{g} = 3 in two dimensions. The terminal velocity calculated using Eq. (39) (solid pink horizontal line in Fig. 16) is close to those obtained with SPHYNX.
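Plugging the parameters of this test into Goncharov's (2002) 2D result, v_{∞} = sqrt(2 A_{t} g / ((1 + A_{t}) C_{g} k)), gives the expected plateau velocity. The sketch below uses the values quoted in the text, with g taken as the magnitude 0.1:

```python
import math

rho1, rho2 = 1.0, 2.0
A_t = (rho2 - rho1) / (rho2 + rho1)      # Atwood number = 1/3
k = 4.0 * math.pi                        # wavenumber of the single-mode perturbation
C_g = 3.0                                # geometry factor in two dimensions
g = 0.1                                  # magnitude of the gravitational acceleration
v_inf = math.sqrt(2.0 * A_t * g / ((1.0 + A_t) * C_g * k))
```

The resulting v_{∞} ≃ 0.036 is the level of the pink horizontal line against which the simulated bubble velocities are compared in Fig. 16.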
Fig. 15
Same as Fig. 14 but for models RT_{3} (p = 0, first row) and RT_{4} (p = 0.7, second row) of Table 6 (with a very reduced gravity field of g = −0.1). Times are t = 5.0, 7.5, 10.0, 12.5, 15.0, and 17.5, respectively. 
Fig. 16
Rayleigh-Taylor test in 2D for models RT_{3} (p = 0, red lines), RT_{4} (p = 0.7, green lines), and RT_{5} (p = 1, blue lines). In the left panel we show the time evolution of the tips of the bubbles (solid lines) and the spikes (dashed lines). From these it is clear that the inclusion of the new VE enhances the growth of the instability in the linear stage. Once the nonlinear stage is reached, all models grow at similar rates. It is worth noting here that simulations without IAD_{0} and with standard VE (not shown) failed to develop the RT instability. In the panel on the right, we show the time evolution of the bubble velocity. It can be seen how a plateau is reached by all three models, which is closer to the expected terminal velocity (pink solid line) for the models using the new volume elements. 
Fig. 17
Gresho-Chan vortex test in 2D. Radial profiles of the tangential velocity for several kernel choices and different numbers of neighbors, at time t = 1. The interpolators are the sinc (n = 5, 6.315, 7) and the Wendland C_{6}. The theoretical velocity profile is also shown (black dashed line). 
Fig. 18
Gresho-Chan vortex test in 2D. Left: value of the magnitude L_{1}, at time t = 1, calculated with the sinc kernels as a function of the kernel index n and the number of neighbors n_{b} (with constant total number of particles N = 256^{2}). Right: convergence rate of the sinc and Wendland C_{6} kernels as a function of the number of neighbors. 
Although our results are not as good as those obtained with the code AREPO for the same initial conditions (see Fig. 35 in Springel 2010b, for a comparison), they are encouraging, as they suggest that the simulation of the RT instability in a tiny gravitational field is also within reach of SPH codes when an E_{0}-suppressing technique like IAD_{0} is used jointly with equipartition-preserving volume elements.
5.5. The Gresho-Chan vortex
The simulation of a stationary vortex that is in stable equilibrium is a very demanding test for any hydrodynamics code. This is a fundamental test of the accuracy of the numerical scheme, the preservation of symmetry, and the conservation of angular momentum. The Gresho-Chan vortex has been especially problematic for SPH, which, in spite of having excellent angular momentum conservation, shows very poor convergence (Springel 2010b) due to numerical deviations from the initial conditions that degrade the stability of the vortex on short timescales. This is especially relevant in the simulation of self-gravitating disks, where the centrifugal force and the pressure gradient should be (up to a certain extent) balanced. Therefore, numerical noise can trigger deviations from this equilibrium configuration that lead, for example, to an artificial fragmentation of the disk.
Several recent works (Dehnen & Aly 2012; Zhu et al. 2015; Rosswog 2015a) have managed to keep the vortex in a steady state, with low dispersion, for more than one orbital period. In his work, Rosswog (2015a) shows that the use of high-order kernels is crucial to obtain accurate results and, in combination with IAD_{0} and an improved artificial viscosity with a noise-dissipation trigger, leads to a roughly linear convergence rate.
In this test, we use the common initial conditions for the azimuthal velocity and pressure profile: v(ψ) = v_{0}ψ for 0 ≤ ψ < 1, v_{0}(2−ψ) for 1 ≤ ψ < 2, and 0 for ψ ≥ 2 (40); P(ψ) = P_{0} + v_{0}^{2}ψ^{2}/2 for 0 ≤ ψ < 1, P_{0} + v_{0}^{2}(ψ^{2}/2 − 4ψ + 4 ln ψ + 4) for 1 ≤ ψ < 2, and P_{0} + v_{0}^{2}(4 ln 2 − 2) for ψ ≥ 2 (41), where ψ = r/R_{1}, R_{1} = 0.2, v_{0} = 1, P_{0} = 5, for an ideal gas with γ = 5/3. This setting corresponds to a low Mach number test. We use N = 256^{2} particles evenly distributed in a square lattice of size [0,1] × [0,1] so that the density is uniform with value ρ = 1. The artificial viscosity parameter in Eq. (25) is set to α = 4/3 and the Balsara coefficients are estimated with Eqs. (28), (30), and (31). We note that because the vortex is in steady conditions, the divergence of the velocity should vanish everywhere and f_{a} ≃ 0. As in other simulations, however, we allow a low level of AV to control the numerical noise and to reduce the dispersion of the particles. For this test, the particular choice of the volume element is not relevant because the particle distribution is homogeneous owing to the constant density; hence, the kernel is properly normalized at all times, independently of the exponent. Therefore, we take the nominal value of p = 0.7 in Eq. (12) in all calculated models.
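The piecewise profiles of Eqs. (40) and (41) can be generated with a few lines of code (a Python sketch of the standard Gresho-Chan setup; the function name and defaults are ours):

```python
import math

def gresho_profiles(r, v0=1.0, p0=5.0, r1=0.2):
    """Azimuthal velocity and pressure of the Gresho-Chan vortex as a
    function of radius, with psi = r/R_1 (Eqs. (40) and (41))."""
    psi = r / r1
    if psi < 1.0:
        v = v0 * psi
        p = p0 + 0.5 * v0**2 * psi**2
    elif psi < 2.0:
        v = v0 * (2.0 - psi)
        p = p0 + v0**2 * (0.5 * psi**2 - 4.0 * psi + 4.0 * math.log(psi) + 4.0)
    else:
        v = 0.0
        p = p0 + v0**2 * (4.0 * math.log(2.0) - 2.0)
    return v, p
```

The velocity peaks at v_{0} for ψ = 1 and the pressure is continuous at ψ = 1 and ψ = 2, so no transient waves are launched at the matching radii.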
We carried out several simulations of the vortex evolution for different choices of the kernel index n and initial number of neighbors^{7}, n_{b}(2D). We also calculated models with the Wendland C_{6} kernel which, according to Dehnen & Aly (2012) and Rosswog (2015a), is especially suited to handle this test. The quality of the simulations and the convergence rate were checked with the parameter L_{1} = (1/N) ∑_{i} |v_{i}^{t} − v_{i}^{s}|, where v^{t} and v^{s} stand for the theoretical and the simulated value of the velocity, respectively, and the summation runs over all particles.
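In code, this error estimator is simply the mean absolute deviation between the two velocity fields (a Python sketch; the function name is ours):

```python
def l1_error(v_theory, v_sim):
    """L_1 = (1/N) * sum_i |v_i^t - v_i^s| over all N particles."""
    if len(v_theory) != len(v_sim):
        raise ValueError("velocity arrays must have equal length")
    return sum(abs(t - s) for t, s in zip(v_theory, v_sim)) / len(v_theory)

# Hypothetical example with three particles:
print(l1_error([1.0, 0.5, 0.0], [0.9, 0.6, 0.0]))  # ~0.0667
```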
In Fig. 17, we show the profile of the tangential velocity at time t = 1 (roughly the elapsed time needed by the vortex to complete one turn) for three kernel choices and different numbers of neighbors (n_{b}). Using a low n_{b} (≃ 30, in 2D), the best results were obtained with the sinc (n = 5), whereas the choices n = 6.315 or the Wendland C_{6} lead to qualitatively similar results. The situation is reversed when n_{b} = 50, where the C_{6} interpolator gives a narrower velocity profile and the lowest L_{1} value. However, the sinc (n = 6.315) also shows low-dispersion profiles and low L_{1} values. Increasing the number of neighbors to n_{b} = 70 (≃ 500 in 3D) reduces the dispersion, as expected. The C_{6} provides the lowest L_{1} value here, closely followed by the sinc (n = 7).
Figure 18 summarizes the functional dependence of the estimator L_{1} with respect to the index n (left panel) and the number of neighbors n_{b} (right panel). Looking at the left panel we see that for n_{b} ≃ 20 the sinc (n = 4) (similar to the M_{5} and C_{2} kernels) gives the best results. Increasing the number of neighbors moves the minimum value of L_{1} to the right (higher n values). At n_{b} ≃ 30, the index n = 5 becomes the optimal choice. Setting the initial number of neighbors to 70 and 80 demands larger kernel indices (n ≃ 7–8) at the position of minimum L_{1}. The right panel of Fig. 18 explores the convergence rate of the Gresho-Chan vortex test with respect to the number of neighbors at a constant total particle number N = 256^{2}. When n_{b} ≤ 30 it seems that the low-order interpolators with n ≤ 5 give the lowest L_{1} values. Nevertheless, for n_{b} ≥ 40 it is necessary to raise the index of the sinc kernel, or use the Wendland C_{6}, to achieve convergence. The C_{6} gives the lowest L_{1} values when n_{b} ≥ 40. The curve referred to as nmix (dotted black line) is a fit which takes advantage of the optimal n index at each n_{b}. Both the nmix profile and the C_{6} curve can be used to estimate the convergence rate in this test. These profiles show a rapid initial decline, followed by a more gentle reduction of the convergence rate.
Fig. 19
Gresho-Chan vortex test in 2D. Convergence rate of L_{1} as a function of the equivalent 1D number of particles, N_{1D}, at time t = 1, for different sinc kernels. 
Fig. 20
Wind-cloud interaction: density colormap depicting the evolution of the collision and cloud fragmentation at three fiducial times t/t_{KH} = 0.5, 2, and 3. Model W_{G} refers to the calculation using GADGET2. Models W_{0} and W_{1} were calculated with SPHYNX using the standard and the new VE, respectively. 
Figure 19 shows the convergence rate of L_{1} at t = 1 when the one-dimensional (1D) equivalent number of particles, N_{1D}, is varied in the range 50 ≤ N_{1D} ≤ 400. In this study we have only considered the sinc kernels, but with several exponents. The convergence rate is explored using the following (n, n_{b}) pairs: (5,30), (6.315,50), (8,80), and (10,100). As can be seen, the high-order sincs with n = 8, n = 10 achieve a linear convergence of L_{1}, provided that a sufficient number of neighbors is taken. Moreover, the convergence rate is larger than those reported in Springel (2010b) or in Dehnen & Aly (2012), and closer to the value obtained with Eulerian codes (Springel 2010a); see also Valdarnini (2016), where the ability of the IAD_{0} scheme to reduce sampling errors is proven in a similar subsonic test, but with a Mach number as low as ℳ = 0.02.
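Convergence rates such as those quoted above can be measured as the (negative) slope of a least-squares fit of log L_{1} versus log N_{1D}; a small Python sketch with hypothetical error values:

```python
import math

def convergence_order(resolutions, errors):
    """Least-squares slope of log(L_1) vs. log(N_1D); the negative of
    the slope is the measured order of convergence."""
    xs = [math.log(n) for n in resolutions]
    ys = [math.log(e) for e in errors]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Hypothetical data: the error halves each time the resolution doubles,
# which corresponds to linear (first-order) convergence.
print(convergence_order([50, 100, 200, 400], [4e-2, 2e-2, 1e-2, 5e-3]))  # ~1.0
```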
5.6. Interaction of a supersonic wind with a cold cloud of gas
Popularly known as the “blob” test (Agertz et al. 2007), this problem has challenged SPH codes for a long time. The basic setting of this test gathers many pieces of physics, such as strong shocks and mixing due to the KH and RT instabilities in a multiphase medium with a large density contrast. The initial configuration consists of a dense spherical cloud of cold gas embedded in a hotter ambient medium. The cloud is initially at rest while the ambient background (the wind) moves supersonically. The wind-cloud interaction generates a bow shock and, in a short time, the cloud is fragmented and mixed with the background owing to the combined effect of ablation and the KH and RT instabilities.
Modern grid-based codes are able to handle this scenario and all of them agree on the gross features of the evolution. Nevertheless, simulations using the SPH technique have historically had difficulty disrupting the cloud because of the poor development of the hydrodynamic instabilities that drive the erosion of the cloud (Agertz et al. 2007). As a matter of fact, the challenge posed by this problem has led to interesting advances in the SPH technique in recent years, such as the pressure formulation of the SPH equations (PSPH) by Saitoh & Makino (2013) and Hopkins (2013), and the conservative reproducing kernel method (CRKSPH) by Frontiere et al. (2017). As shown in these works, it is feasible to suppress the tensile instability acting at the contact discontinuity between the two fluids, leading to a better agreement between SPH and grid-based codes.
Here we show that the new volume elements given by Eq. (12) also suppress the tensile instability at the contact discontinuity. The code SPHYNX is thus able to cope with the blob test without leaving the density formulation of SPH. Our initial setting is similar to that described in Agertz et al. (2007), although restricted to two dimensions, where the dense cloud is no longer a sphere but a circle (a cylinder in 3D). The wind is simulated with N_{w} = 256^{2} particles spread following a glass-like stable (previously relaxed) distribution in a box of size [1, 1/4]. The cloud is reproduced with N_{C} = 68^{2} particles, spread in a regular square lattice tailored to a circle of radius R_{C} = 1/40, and centered at coordinates (1/8, 1/8). All particles have the same mass. The features of the wind and cloud at t = 0 are ρ_{w} = 1, P_{w} = 1, u_{w} = 3/2, v_{w} = 2.7 and ρ_{c} = 10, P_{c} = 1, u_{c} = 3/20, v_{c} = 0, respectively, where the wind velocity v_{w} is supersonic with Mach number ℳ = 2.7. As the particles in the cloud are fully ordered, we add a small random initial radial velocity.
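The cloud setup can be sketched as follows (Python; the clipping convention, i.e., a lattice over the bounding square of the circle with cells outside the circle discarded, is our assumption, and the exact particle count depends on it):

```python
def cloud_particles(n_side=68, r_c=1.0 / 40.0, center=(0.125, 0.125)):
    """Regular square lattice over the bounding box of the cloud,
    keeping only the cell centers that fall inside the circle."""
    cx, cy = center
    dx = 2.0 * r_c / n_side  # lattice spacing
    pts = []
    for i in range(n_side):
        for j in range(n_side):
            x = cx - r_c + (i + 0.5) * dx
            y = cy - r_c + (j + 0.5) * dx
            if (x - cx) ** 2 + (y - cy) ** 2 <= r_c ** 2:
                pts.append((x, y))
    return pts

cloud = cloud_particles()
print(len(cloud))  # about pi/4 of 68^2, i.e., roughly 3600 particles
```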
Fig. 21
Wind-cloud interaction: evolution of the surviving fraction of the cloud for the three calculated models. Even though a small remnant of the cloud is still present at t/t_{KH} ≃ 4, the calculation W_{1} (SPHYNX with the new VE, that is, p = 1) leads to the best results. The other two calculations (SPHYNX with standard VE, i.e., p = 0, and GADGET with standard VE and without IAD_{0}) are less efficient at diluting the cloud, as 40–60% of it is still present even at t/t_{KH} > 4. 
Table 7
Values of the parameter L_{1} for different magnitudes.
We carried out two calculations of the blob test using SPHYNX, one with the standard VE (model W_{0}) and one with the new VE (model W_{1}), as well as a calculation with GADGET2 (model W_{G}). Because of the large density contrast between the wind and the cloud, model W_{1} was calculated with the averaged estimator X = (⟨m/ρ⟩)^{p} and p = 1.
In Fig. 20 we show several snapshots of the simulations at normalized times (t/t_{KH} = 0.5, 2, 3), where t_{KH} is the characteristic KH time, as defined in Agertz et al. (2007). As we can see, the fragmentation and mixing of the cloud is only attained in model W_{1}, calculated with p = 1. Models W_{0} and W_{G} give worse results, but we note that even in that case SPHYNX with p = 0 yields a slightly larger fragmentation of the cloud than GADGET2 (see Fig. 21). Our results are also in good agreement with those obtained by Frontiere et al. (2017) using the CRKSPH method (see their Fig. 21). The strong suppression of the tensile instability in the W_{1} calculation is what allows the fragmentation of the cloud. This is clearly visible in the detailed density snapshots shown in Fig. 22. It is therefore evident that the long-standing tensile instability problem in SPH is tied to kernel normalization. Having a good partition of unity considerably reduces the strength of that instability.
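The effect of the VE choice on the partition of unity can be illustrated with a simple 1D experiment: equal-mass particles across a 2:1 density jump, a cubic-spline kernel, and X = m/ρ with p = 1 (a self-contained sketch under our own simplified setup, not the SPHYNX implementation):

```python
import math

def w_cubic(r, h):
    """1D cubic-spline (M4) kernel with compact support 2h."""
    q = r / h
    if q < 1.0:
        return (2.0 / 3.0) / h * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return (2.0 / 3.0) / h * 0.25 * (2.0 - q) ** 3
    return 0.0

# Equal-mass particles with a 2:1 spacing (density) jump at x = 0.
m = 1.0
left = [(-1.0 + 0.01 * i, 0.02) for i in range(100)]   # spacing 0.01, h = 0.02
right = [(0.02 * i, 0.04) for i in range(50)]          # spacing 0.02, h = 0.04
xs, hs = zip(*(left + right))
n = len(xs)

rho = [sum(m * w_cubic(abs(xs[a] - xs[b]), hs[a]) for b in range(n)) for a in range(n)]
X = [m / rho[a] for a in range(n)]  # weighting estimator with p = 1

# Standard partition of unity and its renormalized (new VE) counterpart.
P = [sum(X[b] * w_cubic(abs(xs[a] - xs[b]), hs[a]) for b in range(n)) for a in range(n)]
V = [X[a] / P[a] for a in range(n)]  # V_a = X_a / sum_b X_b W_ab
Pnew = [sum(V[b] * w_cubic(abs(xs[a] - xs[b]), hs[a]) for b in range(n)) for a in range(n)]

inner = [a for a in range(n) if abs(xs[a]) < 0.5]  # exclude the open boundaries
err_std = sum(abs(P[a] - 1.0) for a in inner) / len(inner)
err_new = sum(abs(Pnew[a] - 1.0) for a in inner) / len(inner)
print(err_std, err_new)
```

In this toy setup the renormalized volume elements bring the partition of unity closer to 1 near the density jump, the same mechanism that weakens the tensile instability at the contact discontinuity.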
Fig. 22
Wind-cloud interaction: impact of the VE choice on the tensile instability. Near the contact discontinuity the standard VE, m/ρ, produces the segregation of the two fluids, preventing mixing (left). Choosing VE_{a} = ( ⟨ m/ρ ⟩ _{a})^{p}/ ∑ _{b}( ⟨ m/ρ ⟩ _{b})^{p}W_{ab}, where ⟨ m/ρ ⟩ is the SPH average of m/ρ and p = 1, suppresses the tensile instability, leading to the mixing of the fluids and the subsequent bubble fragmentation (right). 
Fig. 23
Colormap of the mass density in the triple-point shock test. Each row shows a snapshot at a different time, while each column corresponds to a different VE. The column on the left was calculated with X_{a} = ( ⟨ m/ρ ⟩ _{a})^{p} and p = 1, while the one on the right corresponds to X_{a} = 1 (i.e., standard VE). 
The evolution of the surviving fraction of the cloud as a function of time is depicted in Fig. 21. The expected outcome is that the surviving fraction of the cloud reduces to 0 in about t/t_{KH} ~ 2.5, as obtained by grid codes (see, e.g., Fig. 25 of Hopkins 2015). As we can see, the W_{1} calculation leads to an almost complete destruction of the cloud in t/t_{KH} ≃ 4, while in the other two calculations barely 50% of the cloud is mixed. Nevertheless, unlike in the grid-based calculations, the cloud is not totally mixed with the ambient gas, as ≃ 10% of it remains unmixed in the W_{1} simulation. The difference may arise from the different dimensionality of the calculations, because in 2D the cloud is a cylinder whereas in 3D the shape of the blob is spherical. However, a calculation of the same model in 3D with 3 × 10^{6} particles led to similar results. In fact, the complete dissolution of the cloud is only attainable if some heat transport is included in the scheme (Hopkins 2015).
5.7. Shock plus vorticity: the triple-point shock test
The similarities between the results obtained by SPHYNX and the CRKSPH method, as found in the KH and blob tests above, are even more evident when considering the so-called triple-point shock test. This is a shock-tube-like setting where three materials with different densities and pressures are put into contact through discontinuity lines. The first region contains a high-density, high-pressure material, located in the vertical band on the left of the first snapshot in Fig. 23 (see also the sketch by Frontiere et al. 2017, shown in their Fig. 24). That region is in contact with two low-pressure horizontal bands, the upper band being considerably less dense than the lower band. The particular point where the three discontinuity lines intersect is the triple point. The vertical high-pressure band on the left launches a shock through the low-pressure horizontal bands. Because of the pronounced difference in the speed of sound, the shock moves faster in the upper low-density, low-pressure region. As a consequence, a shear is induced around the line separating both low-pressure materials, and soon the KH instability develops, rolling up the interface between these two regions. At late times, the reflected shocks at the boundaries of the box also induce the Richtmyer-Meshkov instability when they cross the interfaces between the different materials.
Fig. 24
Results for the 3D Sedov test for models S_{5}, S_{6}, S_{7}, S_{8} of Table 3 and GADGET2. We show here the radial profiles of density (upper left), normalized pressure (upper right), and the fulfillment of the conditions  ⟨ Δr ⟩  = 0 (bottom left) and ∑ _{b}V_{b}W_{ab} = 1 (bottom right). Solid black lines are the analytic solutions for the density and normalized pressure profiles. 
Our main aim is to simulate the triple-point shock scenario and to compare the results obtained with SPHYNX to those given by the CRKSPH method. A comparison of the performance of CRKSPH, the standard SPH formulation, and the ReALE (Reconnecting arbitrary Lagrangian Eulerian) method by Loubère et al. (2010) is provided in the work by Frontiere et al. (2017), and we refer the reader to that paper. Their results show that for the triple-point shock scenario the CRKSPH scheme gives the best results.
The setting of the initial model, the particle number in each region, and the EOS are exactly the same as described in Frontiere et al. (2017). The evolution of the system with SPHYNX and two choices for the VE, p = 0 and p = 1 in Eq. (34), was tracked until t = 8. We show in Fig. 23 the density colormaps for both cases at times t = 1, 3, 5, 7. As we can see, the results with SPHYNX and p = 1 are similar to those obtained with the state-of-the-art CRKSPH scheme. The main features of the shock wave are well simulated in both the p = 0 and p = 1 cases, but the amount of structure seen in the calculation with p = 1 at t = 5 and t = 7 is larger than that of the standard VE choice. It is worth noting that our recipe for implementing the artificial viscosity is relatively basic and there is still some room for improvement in this area, such as, for instance, the use of time-dependent AV coefficients (Cullen & Dehnen 2010; Rosswog 2015a).
6. Threedimensional tests
In this section we present three test cases where SPHYNX is applied to three-dimensional (3D) scenarios. We show here that the combined use of the new VE, the sinc kernels, and IAD_{0} leads to better results when trying to obtain a homogeneous density field, decreasing discretization errors. In the following, we redo the Sedov test, now in 3D, where the new VE stand out by better handling shockwaves and the equalization enhances the outcome in simulations with a high number of neighbors. Finally, the collapse of an isothermal cloud tests the implementation of SPHYNX coupled with the gravity evaluation. The results show that the new VE enhance the kernel normalization, and consequently the overall interpolations, leading to a better shock treatment when compared with standard implementations.
6.1. Approaching a uniform scalar field
The initial models used in many SPH simulations are often built so that the density and/or the pressure are as homogeneous as possible. These systems should ideally remain in equilibrium during many sound-crossing times. Although very common, arranging particles in ordered lattices is not the best option because the particles move off the lattice and introduce noise. In this section we discuss the ability of SPHYNX to build a uniform density/pressure field, which is obtained after relaxing a pseudo-ordered initial distribution of particles. Several features of the resulting glass-like structure are discussed as a function of the VE choice. To obtain a homogeneous density profile we follow a procedure similar to that explained in Rosswog (2015a).
We first spread N = 128 000 particles in an ordered 3D bcc lattice. Afterwards, their positions are displaced at random in each direction, with a maximum amplitude of 40% of the smoothing length. Then, we allow the system to evolve, keeping the internal energy constant at u_{0} = 1, to regain equilibrium. During the relaxation, the velocities are set to zero but the particles are displaced with the simple recipe Δr = 10^{−3} h (a_{c}/|a_{c}|), where h is the smoothing length and a_{c} stands for the acceleration vector. The value of h was the same for all particles, so that the number of neighbors is n_{b} ≃ 100 with low dispersion. During the evolution, the L_{1} error estimators were monitored for several magnitudes, such as density, partition of unity, and ⟨Δr⟩. These are defined as L_{1}(ρ) = (1/N) ∑_{a} |ρ_{a}/ρ_{0} − 1| (42), L_{1}(∑VW) = (1/N) ∑_{a} |∑_{b}V_{b}W_{ab} − 1| (43), and L_{1}(⟨Δr⟩) = (1/N) ∑_{a} |⟨Δr⟩_{a}| (44), where ρ_{0} is the density of the homogeneous configuration. We note that as P = (γ−1)u_{0}ρ, with γ = 5/3, the error estimator L_{1}(ρ) also provides the dispersion in pressure.
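The displacement recipe above amounts to a single damped relaxation step (a Python sketch; the function and parameter names are ours):

```python
import math

def relax_step(positions, accelerations, h, eps=1.0e-3):
    """Move each particle by eps*h along its normalized acceleration,
    i.e., Delta r = 1e-3 * h * (a_c / |a_c|), with velocities kept at zero."""
    out = []
    for r, a in zip(positions, accelerations):
        norm = math.sqrt(sum(c * c for c in a))
        if norm == 0.0:
            out.append(r)  # particle already at a local equilibrium point
        else:
            out.append(tuple(rc + eps * h * ac / norm for rc, ac in zip(r, a)))
    return out

# One step for a single particle pulled along the direction (3, 4, 0):
print(relax_step([(0.0, 0.0, 0.0)], [(3.0, 4.0, 0.0)], h=1.0))
```

Because only the direction of the acceleration is used, the step size is bounded by 10^{−3} h regardless of how far the system is from equilibrium, which makes the relaxation stable.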
The L_{1} values after 800 iterations are summarized in Table 7. All settings of the VE lead to stable glass-like structures with small departures from homogeneity (L_{1}(ρ) ≤ 10^{−3}). Model D, calculated with X_{a} = ( ⟨ m_{a}/ρ_{a} ⟩ )^{p} (p = 1), displays the lowest L_{1}(ρ) values. As expected, the calculation with X_{a} = (m_{a}/ρ_{a})^{0.7} leads to the best partition of unity but, paradoxically, is the most inhomogeneous of the sample. Model C, calculated with a negative exponent p = −0.7, is very similar to model A, computed with the standard VE choice (p = 0.0). Because the internal energy was kept constant during the relaxation, a negative p value in Eq. (11) is equivalent to taking a VE linked to pressure. We therefore conclude that SPHYNX is able to handle homogeneous systems in pressure equilibrium for a wide range of the parameter p. For these systems, the best choice of VE is that given by Eq. (34) with p ≃ 1.
6.2. Point explosion
The description of an explosion in 3D is usually a challenge for any hydrodynamics code. The lower resolution (in comparison with 2D calculations) makes it difficult to capture the density, pressure, and velocity at the peak of the blast and also to correctly reproduce the post-shock tail values of these variables. We have investigated the ability of our code to cope with a point-like explosion in 3D and compared the results with those obtained using GADGET2.
We have run two sets of models labeled as { S_{5},S_{6},S_{7},S_{8} } and { S_{9},S_{10},S_{11},S_{12} } in Table 3. The first set consists of lowresolution calculations whereas the second set of simulations has not only more resolution, but also uses more neighbors to make the interpolations.
Figure 24 summarizes the results for the set of low-resolution models. As in the 2D simulations, models S_{6} and S_{8} in Table 3, with the new volume elements, lead to the best results. Both models give a higher density peak and a better post-shock evolution of density and pressure. They also exhibit an enhanced volume normalization, and ⟨Δr⟩ behaves better across the wave front. The model calculated with p = 0 and Δn = 5 (S_{7}) is the third-best case, followed by model S_{5} (standard VE setting p = 0, Δn = 0). Figure 24 also includes the density and pressure profiles (light blue lines) calculated using GADGET2 for the same initial setting. Its density profile shows a post-shock tail evolution similar to that of model S_{6}, but a lower peak value, and its post-shock tail pressure profile diverges considerably from the analytical curve.
Fig. 26
Radial profiles of the sinc kernel index, n, and density for model S_{12} of Table 3. We note how the scheme increases the kernel index to improve the interpolations in the complicated regions of low or fast-changing density. 
Fig. 27
Upper panels: density (left) and radial velocity (right) profiles for the collapse of an initially isothermal sphere of gas at t = 0.8. The solid black line is the 1D-PPM solution. We show the SPHYNX calculations with p = 0 (red) and p = 1 (green) in Eq. (34). Calculations using GADGET2 are in blue. Lower panels: profiles of  ⟨ Δr ⟩  (left) and ∑ _{b}V_{b}W_{ab} (right) for p = 0 (red) and p = 1 (green). 
Figure 25 is the same as Fig. 24 but for models { S_{9},S_{10},S_{11},S_{12} }, calculated with a larger number of particles and taking twice as many neighbors in the summations. Therefore, these models are characterized by both a higher resolution and a less noisy estimation of gradients. According to our calculations, model S_{12}, computed using p = 0.7, Δn = 5, is the closest to the analytic expectations. The density peak of the blast was the highest and at the correct position, while the values of density and pressure close to the origin behave well. Furthermore, model S_{12} also has the best volume normalization and the minimum |⟨Δr⟩| at the position of the shock front. The profile of the sinc kernel index n of model S_{12} is shown in Fig. 26. We can see that the highest value of the exponent (n ≃ 9.8) is reached at x = 0.3, behind the density peak. These results are closely followed by model S_{10} (p = 0.7 and no equalization), and, in decreasing quality, by S_{11} (p = 0 with equalization), and S_{9} (p = 0 and no equalization).
The results above suggest that: 1) the use of the new volume elements is beneficial when handling shockwaves; 2) when using these VE, the equalization algorithm (i.e., considering variable kernel exponents n(r,t)) is only able to enhance the results when the number of neighbors is high (as in the cases depicted in Fig. 25).
6.3. Collapse of an isothermal cloud
A relevant test in Astrophysics is the numerical study of the gravitational collapse of a gaseous configuration, also known as the Evrard test (Evrard 1988). This test includes gravity and has been used many times to check hydrodynamics codes (see for example Springel 2010a and references therein).
We used here the initial configuration described in Springel (2010a), consisting of a sphere of gas with unit radius and mass. The density profile at t = 0 is ρ(r) = M/(2πR^{2}r) (45), where R = 1 and M = 1. The initial configuration is assumed isothermal, with internal energy u_{0} = 0.05 per unit mass, obeying an ideal EOS with γ = 5/3. All particles are initially at rest. For this choice the internal energy is much lower than the gravitational energy (assuming G = 1) and the system collapses.
To build the initial model we have conveniently stretched a uniform grid with N = 40^{3} particles so that the profile given by Eq. (45) is obtained. This procedure reproduces the 1D density profile well, except near the surface layers. We have tracked the collapse of the sphere for two different values of the parameter p, in both Eq. (11) and its smoothed counterpart, Eq. (34). The results at time t = 0.8 are compared in Fig. 27 not only with an accurate 1D-PPM calculation (Steinmetz & Mueller 1993), but also with the results obtained using GADGET2 for the same initial model.
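One way to realize such a stretching is to match enclosed-mass fractions: the profile of Eq. (45) has enclosed mass m(r) = M(r/R)^{2}, while a uniform sphere has m(r) = M(r/R)^{3}, which yields the radial mapping below (a Python sketch under this mass-matching assumption; the function name is ours):

```python
def stretch_radius(r_uniform, R=1.0):
    """Map a radius of a uniform-density sphere onto the rho ~ 1/r profile
    of Eq. (45) by equating enclosed-mass fractions:
    (r_u/R)**3 = (r_s/R)**2  =>  r_s = R * (r_u/R)**1.5."""
    return R * (r_uniform / R) ** 1.5

# The enclosed-mass fraction is preserved: (r_s/R)^2 equals (r_u/R)^3.
r_s = stretch_radius(0.25)
print(r_s)  # ~0.125
```

Applying this map to the radius of every node of the uniform grid, while keeping the angular coordinates fixed, concentrates particles toward the center as required by the 1/r profile.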
The evolution of the models calculated using Eq. (11) with p = 0.7 and using Eq. (34) with p = 1 was rather similar, so we describe the collapse of the configuration only for the latter case. The upper panels of Fig. 27 depict the density and radial velocity profiles at t = 0.8. We see only minor differences among the three calculated models. The density profiles are almost indistinguishable. Nevertheless, a close inspection of the region around the shock front (see the zoomed region in Fig. 27) reveals that the model calculated with p = 1 in Eq. (34) matches the exact solution slightly better than that with p = 0 and that obtained with GADGET2. The profile of the radial velocity is similar in all three calculations. The evolutions calculated with GADGET2 and with SPHYNX with p = 0 were almost identical, and some minor differences may be due to the details of the implementation of the AV and gravity.
The new volume elements provide a better kernel normalization at the center and front locations (bottom-right panel in Fig. 27). Nevertheless, unlike in the point explosion tests, we have not seen a clear enhancement of ⟨Δr⟩ (Eq. (9)) when p ≃ 1 is used to compute the volume elements, although, overall, the profile is smoother (bottom-left panel).
7. Discussion and conclusions
In this paper, we present a new density-based SPH code, named SPHYNX, and test it in a series of traditionally problematic simulations for SPH codes in 2D and 3D. In particular, we have been able to perform a Rayleigh-Taylor simulation in a weak gravitational field, g = −0.1. Additionally, the shock-blob interaction test proved that SPHYNX can efficiently suppress the tensile instability that prevents the development of hydrodynamical instabilities and mixing in many scenarios simulated with SPH. Furthermore, the outcome of other tests, such as the hydrostatic square, Kelvin-Helmholtz instability, Gresho-Chan vortex, Sedov explosion, Noh wall-shock, Evrard collapse, and triple-point shock, proves that our implementation provides results competitive with other state-of-the-art calculations. For these problems, SPHYNX produces better results than many of the extant density-based SPH codes, being qualitatively similar to those obtained with the recently developed CRKSPH scheme (Frontiere et al. 2017). But, unlike the CRKSPH method, our approach ensures angular momentum conservation from the outset.
To achieve this, SPHYNX benefits from recent advances in the field and gathers together the latest methodologies to perform numerical simulations of astrophysical scenarios via the smoothed particle hydrodynamics technique. These methodologies include, as a novelty, a new generalized volume element estimator and a consistent update of the smoothing length and the sharpness of the interpolating kernel along with the particle density. Additionally, it incorporates an integral approach to calculate gradients and a pairing-resistant family of interpolators. These features are summarized and discussed in the following.
The choice of nonstandard volume elements to approximate the Euler integral equations as finite summations has a significant impact on the simulations. Following the works by Saitoh & Makino (2013) and Hopkins (2013), who generalized the VE so that they are not necessarily the trivial m/ρ choice, we postulate a new volume element which enhances the normalization of the kernel. As discussed in Sect. 3.1, the VE assigned to a particle is V_{a} = X_{a}/ ∑ _{b}X_{b}W_{ab}, where X_{a} = (m_{a}/ρ_{a})^{p} is the weighting estimator of the kernel and 0 ≤ p ≤ 1 is a parameter chosen by the user. The value p = 0 reduces the VE to 1/ ∑ _{b}W_{ab}, which is the standard VE when the mass of the particles is the same. For p = 1, we have V_{a} = (m_{a}/ρ_{a})/ ∑ _{b}(m_{b}/ρ_{b})W_{ab}, which is simply the renormalized traditional volume element. As expected, a better kernel normalization (by a factor of between 2 and 5) is obtained when these VE are used. A negative feature of the proposed VE is their tendency to overshoot the density estimation in the presence of sharp gradients when p ≃ 1. That is, in fact, the fundamental reason for not taking p = 1 in the estimator X_{a} = (m_{a}/ρ_{a})^{p}. The optimal value of p depends on the particular problem at hand, but the range 0 ≤ p ≤ 0.7 explored in this work seems to be safe. Nevertheless, a more robust implementation that allows taking p = 1 is to consider X_{a} = ( ⟨ m_{a}/ρ_{a} ⟩ )^{p}, where ⟨ · ⟩ is the SPH average of the magnitude. Although this last procedure requires the computation of the averages ⟨ m_{a}/ρ_{a} ⟩, it is the recommended default choice because of its robustness and its ability to keep track of strong shocks and instabilities in the presence of sharp density gradients.
Another important feature is the dynamical choice of the interpolating kernel function. A large body of calculations carried out with SPH in the past made use of the M_{4} cubic-spline function to perform interpolations. The M_{4} polynomial has, however, a serious drawback: it is prone to the pairing instability when the number of neighbors increases (e.g., exceeding n_{b} ≃ 60 in 3D calculations which use the M_{4} kernel). This is clearly a limitation, because in practical applications it is advisable to take as many neighbors as possible to reduce the E_{0} errors in the SPH equations. A growing number of kernel candidates has been proposed during the last decade to alleviate this problem. For example, one option is to consider the natural extension of the M_{n} family to higher polynomial degrees, such as the quartic (M_{5}) or the quintic (M_{6}) kernels. More recently, a different family of interpolators has been proposed based on the Wendland functions, as discussed in Dehnen & Aly (2012), which shows a strong resistance to the pairing instability. A third family of interpolators, called the sinc (harmonic-like) kernels, was introduced by Cabezón et al. (2008); these are also implemented in SPHYNX. As mentioned in Sect. 2.1, the definition of the sinc kernels is directly linked to that of the Dirac-δ function. Unlike the M_{n} family, which is discrete in the index n ∈ ℤ^{+}, the sinc kernels do form a continuous family, which depends on a leading exponent n ∈ ℝ^{+}. Actually, the M_{n} family could be considered as a subset of the sinc family (García-Senz et al. 2014). Using the sinc family of kernels endows the SPH technique with a flexible engine, as the shape of the kernel can be dynamically changed, in a continuous way, during runtime. This feature can be used, for example, to suppress the pairing instability (see Sect. 5.1) or to equalize the resolution behind a shockwave (as shown in Fig. 26).
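As an illustration, the sinc kernel shape and its normalization can be evaluated numerically (a Python sketch using the convention S_{n}(q) = [sin(πq/2)/(πq/2)]^{n} on the compact support q = r/h ∈ [0, 2]; the function names and the quadrature are ours):

```python
import math

def sinc_shape(q, n):
    """Un-normalized sinc kernel S_n(q) = [sin(pi*q/2)/(pi*q/2)]**n, q in [0, 2].
    The index n can be any real number > 0 (e.g., n = 6.315)."""
    if q >= 2.0:
        return 0.0
    if q == 0.0:
        return 1.0
    x = 0.5 * math.pi * q
    return (math.sin(x) / x) ** n

def sinc_norm_2d(n, steps=4000):
    """2D normalization constant B_n such that
    B_n * 2*pi * integral_0^2 S_n(q)*q dq = 1 (trapezoidal rule)."""
    dq = 2.0 / steps
    integral = 0.0
    for i in range(steps + 1):
        q = i * dq
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight * sinc_shape(q, n) * q * dq
    return 1.0 / (2.0 * math.pi * integral)
```

Since S_{n}(q) < 1 for q > 0, increasing n narrows the kernel and raises its central normalization, which is what allows the sharpness of the interpolator to be tuned continuously at runtime.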
Additionally, SPHYNX estimates gradients with an integral approximation (IAD_{0}), which is more accurate than the traditional procedure based on the analytic derivative of the kernel function and reduces the E_{0} errors caused by the particle sampling of the fluid. We fully confirm in this work the importance of this approach, especially for handling hydrodynamic instabilities, in agreement with previous publications (García-Senz et al. 2012; Cabezón et al. 2012; Rosswog 2015a,b; Valdarnini 2016).
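The essence of the integral approach can be condensed into a small 2D sketch: instead of differentiating the kernel, one builds a kernel-weighted moment matrix and solves a linear system, which makes the gradient of a linear field exact regardless of particle disorder. The Gaussian kernel and the random particle set below are purely illustrative choices:

```python
import numpy as np

def w_gauss2d(r2, h):
    """2D Gaussian kernel (illustrative stand-in), as a function of r^2."""
    return np.exp(-r2 / h ** 2) / (np.pi * h ** 2)

def iad0_gradient(pos, f, V, h, a):
    """IAD_0-style gradient of f at particle a: build the kernel-weighted
    moment matrix tau and solve tau @ grad = b instead of differentiating
    the kernel. Exact for linear fields, whatever the particle disorder."""
    d = pos - pos[a]                        # relative positions (N, 2)
    w = w_gauss2d((d ** 2).sum(axis=1), h)  # kernel values W_ab
    tau = (V * w * d.T) @ d                 # 2x2 moment matrix
    b = (V * w * (f - f[a])) @ d            # kernel-weighted differences
    return np.linalg.solve(tau, b)

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, (400, 2))       # disordered particle set
V = np.full(400, 1.0 / 400)
f = 2.0 * pos[:, 0] - 3.0 * pos[:, 1]       # linear field, true grad = (2, -3)
a = int(np.argmin(((pos - 0.5) ** 2).sum(axis=1)))
grad = iad0_gradient(pos, f, V, h=0.1, a=a)
```

For a linear f the identity b = τ·∇f holds term by term, so the solve recovers (2, −3) to round-off even on this disordered distribution, whereas a kernel-derivative estimate would carry an E_{0} error.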
SPHYNX has been validated with several standard tests in two and three dimensions, ranging from strong shocks and subsonic fluid instabilities in boxes to larger systems where the gravitational force takes over. From the analysis of these test cases we draw the following conclusions.
The use of the integral approach to calculate gradients, along with the traditional volume elements V_{a} = m_{a}/ρ_{a} (p = 0 in Eq. (12)) and a sinc kernel with n = 5, improves the simulation of hydrodynamical instabilities subjected to small initial perturbations with respect to standard SPH. The amplitude growth rate of the KH instability is closer to the correct growth rate (as computed with state-of-the-art Eulerian codes) than that of current density-based SPH codes (with L_{1} errors smaller by a factor of 1.5–4), and is similar to the results of the modern PSPH formulation. The scheme is also able to reproduce the KH instability in stratified fluids with high density contrasts (ρ_{2}/ρ_{1} ≃ 8). In the case of the RT instability, it also copes with small perturbations (w_{0} = 0.0025) and tiny gravity values (g = −0.1), although in the latter case the nonlinear evolution scarcely shows structure. In shocks, the results are similar to those provided by the standard method in identical conditions.
When the new VE are switched on there is, in general, an increase in the quality of the simulations. We have monitored the volume normalization condition ∑ _{b}V_{b}W_{ab} = 1 in all calculated models and, without exception, it is better fulfilled (usually 10–20% closer to unity) with the new volume elements. This has an appreciable impact on the overall evolution of the simulations. A paradigmatic case is the RT instability, where the use of the new VE leads to an increase in the growth rate of the instability and to a richer evolution in the linear stage, even for the low-gravity simulation. In shock waves, the front of the blast becomes steeper and the density peak is 10–25% higher, even in 3D. Regarding the Sedov test, the post-shock evolution of density and pressure is 5–10% closer to the analytic expectations. It is also worth noting that the new VE improve the condition |⟨Δr⟩| = 0 which, according to Eq. (9), is a necessary condition to exactly compute the gradient of linear functions when the IAD_{0} scheme is used.
During the course of the simulations we did not see any sign of pairing instability, even when working with ≃ 50 neighbors in the 2D tests. In any case, to avoid the instability it is enough to raise the exponent of the sinc kernel above the adopted default value n = 5. We stress that, unlike other recent SPH schemes, the simulations of the KH and RT instabilities were carried out without including any artificial flux of heat or any other procedure to smooth the pressure.
Among the several improvements left for future work, we plan to refine the calculation of gravity by including a better treatment of the gravitational softening at short distances. The best way to do that is to include gravity in the discretized SPH Lagrangian, as described in Price & Monaghan (2007) and Springel (2010b). Also, the implementation and validation of switches that ensure the AV is only added in regions where there are shocks (Cullen & Dehnen 2010), as well as noise triggers to control the velocity in subsonic flows (Rosswog 2015a), could be done with moderate effort. A more ambitious goal would be to calculate the volume elements directly by implicitly solving the equation ∑ _{b}V_{b}W_{ab} = 1 for each particle of the system. Even though the strong coupling between particles renders any implicit calculation computationally expensive, it would probably solve the density overshooting problem seen in our explicit approach.
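On a toy 1D problem the implicit determination of the VE can be illustrated directly: with the matrix of kernel weights W_{ab} assembled, the condition ∑ _{b}V_{b}W_{ab} = 1 for every particle is just a linear system in the V_{b}. The direct solve and the Gaussian stand-in kernel below are simplifications of ours; a production SPH code would need an iterative scheme over the neighbor lists:

```python
import numpy as np

def w_gauss(r, h):
    """1D Gaussian kernel, a simple stand-in for this toy problem."""
    return np.exp(-(r / h) ** 2) / (np.sqrt(np.pi) * h)

# uniformly spaced 1D particles
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
W = w_gauss(np.abs(x[:, None] - x[None, :]), 2 * dx)  # matrix W_ab

# implicit VE: demand sum_b V_b W_ab = 1 for every particle a at once,
# which is a linear system for the V_b
V = np.linalg.solve(W, np.ones(len(x)))
residual = W @ V - 1.0   # the normalization condition, now enforced exactly
```

The solve enforces the partition of unity to machine precision everywhere, including near the domain borders where any explicit estimator degrades; the cost is the global coupling noted above.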
This is not the only way to compare the kernels as one could, for instance, compare the logarithms of the functions. Alternatively, minimizing the metric distance d between the first derivatives of the kernels can also be of interest. All these procedures lead to only slight variations of the kernel exponents with respect to those given in Table 2.
We note that the standard V_{a} = m_{a}/ρ_{a} is a particular case of Eq. (12) when p = 0 and all particles have the same mass. Therefore, we can use the same expression to explore the impact of using both the standard and the new VE.
Available at pencilcode.nordita.org
Nevertheless, the energy equation, Eq. (23), also conserves entropy if the smoothing length is self-consistently calculated with the density using Eq. (17).
Acknowledgments
The authors want to thank S. Rosswog, M. Liebendörfer, and R. Käppeli for useful discussions and comments. We also thank M. Steinmetz for providing the PPM calculations of the Evrard test and C. McNally for giving us the reference model describing the growth rate of the KH instability. This work has been supported by the Swiss Platform for Advanced Scientific Computing (PASC) project DIAPHANE and by the European Research Council (FP7) under ERC Advanced Grant Agreement No. 321263 – FISH (R. Cabezón). It has also been supported by the MINECO-FEDER Spanish projects AYA2013-42762-P and AYA2014-59084-P, and by the AGAUR (D. García-Senz and J. Figueira). The authors acknowledge the support of the sciCORE (http://scicore.unibas.ch/) scientific computing core facility at the University of Basel, where some of the calculations were performed.
References
Agertz, O., Moore, B., Stadel, J., et al. 2007, MNRAS, 380, 963
Balsara, D. S. 1995, J. Comput. Phys., 121, 357
Brandenburg, A., & Dobler, W. 2002, Comput. Phys. Comm., 147, 471
Brookshaw, L. 1985, PASA, 6, 207
Brookshaw, L. 2003, ANZIAM, 44, 114
Cabezón, R. M., García-Senz, D., & Relaño, A. 2008, J. Comput. Phys., 227, 8523
Cabezón, R. M., García-Senz, D., & Escartín, J. A. 2012, A&A, 545, A112
Couch, L. 1997, Digital and Analog Communication Systems, 5th edn. (Prentice Hall)
Cullen, L., & Dehnen, W. 2010, MNRAS, 408, 669
Dehnen, W., & Aly, H. 2012, MNRAS, 425, 1068
Evrard, A. E. 1988, MNRAS, 235, 911
Frontiere, N., Raskin, C. D., & Owen, J. M. 2017, J. Comput. Phys., 332, 160
Fulk, D. A., & Quinn, D. W. 1996, J. Comput. Phys., 126, 165
García-Senz, D., Cabezón, R. M., & Escartín, J. A. 2012, A&A, 538, A9
García-Senz, D., Cabezón, R. M., Escartín, J. A., & Ebinger, K. 2014, A&A, 570, A14
García-Senz, D., Cabezón, R. M., Domínguez, I., & Thielemann, F. K. 2016, ApJ, 819, 132
Gingold, R. A., & Monaghan, J. J. 1977, MNRAS, 181, 375
Goncharov, V. N. 2002, Phys. Rev. Lett., 88, 134502
Hernquist, L., & Katz, N. 1989, ApJS, 70, 419
Hopkins, P. F. 2013, MNRAS, 428, 2840
Hopkins, P. F. 2015, MNRAS, 450, 53
Junk, V., Walch, S., Heitsch, F., et al. 2010, MNRAS, 407, 1933
Loubère, R., Braeunig, J.-P., & Ghidaglia, J.-M. 2010, ArXiv e-prints [arXiv:1010.4208]
Lucy, L. B. 1977, AJ, 82, 1013
McNally, C. P., Lyra, W., & Passy, J.-C. 2012, ApJS, 201, 18
Monaghan, J. J. 1992, ARA&A, 30, 543
Monaghan, J. J. 1997, J. Comput. Phys., 136, 298
Monaghan, J. J. 2000, J. Comput. Phys., 159, 290
Monaghan, J. J. 2005, Rep. Prog. Phys., 68, 1703
Monaghan, J. J., & Gingold, R. A. 1983, J. Comput. Phys., 52, 374
Monaghan, J. J., & Lattanzio, J. C. 1985, A&A, 149, 135
Noh, W. F. 1987, J. Comput. Phys., 72, 78
Perego, A., Cabezón, R. M., & Käppeli, R. 2016, ApJS, 223, 22
Price, D. J. 2008, J. Comput. Phys., 227, 10040
Price, D. J. 2012, J. Comput. Phys., 231, 759
Price, D. J., & Monaghan, J. J. 2007, MNRAS, 374, 1347
Read, J. I., Hayfield, T., & Agertz, O. 2010, MNRAS, 405, 1513
Ritchie, B. W., & Thomas, P. A. 2001, MNRAS, 323, 743
Rosswog, S. 2015a, MNRAS, 448, 3628
Rosswog, S. 2015b, Liv. Rev. Comput. Astrophys., 1, 1
Saitoh, T. R., & Makino, J. 2013, ApJ, 768, 44
Saitoh, T. R., & Makino, J. 2016, ApJ, 823, 144
Schoenberg, I. J. 1946, Q. Appl. Math., IV, 1, 54
Schuessler, I., & Schmitt, D. 1981, A&A, 97, 373
Springel, V. 2005, MNRAS, 364, 1105
Springel, V. 2010a, MNRAS, 401, 791
Springel, V. 2010b, ARA&A, 48, 391
Springel, V., & Hernquist, L. 2002, MNRAS, 333, 649
Steinmetz, M., & Mueller, E. 1993, A&A, 268, 391
Valdarnini, R. 2012, A&A, 546, A45
Valdarnini, R. 2016, ApJ, 831, 103
Wendland, H. 1995, Adv. Comput. Math., 4, 389
Zhu, Q., Hernquist, L., & Li, Y. 2015, ApJ, 800, 6
Appendix A: Kernel separability
Fig. A.1
Value of S_{n} for indices n = 3, n = 6, and n = 12 (left), and Wendland kernels C_{2} and C_{6} (right) in 2D (thick continuous lines). Dots show the direct product of the corresponding 1D profiles for the same set of indices. As n increases, the sinc kernels approach separability in each spatial direction, while the Wendland kernels do not.
Spherically symmetric kernels sacrifice tensorial features in order to preserve second-order accuracy. The notable exception is the Gaussian kernel, which is both spherically symmetric and separable along each axis direction. Unlike the Gaussian function, however, compactly supported spherically symmetric kernels are not separable, and hence neither are the sinc kernels (S_{n}) used in this work. Nevertheless, the S_{n} family approaches separability as n → ∞. This property is shown in Fig. A.1 (left panel), which depicts the kernel profiles (continuous lines) and those of the direct product of their 1D counterparts (dotted lines) for n = 3, n = 6, and n = 12, in green, pink, and black, respectively. As can be seen, the dispersion in the kernel product becomes narrower as the kernel index n increases. As a comparison, we show the same profiles for some Wendland kernels in Fig. A.1 (right panel). In this case, it is clear that the Wendland kernels are not separable, as the product of the one-dimensional components departs from the direct multidimensional calculation. Furthermore, the result does not improve as we increase the order of the Wendland kernel. The study of the consequences that this property may have is beyond the scope of this paper, but it is undoubtedly worth exploring in future work.
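The trend toward separability can be checked numerically with the peak-normalized kernel shapes alone. The 0.01 cutoff that delimits the "relevant" region of the kernel, and the grid resolution, are arbitrary choices of this sketch:

```python
import numpy as np

def sinc_shape(q, n):
    """Peak-normalized sinc-kernel profile [sinc(pi q/2)]**n (support |q| < 2)."""
    q = np.abs(np.asarray(q, dtype=float))
    return np.where(q < 2.0, np.sinc(q / 2.0), 0.0) ** n

def separability_gap(n, npts=201, cut=0.01):
    """Max deviation between the genuine 2D radial profile and the direct
    product of 1D profiles, over the region where the kernel exceeds `cut`."""
    x = np.linspace(-2.0, 2.0, npts)
    X, Y = np.meshgrid(x, x)
    f2d = sinc_shape(np.hypot(X, Y), n)          # 2D spherically symmetric shape
    fprod = sinc_shape(X, n) * sinc_shape(Y, n)  # product of 1D shapes
    mask = f2d > cut
    return np.abs(f2d - fprod)[mask].max()
```

Evaluating `separability_gap` for increasing n shows the gap shrinking monotonically, in line with the left panel of Fig. A.1.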
Appendix B: A better partition of unity improves the estimation of the gradient of a linear function
We consider the following integral in 1D: I_{0} = ∫ _{ − ∞}^{ + ∞}xW(x,h) dx. (B.1) Because the integrand is an odd function, I_{0} is trivially equal to zero. Nevertheless, approaching I_{0} with finite summations does not necessarily ensure I_{0} = 0. We know that a necessary and sufficient condition to exactly reproduce the gradient of a linear function in SPH is that I_{0} = ∑ _{b}V_{b}x_{b}W_{ab} = 0 (Eq. (9)), where V_{b} is the volume element of particle b.
Integrating I_{0} by parts, I_{0} = [xG(x,h)] _{ − ∞}^{ + ∞} − ∫ _{ − ∞}^{ + ∞}G(x,h) dx, (B.2) where G(x,h) is the primitive of W(x,h): G(x,h) = ∫ W(x,h) dx.
We now consider a very simple kernel W(x,h) whose primitive G(x,h) can be obtained analytically: W(x,h) = (C/h) e^{ − | x | /h}. (B.3) This kernel (Fulk & Quinn 1996) is spherically symmetric but is not currently a preferred choice because it is not a flat-top kernel. The normalization constant in 1D is C = 0.5. Nevertheless, it is an adequate kernel for this proof, as it admits an analytical primitive, G(x,h) = −sign(x) C e^{ − | x | /h}. (B.4) The first term on the RHS of Eq. (B.2) is zero because G(x,h) vanishes very quickly as x/h increases. Since G(x,h) is an odd function, the integral in the second term is also zero, consistent with I_{0} = 0 for spherically symmetric kernels. The integral I_{0} can be written as I_{0} = I_{1} + I_{2}, where I_{1} = ∫ _{ − ∞}^{0}xW(x,h) dx, (B.5) I_{2} = ∫ _{0}^{ + ∞}xW(x,h) dx, (B.6) with W(x,h) given by Eq. (B.3). The analytical calculation of these integrals gives I_{1} = −Ch, I_{2} = +Ch, and I_{0} = I_{1} + I_{2} = 0, as expected because the integrand in Eq. (B.1) is an odd function. Things are different when we approach these integrals by finite summations: I_{0} ≃ ∑ _{b}V_{b}x_{b}W_{ab}. (B.7) On the other hand, if the V_{b} are well chosen, they should fulfill the equipartition condition: ∑ _{b}V_{b}W_{ab} = 1. (B.8) Now we make the reasonable ansatz that improving V_{b} actually enhances each term inside the summation in Eq. (B.8). Thus, (B.9) and, according to Eq. (B.7), I_{0} vanishes (or, in any case, becomes very small) even when it is approached with summations. We note that this proof is strictly valid only for exponential kernels. Nevertheless, the main conclusion also holds (at least qualitatively) for other centrally peaked, spherically symmetric interpolating functions. The positive feedback between IAD_{0} and the VE is also supported by many of the test cases presented in this work.
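The analytic values I_{1} = −Ch and I_{2} = +Ch are easy to verify numerically; the grid resolution and the ±20h truncation of the integration range are arbitrary choices of this check:

```python
import numpy as np

C, h = 0.5, 0.1

def w_exp(x):
    """The kernel of Eq. (B.3): W(x, h) = (C/h) exp(-|x|/h), with C = 0.5."""
    return (C / h) * np.exp(-np.abs(x) / h)

# midpoint-rule quadrature on (-20h, 20h); the grid avoids x = 0 exactly
dx = h / 2000.0
x = np.arange(-20 * h, 20 * h, dx) + dx / 2.0
fx = x * w_exp(x)
I1 = fx[x < 0].sum() * dx    # Eq. (B.5): should be -C h
I2 = fx[x > 0].sum() * dx    # Eq. (B.6): should be +C h
norm = w_exp(x).sum() * dx   # kernel normalization: should be 1
```

With a smooth quadrature the two halves cancel almost perfectly; it is only the particle-based summation with imperfect V_{b}, as in Eq. (B.7), that leaves a residual.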
Appendix C: Partition of unity: convergence rate of the estimator X = (m/ρ)^{p}
We discuss here a simple 2D static test which explores the effect of changing the exponent p of X_{a} = (m_{a}/ρ_{a})^{p} on the normalization of the kernel. We considered a shock tube filled with a two-density fluid separated by a contact discontinuity. The jump of the density across the interface is estimated for several values of 0 ≤ p ≤ 1. The SPH particles were arranged in an ordered grid according to the density value on either side of the discontinuity. The contact between the two fluids was not smoothed. We considered two density ratios, ρ_{1}/ρ_{2} = 2 and ρ_{1}/ρ_{2} = 8, and calculated the magnitude ∑ _{b}V_{b}W_{ab} for different values of p. The results are depicted in Fig. C.1, which shows the maximum relative error in the kernel normalization, | ∑ _{b}V_{b}W_{ab} − 1 |, as a function of p, the density contrast, and the particular procedure used to estimate X_{a}: either X_{a} = (m_{a}/ρ_{a})^{p} (solid red lines) or X_{a} = ( ⟨ m_{a}/ρ_{a} ⟩ )^{p} (green dashed lines), where ⟨ · ⟩ is the SPH average. The smoothing length was adjusted so that n_{b} = 50 neighbors contribute to the summations.
Fig. C.1
Static shock-tube problem described in Appendix C: maximum relative error in the partition of unity as a function of the exponent p used to infer the particle volume. The red solid line is for X = (m/ρ)^{p} and the green dashed line is for X = ( ⟨ m/ρ ⟩ )^{p}.
For not too large density ratios, raising the exponent p leads to a linear improvement in the kernel normalization. The convergence rate is slower when X_{a} = ( ⟨ m_{a}/ρ_{a} ⟩ )^{p}, as expected. Things are different for large density ratios, where the convergence follows a parabolic curve with a local minimum around p ≃ 0.7 (uppermost red line in Fig. C.1). Above that value, any increase of p seems to make the convergence worse. The reason is that, in static configurations with sharp contact discontinuities, the density under- and overshoots on the two sides of the discontinuity. Taking both a large density ratio and a high p value induces a catastrophic feedback between the overshooting and X_{a}. Such behavior is due to the explicit nature of the implementation of the volume elements in SPH and would probably be overcome with an implicit approach. A simple recipe to circumvent the problem is to take X_{a} = ( ⟨ m_{a}/ρ_{a} ⟩ )^{p}, which leads to a slower, albeit safer, convergence rate (uppermost green line in Fig. C.1).
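A 1D analog of this test reproduces the trend for moderate contrasts. The setup below is a simplification of ours (density ratio 2, Gaussian stand-in kernel, assigned rather than SPH-computed densities, errors measured away from the domain borders); it only illustrates how the normalization error shrinks as p → 1:

```python
import numpy as np

def w_gauss(r, h):
    """1D Gaussian kernel, an illustrative stand-in."""
    return np.exp(-(r / h) ** 2) / (np.sqrt(np.pi) * h)

# equal-mass particles sampling a density jump 1 -> 2 at x = 0.5
xl = np.arange(0.005, 0.5, 0.01)      # rho = 1 side, spacing 0.01
xr = np.arange(0.5025, 1.0, 0.005)    # rho = 2 side, spacing 0.005
x = np.concatenate([xl, xr])
m = np.full_like(x, 0.01)             # m = rho * spacing on both sides
rho = np.where(x < 0.5, 1.0, 2.0)

h = 0.03
W = w_gauss(np.abs(x[:, None] - x[None, :]), h)
interior = (x > 0.15) & (x < 0.85)    # ignore domain-border effects

def norm_error(p):
    """Max of |sum_b V_b W_ab - 1| for X_a = (m_a / rho_a)**p."""
    X = (m / rho) ** p
    V = X / (W @ X)
    return np.abs(W @ V - 1.0)[interior].max()
```

For p = 1 the estimator X_{a} equals the local inter-particle spacing, so ∑ _{b}X_{b}W_{ab} becomes an accurate quadrature of the kernel and the error drops well below its p = 0 value; the catastrophic feedback at high contrast does not appear in this mild, static configuration.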
Appendix D: Included physics and alternative formulations of the energy equation
Most astrophysical applications need to calculate gravity, and to that end SPHYNX incorporates an octal-tree structure (Hernquist & Katz 1989) with several levels of particle clustering. When the particle separation r_{ab} is shorter than h_{a} + h_{b} we apply a simple smoothing to the gravitational force, Eq. (D.1). This softening usually gives satisfactory results but it is not fully compatible with the Euler-Lagrange derivation of the SPH equations (Price & Monaghan 2007).
SPHYNX also incorporates a thermal conduction transport equation compatible with the IAD formulation. That equation was described and verified in Cabezón et al. (2012) and is reproduced here for completeness, Eq. (D.2), where d is the dimension of the space and the tilde denotes the arithmetic average of the magnitude.
In high-density plasmas with finite temperature, which characterize compact objects such as white dwarfs and neutron stars, it is often preferable to compute the temperature directly instead of the internal energy. SPHYNX can switch the energy equation, Eq. (23), to the temperature equation, Eq. (D.3), where C_{v} is the specific heat. This formulation also leads to almost perfect energy conservation. The internal energy can be obtained through the EOS when necessary. Equation (D.3) was recently used to calculate explosion models of Type Ia supernovae with the SPHYNX code (García-Senz et al. 2016).
Also, the energy equation may be substituted without much effort by an entropy equation which ensures perfect entropy conservation in purely adiabatic flows (Springel & Hernquist 2002)^{8}. An isentropic flow is characterized by P_{a} = A_{a}ρ_{a}^{γ}, (D.4) where A_{a} is a constant determined by the initial conditions of the flow. The entropy equation was incorporated as the default in the GADGET2 code (Springel 2005), yielding excellent results for an ideal EOS. However, it has to be adapted to more complex and realistic EOSs, where the adiabatic index γ can be time-dependent and not straightforward to determine.
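For an ideal-gas EOS the equivalence between the energy and entropy formulations can be checked directly: along an adiabat P = Aρ^{γ}, the internal energy u = P/((γ − 1)ρ) obeys du/dρ = P/ρ^{2}, the adiabatic limit of the energy equation, while A recovered from (P, ρ) stays constant. The numerical values below are arbitrary:

```python
import numpy as np

gamma = 5.0 / 3.0
A = 1.2                         # entropic function; value is arbitrary here

rho = np.linspace(1.0, 10.0, 400)
P = A * rho ** gamma            # pressure along the adiabat, P = A rho**gamma
u = P / ((gamma - 1.0) * rho)   # ideal-gas internal energy

# adiabatic limit of the energy equation: du/drho = P / rho**2
du_drho = np.gradient(u, rho)
target = P / rho ** 2

# the entropic function recovered from (P, rho) stays constant
A_back = P / rho ** gamma
```

For a realistic EOS with a time-dependent γ this one-line recovery of A no longer holds, which is the adaptation difficulty mentioned above.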
All Tables
Values of the exponent n of the sinc kernels that provide the best match (i.e., minimum metric distance d) to the profiles of known interpolators, such as members of the B-spline (M_{n}) and Wendland (C_{m}) families.
All Figures
Fig. 1
Fourier transforms of the spline kernels B_{4} and B_{6} (black lines), the Wendland kernels C_{4} and C_{6} (light blue lines), and the sinc kernels S_{5}, S_{7}, and S_{10}. Only the positive part of the transform is shown.
Fig. 2
Impact of changing the sinc kernel index n and number of neighbors n_{b} on the pairing instability during the relaxation towards equilibrium of a perturbed homogeneous system. The X-axis shows the minimum inter-particle distance normalized to its initial value. The Y-axis shows the corresponding percentage of particles at different times. We show the results for n = 3 (red), n = 5 (green), n = 7 (blue), and n = 10 (pink). Solid lines are for t = 0.01, dashed lines for t = 0.19, and big dots for t = 1.79. The solid line in light blue for n_{b} = 100 was calculated using the Wendland C_{3} interpolator and shows the results at t = 0.19.
Fig. 3
Same as Fig. 2 but using IAD_{0} instead of the standard gradient estimator. In this case, particle distributions peak at higher q/q_{0} ratios compared to those in Fig. 2, showing more resistance to particle clumping. 

Fig. 4
The unmixed isobaric two-fluid test at times t = 0, t = 0.4, t = 1, and t = 2 (columns from left to right) calculated with different volume elements. The initial model is that of two nested squares with a density contrast of four (the one with higher density being the inner yellow square). The first and second rows use particles with identical mass but an unevenly spaced grid, while the opposite applies for the last two rows. The first row depicts the evolution with the volume estimator (p = 0.9). The second row is the same, but using either X_{a} = 1, m_{a}, and (k = 0.5) (all three giving similar results). The evolution shown in the third row was calculated using either , and , while the outcome using X_{a} = m_{a} is shown in the fourth row.
Fig. 5
Impact of X_{a} = (m_{a}/ρ_{a})^{p} in both the volume normalization, ∑ _{b}V_{b}W_{ab} = 1, (left) and ⟨ Δr ⟩ = 0 condition (right) for different values of the exponent. The figure depicts the profile of these magnitudes along a 1D cut taken around y = 0.5 in the hydrostatic system shown in Fig. 4. We notice how the jump at the contact discontinuity is largely reduced in both conditions when p ≃ 1. 

Fig. 6
Results for the 2D Sedov test at t = 0.1. S_{1} corresponds to p = 0 (standard VE) and Δn = 0 (no equalization). S_{2} shows the effect of the VE with p = 0.7 and no equalization, while S_{3} corresponds to p = 0 and equalization with Δn = 5. The profile S_{4} shows the outcome when p = 1.0 and Δn = 0. We show here the radial profiles of density (upper left), normalized pressure (upper right), and the fulfillment of the conditions |⟨Δr⟩| = 0 (bottom left) and ∑ _{b}V_{b}W_{ab} = 1 (bottom right).
Fig. 7
Results for the 2D Sedov test at t = 0.1 taking X_{a} = P^{k}, where P is the pressure and k = 0.05 (black dashed line) compared to models S_{1} (red) and S_{2} (green). We show here the radial profiles of the density (left panel) and the equipartition condition (right panel). Solid black line is the analytic solution for the density profile. We note on the left panel that there is a small precursor bump in density for the calculation with X_{a} = P^{k} which, for models S_{1} and S_{2}, is absent. 

Fig. 8
Two-dimensional wall-shock test at time t = 0.5. Left: density profiles for the models described in Table 4: WHS_{1} (red), WHS_{2} (green), and WHS_{G} (blue) in dots. The analytic result is the solid black line. The central panel depicts the particle distribution with density as color at t = 0.5 as obtained with GADGET2, whereas the last panel is that of model WHS_{2}. We note how WHS_{2} achieves lower particle disorder in the central region, and smaller density oscillations at the shock position, than GADGET2.
Fig. 9
Two-dimensional wall-shock test. Left: volume normalization at t = 0.5 for models WHS_{1} (red) and WHS_{2} (blue). Right: same but for the condition |⟨Δr⟩| = 0, among mass points within the kernel range of 2h. WHS_{2} (with the new VE) fulfills both conditions better than WHS_{1} (with the standard VE).
Fig. 10
Density colormap depicting the evolution of the Kelvin-Helmholtz instability for models KH_{1} (p = 0, upper panels) and KH_{2} (p = 0.7, lower panels) of Table 5. Each snapshot gives the density colormap at times t = 1.5 (left) and t = 2.5 (right).
Fig. 11
Amplitude mode growth of models KH_{1} (p = 0) and KH_{2} (p = 0.7) of Table 5. The solid red line is the reference solution (McNally et al. 2012) obtained with the PENCIL code (Brandenburg & Dobler 2002), while the solid green and blue lines were obtained with SPHYNX for two different values of the p parameter. The dashed lines correspond to calculations performed with the NDSPMHD code (Price 2012) with a resolution of 512^{2} and kernel B_{4} (pink) and B_{6} (light blue). See McNally et al. (2012) for further details. We also show the L_{1} errors with respect to the reference values for each calculation. 

Fig. 12
Kernel normalization of models KH_{1} (left) and KH_{2} (right) of Table 5 at t = 2.5. The dispersion of ∑ _{b}V_{b}W_{ab} around unity is shown for two options of the exponent of the volume estimator, X_{a} = (m_{a}/ρ_{a})^{p}, p = 0 (left) and p = 0.7 (right). Solid red lines are maximum and minimum values to help as a visual aid. 

Fig. 13
Density colormap depicting the evolution of the Kelvin-Helmholtz instability for models KH_{3} (p = 0, first row) and KH_{4} (p = 0.7, second row) of Table 5, corresponding to a density jump of a factor of 8, at times t = 1.5 and 2.5. We notice that, despite the much larger jump in density, the KH instability is still able to develop in both cases.
Fig. 14
Density colormap depicting the evolution of the Rayleigh-Taylor instability for models RT_{1} (p = 0, first row) and RT_{2} (p = 0.7, second row) of Table 6 at times t = 1.5, 2.5, 3.5, 5.0, 6.5, and 7.5, respectively. The estimator was used to compute the volume elements.
Fig. 15
Same as Fig. 14 but for models RT_{3} (p = 0, first row), and RT_{4} (p = 0.7, second row) of Table 6 (with a very reduced gravity field of g = −0.1). Times are t = 5.0, 7.5, 10.0, 12.5, 15.0, and 17.5, respectively. 

Fig. 16
Rayleigh-Taylor test in 2D for models RT_{3} (p = 0, red lines), RT_{4} (p = 0.7, green lines), and RT_{5} (p = 1, blue lines). In the left panel we show the time evolution of the tips of the bubbles (solid lines) and the spikes (dashed). From these it is clear that the inclusion of the new VE enhances the growth of the instability in the linear stage. Once the nonlinear stage is reached all models grow at similar rates. It is worth noting here that simulations without IAD_{0} and with standard VE (not shown) failed to develop the RT instability. In the right panel, we show the time evolution of the bubble velocity. It can be seen how a plateau is reached by all three models, which is closer to the expected terminal velocity (pink solid line) for the models using the new volume elements.
Fig. 17
Gresho-Chan vortex test in 2D. Radial profiles of the tangential velocity for several kernel choices and different numbers of neighbors, at time t = 1. The interpolators are the sincs (n = 5, 6.315, 7) and the Wendland C_{6}. The theoretical velocity profile is also shown (black dashed line).
Fig. 18
Gresho-Chan vortex test in 2D. Left: value of the magnitude L_{1}, at time t = 1, calculated with the sinc kernels as a function of the kernel index n and number of neighbors n_{b} (with constant total number of particles N = 256^{2}). Right: convergence rate of the sinc and Wendland C_{6} kernels as a function of the number of neighbors.
Fig. 19
Gresho-Chan vortex test in 2D. Convergence rate of L_{1} as a function of the equivalent 1D number of particles, N_{1d}, at time t = 1, for different sinc kernels.
Fig. 20
Wind-cloud interaction: density colormap depicting the evolution of the collision and cloud fragmentation at three fiducial times t/t_{KH} = 0.5, 2, and 3. Model W_{G} refers to the calculation using GADGET2. Models W_{0} and W_{1} were calculated with SPHYNX using the standard and the new VE, respectively.
Fig. 21
Wind-cloud interaction: evolution of the surviving fraction of the cloud for the three calculated models. Even though a small remnant of the cloud is still present at t/t_{KH} ≃ 4, the calculation W_{1} (SPHYNX with the new VE, that is, p = 1) leads to the best results. The other two calculations (SPHYNX with standard VE, i.e., p = 0, and GADGET2 with standard VE and without IAD_{0}) are less efficient at diluting the cloud, as between 40–60% of it is still present even at t/t_{KH} > 4.
Fig. 22
Wind-cloud interaction: impact of the VE choice on the tensile instability. Near the contact discontinuity the standard VE, m/ρ, produce the segregation of the two fluids, preventing mixing (left). Choosing VE_{a} = ( ⟨ m/ρ ⟩ _{a})^{p}/ ∑ _{b}( ⟨ m/ρ ⟩ _{b})^{p}W_{ab}, where ⟨ m/ρ ⟩ is the SPH average of m/ρ and p = 1, suppresses the tensile instability, leading to mixing of the fluids and the subsequent bubble fragmentation (right).
Fig. 23
Colormap of the mass density in the triple-point shock test. Each row shows a snapshot at a different time, while each column corresponds to a different VE. The column on the left was calculated with X_{a} = ( ⟨ m/ρ ⟩ _{a})^{p} and p = 1, while the one on the right corresponds to X_{a} = 1 (i.e., standard VE).
Fig. 24
Results for the 3D Sedov test for models S_{5}, S_{6}, S_{7}, S_{8} of Table 3 and GADGET2. We show here the radial profiles of density (upper left), normalized pressure (upper right), and the fulfillment of the conditions |⟨Δr⟩| = 0 (bottom left) and ∑ _{b}V_{b}W_{ab} = 1 (bottom right). Solid black lines are the analytic solutions for the density and normalized pressure profiles.
Fig. 25
Same as Fig. 24 but for models S_{9}, S_{10}, S_{11}, S_{12}, with n_{b} = 220, of Table 3. 

Fig. 26
Radial profiles of the sinc kernel index, n, and density for model S_{12} of Table 3. We note how the scheme increases the kernel index to improve the interpolations in the complicated regions of low or fast-changing density.
Fig. 27
Upper panels: density (left) and radial velocity profiles (right) for the collapse of an initially isothermal sphere of gas at t = 0.8. The solid black line is the 1D PPM solution. We show the SPHYNX calculations with p = 0 (red) and p = 1 (green) in Eq. (34). Calculations using GADGET2 are in blue. Lower panels: profiles of |⟨Δr⟩| (left) and ∑ _{b}V_{b}W_{ab} (right) for p = 0 (red) and p = 1 (green).