A&A, Volume 562, February 2014
Article Number: A78
Number of pages: 13
Section: Cosmology (including clusters of galaxies)
DOI: https://doi.org/10.1051/0004-6361/201322412
Published online: 10 February 2014
ISIS: a new N-body cosmological code with scalar fields based on RAMSES
Code presentation and application to the shapes of clusters
^{1} Institute for Theoretical Astrophysics, University of Oslo, PO Box 1029 Blindern, 0315 Oslo, Norway
e-mail: claudio.llinares@astro.uio.no
^{2} Hamburger Sternwarte, Gojenbergsweg 112, 21029 Hamburg, Germany
Received: 31 July 2013
Accepted: 24 December 2013
Several extensions of the standard cosmological model include scalar fields as new degrees of freedom in the underlying gravitational theory. A particular class of these scalar field theories includes screening mechanisms intended to hide the scalar field below observational limits in the solar system, but not on galactic scales, where the data still leave freedom to find possible signatures of their presence. To make predictions to compare with observations coming from galactic and cluster scales (i.e. in the nonlinear regime of cosmological evolution), cosmological N-body simulations are needed, for which codes that can solve for the scalar field must be developed. We present a new implementation of scalar-tensor theories of gravity that include screening mechanisms. The code is based on the already existing code RAMSES, to which we have added a nonlinear multigrid solver that can treat a large class of scalar-tensor theories of modified gravity. We present details of the implementation and of the tests that we performed on the code. As an application of the new code, we studied the influence that two particular modified gravity theories, the symmetron and f(R) gravity, have on the shape of cluster-sized dark matter haloes, and found results consistent with previous estimates made with a static analysis.
Key words: gravitation / galaxies: clusters: general / dark energy / large-scale structure of Universe / galaxies: halos / methods: numerical
© ESO, 2014
1. Introduction
While the standard model for cosmology, ΛCDM, is widely accepted as a possibly valid explanation for reality, there are several issues that are not fully understood on galactic scales, which give theoreticians the chance to consider extensions to the model. Furthermore, the tension found by the Planck collaboration (Planck Collaboration XX 2014) between σ_{8} and Ω_{m} likewise puts the model in check on cosmological scales. The same collaboration also confirmed the already known anomalies of the cosmic microwave background (Eriksen et al. 2004; Hansen et al. 2009; Planck Collaboration XVI 2014), which are difficult to explain within the standard model. Among the possible extensions to the model is the idea of modifying the gravitational theory. Several alternative theories exist (Clifton et al. 2012; Amendola et al. 2013), all of which include extra degrees of freedom in the form of scalar, vector, and even tensor fields. To test these models in the nonlinear regime of structure formation by using large surveys, such as the upcoming Euclid (Laureijs et al. 2011) and LSST (LSST Science Collaboration et al. 2009) surveys, precise and accurate predictions are needed, for which numerical simulations must be performed. Within the standard context, there are several algorithms and very well tested codes that are known to give consistent results. For models beyond ΛCDM, however, the situation is still not settled, and only a few codes exist per alternative gravitational model (e.g. Llinares et al. 2008; Oyaizu 2008; Schmidt 2009; Baldi et al. 2010; Llinares 2011; Zhao et al. 2011; Li et al. 2011a,b, 2012, 2013; Baldi 2012a,b; Brax et al. 2012, 2013; Puchwein et al. 2013).
N-body techniques are crucial in building up predictions, and thus they should be developed independently by more than one research group. In an effort to extend the existing library of codes and to strengthen existing results by showing that they are stable when the underlying approximations and implementation details change, we present here a new implementation of scalar-tensor modified gravity theories in the code RAMSES (Teyssier 2002).
The set of models we have focussed on are scalar-tensor models that were originally designed as an explanation for dark energy and that include screening mechanisms, which are induced by nonlinearities in the equation of motion for the scalar field. The dominant dynamical effects appear in these models through the inclusion of a fifth force in addition to the gravitational force. We are interested in the effects of this fifth force on the evolution of large-scale structure and on the formation of dark matter haloes in particular. The code that we present here must be taken not as definitive, but as a starting point for more complex simulations that include hydrodynamics and different types of couplings, even non-universal couplings to the different matter species found in our Universe.
For a large class of scalar-tensor theories, one finds equations of motion for the metric perturbation Φ, the scalar field φ, and the positions x of the N-body particles (Eqs. (1) to (3)), where S and g are model-specific functions. These equations are the outcome of canonical scalar-tensor theories, and they can also be applied to the study of f(R) theories, which can be recast as scalar-tensor theories. The hard part of solving Eqs. (1) to (3) consists of changing the original linear multigrid solver of RAMSES to a nonlinear one. In our implementation, the functions S and g are not hard-coded, but left as free functions, which increases the flexibility regarding the models that can be simulated.
Equations (1) to (3) are the consequence of assuming the quasi-static approximation (i.e. time derivatives of the fields are neglected). Llinares & Mota (2013a) presented a non-static solver, which raises the question of the validity of this approximation. However, one has to take into account that non-static simulations are very costly, since they track very rapid oscillations of the scalar field in time, which implies the use of very short time steps. Furthermore, the frequency of the oscillations grows when going to small scales, which implies even more demanding requirements as the resolution is increased. While Llinares & Mota (2013a) proved that some of the properties of static and non-static solutions can disagree, static simulations are still needed to calibrate non-static methods and to work out observables that are not affected by the oscillations. In other words, while the static approximation could give biased predictions for a number of (still undetermined) observables, it gives much higher flexibility regarding resolution and will continue to be used in the future.
The paper is organised as follows. Section 2 presents the general set of equations we intend to solve, while Sect. 3 describes in detail the discretisation that we use, as well as our implementation of the nonlinear multigrid algorithm in the code RAMSES. In Sect. 4 we give the model-specific equations for two different modified gravity theories. Tests that we have made of the code are presented in Sect. 5, and Sect. 6 shows an application of the code, in which we calculate the impact that scalar fields have on the shape of dark matter haloes. Finally, conclusions and discussion are given in Sect. 7.
2. The equations for generic scalar fields
We are interested in running simulations with models defined by the action given in Eq. (4), where the Einstein and Jordan frame metrics (g_{μν} and g̃_{μν}) are related according to Eq. (5). The equation of motion for the scalar field that results from this Lagrangian is given by Eq. (6), where T is the trace of the Einstein frame energy momentum tensor. To be able to introduce this equation into the code in the cosmological context, we need to specify the metric, Eq. (7), which is a flat Friedmann-Lemaître-Robertson-Walker metric with scalar perturbations. With this metric, the equation of motion in the quasi-static limit reads as Eq. (8), where ρ is the matter density and S the source term shown in Eq. (2).
In certain models, it is convenient for numerical reasons to redefine the scalar field as φ = j(u) (Eq. (9)), where the function j is chosen such that it fixes the sign of the scalar field to be unique and, at the same time, reduces extreme gradients that the scalar field could have. Typical choices for j are power laws or exponential functions. The field equation for the new field u is Eq. (10), with coefficients given by Eq. (11). Thus, even though we start with a canonical scalar field, the equation we end up trying to solve will not be canonical in many cases. Our code must therefore be able to solve noncanonical equations, and we discuss how this is done in the next section. Naturally, since the redefinition is usually nonlinear, it must be made after switching to a dimensionless field, which we describe in Sect. 4 when specifying the models.
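To make the redefinition concrete, here is a small numerical check (ours, not part of the paper) in one dimension. Choosing an exponential redefinition, φ = e^u, the chain rule gives ∇²φ = ∇·(e^u ∇u), i.e. the canonical operator acting on φ must agree with the noncanonical form ∇·(b∇u) with b(u) = dφ/du = e^u. The grid and test function below are arbitrary choices:

```python
import numpy as np

# 1D grid and a smooth, strictly positive test field phi(x)
x = np.linspace(0.0, 2.0 * np.pi, 2001)
phi = 1.0 + 0.5 * np.sin(x)

u = np.log(phi)        # redefined field, phi = j(u) = exp(u)
b = np.exp(u)          # b(u) = dphi/du for this choice of j

# Canonical operator: Laplacian of phi (finite differences)
lap_phi = np.gradient(np.gradient(phi, x), x)

# Noncanonical operator: div(b grad u)
div_b_grad_u = np.gradient(b * np.gradient(u, x), x)

# The two agree up to discretization error (away from the end points,
# where np.gradient falls back to one-sided differences)
err = np.max(np.abs((lap_phi - div_b_grad_u)[10:-10]))
print(err)
```

The same identity is what allows the solver to work with u (whose sign and gradients are better behaved) while still solving the original equation for φ.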
The evolution of the matter component is found by discretizing the density field with particles and finding their trajectories, which are given by the geodesic equation. By taking into account the terms that involve the scalar field in the action, one obtains the modified geodesic equation, Eq. (12), where we have again neglected non-static terms.
3. Implementation of scalar fields in RAMSES
Our code is a modification of the open source N-body code RAMSES (Teyssier 2002), to which we added a nonlinear implicit solver that treats the equation of motion of the scalar field in its static approximation. Information about the original multigrid linear solver that we employed as a starting point can be found in Guillet & Teyssier (2011). The set of variables in which the code is written is defined in Martel & Shapiro (1998).
In brief, the RAMSES code is an N-body particle-mesh code that also includes a Godunov solver to treat the evolution of baryons. The gravitational forces are calculated as spatial derivatives of the gravitational potential, which is previously calculated on a grid. We give here a few words regarding the original linear Poisson solver and present the modifications that are necessary for including the nonlinear equation for the scalar field in its canonical and noncanonical forms, given by Eqs. (8) and (10) respectively.
3.1. Poisson’s equation
In the standard gravity case, the code RAMSES solves Eq. (13) for the gravitational potential, where δ = δρ/ρ_{0} is the overdensity. The equation is solved by discretizing the differential operator ∇^{2} on a Cartesian grid and applying a Gauss-Seidel iteration scheme to the resulting algebraic equation. Multigrid techniques are implemented to accelerate the convergence of the method.
The Laplacian is discretized by using a standard second-order formula, Eq. (14), where we show only the subindexes whose values differ from (i,j,k); this is the notation that we use throughout the paper. The standard code solves the previous equation by means of an explicit iterative method that starts from an initial guess for the potential. During each iteration step, the potential is changed in an explicit way according to Eq. (15). To speed up the convergence, the solver includes multigrid relaxation. In brief, a two-level algorithm is as follows. Given an approximation φ^{k} for the solution, the code solves Eq. (16) for a coarse grid correction δφ, where R is a restriction operator and ϵ is the fine grid residual, defined in Eq. (17). Once a fixed number of Gauss-Seidel iterations has been made for solving Eq. (16), a correction is applied to the original (fine grid) solution according to Eq. (18), where P is a prolongation operator that moves the information from the coarse to the fine grid. The standard RAMSES code not only uses two levels for the iterations, but several levels within a V scheme (i.e. iterations are made starting from the finest grid down to the coarsest one and coming back up, making corrections at every level). More details on multigrid relaxation and its generalisation to nonlinear equations can be found, for instance, in Brandt (1977), Wesseling (1992), or Trottenberg et al. (2000).
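As an illustration of the relaxation step alone, the following sketch (ours; a plain periodic box without refinements or the multigrid acceleration, so not the RAMSES implementation) applies the explicit Gauss-Seidel update of Eq. (15) to the second-order 7-point Laplacian of Eq. (14):

```python
import numpy as np

def residual(phi, f, h):
    """r = L(phi) - f for the 7-point Laplacian on a periodic grid."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
           np.roll(phi, 1, 2) + np.roll(phi, -1, 2) - 6.0 * phi) / h**2
    return lap - f

def gauss_seidel(phi, f, h, sweeps):
    """Explicit Gauss-Seidel sweeps: each node is replaced in turn by the
    value that zeroes its local residual, using neighbours in place."""
    n = phi.shape[0]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    nb = (phi[(i + 1) % n, j, k] + phi[i - 1, j, k] +
                          phi[i, (j + 1) % n, k] + phi[i, j - 1, k] +
                          phi[i, j, (k + 1) % n] + phi[i, j, k - 1])
                    phi[i, j, k] = (nb - h**2 * f[i, j, k]) / 6.0
    return phi

# Source with zero mean (solvability condition on a periodic box)
rng = np.random.default_rng(0)
n, h = 8, 1.0 / 8
f = rng.standard_normal((n, n, n))
f -= f.mean()

phi = np.zeros((n, n, n))
r0 = np.abs(residual(phi, f, h)).max()
phi = gauss_seidel(phi, f, h, sweeps=200)
r1 = np.abs(residual(phi, f, h)).max()
print(r0, r1)   # the residual drops by many orders of magnitude
```

On larger grids plain Gauss-Seidel stalls on long-wavelength error modes, which is precisely what the multigrid V scheme described above is designed to cure.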
3.2. Extending the original solver to nonlinear equations (full approximation storage)
Our solver for the scalar field is based on the original Poisson solver; however, since the source of the scalar field equation has, for many models, a nonlinear dependence on the scalar field, the previous procedure can no longer be used. Our generalisation of the multigrid scheme is based on the full approximation storage (FAS) algorithm (e.g. Brandt 1977). In this case, the equation solved on the coarse grids is no longer a correction to the solution, but the solution itself. The original equations for the scalar field, Eqs. (8) or (10), can be written in the form of Eq. (19), where the operator L is given by Eq. (20) in the canonical case and by Eq. (21) in the noncanonical one. The new source Σ is zero on the fine grid and is given by Eq. (22) on the coarse grids, where R is a restriction operator that moves information from the original fine grid to the coarse one, and the residual ϵ is defined in Eq. (23). The source Σ is only calculated when jumping from a fine to a coarse grid.
The Gauss-Seidel iterations that are needed to improve the solution of the discretized equation are performed in an implicit way, Eq. (24). This expression can be derived as one step of a Newton-Raphson scheme applied to the solution of Eq. (25), under the assumption of Eq. (26), which was found to be a good approximation.
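The FAS cycle combined with the Newton-Raphson node update can be illustrated on a toy one-dimensional nonlinear problem (a pedagogical stand-in for the 3D solver, entirely our own construction: the operator u'' + u³ plays the role of L, and all grid sizes and sweep counts are arbitrary choices):

```python
import numpy as np

def L_op(u, h):
    """Toy nonlinear operator L(u) = u'' + u^3, zero Dirichlet boundaries."""
    up = np.pad(u, 1)                 # ghost zeros at both boundaries
    return (up[2:] - 2.0 * u + up[:-2]) / h**2 + u**3

def newton_gs(u, f, h, sweeps):
    """Newton-Gauss-Seidel: one Newton-Raphson step per node per sweep,
    dividing the local residual by dL/du_i (cf. Eq. (24))."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            res = (left - 2.0 * u[i] + right) / h**2 + u[i]**3 - f[i]
            dL = -2.0 / h**2 + 3.0 * u[i]**2
            u[i] -= res / dL
    return u

def fas_two_grid(u, f, h):
    """One FAS cycle: the coarse grid carries the full solution plus the
    extra source built from the restricted fine-grid residual (Eq. (22)),
    not just a correction."""
    u = newton_gs(u, f, h, sweeps=3)           # pre-smoothing
    res = f - L_op(u, h)                       # fine-grid residual (Eq. (23))
    uc = u[1::2].copy()                        # restriction by injection
    fc = L_op(uc, 2.0 * h) + res[1::2]         # FAS coarse source
    vc = newton_gs(uc.copy(), fc, 2.0 * h, sweeps=200)   # solve coarse
    corr = vc - uc                             # coarse-grid correction
    xf = np.arange(len(u), dtype=float)        # prolongation: linear interp.
    xc = np.r_[-1.0, xf[1::2], float(len(u))]  # include boundary nodes
    u += np.interp(xf, xc, np.r_[0.0, corr, 0.0])
    return newton_gs(u, f, h, sweeps=3)        # post-smoothing

# Manufactured problem: pick u_exact and set f = L(u_exact)
n = 31
h = 1.0 / (n + 1)
xx = (np.arange(n) + 1) * h
u_exact = np.sin(np.pi * xx)
f = L_op(u_exact, h)

u = np.zeros(n)
for _ in range(10):
    u = fas_two_grid(u, f, h)
err = np.abs(u - u_exact).max()
print(err)
```

Because f is manufactured with the same discrete operator, the discrete solution is u_exact itself, so the error above measures the convergence of the cycle rather than discretization error.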
The nonlinear multigrid algorithm can be implemented in the same way in both the canonical and noncanonical cases. The only difference between the two equations is the discretization formula used, which has to be written explicitly for each differential operator in the uniform part of the grid and at the boundaries of the refinements. We present these details in the following sections.
3.3. Discretization of the canonical equation
The canonical Eq. (8) is discretized using standard second-order formulas, as in the original RAMSES code, but applied to the scalar field φ instead of the gravitational potential Φ. The derivative of the discretised differential operator that is needed for the implicit iterations is given by Eq. (27), where we have taken into account that the source S in Eq. (8) is now a function of both the matter density and the scalar field φ.
3.3.1. Discretization near the boundaries of the refinements (canonical case)
The standard RAMSES code includes adaptive mesh refinements, which means that the resolution of the grid is not uniform in space, but is increased in regions of interest. The decision to refine a given cell is based on specific criteria, which could be, for instance, a density threshold. The nodes that lie on the border of the refinement patches lack one or more neighbours, and thus the discretization formula must be modified to take this into account. We briefly describe the workaround to this problem that is implemented in the standard code and in our extension to the nonlinear case. We give a simplified discussion of the canonical case, but describe the complete algorithm in the following section when showing the discretization for the noncanonical equation.
The reconstruction method included in the standard code is based on the fact that the standard formula for the Laplacian, Eq. (14), can be rewritten as Eq. (28), where we show derivatives in only one dimension. To explain the procedure, we assume, for instance, that we want to calculate the Laplacian in a cell (i, j, k) that only lacks the neighbour (i + 1,j,k) (i.e. the node (i, j, k) is the last one of a refinement to the right in the x direction). The derivative in the x direction is calculated by using information that must be reconstructed on the boundary of the refinement, which we call φ_{i + 1/2}. The modified expression for this derivative is Eq. (29), where φ_{i + 1/2} is obtained by first moving the information from the neighbouring coarse node to the non-existing neighbour node φ_{i + 1}, and then moving the information from there to the border of the refinement (giving us φ_{i + 1/2}). When the sum for the 3D Laplacian is made, we obtain the discretization formula of Eq. (30). In the general case, the number 7 is substituted according to the number of boundaries n that the node (i,j,k) contains. See Gibou et al. (2001) for more details on alternative discretization formulas.
The equation for the explicit iterations, Eq. (15), is then modified and written as Eq. (31). The term in square brackets in this expression can be seen as a modified source and calculated before any iteration is made, which is the way the algorithm is implemented in the original code. However, in the nonlinear case the previous expression is not valid, owing to the nonlinear dependence of the Laplacian and of the solution on φ. In the modified code, the complete Laplacian is calculated at each iteration step. This can be done by rearranging terms in the discretized Laplacian as in Eq. (32). The first term can then be calculated only once, before any iteration is made, while the second term must be recalculated at every Gauss-Seidel iteration step.
3.4. Discretization of the noncanonical equation
The discretized differential operator L that corresponds to the noncanonical Eq. (10) can be written as Eq. (33), where we use the same notation as in previous paragraphs (i.e. we show only the subindexes whose values differ from (i, j, k)). By setting b(u) = 1, we recover the standard formula for the Laplacian. The values of b at the faces of the nodes are calculated by linear interpolation between the neighbouring cell centres. The derivative of Eq. (33) that is needed for the implicit scheme is given by Eq. (37).
3.4.1. Discretization near the boundaries of the refinements (noncanonical case)
When the multigrid algorithm is applied not only to the domain grid (which is uniform), but also to the refined regions, we face the problem that the boundary of a node that belongs to a refinement does not always correspond to the boundary of its corresponding coarse nodes. This issue is solved in the standard RAMSES code by introducing a mask function on the grids to indicate which of their nodes belong to a given refinement and which do not. Inner cells (i.e. cells that are not on the boundary) are initialised with a mask value m_{i,j,k} = 1 and outer cells with a mask m_{i,j,k} = −1. The boundary is defined as the position where the interpolated value of the cell-centred mask crosses zero. In the finest grid, the boundaries are positioned at the faces of the outer cells. On the corresponding coarser grids, the mask is calculated by applying the restriction operator to the fine grid mask. The mask values must be understood as follows. Cells with a positive mask value have their centre inside the refinement; in that case, one can use the inner discretisation formulas described above. Cells with a negative mask value exist as refined cells, but have their centre outside the boundary. Finally, cells where the mask takes the value −1 are completely outside the refinement (i.e. they do not exist). Figure 1 shows an example of the distribution of mask values in a 2D stencil close to the boundary of a refinement.
Fig. 1
A five-point stencil (2D) located close to the boundary. The central cell has two neighbours that are not masked (m > 0) and two cells that are masked (m < 0). One of these cells is completely masked (m = −1), meaning that it does not exist in memory, and its field value must be interpolated from the coarse grid. For the cells that are only partly masked (−1 < m < 0), the field value is determined from Eqs. (40) and (41). The cells with m > 0 are treated as normal cells, even though the boundary might cross some part of the cell.
To deal with the cases where we have masked cells, the differential operator is redefined for cells (i, j, k) close to the boundary whenever one of the six neighbouring cells has a negative mask value. The information in cells with m < 0 is replaced by a ghost value linearly interpolated between the value in cell (i, j, k) and the boundary value. This ghost value depends explicitly on cell (i, j, k) and on the boundary condition, and with this we make sure that the boundary remains at the same location to second-order accuracy when one goes from fine to coarse levels in the multigrid hierarchy.
To explain the procedure, we assume for simplicity that only the (i + 1,j,k)'th cell is masked (m_{i + 1,j,k} < 0). By linear interpolation, we find that the distance from the cell (i, j, k) to the place where the mask value crosses m = 0 (i.e. the position of the boundary) is given by Eq. (39), in units of the distance x_{i + 1, j, k} − x_{i, j, k}. The boundary value u_{#} of the scalar field can then be found from Eq. (40), where the value of the masked cell is taken directly if −1 < m_{i + 1,j,k} ≤ 0, and is interpolated from the coarse grid if m = −1 (i.e. if the cell does not exist). The value entering Eq. (40) is the value of the cell (i + 1,j,k) before the Gauss-Seidel iterations are made (at the time when the boundary value u_{#} is defined).
The ghost value for u_{i + 1,j,k} can then be found from Eq. (41). We can combine Eqs. (40) and (41) to get Eq. (42) for the value of u in the masked cell. Owing to the nonlinearity of the differential operator, it is not possible to include these modifications in the source term before making the Gauss-Seidel iterations. We have chosen to solve this by storing the boundary value for the boundary cells, so that we can reconstruct the ghost value for u_{i + 1,j,k} whenever needed during the Gauss-Seidel iterations.
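Assuming the linear-interpolation reading of Eqs. (39) to (41), the ghost-value construction can be sketched in one dimension as follows (illustrative helper functions of our own, not the actual ISIS routines):

```python
def boundary_fraction(m_i, m_ip1):
    """Distance (in units of the cell spacing) from cell i to the point
    where the linearly interpolated mask crosses zero, i.e. the position
    of the boundary (cf. Eq. (39))."""
    return m_i / (m_i - m_ip1)

def boundary_value(u_i, u_ip1_old, delta):
    """Boundary value u_# by linear interpolation between the cell value
    and the pre-iteration value of the masked neighbour (cf. Eq. (40));
    u_# is then frozen during the Gauss-Seidel iterations."""
    return u_i + delta * (u_ip1_old - u_i)

def ghost_value(u_i, u_hash, delta):
    """Ghost value for the masked cell: extrapolate the straight line
    through (0, u_i) and (delta, u_#) out to the neighbour position
    (cf. Eq. (41))."""
    return u_i + (u_hash - u_i) / delta

# Example: boundary halfway between the cells (m_i = 0.5, m_{i+1} = -0.5)
delta = boundary_fraction(0.5, -0.5)        # -> 0.5
u_hash = boundary_value(1.0, 3.0, delta)    # -> 2.0
print(ghost_value(1.0, u_hash, delta))      # -> 3.0, consistent initially
```

Note that at the moment u_# is defined the ghost value reproduces the stored neighbour value exactly; it only starts to differ once the Gauss-Seidel iterations update u_i while u_# is held fixed.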
To complete the discretization in the nodes that are close to the boundary, we need to define consistent values for b_{i + 1,j,k} and c_{i + 1,j,k}. One way to do this, proposed in Gibou et al. (2001), is to evaluate these coefficients from the reconstructed field values; however, the nonlinearity of b and c makes that choice inconsistent, and we instead evaluate them in a way that is consistent with the ghost-value construction of Eq. (42). When calculating the differential operator for cells close to the boundary, the only changes implied by this choice are in the actual values used for the coefficients at the cell faces; for the derivative of the operator, Eq. (37), however, we do pick up some new terms, since the ghost value depends on u_{i,j,k} through Eq. (42). The extra terms that must be added to Eq. (37) are given in Eq. (47). We emphasise that these terms should only be included when m_{i + 1,j,k} < 0, i.e. when the (i + 1)'th cell is masked. The case where several neighbour cells are masked is covered by summing these expressions over all the masked cells.
4. The equations and implementation details for specific models
In this section we describe the definition of the two models that we have implemented (the symmetron and f(R) gravity), the dimensionless form of their equations, and details on the implementation.
4.1. Symmetron model
The symmetron model was originally proposed by Hinterbichler & Khoury (2010) as a screening mechanism that allows a scalar field to mediate a long-range (~Mpc) force of gravitational strength in the cosmos while satisfying solar system tests of gravity. N-body simulations of this model have already been run, for instance, by Davis et al. (2012) using a modified version of the code MLAPM (Knebe et al. 2001). The effect of non-static terms in these simulations was studied by Llinares & Mota (2013a). The model is defined by the potential V(φ) = −(1/2)μ²φ² + (1/4)λφ⁴ and the conformal factor A(φ) = 1 + (1/2)(φ/M)², where μ and M are mass scales, and λ is a dimensionless constant.
Taking Eq. (8) into account, we find that the static equation of motion for the scalar field reads as Eq. (52). It is convenient to work with a dimensionless scalar field χ, obtained by normalising φ with its vacuum expectation value φ₀ = μ/√λ (Eq. (53)). We substitute the free parameters of the model (μ, λ, M) by the range λ₀ of the field that corresponds to ρ = 0 (Eq. (54)), a dimensionless coupling constant β (Eq. (55)), and the expansion factor a_{SSB} at which the background density reaches the value for which the symmetry is broken in the cosmological background (Eq. (56)). Substituting these definitions into the equation of motion, we obtain ∇²χ = (a²/2λ₀²) [η (a_{SSB}/a)³ χ − χ + χ³] (Eq. (57)), where η is the matter density in terms of the mean density at the given redshift.
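The screened and unscreened regimes encoded in this source term can be illustrated with a small helper (ours, not from the paper) that returns the value of χ minimising the effective potential for a uniform density η:

```python
import numpy as np

def chi_min(eta, a, a_ssb):
    """Minimum of the symmetron effective potential for a uniform density
    in the dimensionless variables above: the bracket
    eta (a_ssb/a)^3 chi - chi + chi^3 vanishes at
    chi = sqrt(1 - eta (a_ssb/a)^3) when the argument is positive, and
    at chi = 0 otherwise (symmetry restored, fifth force screened)."""
    arg = 1.0 - eta * (a_ssb / a) ** 3
    return np.sqrt(arg) if arg > 0.0 else 0.0

# Background density (eta = 1): symmetry restored before a_ssb ...
print(chi_min(1.0, 0.4, 0.5))     # -> 0.0 (screened)
# ... and broken afterwards
print(chi_min(1.0, 1.0, 0.5))     # -> sqrt(1 - 0.125) ~ 0.935
# A large overdensity keeps the field screened even today
print(chi_min(5000.0, 1.0, 0.5))  # -> 0.0
```

Since the fifth force is proportional to χ∇χ, the last case shows why dense regions decouple from the scalar field in this model.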
The modified geodesics given by Eq. (12) take the form of Eq. (58) for this model, which can be written as Eq. (59) when introducing the dimensionless scalar field χ and the parameters (λ₀, β, a_{SSB}).
The equation can be simplified further by using the supercomoving quantities defined by Martel & Shapiro (1998) (Eqs. (60) and (61)).
A similar change in the scalar field (Eq. (62)) also removes the explicit dependence on a in the term related to the fifth force. In these variables, the equation becomes Eq. (63), which is the expression we use to evolve the positions of the particles in the N-body code. This equation is solved by using the same leapfrog scheme as is included in the standard code. The evolution scheme for time step n is given by Eqs. (64) to (66). In the same way as in the standard code, the second evaluation of the velocities is made in the next time step, to avoid calling both gravitational solvers twice in each time step. The form of the evolution equation, Eq. (63), is the same as for standard gravity, namely acceleration equals a force given by the gradient of a potential. The main difference is that the force term is on average larger than for standard gravity. RAMSES uses adaptive time stepping to prevent particles from moving too far in each time step, so we expect the leapfrog scheme to work just as well for scalar-tensor theories as it does for standard gravity. The time steps will in general be smaller, to account for the stronger force.
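The stepping itself is the standard second-order leapfrog; a minimal kick-drift-kick sketch (ours, with a generic acceleration callback to which the fifth force would simply be added) is:

```python
def leapfrog(x, v, accel, dt, steps):
    """Second-order kick-drift-kick leapfrog (the structure of the
    evolution scheme of Eqs. (64)-(66)); accel(x) would contain the
    Newtonian gravity plus the fifth-force term."""
    for _ in range(steps):
        v = v + 0.5 * dt * accel(x)   # half kick
        x = x + dt * v                # drift
        v = v + 0.5 * dt * accel(x)   # half kick
    return x, v

# Sanity check on a harmonic oscillator (accel = -x): after one full
# period the particle returns to its starting point and the energy
# x^2 + v^2 is conserved to O(dt^2)
x, v = leapfrog(1.0, 0.0, lambda x: -x, dt=1e-3, steps=6283)
print(x, v)
```

The symplectic character of this scheme is what keeps secular energy errors bounded over many orbits, which is why a stronger (scalar-field) force only requires smaller steps, not a different integrator.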
4.2. Hu-Sawicki f(R) model
As an example of the application of the noncanonical equations, we have implemented the Hu-Sawicki f(R) gravity model (Hu & Sawicki 2007). The model is originally defined in the Jordan frame through a modified Einstein-Hilbert term R → R + f(R), where R is the Ricci scalar and f is a free function. The action that defines the model is given by Eq. (67), where ℒ_{m} is the matter Lagrangian, and f is chosen as in Eq. (68), where c_{1}, c_{2}, and n are dimensionless model parameters. These three free parameters can be reduced to only two (n and f_{R0}) by requiring the model to yield dark energy (here in the form of an effective cosmological constant), which requires Eq. (69). Instead of using c_{1} (or c_{2}) as our second free parameter, it is convenient to use f_{R0} (Eq. (70)), which is related to the range of the fifth force in the cosmological background today via Eq. (71). The f(R) models can be transformed into a scalar-tensor theory in the form of the action given by Eq. (4) through the Weyl transformation of Eq. (72), where the conformal coupling is given by Eq. (73) (Brax et al. 2008). This equation defines R(φ), which can be used to obtain the potential V(φ) given by Eq. (74). The resulting equation of motion for the scalar field f_{R} in the static limit is Eq. (75), where f_{R0} is the value that corresponds to the minimum for the background density today. Since we work in the Einstein frame, the equation for the metric perturbations is the standard Poisson equation. This is a different (though mathematically equivalent) implementation^{1} from what is done in other codes that have implemented this model (Oyaizu 2008; Li et al. 2012; Puchwein et al. 2013). As noted in Oyaizu (2008), the scalar field equation of motion can be written in a more numerically stable form by making a field redefinition (Eq. (76)). The equation of motion in its noncanonical form is then Eq. (77), with b(u) = e^{u}. The discretization of this equation was implemented as described in Sect. 3.4.
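For reference, the Hu & Sawicki (2007) function and the dark-energy condition can be sketched numerically as follows (a sketch of ours, working in units of the curvature scale m², with x = R/m²; the cosmological parameter values are illustrative only):

```python
def f_hs(x, c1, c2, n):
    """Hu-Sawicki correction f(R) in units of m^2, with x = R/m^2:
    f = -c1 x^n / (c2 x^n + 1)."""
    return -c1 * x**n / (c2 * x**n + 1.0)

def f_R(x, c1, c2, n):
    """Scalar degree of freedom f_R = df/dR (dimensionless),
    obtained by differentiating f_hs with respect to x."""
    return -n * c1 * x**(n - 1) / (c2 * x**n + 1.0)**2

# Dark-energy condition: at high curvature f -> -(c1/c2) m^2, and
# demanding that this acts as -2 Lambda fixes c1/c2 = 6 Omega_L/Omega_m
om, ol = 0.267, 0.733
c2 = 1.0
c1 = 6.0 * ol / om * c2

# High-curvature regime: f approaches the effective cosmological
# constant, and f_R is tiny and negative (chameleon screening)
print(f_hs(1e6, c1, c2, 1))   # -> close to -6*ol/om ~ -16.5
print(f_R(1e6, c1, c2, 1))    # small negative number
```

The shrinking of |f_R| with increasing curvature is the screening at work: in dense environments the extra degree of freedom effectively freezes out.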
The geodesic equation reads as Eq. (78). The evolution scheme used for the geodesics is equivalent to the one used for the symmetron model and in the standard code (Eqs. (64) to (66)).
Fig. 2
Left: comparison between analytical and numerical solutions of the symmetron model for two different redshifts. The density distribution is given by a sphere of constant density. Different colours correspond to different levels of refinement. The thin line is the analytic solution for redshift z = 0 (upper line) and z = 1 (lower line). Right: same for the f(R) solver. See text for details. 
5. Code tests
In this section we show the results of the tests that were performed on our implementation of the scalar field solvers.
5.1. Potential solver
To measure the quality of the new solver, we compared its results with solutions obtained for a sphere of uniform density located in the centre of the box. Reliable solutions to compare with can be obtained by writing the equation in spherical coordinates and solving the resulting 1D equation with standard packages such as Mathematica.
The density corresponds to a sphere of radius R of constant density embedded in a uniform background (Eq. (79)), where δ, defined in Eq. (80), characterises the jump in the density relative to the mean density in the box, R is the radius of the sphere, and B the size of the box. The density is provided to the code through a distribution of particles. The density estimation (CIC) and the refinement criteria are the same as those used for the cosmological simulations we performed. The value of δ chosen for the test is 5000. The radius and box size for the symmetron test were taken to be R = 3 Mpc/h and B = 40 Mpc/h, respectively. For the f(R) test we used R = 5 Mpc/h and B = 100 Mpc/h. For both tests we used 128^{3} particles and a domain grid with 128 nodes per dimension. To test that the treatment of the boundary of the refinement is correct, we included two levels of refinement.
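Setting up the test density of Eq. (79) on a grid is straightforward; a sketch (ours, with the sphere centred in the box and the density expressed in units of the background value):

```python
import numpy as np

def tophat_density(n, box, radius, delta):
    """Uniform sphere of radius `radius` and overdensity `delta` embedded
    in a uniform background, sampled at the cell centres of an n^3 grid
    of side `box`; returned in units of the background density."""
    ax = (np.arange(n) + 0.5) * box / n - 0.5 * box
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    rho = np.ones((n, n, n))
    rho[x**2 + y**2 + z**2 < radius**2] += delta
    return rho

# Parameters of the symmetron test: delta = 5000, R = 3 Mpc/h, B = 40 Mpc/h
rho = tophat_density(128, 40.0, 3.0, 5000.0)
print(rho.max(), rho.min())   # -> 5001.0 1.0
```

In the actual test the density is not placed on the grid directly but realised as a particle distribution and deposited with the same CIC scheme used in the cosmological runs, so that the solver is exercised end to end.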
In the symmetron case, the test was made at redshifts z = 0 and z = 1, and the model parameters were λ_{0} = 1 Mpc/h and a_{SSB} = 0.6. This gives us the possibility of testing our solver in a situation in which the redshift is higher than the symmetry breaking redshift, but the density outside the sphere is low enough for the field to have a non-zero expectation value. The model parameters for the test of the f(R) code are n = 1 and |f_{R0}| = 10^{−6}. Figure 2 shows the result of the tests for the symmetron (left) and f(R) (right) codes. The continuous line is the 1D solution, and the points are the solution on the grid obtained with the new solvers. The different colours depict the different refinement levels. The test was performed using both the serial version and the parallel version running with eight processes. Both versions gave the same results, showing that the parallel version of the solver works properly.
5.2. Time evolution
Fig. 3
Left: relative difference in the symmetron matter power spectrum with respect to ΛCDM for our code (red) and a similar implementation in the code ECOSMOG. Right: same comparison with the code MLAPM. See text for details of the models and simulation parameters employed.
To test the time evolution of the new code, we ran cosmological simulations and compared the final matter power spectrum at redshift z = 0 with results that were obtained with similar codes or taken from the literature. In the symmetron case, we ran simulations with three different codes: our new code, a modification of the code MLAPM (Li & Barrow 2011), and the code ECOSMOG (Li et al. 2012). The initial conditions for the comparison with the code ECOSMOG were the same as were used in the simulations presented in Li et al. (2012) and were constructed using a box of 128 Mpc/h and 256^{3} particles. The symmetron parameters of this particular simulation are (a_{SSB},λ_{0},β) = (0.5, 1 Mpc/h, 1), while the background cosmology is given by (Ω_{m},Ω_{Λ},H_{0}) = (0.267, 0.733, 71.9 km s^{−1} Mpc^{−1}). Both simulations were run using the same random seed for the realisation of the initial density field, which implies that the differences that we find between the runs can only be attributed to differences between the codes and not to cosmic variance. In an effort to isolate differences that could exist in the modified gravity part of the codes from the ΛCDM part, we also ran simulations using standard gravity with both codes, with the same initial conditions and the same background cosmology. The comparison is then made on the difference between the standard and modified gravity runs. Details on the particular implementation of ECOSMOG used for the test can be found in Li et al. (2012). In brief, the code includes a generalised description of scalar fields, of which the symmetron model is a limiting case. The left-hand panel of Fig. 3 shows the relative difference between the power spectra at redshift z = 0 obtained from the symmetron and ΛCDM simulations with both codes. The differences between the two codes are below 0.2% for all k < 10 h/Mpc.
The initial conditions for the comparison with the MLAPM code were generated for a box of 64 Mpc/h and with 128^{3} particles. As in the previous test, the simulations with ISIS and MLAPM were run with exactly the same initial conditions and background cosmology, but in this case we used the symmetron parameters (a_{SSB},λ_{0},β) = (0.33, 1 Mpc/h, 1), which means that the symmetry is broken at earlier times in the background, and so the effects of the fifth force are enhanced. As described above, we also used ΛCDM simulations run with the ISIS and MLAPM codes as reference. The right-hand panel of Fig. 3 shows the outcome of the test. Differences in the treatment of the refinement structure and in the time stepping increase the discrepancy between the two codes, but it remains below 1−2% for all k < 4 h/Mpc.
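Both power spectrum tests boil down to the same operation: estimate P(k) from each snapshot and take the relative difference with respect to the ΛCDM run. A minimal sketch of such a measurement is given below; the function names and the shell binning are our own illustrative choices, not the actual pipeline used for the paper.

```python
import numpy as np

def power_spectrum(delta, boxsize, nbins=32):
    """Spherically averaged power spectrum of a density contrast field
    sampled on a periodic cubic grid of physical side `boxsize`."""
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta) * (boxsize / n) ** 3   # discrete -> continuous FT
    pk3d = np.abs(delta_k) ** 2 / boxsize ** 3          # naive P(k) estimator
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    # Spherical shells between the fundamental mode and the 1D Nyquist frequency
    bins = np.linspace(2.0 * np.pi / boxsize, np.pi * n / boxsize, nbins + 1)
    idx = np.digitize(kmag, bins)
    pk3d = pk3d.ravel()
    pk = np.array([pk3d[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), pk

def relative_difference(pk_mg, pk_lcdm):
    """Fractional deviation of a modified-gravity spectrum from LCDM."""
    return pk_mg / pk_lcdm - 1.0
```

Running both simulations from the same random seed makes the raw cosmic variance cancel in this ratio, which is why identical seeds were used for the symmetron comparison.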
Fig. 4
Relative difference in the f(R) power spectrum with respect to ΛCDM for our code (red) and similar implementations published in Li et al. (2012) and Puchwein et al. (2013). Different sets of curves correspond to different values of f_{R0}. 
To test the time evolution of the f(R) code, we compared the results of our simulations with others taken from the literature. The codes used in the works that we take as reference are ECOSMOG (Li et al. 2012) and Gadget (Puchwein et al. 2013). Both codes include the same f(R) model that we implemented in our code, but their implementation was made in the Jordan frame instead of the Einstein frame that we decided to use. We ran three simulations using f_{R0} = 10^{−6}, 10^{−5}, and 10^{−4}, which go from an almost Newtonian limit to a model with strong effects of the fifth force. Figure 4 shows the power spectrum of the three f(R) simulations relative to a ΛCDM run made for comparison using the same initial conditions. The three simulations were run using different initial seeds for the random number generator used to calculate the initial conditions, and thus the initial phases are different. Cosmic variance should therefore be taken into account when comparing the curves that correspond to each code.^{2} The simulations of the three codes nevertheless agree remarkably well on scales 0.01 h/Mpc ≲ k ≲ 1 h/Mpc.
6. Application: shape of clusters as a test for modified gravity
As an application of our new code, we test the impact that the scalar field fifth force has on the shape of dark matter haloes in groups and clusters of galaxies. Llinares & Mota (2013b, LLM13 from now on) proposed studying this observable by using static calculations of the total gravitational potential. The general result is that the presence of a scalar field increases the ellipticity of the total gravitational force distribution, and hence of the X-ray emitting gas residing in the clusters. Given the observational constraints that exist on the ellipticity of the X-ray component of clusters, LLM13 gave constraints on the model parameters β and λ_{0} (coupling constant and range) for chameleon (Khoury & Weltman 2004) and symmetron models.
A back-of-the-envelope calculation can help in understanding this result. The calculation consists of applying a perturbation to a spherical object and analysing the shape of the associated gravitational potential. One could, for instance, propose the following perturbed density-potential pair (we focus on the region close to θ = 0): where ρ_{0} and φ_{0} are the pair associated with the unperturbed spherical system, and ϵ_{ρ} and ϵ_{φ} are small numbers. By substituting this pair into Poisson’s equation (81) and keeping only first-order terms in the perturbations, one obtains (82), which in the limit θ → 0 (i.e. maximising the perturbation) gives ϵ_{φ} → 0. In other words, in the Newtonian case, the gravitational potential is insensitive to perturbations in the shape of the density up to first order, which is a well-known result (see for instance results from simulations in Lau et al. 2011).
By repeating the same analysis, for instance with the equation for the chameleon field (Khoury & Weltman 2004), (83) we obtain (84). If we now take the limit β → ∞, we find ϵ_{φ} → ϵ_{ρ}, so in the strongly coupled limit, the modified model is sensitive to changes in the shape of the density up to first order. An alternative way of looking at the same problem is the following. In the chameleon case, when β or ρ_{0} is large enough, the field is forced to stay close to the minimum of its effective potential, φ_{min} = (βρ/M^{5}M_{Pl})^{−1/2}, which is uniquely determined by the local matter density. This implies that in the limit βρ → ∞, the isocontours of the scalar field are completely aligned with the isocontours of the matter density, no matter how this density is distributed.
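The location of the chameleon minimum quoted above can be verified symbolically. The sketch below assumes the simplest inverse power-law potential V(φ) = M^{5}/φ, so the symbols and normalisation are illustrative rather than taken from the paper.

```python
import sympy as sp

phi, M, Mpl, beta, rho = sp.symbols("phi M M_pl beta rho", positive=True)

# Chameleon effective potential: runaway part plus the matter coupling
V_eff = M**5 / phi + beta * rho * phi / Mpl

# The field settles at the minimum of V_eff, fixed by the local density
phi_min = sp.solve(sp.diff(V_eff, phi), phi)[0]
print(phi_min)   # equivalent to (beta*rho/(M**5*M_pl))**(-1/2)
```

One can also confirm with `sp.limit(phi_min, beta, sp.oo)` that the minimum is pushed towards zero at large coupling, consistent with the field tracking the density ever more tightly.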
By performing numerical calculations for a fixed density distribution, LLM13 proved that this result can also be extended to symmetron models. However, their result is incomplete in the sense that time evolution was not taken into account: the dark matter distribution was fixed, and the back reaction of the modified potential on the shape of the dark matter halo itself was neglected. Here we take the next natural step towards a fully realistic analysis by including the time evolution of the dark matter component. Since baryonic physics is not included in the present version of the ISIS code, for the moment we can only give constraints on the shape of the dark matter haloes. We remind the reader that the shape of the baryonic component is still not fully understood within the context of standard gravity and that the so-called overcooling problem is still not completely settled (see e.g. Lau et al. 2011). The impact of modified gravity on the shape of the gas distribution is left for future work.
To properly interpret the results presented below, it is important to understand the relation between the models studied in this paper and those presented in LLM13. The symmetron model employed is the same in both papers. However, here we substituted the chameleon studied in LLM13 with the Hu-Sawicki f(R) model (Hu & Sawicki 2007) described in Sect. 4.2. Both models obey an equation of the following form in a non-cosmological setting: (85) The mapping between the Hu-Sawicki f(R) model we simulated and the chameleon model is given by n = −1/2 and β = 1/√6. The constant M can be written in terms of the range of the field in the cosmological background today via (86)
Model parameters for the symmetron (top) and f(R) (bottom) runs.
6.1. Simulations
The data used for the analysis were obtained from a set of simulations that we ran with both standard gravity and the two models presented in Sect. 4. Table 1 summarises the model parameters. The initial conditions were generated assuming that both scalar field models give fully screened solutions at high redshift, so that the Zeldovich approximation is also valid in the modified models. In practice, we generated only one set of initial conditions with the package Cosmics (Bertschinger 1995). We used a box size of 256 Mpc/h and 512^{3} particles. The background cosmology is also the same for all the simulations and is defined as a flat ΛCDM model given by (Ω_{m}, Ω_{Λ}, H_{0}) = (0.267, 0.733, 71.9 km s^{−1} Mpc^{−1}).
6.2. Analysis
The aim of the analysis is to measure the shape of the dark matter haloes present in the simulations and to compare ΛCDM haloes with those that formed under the influence of the modified gravity models. The haloes were identified using the code Rockstar (Behroozi et al. 2013). Since gravitational lensing observations measure the distribution of the total mass of the clusters, we did the analysis using all the mass included inside the virial radius. We used a lower cutoff of 10^{13} M_{⊙}, which gives haloes with more than ~10^{3} particles. The upper cutoff in mass is given naturally by the size of the box. In other words, we present results that are applicable to groups and clusters of galaxies. All our analysis is performed at redshift z = 0.
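The quoted particle count can be checked from the simulation parameters alone: the particle mass follows from the mean matter density of the box. The helper below is our own back-of-the-envelope check (masses in M_{⊙}/h):

```python
# Critical density today in units of h^2 Msun / Mpc^3
RHO_CRIT = 2.775e11

def particle_mass(boxsize_mpch, nparticles_1d, omega_m):
    """Mass of one N-body particle for a box at the mean matter density."""
    return RHO_CRIT * omega_m * boxsize_mpch**3 / nparticles_1d**3

mp = particle_mass(256.0, 512, 0.267)
print(f"particle mass: {mp:.2e} Msun/h")                  # about 9.3e9 Msun/h
print(f"particles per 1e13 Msun/h halo: {1e13 / mp:.0f}") # about 1.1e3
```

With these numbers, a 10^{13} M_{⊙} halo indeed contains on the order of 10^{3} particles, consistent with the lower cutoff adopted in the text.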
Fig. 5
Contours of the distribution of ratios between semi-axes for the symmetron (left) and f(R) (right) simulations. Both panels also include the ΛCDM simulation for comparison. The points are the expected values of the distributions. The error bars of these are comparable to the size of the points and are not included in the plot. 
To avoid contaminating the results with unrelaxed clusters, we defined the virialisation state by taking into account the virial theorem, (87) where (88) is the total kinetic energy, (89) is the total potential energy, and E_{s} is a surface pressure term that accounts for the effects of the environment on the dynamical state of the haloes (e.g. Shaw et al. 2006). It is customary to define the virial ratio (90). Virialised objects are defined as having d^{2}I/dt^{2} ≈ 0, and thus η ~ 0. However, one must consider that the value of η is not exactly zero for virialised objects, but oscillates around that value. We define the virialised sample as composed of those haloes for which |η| is below a certain threshold, which we choose as 0.2. The definition given by Eq. (89) is only a lower bound on the true potential energy, which is likely to be higher owing to the environment in which each dark matter halo is immersed. Furthermore, the condition η < 0.2 that we use to define the virialised sample is consistent with the literature (e.g. Shaw et al. 2006), but ultimately arbitrary. In this sense, the criterion that we employ to define the relaxed sample must be taken as a rough diagnostic rather than a definitive choice.
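The quantities entering Eqs. (87)-(90) can be computed directly from the particle data. The sketch below uses one common convention, η = 2T/|W| − 1, and omits the surface pressure term E_{s} for brevity; it is a direct O(N²) sum, adequate for a single halo but not for a full catalogue.

```python
import numpy as np

G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def virial_ratio(pos, vel, mass):
    """eta = 2T/|W| - 1 for a particle set (surface term E_s set to zero).
    pos in Mpc, vel in km/s, mass in Msun."""
    T = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))   # total kinetic energy
    W = 0.0                                           # direct-sum potential energy
    for i in range(len(mass)):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        W -= G * mass[i] * np.sum(mass[i + 1:] / r)
    return 2.0 * T / abs(W) - 1.0

def is_relaxed(eta, threshold=0.2):
    """Relaxation cut used to build the virialised sample."""
    return abs(eta) < threshold
```

For a two-body system on a circular orbit, 2T = |W| and the function returns η = 0, as expected for a virialised configuration.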
In the modified gravity case, the definition of the potential energy must be modified to take the energy of the scalar field into account. In this case, the superposition principle is no longer valid, so that the virialisation state cannot be calculated using the previous expressions. A detailed study of this issue applied to the same data presented here is given in a companion paper (Grönke et al. 2014). In the present work, we use a simplified analysis that consists of calculating the distribution of the η parameter using standard gravity and shifting it in such a way that its maximum is centred on zero. Because the models that we are treating include a mass-dependent screening, we calculated these distributions not for the complete sample, but for subsamples defined by four uniform logarithmic mass bins ranging from 10^{13} to 10^{15} M_{⊙}.
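The recentring step described here can be sketched as follows; the histogram-peak estimate of the maximum and the bin counts are our own illustrative choices.

```python
import numpy as np

def recenter_eta(eta, nbins=50):
    """Shift an eta sample so that its histogram peak sits at zero."""
    counts, edges = np.histogram(eta, bins=nbins)
    imax = np.argmax(counts)
    peak = 0.5 * (edges[imax] + edges[imax + 1])
    return eta - peak

def mass_binned_recenter(eta, mass, mmin=1e13, mmax=1e15, nbins=4):
    """Apply the shift separately in uniform logarithmic mass bins."""
    edges = np.logspace(np.log10(mmin), np.log10(mmax), nbins + 1)
    out = np.asarray(eta, dtype=float).copy()
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (mass >= lo) & (mass < hi)
        if sel.any():
            out[sel] = recenter_eta(eta[sel])
    return out
```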
Once we have specified the sample of haloes, we then measure the shape of their density distribution, which can be defined starting from (91). The isodensity surfaces can be approximated by ellipsoids described by the radial ellipsoidal coordinate (92) with axial ratios (93) where M_{xx}, M_{yy}, and M_{zz} are the eigenvalues of M_{ij}. The integral in Eq. (91) is computed by summing over the particles, (94) up to the virial radius R_{200}, where m_{l} is the mass of particle l, which is the same for all the particles in our simulations. The region over which the integral is calculated for each halo is a triaxial ellipsoid that was determined iteratively as in Dubinski & Carlberg (1991).
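A minimal implementation of this iterative shape measurement might look as follows. This is our own sketch of the Dubinski & Carlberg (1991) scheme, using the unweighted mass tensor and halo-centred coordinates; the paper's actual selection radius and convergence criterion may differ.

```python
import numpy as np

def axis_ratios(pos, mass, niter=10):
    """Axial ratios q = b/a, s = c/a from the mass tensor
    M_ij = sum_l m_l x_i x_j, recomputed iteratively inside an ellipsoid
    whose shape follows the previous estimate.
    `pos` holds halo-centred particle coordinates."""
    q, s = 1.0, 1.0
    rmax = np.max(np.linalg.norm(pos, axis=1))   # ellipsoidal selection radius
    evecs = np.eye(3)                            # principal axes (columns)
    for _ in range(niter):
        # Ellipsoidal radius of each particle in the current principal frame
        x = pos @ evecs
        r_ell = np.sqrt(x[:, 0]**2 + (x[:, 1] / q)**2 + (x[:, 2] / s)**2)
        sel = r_ell < rmax
        m, xs = mass[sel], pos[sel]
        M = np.einsum("l,li,lj->ij", m, xs, xs) / m.sum()
        evals, evecs = np.linalg.eigh(M)         # eigenvalues in ascending order
        a2, b2, c2 = evals[2], evals[1], evals[0]
        evecs = evecs[:, ::-1]                   # reorder: major, intermediate, minor
        q, s = np.sqrt(b2 / a2), np.sqrt(c2 / a2)
    return q, s
```

For a triaxial Gaussian particle distribution with axis standard deviations (1, 0.7, 0.4), the routine recovers q ≈ 0.7 and s ≈ 0.4.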
6.3. Results
Figure 5 shows the distributions of the ratios between semi-axes q = b/a and s = c/a for the whole sample found in each simulation, together with the expectation values of the distributions. The errors on these quantities are comparable to the size of the dots and are thus not shown. The results from the symmetron simulations are shown in the left-hand panel and can be compared directly with those presented in LLM13. There, the shape of the isosurfaces of the scalar field was shown as a function of z_{SSB}, λ_{0}, and β. The general result is that these isosurfaces move away from the isopotentials when either z_{SSB} or λ_{0} decreases. The dependence on β is the opposite: the higher the coupling, the more elliptical these isosurfaces are. The simulations presented here show the expected dependence on β: the Symm C simulation has lower ratios than the Symm A and ΛCDM simulations. However, in the case of z_{SSB}, we find a behaviour that is opposite to what we expected: the higher z_{SSB}, the lower the ratios found in the simulation, and thus the more elongated the haloes. The result is a direct consequence of taking the time evolution of the haloes into account. While according to LLM13 a low value of z_{SSB} increases the ellipticity of the isosurfaces of the scalar field, the objects in these models are influenced by the fifth force only during a short time. When considering the time evolution, one takes into account not only the intrinsic distribution of the scalar field, but also its history and cumulative effects, which turn the results upside down.
Fig. 6
Axial ratios b/a (upper group of lines) and c/a (lower group of lines) obtained from the symmetron (left) and f(R) (right) simulations as a function of halo mass. Both panels also include the results from the ΛCDM simulation for comparison. 
In the f(R) case (right-hand panel of Fig. 5), there is no clear trend of the shapes with respect to the free parameter f_{R0}. The expectation value of the distributions moves slightly towards more elliptical objects in the fofr6 runs and towards more spherical objects in the other two. An analysis of the mass dependence of these distributions will help in interpreting these results.
The simulations give us the chance to extend the results presented in LLM13 by taking the mass dependence of the signal into account. We show this dependence in Fig. 6. We find not only that the changes in shape do not occur equally at all masses, but also that haloes of different mass are affected in different ways. In addition, this dependence is not the same for all the models: each model has a different behaviour. In the symmetron case, we find, within the region of the parameter space tested with these simulations, that increasing z_{SSB} while fixing λ_{0} and β (see simulations Symm A, B, and D) increases the ellipticity almost evenly over the whole mass range studied. The change in the shapes when modifying β (see simulations Symm A and C) occurs only at low masses.
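The mass dependence shown in Fig. 6 amounts to averaging the axial ratios in uniform logarithmic mass bins. A short sketch of that reduction (the bin edges and the plain mean are our own choices, not necessarily those of the paper):

```python
import numpy as np

def binned_mean_ratios(mass, q, s, nbins=4, mmin=1e13, mmax=1e15):
    """Mean axial ratios per uniform logarithmic mass bin."""
    edges = np.logspace(np.log10(mmin), np.log10(mmax), nbins + 1)
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    qm = np.full(nbins, np.nan)
    sm = np.full(nbins, np.nan)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (mass >= lo) & (mass < hi)
        if sel.any():
            qm[i], sm[i] = q[sel].mean(), s[sel].mean()
    return centres, qm, sm
```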
The dependence of the shapes on mass found in the f(R) simulations is shown in the right-hand panel of Fig. 6. The fofr6 model (for which we found an increase in the ellipticity in the previous analysis) shows that the increase in the total distribution comes only from the low-mass haloes. The high-mass haloes are mostly screened, so there is no fifth force capable of changing the shapes. In the low-mass haloes, even if the force is screened in their centres, it is still active in the outer regions, so it is possible to find differences with respect to ΛCDM (see Grönke et al. 2014, for a study of the force distribution in the haloes of these simulations). In the fofr5 case, we find that the near absence of deviations from ΛCDM shown in Fig. 5 is actually produced by a compensation between a large deviation towards more elliptical haloes at the high-mass end and a shift towards more spherical haloes at the low-mass end. While the deviation at the high-mass end is much larger, there are many more low-mass haloes, so their combined effect is enough to shift the expectation of the whole sample towards more spherical haloes. Finally, the fofr4 case seems to contradict the results presented in LLM13. The haloes in this model are actually slightly more spherical than in the ΛCDM case in the low-mass half of the sample. By extrapolating results from LLM13 towards a model with a range λ_{0} = 23.7 and n = −1/2 (according to the notation in LLM13), one could expect the shapes to be similar to ΛCDM. To explain why fofr4 haloes are more spherical, one needs more information than just the distribution of forces in a static case. A possible mechanism for making the axial ratios even closer to one than in standard gravity could be based on the fact that haloes are not formed by spherical collapse, but in a hierarchical way. Taking the formation history of the haloes into account could help in understanding this result. 
For instance, N-body simulations have shown that modified gravity models increase the collisional velocity of dark matter haloes (Llinares et al. 2009; Lee & Baldi 2012). The fofr4 model is one in which the fifth force acquires extreme values, and thus the collisional velocity in major mergers should be greater than in ΛCDM, which should change the way in which haloes approach equilibrium, making them more spherical.
7. Conclusions and discussion
Several extensions of the standard cosmological model include new degrees of freedom in the underlying gravitational theory, which can take, for instance, the form of scalar, vector, or tensor fields. In general, these new degrees of freedom interact with matter and, in particular, with the standard model fields. Since deviations from Einstein gravity have neither been observed nor measured in the solar system up to now, these interacting scalar field theories must include screening mechanisms intended to hide the scalar field below observational limits within the solar system. Such a requirement can be relaxed on galactic scales and above, where the data still leave freedom to find possible signatures of their presence.
To make predictions that can be compared with observations on galactic and cluster scales (i.e. in the nonlinear regime of cosmological evolution), cosmological N-body simulations are needed, for which codes must be developed that can solve for the scalar field. In this work we presented a new N-body implementation of scalar-tensor theories of gravity that include screening mechanisms. The code is based on the existing code RAMSES, to which we have added a nonlinear multigrid solver that can treat a large class of scalar-tensor theories of modified gravity. We presented details of the implementation and the tests that we performed on it.
As an application of the new code, we studied the influence of two particular modified gravity theories, the symmetron and a chameleon f(R) gravity, on the shape of group- and cluster-sized dark matter haloes. Using static calculations, Llinares & Mota (2013b) showed that the shape of the scalar field follows the density distribution more closely than the gravitational potential does, which should increase the ellipticity of the clusters, hopefully beyond observational limits. If this is the case, the ellipticity of clusters provides a way to constrain the parameter space of scalar-tensor theories. Time evolution was a missing ingredient in the Llinares & Mota (2013b) calculations. Here we extended this work by including the time evolution of the dark matter component and found that the ellipticity of the clusters in the scalar-tensor simulations is indeed higher, with the exception of the most extreme f(R) model studied. The predictions that we obtained for the f(R) model correspond to a different region of the parameter space than that studied in Llinares & Mota (2013b), so a direct comparison is not possible. The main difference that we found with respect to the Llinares & Mota (2013b) expectations is that the f(R) haloes can actually be slightly more spherical than ΛCDM ones when the fifth force is very long ranged. Further study of the formation history of the haloes is needed to fully understand this effect.
In the symmetron case we found results that are consistent with previous analytic estimates in the literature. Furthermore, we studied the mass dependence of these effects and found that different regions of the parameter space of this model give different dependencies. This could help to distinguish between models once the influence of gas dynamics has been understood (not only in these models, but also within standard gravity) and accurate predictions can be made.
It is important to note that our results are based on a different quantity than that studied in Llinares & Mota (2013b). There, the shape of the X-ray component was studied by assuming static dark matter haloes and hydrostatic equilibrium, while here we studied the influence of the cosmological evolution on the shape of the underlying dark matter haloes. The extension of these predictions to the shape of the gas component is beyond the scope of this work and will be performed elsewhere. The impact of these results on strong lensing statistics is also left for future work. What we can say at present is that the increase in ellipticity will increase the probability of finding strong lenses and thus act in the right direction to alleviate the problem that the ΛCDM paradigm has with lensing statistics (see Meneghetti et al. 2013, for a review of this topic).
By introducing the (Jordan-frame) potential Φ_{J} = Φ_{N} − f_{R}/2, one can transform the equations into those of Oyaizu (2008) and Li et al. (2012). Poisson’s equation for Φ_{J} follows from simply adding Poisson’s equation for Φ_{N} and the Klein-Gordon equation for − f_{R}/2.
Acknowledgments
C.L., D.F.M., and H.A.W. thank the Research Council of Norway FRINAT grant 197251/V30. D.F.M. is also partially supported by projects CERN/FP/123618/2011 and CERN/FP/123615/2011. The simulations were performed on the NOTUR clusters HEXAGON and STALLO, the computing facilities at the Universities of Bergen and Tromsø, Norway. We thank Mikjel Thorsrud and Iain Brown for useful discussions.
References
 Amendola, L., Appleby, S., Bacon, D., et al. 2013, Liv. Rev. Rel., 16, 6
 Baldi, M. 2012a, Physics of the Dark Universe, 1, 162
 Baldi, M. 2012b, MNRAS, 422, 1028
 Baldi, M., Pettorino, V., Robbers, G., & Springel, V. 2010, MNRAS, 403, 1684
 Behroozi, P. S., Wechsler, R. H., & Wu, H.-Y. 2013, ApJ, 762, 109
 Bertschinger, E. 1995 [arXiv:astro-ph/9506070]
 Brandt, A. 1977, Math. Comp., 31, 333
 Brax, P., van de Bruck, C., Davis, A.-C., & Shaw, D. J. 2008, Phys. Rev. D, 78, 104021
 Brax, P., Davis, A.-C., Li, B., Winther, H. A., & Zhao, G.-B. 2012, JCAP, 10, 2
 Brax, P., Davis, A.-C., Li, B., Winther, H. A., & Zhao, G.-B. 2013, JCAP, 4, 29
 Clifton, T., Ferreira, P. G., Padilla, A., & Skordis, C. 2012, Phys. Rep., 513, 1
 Davis, A.-C., Li, B., Mota, D. F., & Winther, H. A. 2012, ApJ, 748, 61
 Dubinski, J., & Carlberg, R. G. 1991, ApJ, 378, 496
 Eriksen, H. K., Hansen, F. K., Banday, A. J., Górski, K. M., & Lilje, P. B. 2004, ApJ, 605, 14
 Gibou, F., Fedkiw, R., Cheng, L.-T., & Kang, M. 2001, J. Comput. Phys., 205
 Grönke, M. B., Llinares, C., & Mota, D. F. 2014, A&A, 562, A9
 Guillet, T., & Teyssier, R. 2011, J. Comput. Phys., 230, 4756
 Hansen, F. K., Banday, A. J., Górski, K. M., Eriksen, H. K., & Lilje, P. B. 2009, ApJ, 704, 1448
 Hinterbichler, K., & Khoury, J. 2010, Phys. Rev. Lett., 104, 231301
 Hu, W., & Sawicki, I. 2007, Phys. Rev. D, 76, 064004
 Khoury, J., & Weltman, A. 2004, Phys. Rev. Lett., 93, 171104
 Knebe, A., Green, A., & Binney, J. 2001, MNRAS, 325, 845
 Lau, E. T., Nagai, D., Kravtsov, A. V., & Zentner, A. R. 2011, ApJ, 734, 93
 Laureijs, R., Amiaux, J., Arduini, S., et al. 2011 [arXiv:1110.3193]
 Lee, J., & Baldi, M. 2012, ApJ, 747, 45
 Li, B., & Barrow, J. D. 2011, Phys. Rev. D, 83, 024007
 Li, B., Mota, D. F., & Barrow, J. D. 2011a, ApJ, 728, 109
 Li, B., Mota, D. F., & Barrow, J. D. 2011b, ApJ, 728, 108
 Li, B., Zhao, G.-B., Teyssier, R., & Koyama, K. 2012, JCAP, 1, 51
 Li, B., Zhao, G.-B., & Koyama, K. 2013, JCAP, 5, 23
 Llinares, C. 2011, Ph.D. Thesis, University of Groningen, The Netherlands
 Llinares, C., & Mota, D. F. 2013a, Phys. Rev. Lett., 110, 161101
 Llinares, C., & Mota, D. F. 2013b, Phys. Rev. Lett., 110, 151104
 Llinares, C., Knebe, A., & Zhao, H. 2008, MNRAS, 391, 1778
 Llinares, C., Zhao, H. S., & Knebe, A. 2009, ApJ, 695, L145
 LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009 [arXiv:0912.0201]
 Martel, H., & Shapiro, P. R. 1998, MNRAS, 297, 467
 Meneghetti, M., Bartelmann, M., Dahle, H., & Limousin, M. 2013, Space Sci. Rev.
 Oyaizu, H. 2008, Phys. Rev. D, 78, 123523
 Planck Collaboration XVI. 2014, in press [arXiv:1303.5076]
 Planck Collaboration XX. 2014, in press [arXiv:1303.5080]
 Puchwein, E., Baldi, M., & Springel, V. 2013, MNRAS, 436, 348
 Schmidt, F. 2009, Phys. Rev. D, 80, 043001
 Shaw, L. D., Weller, J., Ostriker, J. P., & Bode, P. 2006, ApJ, 646, 815
 Teyssier, R. 2002, A&A, 385, 337
 Trottenberg, U., Oosterlee, C., & Schüller, A. 2000, Multigrid (Academic Press)
 Wesseling, P. 1992, An Introduction to Multigrid Methods, 2nd ed. (John Wiley & Sons)
 Zhao, G.-B., Li, B., & Koyama, K. 2011, Phys. Rev. D, 83, 044007
Fig. 1
A five-point stencil (2D) located close to the boundary. The central cell has two neighbours that are not masked (m > 0) and two cells that are masked (m < 0). One of these cells is completely masked (m = −1), meaning that it does not exist in memory, and its field value must be interpolated from the coarse grid. For the cells that are only partly masked (−1 < m < 0), the field value is determined from Eqs. (40) and (41). The cells with m > 0 are treated as normal cells even though the boundary might cross some part of the cell. 
Fig. 2
Left: comparison between analytical and numerical solutions of the symmetron model for two different redshifts. The density distribution is given by a sphere of constant density. Different colours correspond to different levels of refinement. The thin line is the analytic solution for redshift z = 0 (upper line) and z = 1 (lower line). Right: same for the f(R) solver. See text for details. 