A&A, Volume 662, June 2022



Article Number  A50  
Number of page(s)  11  
Section  Numerical methods and codes  
DOI  https://doi.org/10.1051/0004-6361/202141981  
Published online  15 June 2022 
ICARUS, a new inner heliospheric model with a flexible grid
^{1} Centre for mathematical Plasma Astrophysics, KU Leuven, Celestijnenlaan 200B, 3001 Leuven, Belgium
e-mail: christine.verbeke@oma.be; cgjmverbeke@gmail.com
^{2} Solar–Terrestrial Centre of Excellence – SIDC, Royal Observatory of Belgium, 1180 Brussels, Belgium
^{3} Institute of Physics, University of Maria Curie-Skłodowska, Pl. M. Curie-Skłodowska 5, 20-031 Lublin, Poland
Received: 6 August 2021 / Accepted: 5 January 2022
Context. Simulating the propagation and predicting the arrival time of coronal mass ejections (CMEs) in the inner heliosphere with a full three-dimensional (3D) magnetohydrodynamic (MHD) propagation model requires a significant amount of computational time. For CME forecasting purposes, multiple runs may be required for different reasons, such as ensemble modeling (to quantify the uncertainty on the input parameters) and error propagation. Moreover, higher-resolution runs, which require even more CPU time, may be necessary, for example for the prediction of solar energetic particle acceleration and transport, or in the framework of more in-depth studies of CME erosion and/or deformation during its evolution.
Aims. In this paper we present ICARUS, a new inner heliospheric model for the simulation of a steady background solar wind and the propagation and evolution of superposed CMEs. This novel model has been implemented within the MPI-AMRVAC framework, which enables the use of stretched grids and solution adaptive mesh refinement (AMR). The usefulness and efficiency (speed-up) of these advanced features are explored. In particular, we model a typical solar wind with ICARUS and then launch a simple cone CME and follow its evolution. We focus on the effect of radial grid stretching and two specific methods or criteria to trigger solution AMR on this typical simulation run.
Methods. For the solar background wind simulation run, we limit the mesh refinement to the area(s) of interest, in this case a corotating interaction region (CIR). For the CME evolution run, on the other hand, we apply AMR where the CME is located by means of a tracing function. As such, the grid is coarsened again after the CME has passed.
Results. The implemented AMR is flexible and only refines the mesh in a particular sector of the computational domain, for example around the Earth or a single CIR, and/or for a particular feature such as CIR or CME shocks. Radial grid stretching alone yields speed-ups of a factor of four and more, depending on the resolution. Combined with solution adaptive mesh refinement, the speed-ups can be much larger, depending on the complexity of the simulation (e.g., the number of CIRs in the background wind and the number of CMEs) and on the chosen AMR criteria, thresholds, and number of refinement levels.
Conclusions. The ICARUS model implemented in the MPI-AMRVAC framework is a new inner heliospheric 3D MHD model that uses grid stretching as well as AMR techniques. The flexibility in the grid and its resolution allows an optimization of the computational time required for CME propagation simulations for both scientific and forecasting purposes.
Key words: magnetohydrodynamics (MHD) / Sun: coronal mass ejections (CMEs) / Sun: heliosphere / solar wind
© C. Verbeke et al. 2022
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
Coronal mass ejections (CMEs) are violent releases of plasma and magnetic field from the lower solar corona into the heliosphere. When launched toward Earth they typically reach us within one to five days. While they propagate in the solar wind, CMEs interact with the solar wind plasma and the interplanetary magnetic field, which causes deformation, rotation, and deflection of the CMEs. This makes it challenging to predict their arrival time at Earth and, in particular, the severity of their impact.
Over the past decades many different models have been developed to simulate the propagation of CMEs in the inner heliosphere and their arrival at Earth and at other locations of interest (e.g., other planets and satellites). A number of these models focus on describing the full three-dimensional (3D) solar wind and the CMEs propagating and evolving in it. The heliospheric models ENLIL (Odstrčil et al. 2004), EUHFORIA (EUropean Heliospheric FORecasting Information Asset, Pomoell & Poedts 2018), and SUSANOO-CME (Space-weather-forecast-Usable System Anchored by Numerical Operations and Observations – Coronal Mass Ejection model, Shiota & Kataoka 2016) solve the time-dependent 3D magnetohydrodynamic (MHD) equations, each with different modeling techniques, starting at an inner boundary of about 21.5 R_{⊙} where the solar wind is assumed to be super-Alfvénic. In each of these models the grid resolution can be chosen initially, but it remains fixed throughout a simulation. Other models, such as the AWSoM-R model (Sokolov et al. 2013; van der Holst et al. 2014; Jin et al. 2017), start the modeling much closer to the solar surface and use adaptive grids, applying radial grid stretching closer to the Sun. This enables the incorporation of more detailed physics, but the total computation time is thus much larger. Currently, the less sophisticated but faster models ENLIL, EUHFORIA, and SUSANOO-CME are used in operational space weather services to forecast the solar wind and the propagation of CMEs in the inner heliosphere and their corresponding possible arrival at Earth.
When forecasting CMEs and the severity of their impacts at Earth, different model configurations may be required to provide an optimal forecast, such as higher spatial resolution to better capture corotating interaction regions (CIRs) in the heliospheric wind or CME shocks. For the purpose of daily solar wind modeling, the resolution of the corresponding MHD simulation influences the results, especially with regard to the arrival of high-speed streams (Hinterreiter et al. 2019). Accurate solar wind modeling is also closely related to the forecasting of solar energetic particle (SEP) events. When using SEP models, such as PARADISE (Wijsen et al. 2019; Wijsen 2020), a higher spatial resolution than the standard resolution(s) used in current daily operational forecasting is preferred, as it allows all shocks related to CIRs and CMEs to be fully captured. Ensemble modeling (i.e., running a large set of simulation runs with different input parameters in order to provide a probabilistic forecast, as demonstrated by Mays et al. 2015) also requires a substantially longer total computation time. From a scientific point of view, higher spatial resolution enables the CME model to better quantify CME erosion (due to magnetic reconnection) and deformation during its propagation through the inner heliosphere. Hence, higher spatial resolution is crucial for several applications. Different techniques can be used to speed up simulations and/or to allow for higher spatial resolution where needed for a similar wall-clock time, such as moving from a fixed equidistant grid to a stretched grid and/or solution adaptive mesh refinement (AMR).
We implemented a new heliospheric model called ICARUS^{1} into the framework of the Message Passing Interface – Adaptive Mesh Refinement – Versatile Advection Code (MPI-AMRVAC; see Xia et al. 2018). Using a fixed grid, the functionality of the novel model is very similar to the ENLIL or EUHFORIA models. However, the MPI-AMRVAC framework enables us to explore the possibilities of using a (gradually) radially stretched grid and AMR strategies to further speed up the simulations. In this paper we focus particularly on how grid stretching and solution AMR techniques can influence the performance and speed-up of 3D MHD solar wind and CME modeling, and we try to quantify this for two typical simulation setups. In future work we will focus on a more thorough and detailed validation of the novel ICARUS model by modeling a real CME event and by applying different combinations of grid stretching and solution AMR strategies.
The paper is structured as follows. In Sect. 2 we briefly describe the implemented inner heliospheric model ICARUS, and discuss the MPI-AMRVAC framework and the boundary conditions for both the implementation of the solar wind and the superposition of the CME model on it. In Sect. 3 the details of the performed runs are presented, including information about the grid stretching technique and the block-adaptive solution AMR strategy. In Sects. 4 and 5 we present and discuss the results of the simulation runs, including the speed-up and performance of the simulations for both grid stretching and the applied solution AMR. Finally, in Sect. 6 we summarize the main findings and provide an outlook to future studies.
2 Designing inner heliospheric simulations of the solar wind and CMEs
2.1 The MPIAMRVAC framework
To model the inner heliosphere, we rely on the parallel adaptive mesh refinement framework MPI-AMRVAC^{2}, further described in Keppens et al. (2012), Porth et al. (2014), and Xia et al. (2018). The MPI-AMRVAC framework consists of a parallelized finite-volume code that is able to solve a wide variety of multidimensional sets of partial differential equations in different geometries. It contains several numerical schemes, focusing on (near-)conservation laws and shock-capturing methods, and has several physics modules, including the ideal MHD module that is used in this paper. We chose to call upon a shock-capturing second-order (in time and space) total variation diminishing Lax-Friedrichs (TVDLF) scheme (Toth & Odstrcil 1996) in combination with a Woodward limiter (Woodward & Colella 1984). To diffuse any occasional local nonzero ∇ · B errors, we used the parabolic cleaning method introduced in Eq. (8) by Keppens et al. (2003). We opted to solve the ideal MHD equations in a frame that is corotating with the Sun, using a spherical geometry (and grid). As a result, the final relaxed solar wind state is invariant in time, which subsequently enables us to refine fixed regions of the steady solar wind that are anchored in the corotating frame. The abovementioned combination of numerical solution scheme, limiter, and coordinate frame permits us to solve the ideal MHD equations in an accurate and robust way. The considered MHD equations, including gravity, are given by
∂ρ/∂t + ∇ · (ρv) = 0, (1)
∂(ρv)/∂t + ∇ · (ρvv + p_{tot}I − BB) = ρg + F, (2)
∂e/∂t + ∇ · ((e + p_{tot})v − (v · B)B) = ρg · v + F · v, (3)
∂B/∂t + ∇ · (vB − Bv) = 0, (4)
e = p/(γ − 1) + ρv²/2 + B²/2, with p_{tot} = p + B²/2, (5)
where I denotes the identity tensor and p the thermal pressure,
and ρ, v, p_{tot}, B, and e are the mass density, the velocity vector field, the total pressure of the plasma (i.e., the sum of the thermal pressure and the magnetic pressure), the magnetic vector field, and the total energy density, respectively. We refer to Porth et al. (2014) for more details about the implementation into the MPI-AMRVAC framework. We consider the magnetic permeability µ_{0} to be 1. The gravitational acceleration g is given by g = −(G M_{⊙}/r²) r̂, with G the gravitational constant and M_{⊙} the mass of the Sun. The adiabatic index γ is chosen to be 1.5, similar to Pomoell & Poedts (2018) and Odstrčil et al. (2004). This reduced index models additional heating in the simplest way and yields an acceleration of the solar wind throughout the inner heliosphere (see, e.g., Pomoell & Vainio 2012). Finally, the source term F is given by F = ρ(Ω × r) × Ω + 2ρ(v × Ω), corresponding to the centrifugal and Coriolis forces, respectively. Here, Ω is the rotation rate of the Sun at the equator (2.97 × 10^{−6} rad s^{−1}).
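For illustration, the frame forces entering F and the gravitational acceleration g can be sketched as below. This is a minimal Python sketch, not the MPI-AMRVAC implementation; the parcel values (density, speed, position) are illustrative assumptions.

```python
import numpy as np

OMEGA = 2.97e-6   # solar equatorial rotation rate [rad/s], as in the text
G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30  # solar mass [kg]

def gravity(r_vec):
    """g = -(G M_sun / |r|^2) r_hat, pointing toward the Sun."""
    r = np.linalg.norm(r_vec)
    return -G * M_SUN / r**3 * r_vec

def rotational_source(rho, v_vec, r_vec, omega_vec=None):
    """F = rho (Omega x r) x Omega + 2 rho (v x Omega):
    centrifugal plus Coriolis force density in the corotating frame."""
    if omega_vec is None:
        omega_vec = np.array([0.0, 0.0, OMEGA])  # rotation axis along z
    centrifugal = rho * np.cross(np.cross(omega_vec, r_vec), omega_vec)
    coriolis = 2.0 * rho * np.cross(v_vec, omega_vec)
    return centrifugal + coriolis

# example: a parcel at 1 AU in the equatorial plane, moving radially at 400 km/s
r_vec = np.array([1.496e11, 0.0, 0.0])  # [m]
v_vec = np.array([4.0e5, 0.0, 0.0])     # [m/s]
rho = 1.0e-20                           # [kg m^-3], typical at 1 AU
F = rotational_source(rho, v_vec, r_vec)
```

For this parcel the centrifugal part points radially outward (positive x) and the Coriolis part deflects the radial flow in the negative longitudinal direction, as expected in a frame rotating with the Sun.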
2.2 Computational domain and grid resolution
The computational domain of the model setup in this work extends from 0.1 AU (21.5 R_{⊙}) to 2 AU in the radial direction. The latitudinal and longitudinal extents are ±60 degrees and 360 degrees, respectively. In Table 1 details are provided on both the equidistant and radially stretched grid resolutions corresponding to the low, middle, high, and ultra-high grid resolution setups used in this paper (see Sect. 4 for more on grid stretching). We note that the middle resolution is currently used for operational purposes. For the purpose of this work, we generated a readout at Earth, corresponding to a planet located in the ecliptic plane that moves in the corotating frame at the rotation rate of Earth.
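In a frame corotating with the Sun, the virtual Earth drifts backward at the synodic rate Ω_{⊙} − Ω_{E}. A minimal sketch of such a readout longitude is given below; the rotation rates and the sidereal-year approximation are illustrative assumptions, not values taken from ICARUS.

```python
import math

OMEGA_SUN = 2.97e-6                                 # solar equatorial rotation [rad/s]
OMEGA_EARTH = 2.0 * math.pi / (365.25 * 86400.0)    # Earth's orbital rate [rad/s]

def earth_longitude_corot(phi0_deg, t_seconds):
    """Longitude of Earth in the frame corotating with the Sun.
    Earth drifts backward at the synodic rate OMEGA_SUN - OMEGA_EARTH."""
    phi = phi0_deg - math.degrees((OMEGA_SUN - OMEGA_EARTH) * t_seconds)
    return phi % 360.0
```

After one full synodic period, 2π/(Ω_{⊙} − Ω_{E}), the virtual Earth returns to its starting longitude in the corotating frame.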
Table 1. Equidistant (non-stretched) and radially stretched grid resolutions referred to throughout this paper.
2.3 Boundary conditions for the solar wind
Inner boundary conditions for the solar wind can be taken from any coronal model that produces solar wind conditions at 21.5 R_{⊙}, assuming that the conditions relating to the velocity and magnetic vector fields only contain radially outward or inward terms. MPI-AMRVAC constructs two layers of ghost cells to impose boundary conditions. All primitive variables are defined at the cell centers. The boundary conditions provided by the coronal model are defined at 21.5 R_{⊙}, which corresponds to the cell interface between the first ghost cell and the (radially) first domain cell. As such, we use a linear extrapolation to set the ghost-cell value of each variable from its value in the first domain cell and the value that is imposed at the inlet boundary. Since we solve the equations in the corotating frame, and the inner corona is assumed to rigidly corotate with the Sun, the radial inner boundary conditions of the solar wind do not vary in time. At the outer radial boundary, at 2 AU, we use continuous boundary conditions, where the gradient of the conserved variables is kept at zero by copying the variable values from the edges of the mesh into the ghost cells, while at the latitudinal boundaries we implement a symmetric reflection. The continuous and symmetric boundary conditions are both standard implementations in MPI-AMRVAC. Furthermore, the initial state is specified as follows: for a point with coordinates [r, θ, φ], the radial speed v_{r} is set to the radial speed at the inlet boundary for the same [θ, φ] (i.e., the value at [21.5 R_{⊙}, θ, φ]). The density, pressure, and radial magnetic field are scaled by a factor of (r_{b}/r)² compared to the corresponding boundary value at [21.5 R_{⊙}, θ, φ], with r_{b} the radius of the inner boundary (21.5 R_{⊙}) and r the radius of the considered point. As the pressure is not a conserved variable, MPI-AMRVAC allows for conversion from pressure to total energy density (and from other primitive to conserved variables).
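The ghost-cell extrapolation and the (r_{b}/r)² initial-state scaling described above can be sketched as follows; this is a schematic single-cell illustration with made-up boundary values, not the actual MPI-AMRVAC routines.

```python
R_B = 21.5  # inner-boundary radius [R_sun]

def ghost_value(u_boundary, u_first_cell):
    """Linear extrapolation: the boundary value sits on the interface between
    the first ghost cell and the first domain cell, so it is the average
    u_b = (u_ghost + u_first) / 2, which we solve for the ghost-cell value."""
    return 2.0 * u_boundary - u_first_cell

def initial_state(r, vr_bound, rho_bound, p_bound, br_bound):
    """Fill a domain point at radius r [R_sun] from the inlet maps at r_b:
    v_r is copied along the radial direction, while density, pressure,
    and radial magnetic field fall off as (r_b / r)^2."""
    scale = (R_B / r) ** 2
    return vr_bound, rho_bound * scale, p_bound * scale, br_bound * scale

# example: extrapolate one boundary cell to 1 AU (~215 R_sun)
vr, rho, p, br = initial_state(215.0, 400.0, 1.0e-18, 2.0e-9, 1.0e-7)
```

At 1 AU the scale factor is (21.5/215)² = 0.01, so the density drops by two orders of magnitude while the radial speed is unchanged, consistent with the scaling in the text.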
2.4 Boundary conditions for superposing CMEs
We employed one of the cone CME models described in Scolini et al. (2018). Here, we briefly mention the specific combination of equations that were used to obtain the results presented and discussed in this paper. We refer to Scolini et al. (2018) for other setups that can be used for the cone model.
The cone CME model is determined by the following set of parameters: the time t_{0} at which the CME is located at the inner boundary (21.5 R_{⊙}); the propagation speed of the CME v_{CME}; the latitude θ_{CME} and longitude φ_{CME} of the center of the CME; the radius of the CME r_{1/2}, which is related to the angular half-width ω_{CME} that is often used in the literature; and the density ρ_{CME} = 10^{−18} kg m^{−3} and temperature T_{CME} = 0.8 MK of the CME, which remain uniform throughout the CME.
We briefly present the equations that determine the propagation of the cone model through the inner boundary. First, corresponding to Eq. (2) in Scolini et al. (2018), we define the maximum radius of the cone model as
r_{1/2} = R sin(ω_{CME}), (7)
as also used in previous works by Xie et al. (2004) and Zhao et al. (2002). Here R denotes the radius of the inner heliospheric boundary (i.e., R = 21.5 R_{⊙}). We note that our definition of ω_{CME} is angular halfwidth here, compared to full angular width in Scolini et al. (2018).
Secondly, we consider the opening angle θ(t), which changes with time as the CME passes through the boundary, as given by Eq. (7) in Scolini et al. (2018):
θ(t) = arccos[(R² + R_{c}(t)² − r_{1/2}²) / (2 R R_{c}(t))]. (8)
This definition of the opening angle corresponds to the case of a spherical CME crossing a spherical surface at constant speed. Here R_{c}(t) is the distance of the CME center from the solar center as a function of time and is given by
R_{c}(t) = R − r_{1/2} + v_{CME}(t − t_{0}). (9)
Finally, to determine the variable cross section of the CME with the r = 0.1 AU shell, we choose a distance formula that calculates the great-circle distance between two points on the sphere (see Eq. (10) in Scolini et al. 2018). The angular distance is then given by
d = arccos[sin θ_{CME} sin θ_{p} + cos θ_{CME} cos θ_{p} cos(φ_{CME} − φ_{p})], (10)
where (θ_{CME}, φ_{CME}) corresponds to the launch direction of the CME and (θ_{p}, φ_{p}) corresponds to the position of a certain point p on the inner-boundary shell. A point p then belongs to the CME when d ≤ θ(t).
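The geometry of Eqs. (7)-(10) can be sketched as follows. This is an illustrative Python sketch, not the ICARUS boundary routine; v_cme is assumed to be given in R_{⊙} per unit time, and the clipping of the arccos argument guards against round-off.

```python
import numpy as np

R_BOUND = 21.5  # inner-boundary radius R [R_sun]

def cme_half_radius(omega_cme):
    """Eq. (7): r_1/2 = R sin(omega_CME), with omega_CME the angular half-width [rad]."""
    return R_BOUND * np.sin(omega_cme)

def cme_centre_distance(t, t0, v_cme, r_half):
    """Eq. (9): centre of the spherical CME, whose nose touches the boundary
    at t = t0 and which moves outward at constant speed v_cme."""
    return R_BOUND - r_half + v_cme * (t - t0)

def opening_angle(t, t0, v_cme, omega_cme):
    """Eq. (8): half-angle of the intersection of the CME sphere with the
    r = 21.5 R_sun shell (law of cosines in the plane through both centres)."""
    r_half = cme_half_radius(omega_cme)
    rc = cme_centre_distance(t, t0, v_cme, r_half)
    cosang = (R_BOUND**2 + rc**2 - r_half**2) / (2.0 * R_BOUND * rc)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def angular_distance(theta1, phi1, theta2, phi2):
    """Eq. (10): great-circle distance between two points (theta = latitude)."""
    c = (np.sin(theta1) * np.sin(theta2)
         + np.cos(theta1) * np.cos(theta2) * np.cos(phi1 - phi2))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

A boundary point p is then flagged as belonging to the CME whenever `angular_distance(...) <= opening_angle(...)`, which reproduces the insertion condition d ≤ θ(t).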
We note that we are able to implement the exact same set of CME equations as in EUHFORIA; however, the coordinate system considered here is corotating with the Sun, while EUHFORIA uses HEEQ coordinates (where the longitude of Earth is fixed to zero). Hence, the same CME launch position (θ_{CME}, φ_{CME}) in the two differently rotating coordinate frames will result in slight differences in the launch conditions of the CME. Finally, all other equations and cone model combinations presented by Scolini et al. (2018) are also implemented in the heliospheric model and can be utilized according to user preference.
3 Modeling setup
3.1 Solar wind
We consider two cases: a synthetic solar wind and a typical background solar wind based on a photospheric magnetogram and a simple coronal model. The artificial wind allows us to control the speed and CIR(s), while the wind based on the magnetogram allows us to test the model under typical forecasting circumstances.
Fig. 1 Comparison of the radial speed at 1 AU and 0 degree latitude between the ICARUS (red) and EUHFORIA (blue) models. 
3.1.1 Synthetically generated solar wind boundary conditions
The setup for the synthetically generated solar wind boundary conditions is identical to that described in Wijsen et al. (2019), and mimics a single idealized fast solar wind stream embedded in a slow solar wind. The slow solar wind is characterized by a radial speed of 330 km s^{−1} at the inner boundary of r = 21.5 R_{⊙}. A fast solar wind of 660 km s^{−1} is prescribed as given by Eq. (19) in Wijsen et al. (2019); it spans a circular region of radius 20° around the point (θ, φ) = (5°, 165°), surrounded by a transition region of angular width 6° in which the solar wind speed changes linearly from slow to fast. In this configuration the fast solar wind stream is completely embedded in a magnetic field of positive polarity. This synthetic wind mimics a situation with a coronal hole located close to the solar equatorial plane. We refer the reader to Pomoell & Poedts (2018) and Wijsen et al. (2019) for more details on the setup of the magnetic vector field, number density, temperature, and pressure.
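The inlet speed map described above can be sketched as below. This is a schematic reconstruction of the piecewise profile (fast core, linear transition band, slow background), not the actual expression of Eq. (19) in Wijsen et al. (2019).

```python
import numpy as np

def synthetic_wind_speed(theta_deg, phi_deg,
                         v_slow=330.0, v_fast=660.0,
                         centre=(5.0, 165.0), radius=20.0, width=6.0):
    """Inlet radial speed [km/s] for the synthetic wind: a circular fast
    stream of angular radius 20 deg around (theta, phi) = (5, 165) deg,
    surrounded by a 6-deg band where the speed ramps linearly to 330 km/s."""
    t1, p1 = np.deg2rad([theta_deg, phi_deg])
    t2, p2 = np.deg2rad(centre)
    # great-circle distance to the centre of the fast stream, in degrees
    d = np.degrees(np.arccos(np.clip(
        np.sin(t1) * np.sin(t2) + np.cos(t1) * np.cos(t2) * np.cos(p1 - p2),
        -1.0, 1.0)))
    if d <= radius:
        return v_fast
    if d >= radius + width:
        return v_slow
    return v_fast + (v_slow - v_fast) * (d - radius) / width
```

Evaluating this map on the [θ, φ] boundary grid yields the fast circular stream embedded in the slow wind, with a continuous linear ramp across the 6° transition band.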
3.1.2 Solar wind boundary conditions based on typical magnetogram input
For the purpose of this study, we also selected a set of solar wind boundary conditions based on an observed photospheric magnetogram. To generate the solar wind boundary conditions, we used the coronal model setup of EUHFORIA described by Pomoell & Poedts (2018), which adopts a semi-empirical setup similar to the Wang-Sheeley-Arge model (WSA, Arge 2003). More specifically, we used the standard synoptic magnetogram from the Global Oscillation Network Group (GONG, Hill 2018) of 12 July 2012 at 11:54 UT as input.
To demonstrate that the solar wind model provides results similar to the heliospheric EUHFORIA model for the same inlet boundary conditions, we show a comparison of a solar wind run between the ICARUS model presented in this paper and the EUHFORIA model (Pomoell & Poedts 2018), both starting from the GONG magnetogram of 12 July 2012 at 11:54 UT, and both using the boundary conditions derived by the abovementioned coronal model in EUHFORIA. In Fig. 1 we provide a comparison of the two runs along a spatial slice at 1 AU and zero latitude, where we used a middle resolution grid (see Table 1) and made the grids of the two models match exactly, apart from the fact that ICARUS uses a grid corotating with the Sun while EUHFORIA uses HEEQ coordinates. Small differences in the results are expected and are in principle due to the differences in the numerical schemes used to solve the MHD equations, the numerical limiters that were used, the differences in the adopted coordinate systems (requiring artificial-force source terms in ICARUS), and the way the boundary conditions were implemented. In Fig. 1 we find that the simulation results at 1 AU are more diffusive in ICARUS than in the EUHFORIA simulations. These numerical differences are quite substantial, as we applied a TVDLF scheme solving the equations in conservative form (see Sect. 2.1), while in EUHFORIA a finite-volume method is implemented together with a constrained transport approach, which requires a staggered grid and needs the tangential electric field components to be specified at the boundary. In Fig. 2 we provide a comparison of the radial speed in the ecliptic plane between the ICARUS and EUHFORIA models, where the right panel represents the difference between the two models. Furthermore, we compute the mean absolute error (MAE) between the two simulation runs at the simulated date of the input magnetogram time stamp. The MAE for the full 3D grid is 12 km s^{−1}. The maximum detected difference between the two models is 216 km s^{−1} and is found toward the outer radial boundary.
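The grid-wide comparison metric used here can be sketched in a few lines; this is a generic MAE over two co-located snapshots, under the assumption (as in the text) that both models are sampled on exactly matching grids.

```python
import numpy as np

def mean_absolute_error(model_a, model_b):
    """MAE between two simulation snapshots sampled on the same grid
    (works for 1D slices as well as full 3D arrays)."""
    a = np.asarray(model_a, dtype=float)
    b = np.asarray(model_b, dtype=float)
    return float(np.mean(np.abs(a - b)))
```

Applied to the full 3D radial-speed arrays of the two runs, this single number summarizes the cell-by-cell discrepancy, complementing the maximum-difference diagnostic quoted above.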
3.2 Coronal mass ejections
To verify the implementation of the cone CME model and its propagation through the inner heliosphere, we choose the initial conditions such that the CME hits Earth. We refer to Table 2 for the full set of parameters. As mentioned above, the density ρ_{CME} and the temperature T_{CME} are given their standard values of 10^{−18} kg m^{−3} and 0.8 MK, and both are uniform throughout the CME. We note that in the chosen coordinate frame, corotating with the Sun, the position of Earth changes throughout the simulation. Hence, CMEs can be ejected at any latitude and longitude and can arrive at Earth, depending on the launch time and location. This is also influenced by their speed and width and by the position of Earth in the corotating frame at the time of the CME initiation.
4 Gradual radial grid stretching
Typical operational heliospheric simulation tools such as EUHFORIA (Pomoell & Poedts 2018) and ENLIL (Odstrčil et al. 2004) use equidistant meshes throughout the computational domain, without any grid stretching. In a spherical grid this means that, with increasing radial distance from the Sun, the width of the grid cells becomes larger due to the constant latitudinal and longitudinal spacing, while the cell spacing in the radial direction remains constant. As a result, at larger radial distances from the Sun, the cells become more elongated and deformed (see Fig. 3, left side). Grid stretching is a natural solution for such cases. In the current approach we consider gradual grid stretching only in the radial direction. The technique, discussed in detail in Xia et al. (2018), increases the radial cell size gradually with distance in order to maintain the more or less cubic shape of the cells throughout the domain. For a given angular resolution, the number of cells N in the radial direction that results in cubic cell shapes follows from requiring ∆r ≈ r∆θ everywhere, which yields
N = ln(r_{out}/r_{in}) / ∆θ,
where r_{in} and r_{out} denote the inner and outer radii of the domain and ∆θ is the angular resolution given in radians in the latitudinal and longitudinal direction. For the given resolutions in Table 1, we find that the number of radial cells needed to maintain cubic cell shapes reduces from [300, 600, 1200, 1800] to [46, 92, 184, 276] for the low, middle, high, and ultra-high resolutions, respectively. However, due to the nature of the MPI-AMRVAC framework, where the domain is divided into an even number of blocks and where each block contains an even number of cells, we alter these numbers for easier and/or faster computation and implementation. The final optimal gradually stretched radial grid resolution is given by [60, 120, 240, 360] grid cells, while the angular resolution is kept the same as in the equidistant cases, resulting in a factor of 5 fewer cells compared to the corresponding equidistant grids. On the right side of Fig. 3 we show the mesh of a stretched grid for low resolution. As a result of the gradual radial stretching, the radial grid spacing is no longer constant; for example, for low resolution the cell size in the radial direction is ~1.07 R_{⊙} near the inner boundary, while near 1 AU, at Earth's location, the radial cell size is ~10.7 R_{⊙}. For comparison, on the equidistant mesh the radial grid size is ~1.362 R_{⊙}.
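The radial cell counts quoted above can be reproduced with the formula N = ln(r_{out}/r_{in})/∆θ. The sketch below is a plausible reconstruction under two assumptions not stated explicitly in this section: the longitudinal cell counts of the four setups are taken to be [96, 192, 384, 576], and N is rounded up to the nearest even integer, as suggested by the even-cell-count requirement of MPI-AMRVAC blocks.

```python
import math

R_IN, R_OUT = 21.5, 430.06   # inner/outer radius in R_sun (0.1 AU to 2 AU)

def radial_cells_for_cubic(n_phi):
    """Number of radial cells on a gradually stretched grid such that
    dr ~ r * dtheta everywhere (roughly cubic cells):
    N = ln(r_out / r_in) / dtheta, rounded up to an even cell count."""
    dtheta = 2.0 * math.pi / n_phi        # angular resolution [rad]
    n = math.log(R_OUT / R_IN) / dtheta
    return 2 * math.ceil(n / 2.0)         # MPI-AMRVAC blocks need even counts

# assumed longitudinal counts for the low..ultra-high setups
counts = [radial_cells_for_cubic(n_phi) for n_phi in (96, 192, 384, 576)]
```

Under these assumptions the sketch recovers the [46, 92, 184, 276] radial cell counts listed in the text.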
Fig. 2 Comparison of the radial speed in the ecliptic plane between the ICARUS (left panel) and EUHFORIA (middle panel) models, where the right panel represents the difference between the two models. 
Fig. 3 Mesh grid in the ecliptic plane. Left panel: equidistant grid with the number of points in spherical coordinates corresponding to [60, 32, 96]. Right panel: radially stretched grid for the same resolution, so for N = 60. 
4.1 Solar wind with a radially stretched grid
The effect of gradual radial grid stretching is first considered for the realistic solar wind (i.e., based on the boundary conditions determined from the photospheric magnetogram and the semi-empirical WSA-like coronal model in EUHFORIA; see Sect. 3.1). In Fig. 4 we compare the temporal evolution of the radial speed and number density at Earth for the simulations with low, middle, high, and ultra-high resolutions. In these simulations Earth is rotating in the ecliptic plane. The solar wind has been relaxed for 14 days, and time zero corresponds to the end of this relaxation phase. The top panel, showing the radial velocity profiles, demonstrates that the low resolution simulation has a smoother profile, while the higher resolution profiles display more features with steeper gradients. The differences between the values of the middle (orange) and low (yellow) resolution time series are larger than the differences between the time series of the high (blue) and middle resolution, or of the ultra-high (purple) and high resolution simulations. More specifically, the MAEs for the aforementioned combinations are 9.7, 8.1, and 3.1 km s^{−1}, and the maximum absolute differences between the curves are 50.7, 48.5, and 20.4 km s^{−1}. We also show data for the middle resolution run (600 grid points in the radial direction) with a non-stretched grid, given by the red dashed curve, as this is currently the standard resolution used in the operational setup of EUHFORIA. The top panel in Fig. 4 shows that the middle resolution non-stretched equidistant profile (red dashed curve) is very similar to the middle resolution stretched case (orange curve); the two curves almost coincide. The MAE between the two middle resolution curves is 0.8 km s^{−1} and the maximum absolute difference is 4.9 km s^{−1}. The high and ultra-high resolution profiles for the stretched grid show sharper gradients.
Even though the grid stretching locally decreased the resolution at 1 AU, the time series of the variables for runs of the same resolution (e.g., middle resolution non-stretched vs. middle resolution stretched) are still similar. In the temporal profile of the plasma density at Earth (Fig. 4, bottom) the same trends are visible: lower resolution yields smoother profiles, and higher resolution runs show steeper gradients in the density profile at Earth. Again, the middle resolution run with the stretched grid (orange curve) yields a profile very similar to that of the middle (standard) resolution run with a radially equidistant computational mesh (red dashed curve), although the differences between these two curves are slightly larger than for the temporal profile of the radial velocity. The differences are most apparent in the maxima around day three. The MAE between the two middle resolution runs is 0.3 cm^{−3} and the maximum absolute difference is 3.0 cm^{−3}.
Fig. 4 Radial velocity and number density at Earth vs. time for the realistic solar wind (i.e., based on the boundary conditions determined from the photospheric magnetogram and the semi-empirical WSA-like coronal model of EUHFORIA). Five cases are shown: the four simulations with different spatial resolutions on the radially stretched grid and the middle resolution non-stretched (equidistant) case. Results for the low, middle, high, and ultra-high grid resolutions are given in yellow, orange, blue, and purple, respectively. The red dashed curve shows the middle resolution non-stretched simulation results. The x-axis corresponds to the days passed since the end of the relaxation phase (14 days into the simulation). 
4.2 CME modeling with a radially stretched grid
We now investigate the effect of gradual radial grid stretching for a simulation run with a cone CME superposed onto the background solar wind. In contrast to the typical solar wind simulation discussed in the previous subsection, here we consider a fully controlled synthetically generated solar wind, as explained in Sect. 3.1. The selected CME parameters are those mentioned in Sect. 3.2 and listed in Table 2.
Figure 5 shows the radial velocity and the number density versus time at Earth's location for the low, middle, and high resolution simulations in yellow, orange, and blue, respectively. As a reference, the results from both the middle (red dashed) and high (dark blue dashed) resolution runs with non-stretched (equidistant) radial grids are also displayed in the figure. We note that the solar wind for the middle and high resolution cases corresponds almost fully, as was also shown in the previous subsection. The high resolution grid captures the CME shock better, which is noticeable from the larger gradients at the shock position. However, it requires much more CPU time while yielding approximately the same shock strength and arrival time. The choice of space weather forecasting agencies to use middle resolution runs instead of high resolution runs thus involves only a small trade-off.
The lower resolution grids yield more numerical dissipation and, as a consequence, the synthetic profiles at 1 AU are smoother in these cases. In particular, the CME shock in the radial velocity time profile (Fig. 5, top) is more spread out and also weaker for the lower resolution runs. The synthetic plasma density time profiles at 1 AU (Fig. 5, bottom) are also smoother for the simulations with lower resolution grids, and the temporal profiles show less detail of the radial density distribution in the CME. Before and after the passage of the CME, however, the differences are not significant. We calculated the MAE and the maximum absolute differences for the middle and high resolution stretched and non-stretched cases, considering only the time when the CME is passing Earth's location. For the middle resolution comparison, the MAEs for the radial speed and density are 9.3 km s^{−1} and 0.9 cm^{−3}, with maximum differences of 254.2 km s^{−1} and 5.9 cm^{−3}. For the high resolution comparison, the results are slightly lower: 5.4 km s^{−1} and 0.5 cm^{−3}, with maximum differences of 225.6 km s^{−1} and 5.9 cm^{−3}. The maximum difference is found when the shock of the CME arrives.
In order to further quantify these observations, we determined the shock widths and shock arrival times at Earth for all five simulations. These values were determined from the temporal profile of the (relative) radial velocity at 1 AU. More precisely, the arrival time of the CME (shock) is defined here as the time when the difference between two consecutive v_{r} values is larger than 0.1% of the v_{r} value at that time step. The CME shock width is defined as the time difference between the shock arrival time and the time of the peak radial velocity of the shock. The results are shown in Fig. 6.
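The detection criterion just described can be sketched as follows; this is an illustrative reconstruction operating on a synthetic time series, not the analysis script used for the paper.

```python
import numpy as np

def shock_metrics(t_hours, vr):
    """Shock arrival and width from a v_r time series at 1 AU.
    Arrival: first time the jump between consecutive samples exceeds
    0.1% of the local v_r value. Width: time from arrival to the v_r peak.
    Assumes a shock is present (otherwise argmax of an all-False array is 0)."""
    jumps = np.diff(vr) > 1.0e-3 * vr[1:]
    i_arrival = int(np.argmax(jumps))   # index of the first qualifying jump
    i_peak = int(np.argmax(vr))         # index of the peak radial velocity
    return t_hours[i_arrival], t_hours[i_peak] - t_hours[i_arrival]
```

For a clean synthetic profile (a flat 400 km s^{−1} wind, a ramp to a 700 km s^{−1} peak, then a decay), the function returns the start of the ramp as the arrival time and the ramp duration as the shock width.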
As expected, going from lower to higher resolution yields higher peak values in the u_{r} time profile, as the shock diffuses less while traveling through the domain, and the shock width is therefore reduced as well. The shock width for the high resolution case is ~3.3 times smaller than in the low resolution case, which also provides a more sharply defined shock arrival time. We note that the equidistant grid simulations yield an even narrower CME shock at 1 AU: their grid resolution at Earth is higher than that of the high resolution radially stretched grid, and thus the CME shock is captured better. The CME in the low resolution simulation arrives about 3 and 4.5 h earlier than in the middle and high resolution simulations, respectively. The difference in CME arrival time between the simulation with a high resolution radially stretched grid and the middle and high resolution equidistant grid simulations is less than one hour. We note again that for the latter simulations, the grid resolution at Earth is higher than for the high resolution radially stretched grid.
Fig. 5 Radial velocity and number density at Earth vs. time. The simulation run is performed with the synthetic wind from Sect. 3.1.1 and the CME parameters given in Sect. 3.2. Results for low, middle, and high stretched grid resolutions are given in yellow, orange, and blue, respectively. The red dashed curve and the dark blue dashed curve show nonstretched simulation results for middle resolution and high resolution, respectively. The xaxis corresponds to the days passed since the end of the relaxation phase (14 days into the simulation). 
Fig. 6 Effect of the radial grid resolution on the arrival time and width of the shock of the CME. Left panel: arrival time of the CME shock (in hours after the CME insertion at 0.1 AU) vs. the radial grid resolution. Right panel: CME shock width (in hours) vs. the radial grid resolution for the low, middle, and high resolution simulations (blue, orange, and green, respectively). The points in red and purple represent the results of the middle and high resolution simulations with the original equidistant grid cells. The horizontal axis displays the number of cells in the radial direction in each resolution. 
4.3 Performance and speedup
We performed simulations with radially stretched and equidistant (non-stretched) grids at low, middle, and high resolution, as specified in Sect. 2.2 and Table 1. The run-time (wall-clock) comparisons are listed in Table 3 for runs using a single CPU node with 36 cores on the Genius cluster at the Vlaams Supercomputing Centre^{3}. The last column shows the ratio of the time required by the simulation on the equidistant (non-stretched) grid to that of the simulation on the stretched grid. The stretched grid speeds up the simulation by a factor of about 4 for the standard middle resolution, but the speedup depends on the grid resolution. It is important to note that the high resolution simulation on the equidistant non-stretched grid did not finish within the maximum wall-time limit of the supercomputer cluster, at least not on a single computing node; this run therefore takes longer than 72 h (wall-clock time). We note that the run time for the middle resolution equidistant grid is shorter than for the high resolution stretched grid. This is due to the larger total number of grid cells in the latter and to the smaller simulation timestep, as the stretched grid has finer grid spacing close to the inner boundary where the wind and the CME are initiated. It remains up to the user to decide on the optimal grid based on requirements such as model performance and run time.
Run times required for different resolution runs with and without radial grid stretching performed on 1 CPU node with 36 cores.
5 Adaptive mesh refinement
Solution AMR is an advanced numerical technique that introduces higher resolution in those parts of the computational domain where it is necessary or preferred by the user to better capture or resolve important features, such as shock waves or regions of magnetic reconnection. Heliospheric simulations of the solar wind and superposed propagating CMEs have a very complex structure. Some regions, particularly corotating interaction regions (CIRs) and the shock and magnetic cloud regions of interplanetary CMEs, are of high interest. In other areas the solar wind is more homogeneous, or the areas are of less interest, for example because they lie in the half of the heliosphere opposite Earth, so it may not be necessary to resolve these regions in more detail. Demanding high(er) resolution in the full computational domain significantly slows down simulations and makes them more computationally expensive. It is therefore important to choose appropriate refinement criteria and to limit the mesh refinement to the area(s) of interest. The MPI-AMRVAC framework divides the computational domain into 3D blocks, and the mentioned block-adaptivity means that a block is refined as soon as one of the cells in the block meets the refinement criterion (or criteria). It is important to note that when this happens, the mesh is refined in all three spatial directions. In the present paper we discuss two different types of refinement. First, there is the refinement of the background solar wind structure. As the MHD equations are solved in a frame corotating with the Sun, the steady solution is time-independent in this frame, and thus CIRs remain at fixed positions, which allows us to specify fixed AMR regions in the computational grid. Second, we demonstrate and discuss criteria that limit AMR to the regions through which the CME structure is propagating, and that coarsen the grid again once the CME structure has passed.
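The block-based policy described above can be sketched in a few lines; this is an illustration of the concept, not the MPI-AMRVAC implementation, and the block shape and threshold are invented:

```python
import numpy as np

def needs_refinement(block, criterion):
    """A block is flagged for refinement as soon as ANY single cell
    in it satisfies the refinement criterion."""
    return bool(np.any(criterion(block)))

def split_block(block):
    """Refinement acts in all three spatial directions at once: one block
    becomes 2 x 2 x 2 children, halving the cell size in each direction.
    (In the real code the children are filled at the doubled resolution
    by prolongation; here we only illustrate the geometric split.)"""
    nx, ny, nz = (s // 2 for s in block.shape)
    return [block[i*nx:(i+1)*nx, j*ny:(j+1)*ny, k*nz:(k+1)*nz]
            for i in range(2) for j in range(2) for k in range(2)]

block = np.zeros((8, 8, 8))
block[1, 2, 3] = 7.0                 # a single cell exceeding the threshold
flag = needs_refinement(block, lambda c: c > 1.0)
children = split_block(block)        # 8 children of shape (4, 4, 4)
```

A single flagged cell is enough to refine the whole block, which is what keeps the bookkeeping per block rather than per cell.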
5.1 Solution AMR for solar wind simulations
First, we apply the AMR technique to the solar wind solution. The solar wind can contain very complex structures. Since it is bimodal, it consists of fast and slow plasma streams, and their compressed interaction regions are known as CIRs (Owens & Forsyth 2013). Both forward and backward shocks can be generated at these interaction regions, especially when the fast streams have very high speeds. These shocks can extend over significant lengths in the heliosphere and are very important for general space weather conditions. Moreover, when adjacent CIRs have different stream velocities, the CIR shocks can even interact with each other. SEPs are very often significantly accelerated in shock regions throughout the solar wind, and their acceleration sites are important to investigate in order to better understand the mechanisms behind the acceleration. Spatial resolution higher than the middle resolution currently used for operational forecasting is thus preferred for SEP models that forecast or simulate the related anisotropy profiles and the (omnidirectional) intensity profiles at different energy levels for past events. We note that very recently Kelvin-Helmholtz instabilities have been observed at the border of the slow and fast solar wind (Kieokaew et al. 2021). This too can be modeled with MHD, but requires much higher spatial resolution at the slow-fast solar wind boundary than considered thus far.
We consider the solar wind boundary conditions based on the typical magnetogram input, as mentioned in Sect. 3.1. Due to the rotation of the Sun, CIRs form a spiral pattern and usually extend to the outer boundary of the simulation domain. Refining a single CIR is a challenge when several CIRs are present in the background wind. In the corotating frame considered here, however, the CIRs remain at a fixed location on the grid once the steady state is reached. We note that this is not the case in EUHFORIA or in ENLIL, as these models use HEEQ coordinates. In order to limit the refinement to one particular CIR, a special condition must be implemented that takes into account the shape of the spiral, the characteristic speed of the fast stream, and the rotation rate of the Sun. Based on the location of the high speed stream, we can estimate where the corresponding CIR will be located in the corotating coordinate system as

φ = φ_{0} + Ω (r – r_{i}) / U, (12)

where φ is the longitude that needs to be refined, φ_{0} and r are the coordinates of a point of interest in the domain, r_{i} = 0.1 AU is the inner heliospheric boundary, U is the characteristic speed of the fast stream, and Ω is the rotation rate of the Sun. To follow the spiral shape of the CIR shock, the following condition is then applied to the transformed longitude value φ:

φ_{lower} ≤ φ ≤ φ_{upper}, (13)

where φ_{lower} and φ_{upper} are the lower and upper limits for the transformed longitude we are searching for. These limits are chosen by estimating the shift in coordinates given the speed of the stream. A similar approach was used by Wijsen et al. (2021).
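A sketch of this refinement condition, assuming the corotating mapping φ = φ_{0} + Ω(r − r_{i})/U for Eq. (12) with a fixed longitude band for Eq. (13); the numerical values (rotation rate, fast-stream speed, band limits, sign convention of the spiral) are illustrative assumptions, not values from the paper:

```python
import numpy as np

AU = 1.496e11        # m
OMEGA = 2.87e-6      # solar rotation rate, rad/s (~25.4 d), assumed
R_IN = 0.1 * AU      # inner heliospheric boundary
U_FAST = 600e3       # characteristic fast-stream speed, m/s, assumed

def in_cir_band(phi0, r, phi_lower, phi_upper):
    """Refinement test for a single CIR: map the cell longitude back along
    the corotating spiral (Eq. 12) and check the band (Eq. 13)."""
    phi = np.mod(phi0 + OMEGA * (r - R_IN) / U_FAST, 2.0 * np.pi)
    return bool(phi_lower <= phi <= phi_upper)

# A cell at 1 AU whose unwound longitude falls inside the chosen band
hit = in_cir_band(phi0=0.2, r=1.0 * AU, phi_lower=0.8, phi_upper=1.0)
```

Because the spiral shift Ω(r − r_{i})/U is subtracted out of the cell longitude, a single fixed band [φ_{lower}, φ_{upper}] follows the CIR all the way from the inner to the outer boundary.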
In Fig. 7, we provide a zoomedin view of the ecliptic plane of the solar wind. On the left we show only the radial velocity in one quadrant, depicting the wind structure, while on the right we also include the mesh grid that is refined in two steps (three grid levels). We aim to better capture only the shown CIR shock in that quadrant. The blockadaptivity is evident from the right panel, and it is clear that the solution AMR works both in the radial direction and in the longitudinal direction. In the latitudinal direction the mesh is also adapted accordingly. As the mesh size is halved with every AMR level, the CIR shock is modeled with a spatial resolution that is four times higher in each direction than the surrounding solar wind.
In Fig. 8 we provide timelines of the radial velocity and number density at the location of Earth, starting from 14 days after the start of the simulation, corresponding to the end of the relaxation phase. The yellow (dashed), orange, and blue curves correspond to the timelines at Earth for solution AMR with 1, 2, and 3 levels, respectively. The dashed yellow curve, with only 1 grid level, thus corresponds to the low resolution case. As a reference, the red dashed curve corresponds to the timeline of the middle resolution run without AMR, which is equivalent to having a level 2 AMR grid over the full domain. As the number of refinement levels increases, the temporal profiles of the radial velocity and plasma density become less smooth and the gradients increase.
First, we compare the red dashed middle resolution run with the orange level 2 AMR run; AMR level 2 has the same spatial resolution as the middle resolution run. From day 1 to 3, Earth is in the level 1 low resolution grid. Since the wind is steady, the two timelines are very close together. Then, around day 3, Earth encounters the CIR region, where the grid of the AMR run is refined to level 2, and the orange curve coincides with the red dashed line of the middle resolution. After the CIR has passed, around day 9, the grid around Earth returns to AMR level 1, as expected, and the orange curve starts to deviate from the red dashed middle resolution curve and moves toward the yellow dashed curve, which corresponds to the low resolution run. The blue AMR level 3 run contains more structure, and around day 9 to 10 it is very similar to the red dashed middle resolution run. This occurs because the difference in AMR level between two adjacent blocks can be at most one. As such, the part of the grid adjacent to blocks at AMR refinement level 3 is at level 2, which is effectively middle resolution (see Fig. 7, left). The MAE values for the middle resolution and the AMR level 2 simulation runs are 4.7 km s^{–1} and 0.6 cm^{–3}, with maximum differences of 60.0 km s^{–1} and 4.4 cm^{–3}.
Fig. 7 Simulation run with solution AMR limited to three grid levels applied to a single CIR region. The x and yaxes are shown in R_{⊙}. Both figures are zoomedin versions of the ecliptic plane, showing the radial speed values. The right panel displays the mesh grid resolution at the different AMR levels in addition to the radial speed. 
Fig. 8 Radial velocity and number density at Earth vs. time for the realistic solar wind (i.e., based on the boundary conditions) determined from the photospheric magnetogram and the semiempirical WSAlike coronal model of EUHFORIA. Results for middle grid resolution and 1, 2, and 3 AMR levels are shown in dashed red, dashed yellow, orange, and blue, respectively. The dashed yellow curve with only 1 AMR level corresponds to the low resolution run. The xaxis corresponds to the days passed since the end of the relaxation phase (14 days into the simulation). 
5.2 CME evolution with solution AMR
The application of solution AMR to CME propagation throughout the inner heliosphere is somewhat more challenging, as the mesh must be refined while the CME propagates through the computational domain and coarsened again after it has passed. Moreover, depending on the aims of the study, one may want to resolve the entire CME (e.g., when studying CME erosion or deformation) or only the CME shock (for SEP event studies). Ideally, the AMR criteria should be flexible enough to accommodate these requirements. In the present paper we discuss a specific way to trace the CME in the computational domain by introducing a tracing function.
We use a tracing function to identify the CME after it has entered the computational domain. The tracing function solves an extra density equation in which the background solar wind value is set to zero, while the CME plasma, as it enters through the inner radial boundary, is assigned a density ρ_{CME}. The tracing function thus remains zero everywhere else and provides an easy way to locate the CME plasma at any timestep. The AMR criterion can therefore be formulated as follows: when the tracing function exceeds a threshold value at a given point in the computational domain, that cell needs to be refined, and hence the entire computational block to which that cell belongs is refined, as explained in Sect. 5. In the current case, the threshold value is set to 0.001, slightly above zero to account for numerical diffusion.
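The tracer-based criterion amounts to a single comparison per block; a minimal sketch, with the threshold as in the text and the block contents invented for illustration:

```python
import numpy as np

THRESHOLD = 0.001    # slightly above zero, to absorb numerical diffusion

def cme_refinement_flag(tracer_block):
    """Tracer-based AMR criterion: the extra advected density is zero in
    the background wind and nonzero only in plasma injected as the CME at
    the inner boundary, so any cell with tracer above the threshold marks
    CME plasma and triggers refinement of its block. Blocks the CME has
    left fall back below the threshold and are coarsened again."""
    return bool(np.any(tracer_block > THRESHOLD))

wind = np.zeros((8, 8, 8))     # pure solar wind: tracer identically zero
cme = wind.copy()
cme[3, 3, 3] = 0.05            # CME plasma (plus some diffused tracer)
```

Because the tracer is advected with the plasma, the flag automatically follows the CME through the domain and reverts once the CME has passed, which is exactly the refine-then-coarsen behavior described above.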
Figure 9 shows an example of a CME simulation with solution AMR with three grid levels (one base level and two refinement levels) applied to the domain based on the tracing function criterion explained above. Shown is a zoomed-in snapshot of part of the ecliptic plane. On the left we provide the tracing function, color-coded with the density values of the CME and superposed onto the AMR mesh grid, while on the right we provide colored contours of the corresponding snapshot of the radial velocity, also superposed onto the three-level AMR mesh grid. The blocks with cells containing CME plasma are all refined to level 3. In the right panel we can see that as the CME interacts with the solar wind, the solar wind plasma piles up ahead of the CME and a shock wave is formed. The applied AMR criterion does not take such areas into account, as it focuses only on regions where the CME plasma itself is present. This means that regions of piled-up solar wind may be missed by this criterion, especially once the CME has propagated farther from the inner boundary and has expanded. However, the block-adaptivity implemented in the MPI-AMRVAC framework refines full computational blocks and not individual cells, and we note that if these regions are not at level 3, they are at level 2, as they lie in the blocks next to level 3. We will explore more specific criteria for shock refinement in future work. In Fig. 10 we provide the radial speed and number density temporal profiles at Earth for low resolution (without refinement, AMR level 1) and for AMR level 2 and 3 runs in dashed yellow, orange, and blue, respectively. As a comparison, we again add the results of the middle (dashed red) and high (dashed dark blue) resolution simulation runs. The simulation setup is similar to that of the CME simulations on the stretched grid in Sect. 4.
We note that the solar wind conditions of all three AMR runs coincide, while the arrival of the CME is very similar but still slightly different. As a result, the conditions through which the CME travels differ from those of the middle and high resolution runs. We determined the MAE and maximum absolute differences for the middle resolution versus level 2 AMR and for the high resolution versus level 3 AMR cases, considering only the time when the CME is passing Earth's location. For the middle resolution versus AMR level 2 comparison, the MAE values for radial speed and density are 7.7 km s^{–1} and 0.3 cm^{–3}, with maximum differences of 51 km s^{–1} and 3.7 cm^{–3}. For the high resolution versus AMR level 3 comparison, however, we find somewhat larger values for the density: an MAE of 6.9 km s^{–1} and 0.8 cm^{–3} for the radial velocity and density, respectively, with maximum absolute differences of 270 km s^{–1} and 8.7 cm^{–3}. Indeed, visual comparison of the two curves shows that after the arrival of the CME the density profiles of the high resolution run and the AMR level 3 run deviate.
Furthermore, we determined the arrival times and shock widths for all five simulations; the comparison can be found in Fig. 11. Arrival times and shock widths were calculated as explained in Sect. 4. The shock region becomes narrower for the higher refinement levels because the jump is steeper. While AMR level 3 does not reproduce the arrival time and shock width of the high resolution case, it does yield an arrival time and shock width comparable to those of the middle resolution run. We note that the differences in arrival time and shock width are much smaller between the AMR runs than between the stretched-grid runs.
Fig. 9 Example of AMR grid with three grid levels (base + two refinements) applied to a cone CME. The x and yaxes show distance in R_{⊙}. Left panel: tracing function. Right panel: radial speed. 
Fig. 10 Radial velocity and number density at Earth vs. time. Simulation run is performed with the synthetic wind from Sect. 3.1.1 and the CME parameters as given in Sect. 3.2. As a reference, results for low, middle and high resolution are given by the dashed lines in yellow, red and darkblue. Solution AMR runs with 2 and 3 levels are given by the orange and blue curves, respectively. The xaxis corresponds to the days passed since the end of the relaxation phase (14 days into the simulation). We note that compared to previous figures, we have zoomed in to show days 1 through 7 only. 
Fig. 11 Left panel: arrival time of the CME shock in hours. Right panel: shock width in hours. We show level 1 (noAMR), level 2 and level 3 results in yellow, orange and blue, respectively. The red and darkblue points show the results for the noAMR middle and high resolution simulations, respectively. The xaxis shows the corresponding AMR levels (level 1 corresponds to the case without AMR). 
Wallclock times required for the same simulation run with different mesh refinement levels without grid stretching performed on a CPU node with 36 cores.
5.3 AMR effect on performance and speedup
We performed typical simulation runs starting from a low resolution computational mesh and added level 2 and level 3 solution AMR (i.e., allowing for one and two levels of refinement, respectively). We provide run-time comparisons in Table 4 for simulation runs on a CPU node with 36 cores on the tier-2 Genius cluster of the Vlaams Supercomputing Centre. For comparison, we also show the run times for the original simulations without refinement. The times for the low, middle, and high resolution runs in rows 1, 3, and 5 are identical for the solar wind (left) and CME tracing (right) cases because the resolution is uniform over the full computational domain. AMR level 1 corresponds to the original simulation on the low resolution mesh; AMR levels 2 and 3 locally correspond to the middle and high resolution domains, meaning an effective resolution that is, respectively, a factor of 2 and 4 higher (in each spatial direction) than the level 1 resolution. Given this relation, it is interesting to investigate how the simulation times compare. We note that the AMR level 3 run, which performs very similarly to the middle resolution run (see the previous section), is about a factor of 2 and 4 faster for the solar wind and CME runs, respectively. It remains up to the user to decide whether this speedup is worthwhile. In an operational setup requiring several simulation runs to compensate for the uncertainty and/or inaccuracy in the observational measurements, such speedups can be crucial. In fact, the high resolution simulations on the original grid cannot be performed with the settings described at the beginning of this section; they require more resources (at least two computing nodes with a total of 72 cores) to be carried out within the time frame of three days. Yet the AMR level 3 simulations, with the same effective resolution, run within a few hours for both the solar wind and the CME tracing conditions.
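The cost gap can be rationalized with simple cell-count arithmetic; an illustrative estimate in which the base cell count and the refined fraction are made-up numbers (not measured from these runs), and which ignores intermediate buffer blocks and timestep effects:

```python
def cell_counts(n_base, levels, refined_fraction):
    """Rough 3D cell counts: a uniform grid at the finest level vs. an AMR
    grid in which only a fraction of the base cells is refined to that
    level. Each extra level multiplies the local cell count by 8 (2^3)."""
    uniform = n_base * 8 ** (levels - 1)
    amr = (n_base * (1 - refined_fraction)
           + n_base * refined_fraction * 8 ** (levels - 1))
    return uniform, amr

# Hypothetical: level-3 effective resolution, 5% of the domain refined
uniform, amr = cell_counts(n_base=1_000_000, levels=3, refined_fraction=0.05)
ratio = uniform / amr    # cells saved by keeping the refinement local
```

Even with generous assumptions the AMR grid carries an order of magnitude fewer cells than the uniform high resolution grid, which is consistent with the uniform run exceeding the wall-time limit while the AMR run finishes in hours.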
6 Discussion and outlook
We presented a novel inner heliospheric wind and CME evolution model called ICARUS, which has been implemented within the MPIAMRVAC framework. The usefulness and efficiency (speedup) of radial grid stretching and solution adaptive mesh refinement have been explored by modeling a typical solar wind with ICARUS and then launching a simple (cone) CME and following its evolution. We focused on the potential of radial grid stretching and two specific methods or criteria to trigger solution adaptive grid refinement. For the steady background wind simulation, we limited the mesh refinement to one of the CIRs and for the CME evolution run, we applied the refinement according to the tracing function of the CME, in order to follow it from the inner boundary toward the outer boundary as it propagates in the inner heliosphere. When the CME has passed, the grid is coarsened in order to avoid unnecessary refinement in the domain.
Radial grid stretching introduced a significant speedup of the simulation because the total number of grid cells in the domain is reduced, so less data are processed at each step. In addition, with radial grid stretching, the grid cells far away from the Sun no longer have an elongated shape. The radial grid stretching yielded a speedup of ~3.5 for the high resolution grid.
Solution AMR also had a substantial effect on the wall-clock time of the simulation. The block-adaptive AMR in MPI-AMRVAC is very flexible and allows for different AMR strategies and thresholds depending on the needs; several refinement conditions or criteria can be considered. In this work we applied a grid refinement based on the CME tracing function, which refines the domain wherever the (entire) CME is present. This enables high spatial resolution at the moving CME location and grid coarsening after the CME has passed. In future work we will apply a CME shock mesh refinement condition. This will increase the resolution not only in the CME shock but also, depending on the background wind conditions, in any other shock waves in the solar wind, such as the CIR shocks that form at the interaction regions of the fast and slow streams.
The simulation speedup by introducing solution AMR was very notable, but depended on the required resolution (i.e., on the number of refinement levels). Since the shock refinement criterion was fulfilled in larger portions of the computational domain, more grid cells were refined. Therefore, the simulations needed slightly more CPU and wallclock time than those in which the refinement criterion was based on the tracing function. Depending on the requirements for the simulation, the AMR criterion (or criteria) can be adjusted.
In this paper we considered the effect of radial grid stretching in the domain and the effect of imposing solution AMR on the original equidistant grid. While the stretched grid provided the highest speedup, AMR level 3 provided results much closer to the middle and high resolution cases while still adding a considerable speedup; as such, this setup may be preferred. In future work, we will consider the combination of radial grid stretching and AMR. Radial grid stretching, as implemented here, decreases the number of cells in the domain and decreases the spatial resolution at larger radial distances. This can be compensated for, wherever high spatial resolution is required, by combining the grid stretching with solution-adaptive grid refinement, which enables very high effective spatial resolution without the high CPU cost incurred when the spatial resolution is increased everywhere in the computational domain. By keeping the high spatial resolution local, only where needed, the simulation becomes much more efficient.
Acknowledgements
The authors thank Dr. Emmanuel Chané for many interesting discussions and his very valuable input on this paper. C.V. is funded by the Research Foundation – Flanders, FWO SB PhD fellowship 11ZZ216N. This project (EUHFORIA 2.0) has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 870405. These results were also obtained in the framework of the projects C14/19/089 (C1 project Internal Funds KU Leuven), G.0D07.19N (FWOVlaanderen), SIDC Data Exploitation (ESA Prodex12), and Belspo projects BR/165/A2/CCSOM and B2/191/P1/SWiM. Computational resources and services used in this work were provided by the VSCFlemish Supercomputer Center, funded by the Research Foundation Flanders (FWO) and the Flemish GovernmentDepartment EWI.
References
 Arge, C. N., Odstrcil, D., Pizzo, V. J., & Mayer, L. R. 2003, AIP Conf. Proc., 679, 190
 Hill, F. 2018, Space Weather, 16, 1488
 Hinterreiter, J., Magdalenic, J., Temmer, M., et al. 2019, Sol. Phys., 294, 170
 Jin, M., Manchester, W. B., van der Holst, B., et al. 2017, ApJ, 834, 173
 Keppens, R., Nool, M., Tóth, G., & Goedbloed, J. 2003, Comput. Phys. Comm., 153, 317
 Keppens, R., Meliani, Z., van Marle, A. J., et al. 2012, J. Comput. Phys., 231, 718
 Kieokaew, R., Lavraud, B., Yang, Y., et al. 2021, A&A, 656, A12
 Mays, M. L., Taktakishvili, A., Pulkkinen, A., et al. 2015, Sol. Phys., 290, 1775
 Odstrcil, D., Riley, P., & Zhao, X. P. 2004, J. Geophys. Res., 109, 2116
 Owens, M. J., & Forsyth, R. J. 2013, Liv. Rev. Sol. Phys., 10, 5
 Pomoell, J., & Poedts, S. 2018, J. Space Weather Space Clim., 8, A35
 Pomoell, J., & Vainio, R. 2012, ApJ, 745, 151
 Porth, O., Xia, C., Hendrix, T., Moschou, S. P., & Keppens, R. 2014, ApJS, 214, 4
 Scolini, C., Verbeke, C., Poedts, S., et al. 2018, Space Weather, 16, 754
 Shiota, D., & Kataoka, R. 2016, Space Weather, 14, 56
 Sokolov, I. V., van der Holst, B., Oran, R., et al. 2013, ApJ, 764, 23
 Tóth, G., & Odstrcil, D. 1996, J. Comput. Phys., 128
 van der Holst, B., Sokolov, I. V., Meng, X., et al. 2014, ApJ, 782, 81
 Wijsen, N. 2020, PhD thesis, KU Leuven, Belgium, and University of Barcelona, Spain
 Wijsen, N., Aran, A., Pomoell, J., & Poedts, S. 2019, A&A, 622, A28
 Wijsen, N., Samara, E., Aran, À., et al. 2021, ApJ, 908, L26
 Woodward, P., & Colella, P. 1984, J. Comput. Phys., 54, 115
 Xia, C., Teunissen, J., El Mellah, I., Chané, E., & Keppens, R. 2018, ApJS, 234, 30
 Xie, H., Ofman, L., & Lawrence, G. 2004, J. Geophys. Res. Space Phys., 109, A03109
 Zhao, X. P., Plunkett, S. P., & Liu, W. 2002, J. Geophys. Res. Space Phys., 107, 1223
http://amrvac.org/, v2.2.
All Tables
Equidistant (non-stretched) and radially stretched grid resolutions referred to throughout this paper.
All Figures
Fig. 1 Comparison of the radial speed at 1 AU and 0 degree latitude between the ICARUS (red) and EUHFORIA (blue) models.
Fig. 2 Comparison of the radial speed in the ecliptic plane between the ICARUS (left panel) and EUHFORIA (middle panel) models, where the right panel represents the difference between the two models.
Fig. 3 Mesh grid in the ecliptic plane. Left panel: equidistant grid with the number of points in spherical coordinates corresponding to [60, 32, 96]. Right panel: radially stretched grid for the same resolution, so for N = 60.
Fig. 4 Radial velocity and number density at Earth vs. time for the realistic solar wind (i.e., based on the boundary conditions) determined from the photospheric magnetogram and the semiempirical WSA-like coronal model of EUHFORIA. Five cases are shown: the four simulations with different spatial resolutions on the radially stretched grid and the middle resolution non-stretched (equidistant) case. Results for low, middle, high, and ultrahigh grid resolutions are given in yellow, orange, blue, and purple, respectively. The red dashed curve shows middle resolution non-stretched simulation results. The x-axis corresponds to the days passed since the end of the relaxation phase (14 days into the simulation).