Open Access
A&A, Volume 627, July 2019
Article Number: A158
Number of pages: 24
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201935717
Published online: 16 July 2019

© L. Derome et al. 2019

Licence Creative Commons
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Particle physics detectors in space have opened a new era for the study of Galactic cosmic rays (GCRs). The Alpha Magnetic Spectrometer (AMS-02) instrument on the International Space Station (ISS) provides the best data to date for leptons and nuclei (Aguilar et al. 2013, 2014a,b, 2015a,b, 2016, 2018a,b), with an uncertainty of a few percent over a large energy range. Its measurements will probably remain unrivalled for at least the next decade in the GeV–TeV energy range.

In principle, high-precision data can be used to constrain different propagation scenarios, or candidate particles in the context of dark matter indirect detection. However, the promises of high-precision cosmic-ray (CR) physics can only be fulfilled if the various sources of uncertainties, from both the data and the model, are fully accounted for.

AMS-02 systematic uncertainties are diverse in origin and dominate the error budget of the measured fluxes and ratios overall. In experiments measuring spectra, correlations between adjacent energy bins may be introduced at the data-analysis stage. These correlations could wash out or mimic spectral features in the data. In principle, the best approach to automatically account for such effects is to fold the model prediction with the full instrument response and compare directly with the number of events. However, because the AMS-02 instrument response is not available, the next best approach is to incorporate a correlation matrix of errors, or to use nuisance parameters, when comparing a model to the data. Such a matrix is not available either; however, we can rely on educated guesses to derive it and inspect the consequences on the model parameters.

The dominant source of uncertainty in the modelling comes from nuclear cross sections (Donato et al. 2001; Maurin et al. 2010). The propagation of these uncertainties to the model parameters has already been investigated in several studies (Giesen et al. 2015; Génolini et al. 2015; Tomassetti 2017; Reinert & Winkler 2018), and we revisit this question here in more detail with the use of mock (simulated) data. Controlled data were introduced in a CR propagation context in Coste et al. (2012) to study possible biases on model parameters from using primary and secondary CR data of very different accuracy. In this study, mock data are used to characterise the bias on the reconstructed model parameters when accounting for cross-section uncertainties and, more importantly, to assess how well nuisance parameters on cross sections allow one to recover unbiased values of these parameters.

To be able to test various model hypotheses at the level required by the data, the model calculation must reach a precision much better than the data uncertainty. The diffusion equation is a second-order differential equation in space and momentum, and we discuss here the impact of energy boundary conditions, along with other effects that could prevent reaching the desired precision.

Numerical precision aside, studying how to best deal with data and cross-section uncertainties is mostly independent of the propagation model and of the specific quantity studied. For practical purposes, our methodology is illustrated on the B/C ratio, which is one of the most frequently discussed quantities in the CR literature (e.g. Maurin et al. 2001).

This is a methodology paper, and the full analysis of AMS-02 data and the interpretation of the model parameters are left for the companion paper (Génolini et al. 2019). The paper is organised as follows: Sect. 2 briefly introduces the 1D model and its parameters, and details the two configurations used for the analysis. Section 3 discusses how to ensure a good precision at various stages of the calculation: boundary conditions, numerical stability, and comparison of the model to the data. Section 4 presents two different parametrisations of nuisance parameters for nuclear cross sections, and mock data are used to fully characterise the impact of cross-section uncertainties. In Sect. 5, we detail the systematics of AMS-02 data – from which a covariance matrix of errors is built – and we show the impact of the correlation length (of the covariance matrix) on the best-fit results and errors on the model parameters. Cross-section and covariance matrix effects are then dealt with together in Sect. 6, to assess which one dominantly impacts the analysis of the AMS-02 B/C data. We summarise our findings and give some recommendations in Sect. 7.

For the sake of readability, some of the more technical and detailed discussions are deferred to the appendices: Appendix A shows that the AMS-02 B/C data conversion from R to Ek/n as presented in Aguilar et al. (2016) induces an energy-dependent bias (∼3%); Appendix B details how the covariance matrix of errors and the nuisance parameters are included in the χ2 analysis; Appendix C gathers several boundary coefficients that can be implemented for the numerical solution of the discretised second-order differential diffusion equation; Appendix D presents a thorough analysis of the stability of the latter solution (for the 1D model), also determining a stability criterion for the Crank–Nicolson solution with a vanishing second-order term (Va → 0); Appendix E shows in more detail how specific cross-section reactions impact the calculated B/C ratio.

2. Model and parameters

Many propagation model implementations are possible, with different geometries, more or less involved spatial dependences, possible time dependence, and so on. For simplicity, and because most of our results and conclusions should not depend on this modelling, we use throughout the paper a 1D diffusion model, as implemented in the USINE package (Maurin 2018). The equations and solutions can be found for instance in Putze et al. (2011), and for conciseness, we do not repeat them here. Such a model also proves useful because, in some simple cases, it can be compared to compact analytical solutions (see, e.g. Appendix D).

2.1. 1D propagation model

In this model, sources and gas lie in a thin disc of half-height h, in which energy losses and reacceleration occur. Particles diffuse in a homogeneous region of height L above and below the disc. The free parameters of the model that we vary in this study are: the normalisation and rigidity dependence (R = pc/(Ze)) of the homogeneous and isotropic spatial diffusion coefficient K(R), see Sect. 2.2; the Alfvénic speed Va of the scatterers, which mediates the strength of the diffusion in momentum Kpp(R), the latter being related to the spatial diffusion coefficient and depending on the geometry of the turbulence (Schlickeiser 2002); and the strength Vc of a constant Galactic wind perpendicular to the thin disc.

Other model parameters (halo size L, source spectrum) are irrelevant for this study. For instance, changing L would merely lead to a rescaling of the above parameters (e.g. Putze et al. 2011), so we fix it to 10 kpc for practical purposes (and we fix h to 100 pc). The B/C ratio is insensitive to the value of the universal spectral index of the sources (e.g. Maurin et al. 2002; Génolini et al. 2015), so it is fixed in the analysis.

2.2. Parameters for Model A and Model B

The impact of various uncertainties on transport parameters depends on the exact number and nature of the free parameters considered. We rely on two configurations, which correspond to two extreme cases of a more generic parametrisation of the diffusion coefficient, further detailed in the companion paper. They are denoted Model A and Model B in the rest of the paper.

Model A is a diffusion-convection-reacceleration model with

K(R) = β^ηt K0 (R/1 GV)^δ KHE(R),   (1)

where ηt allows for a sub-relativistic change of the diffusion coefficient, as parametrised in Maurin et al. (2010), and where KHE(R) = (1 + (R/Rh)^(δh/sh))^sh is a high-energy break whose parameter values are taken from Génolini et al. (2017). This configuration has 5 free transport parameters: K0, δ, ηt, Va, and Vc.

Model B is a pure diffusion model (no Vc, no Va) with a double broken power-law, at both high and low energy:

(2)

This configuration has 4 free parameters: K0, δ, Rl, and δl. The smoothness parameter sl has only a minor impact on the results, so it is fixed to 0.05 (quick transition) to speed up the fitting procedure.

Following Seo & Ptuskin (1994), we take, for both models, the diffusion coefficient in momentum

Kpp(R) = (4/3) Va² p² / [δ (4 − δ²) (4 − δ) K(R)].   (3)
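In code, the relation between spatial and momentum diffusion can be sketched as follows (a minimal sketch assuming the standard Seo & Ptuskin form Kpp·K = (4/3) Va² p² / [δ(4 − δ²)(4 − δ)]; units and names are schematic and ours):

```python
def k_pp(p, va, delta, k_spatial):
    """Momentum diffusion coefficient from the spatial one, assuming the
    standard reacceleration relation (Seo & Ptuskin 1994):
    Kpp * K = (4/3) * Va^2 * p^2 / (delta * (4 - delta^2) * (4 - delta))."""
    return (4.0 / 3.0) * va**2 * p**2 / (
        delta * (4.0 - delta**2) * (4.0 - delta) * k_spatial)

# Kpp is inversely proportional to the spatial diffusion coefficient:
# stronger spatial diffusion implies weaker diffusive reacceleration.
value = k_pp(p=1.0, va=1.0, delta=0.5, k_spatial=1.0)
```

Note the p² scaling: doubling the momentum quadruples Kpp at fixed K.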

3. Model precision: general considerations

The requirements for any analysis are (i) that the model calculation can be enforced at a precision much better than the precision of the data which constrain it, and (ii) that the model and the data compared refer to exactly the same quantity.

We focus on the model calculation in this section, but at the data level, biases can also sometimes be introduced, as exemplified in Appendix A with the conversion of B/C data from R to Ek/n proposed in Aguilar et al. (2016).

3.1. Energy boundary conditions and numerical stability

The transport equation of CRs (e.g. Maurin 2018) is a second-order differential equation in space and in momentum (if reacceleration is present), whose solution depends on the boundary conditions. These conditions should in principle be fixed by physics requirements, but there is no consensus as to what they are. We underline that spatial and momentum boundary conditions are generally not dealt with on the same footing: while the former are usually clearly indicated in publications and recognised as being part of the properties of a model, this is not the case for the latter (discussed below).

The transport equation can be cast into a generic conservation equation on the energy co-ordinate x = lnEk/n:

α(x) u + ∂JE/∂x = u0,   with JE = β(x) u − γ(x) ∂u/∂x,   (4)

where u is the cosmic-ray differential density in energy, u0 a source term, JE a current (convection and diffusion), and α, β, and γ coefficients related to energy losses, diffusion in momentum, etc.

Boundary conditions. At very high energy, for nuclei, the timescale of energy losses and gains becomes very large compared to the escape time, suggesting the condition u(xmax) = u0(xmax), which is seemingly used in all propagation codes.

At low energy, different boundary conditions have been used in the literature, corresponding to different physical situations: (i) no curvature in the spectrum, as introduced in Donato et al. (2001) for antiprotons: ∂²u/∂x²|xmin = 0; (ii) no energy flow, that is, a vanishing current JE(xmin) = 0. Physically, this means that, at xmin, the outward current from energy losses exactly balances the inward reacceleration current. This condition thus depends on the coefficients α, β, and γ of the equation. From Eq. (4), one can infer that very small values of γ will create a strong gradient of u to maintain JE = 0; (iii) no density gradient in momentum, ∂f/∂p|xmin = 0 (Evoli et al. 2017), which translates into ∂/∂p(u/(pE))|Emin = 0.

Numerical solution and precision. In the semi-analytical models implemented in USINE and used here, as in most propagation codes, the second-order differential equation in energy is solved numerically on an energy grid. In practice, we make use of the semi-implicit Crank–Nicolson method, and the resolution proceeds via the inversion of a tridiagonal matrix (see Appendix C). The first and last energy bins are set by the boundary conditions, as reported in Table C.1.

The precision of the numerical inversion depends on the number of points on the grid. We find that the low-energy boundary conditions described above yield similar results, provided that the number of energy bins is large enough and that the lower energy Emin is far below the range of interest. However, they differ with respect to the minimum number of energy bins required to reach the same level of precision. The B/C ratios obtained from each of these conditions are compared in Fig. 1 to a reference ratio obtained by setting Emin = 1 MeV n−1 and using 5000 points per energy decade. We checked that the reference ratio does not depend, at the per mille level, on the chosen boundary condition. The red, blue, and green curves are obtained for the first, second, and third low-energy boundary conditions, respectively. In the top panel, we change the number of points in the energy grid (10, 50, and 100 bins per decade), whereas in the bottom panel we move the low-energy bound Emin from 100 MeV n−1 down to 1 MeV n−1. We obtain similar results for each low-energy boundary condition, provided that the number of points per energy decade is larger than 10 and Emin is much lower than the energy range of interest. This is probably the reason why no boundary condition at low energy is specified in the numerical codes GALPROP (Strong et al. 2011, p. 35) and PICARD (Kissmann 2014, p. 5). However, the choice of the boundary condition matters for the convergence. We find that the first condition (no curvature in the spectrum) converges better than the others and yields a precision at the percent level even with only 10 bins per decade when starting from Emin = 1 MeV n−1. We therefore recommend using the no-curvature boundary condition at low energy, with Emin = 1 MeV n−1 and 50 bins per decade, to ensure numerical systematics below the percent level.
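As an illustrative toy version of this procedure (constant coefficients, a dense solve instead of the tridiagonal inversion, and names of our own), the discretised equation becomes a linear system whose first row encodes the no-curvature low-energy boundary condition and whose last row fixes the high-energy value; a manufactured linear solution, for which central differences and the no-curvature condition are exact, is recovered to numerical precision:

```python
import numpy as np

def solve_bvp_no_curvature(alpha, beta, gamma, u0_func, u_top, x):
    """Solve alpha*u + beta*u' - gamma*u'' = u0 on the grid x, with a
    'no curvature' condition (u'' = 0) at x[0] and a Dirichlet value at x[-1]."""
    n = len(x)
    h = x[1] - x[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    # Low-energy boundary row: no curvature, u0 - 2*u1 + u2 = 0
    A[0, 0], A[0, 1], A[0, 2] = 1.0, -2.0, 1.0
    # Interior rows: central differences for u' and u''
    for i in range(1, n - 1):
        A[i, i - 1] = -beta / (2 * h) - gamma / h**2
        A[i, i] = alpha + 2 * gamma / h**2
        A[i, i + 1] = +beta / (2 * h) - gamma / h**2
        b[i] = u0_func(x[i])
    # High-energy boundary row: solution pinned to its known value
    A[-1, -1] = 1.0
    b[-1] = u_top
    return np.linalg.solve(A, b)

# Manufactured test: u(x) = a + b*x has zero curvature everywhere,
# so u0 = alpha*(a + b*x) + beta*b makes it the exact solution.
a_lin, b_lin = 2.0, 1.5
alpha, beta, gamma = 1.0, 0.3, 0.5
x = np.linspace(0.0, 1.0, 101)
u_num = solve_bvp_no_curvature(
    alpha, beta, gamma,
    lambda xx: alpha * (a_lin + b_lin * xx) + beta * b_lin,
    a_lin + b_lin * 1.0, x)
max_err = np.max(np.abs(u_num - (a_lin + b_lin * x)))
```

Real codes use a tridiagonal (Thomas) solver for efficiency; the dense solve above is only for clarity.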

Fig. 1.

Relative difference for the B/C computed with different boundary conditions at low energy, varying the number of energy bins per decade (top panel) and the minimal energy of the grid (bottom panel). The reference B/C has been computed using 5000 energy bins per decade and setting Emin = 1 MeV n−1.


Finally, this method can be unstable for γ → 0, in which case Eq. (4) boils down to a first-order differential equation. A simple prescription to obtain the solution while keeping the same solver is to enforce a non-null but small γ, that is, a non-vanishing yet small Va. We checked that setting a lower limit on the Alfvénic speed, tied to the spacing Δx of the energy grid (see Appendix D), stabilises the Va = 0 solution at the per mille level.

3.2. “Model vs. data” error: Rmean versus bin-range estimates

To our knowledge, models are always calculated at a single point and then compared to data measured over a bin range. This can lead to a systematic error, which we quantify here.

Meaning of Rmean in data. All data measurements are based on a number of events in the detector (corrected for acceptance, efficiencies, etc.) per unit area, solid angle, time, and energy bin. To be concrete, let us assume that, like AMS-02, the experiment provides data per rigidity bin [Rmin, Rmax]. Usually, for display purposes, the central bin value reported by experiments is the geometric or, more rarely, the arithmetic mean:

Rgeo = √(Rmin Rmax)   or   Rarith = (Rmin + Rmax)/2.   (5)

Neither of these means is satisfactory to represent the measured flux F, because the correct central value should be calculated from the condition

F(Rmean) = (1/ΔR) ∫_{Rmin}^{Rmax} F(R) dR,   with ΔR = Rmax − Rmin.   (6)

This is a well-known issue (Lafferty & Wyatt 1995): providing the exact Rmean from the data requires knowledge of the spectral shape of the flux that the experiment is actually trying to measure!
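The Lafferty & Wyatt condition can be illustrated numerically: for a power-law flux, the rigidity at which the flux equals its bin average can be found with a root finder (the helper names below are ours). For F ∝ R⁻², it coincides with the geometric mean of the bin edges.

```python
import numpy as np
from scipy.optimize import brentq

def bin_average_power_law(alpha, rmin, rmax):
    """Bin average of F(R) = R**(-alpha) over [rmin, rmax]."""
    if abs(alpha - 1.0) < 1e-12:
        return np.log(rmax / rmin) / (rmax - rmin)
    return (rmax**(1 - alpha) - rmin**(1 - alpha)) / ((1 - alpha) * (rmax - rmin))

def lafferty_wyatt_rmean(alpha, rmin, rmax):
    """Rigidity at which the flux equals its bin average (Lafferty & Wyatt 1995)."""
    fbar = bin_average_power_law(alpha, rmin, rmax)
    # F is monotonically decreasing, so the root is bracketed by the bin edges
    return brentq(lambda r: r**(-alpha) - fbar, rmin, rmax)

rm = lafferty_wyatt_rmean(2.0, 1.0, 10.0)
# For alpha = 2, rm coincides with the geometric mean sqrt(rmin * rmax)
```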

F(Rmean) vs. ⟨F⟩ for fluxes and ratios. In the context of minimisation studies, the above discussion should be irrelevant, as the theoretical flux is known: it can be integrated over the bin range and compared with the data without approximation. However, the standard practice in the literature is to fit the data using F(Rmean). We quantify below the relative difference between the exact and approximate calculations for a flux,

ℰF(α, ΔR) = ⟨F⟩ / F(Rmean) − 1,   with ⟨F⟩ = (1/ΔR) ∫_{Rmin}^{Rmax} F(R) dR.   (7)

For practical calculations, we assume power-law fluxes F = F0 R^−α, so that the normalisation F0 cancels out in Eq. (7). We can also estimate the relative difference for B/C, taking the ratio of two power-law fluxes of indices αnum and αden:

ℰR = (1 + ℰF^num)/(1 + ℰF^den) − 1.   (8)

Model error on fluxes. We show in Fig. 2 the modulus |ℰF| as a function of α. The different line styles correspond to different bin widths: the larger the bin width, the larger |ℰF|. Also, Rgeo is a better approximation than Rarith, except for α ∼ 0 (thick dim vs. thin light grey lines). Rgeo is exact for α = 0 and α = 2, and the relative difference is positive above α = 2 and negative below α = 2. To guide the eye, the green dash-dotted line shows the typical 3% uncertainty on the AMS-02 data: for all values of α and ΔR above this line, using F(Rmean) in model calculations produces a bias larger than 3%. We do not study fluxes here, but primary and secondary fluxes have slopes of ∼2.8−3.1, for which the bias is maximal. Nevertheless, the maximal rigidity bin width in the AMS-02 proton and He flux data is Rmax/Rmin ≈ 1.6, for which the error of the approximate calculation remains smaller than the data uncertainty.
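The flux bias can be checked numerically (our own helper, computing the relative difference between the bin-averaged power law and its point estimate): the geometric mean is exact for α = 0 and α = 2, and does better than the arithmetic mean away from α ∼ 0.

```python
import numpy as np

def bin_average(alpha, rmin, rmax):
    """Bin average of F(R) = R**(-alpha) over [rmin, rmax]."""
    if abs(alpha - 1.0) < 1e-12:
        return np.log(rmax / rmin) / (rmax - rmin)
    return (rmax**(1 - alpha) - rmin**(1 - alpha)) / ((1 - alpha) * (rmax - rmin))

def eps_flux(alpha, rmin, rmax, mean="geo"):
    """Relative difference between the exact (bin-averaged) flux and the
    approximate point estimate at the mean rigidity, for F(R) = R**(-alpha)."""
    rbar = np.sqrt(rmin * rmax) if mean == "geo" else 0.5 * (rmin + rmax)
    return bin_average(alpha, rmin, rmax) / rbar**(-alpha) - 1.0

# The bias grows with the bin width rmax/rmin and with |alpha - 2| (geo case)
```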

Fig. 2.

Absolute value |ℰF| of the relative difference between the exact and approximate flux calculation in a bin, Eq. (7), as a function of the flux spectral index α. Two mean rigidity definitions are compared, Rgeo in dim grey and Rarith in light grey, see Eq. (5), for different bin sizes (from 1 decade, solid line, down to 0.1 decade, dotted line). To guide the eye, the green dash-dotted line shows when the relative difference is 3%. See text for discussion.


Model error on ratios. We repeat the analysis for the ratio of two power laws of slopes αnum and αden, see Eq. (8). Figure 3 shows colour-coded values of log10(ℰR). The x-axis is αden − αnum: for the B/C ratio, this difference ranges from ∼0.2 to 0.7 at high energy, while negative values mimic the decreasing ratio below a few GV. We show our results for three αden values, where the lower value (top panels) mimics the flattening of the carbon spectrum at low energy, while the higher value (bottom panels) corresponds to the high-energy slope: ℰR grows with increasing αden − αnum and with the data bin size (y-axis); the three rows show a growing relative difference (from top to bottom) with αden. Also, as already observed for fluxes, the Rgeo approximation is always better than Rarith: for any given x−y position in Fig. 3, ℰR is always smaller in the left panel than in the right panel. To guide the eye, the green dash-dotted line delimits the contour for which ℰR ≡ 3%, with larger uncertainties in reddish regions and smaller uncertainties in blueish ones.

Fig. 3.

Colour-coded relative difference log10(|ℰR|) between the exact and approximate ratio calculation in a bin, Eq. (8), as a function of the numerator and denominator spectral difference, αden − αnum (x-axis), and of the rigidity bin width, log10(Rmax/Rmin) (y-axis). For instance, a value of 2 (red) corresponds to a relative difference of 100%, and a value of −1 (blue) to a relative difference of 0.1%. The left and right panels correspond to calculations using Rgeo and Rarith, respectively, see Eq. (5). The rows correspond to three different values of the denominator spectral index αden. In each panel, the dash-dotted green contour delimits the regions in which the relative difference is above or below 3%. See text for discussion.


For B/C from AMS-02 data, the bin-range size goes from 0.08 decade at low rigidity to 0.3 decade for the last few rigidity bins, where the systematic uncertainty reaches 10% (Aguilar et al. 2016, 2018b). From Fig. 3, this translates into negligible values of ℰR for almost all rigidities. However, for the highest rigidity bins, the model error from using Rgeo is systematic (same sign) and reaches a maximum of −3% for αnum − αden ≳ 0.7. In a region where power-law breaks are usually fitted, the exact calculation is recommended.
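The combination of numerator and denominator errors in Eq. (8) can be sketched numerically (our own helper names, assuming power-law fluxes and the geometric-mean point estimate): the ratio error vanishes for equal indices and grows with the bin width.

```python
import numpy as np

def eps_flux(alpha, rmin, rmax):
    """Relative error between the bin-averaged power-law flux F = R**(-alpha)
    and its point estimate at the geometric mean of the bin edges."""
    if abs(alpha - 1.0) < 1e-12:
        fbar = np.log(rmax / rmin) / (rmax - rmin)
    else:
        fbar = (rmax**(1 - alpha) - rmin**(1 - alpha)) / ((1 - alpha) * (rmax - rmin))
    return fbar / np.sqrt(rmin * rmax)**(-alpha) - 1.0

def eps_ratio(alpha_num, alpha_den, rmin, rmax):
    """Relative error on a ratio of two power laws, combining the
    numerator and denominator flux errors as in Eq. (8)."""
    return (1.0 + eps_flux(alpha_num, rmin, rmax)) / \
           (1.0 + eps_flux(alpha_den, rmin, rmax)) - 1.0
```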

4. Handling cross-section uncertainties

Nuclear cross sections are measured by “external” experiments, and these measurements can be incorporated as a probability distribution in the χ2 minimisation via nuisance parameters (see Appendix B): cross sections far from their most probable values must be penalised in the minimisation, see Eq. (B.6).
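Schematically, each nuisance parameter contributes an extra penalty term to the χ2 (Appendix B and Eq. (B.6) give the full expression; the quadratic Gaussian form below is a minimal sketch of our own, with illustrative names):

```python
import numpy as np

def chi2_with_nuisance(data, model, sigma_data, theta, theta_mu, theta_sigma):
    """Data-model chi2 plus a Gaussian penalty for each nuisance parameter:
    values of theta far from their most probable value theta_mu are penalised."""
    chi2_data = np.sum(((data - model) / sigma_data) ** 2)
    chi2_nuis = np.sum(((np.asarray(theta) - theta_mu) / theta_sigma) ** 2)
    return chi2_data + chi2_nuis

# A nuisance parameter at its most probable value adds no penalty;
# one standard deviation away adds exactly 1 to the chi2.
d = np.array([1.0, 2.0, 3.0])
m = np.array([1.0, 2.0, 3.0])
s = np.array([0.1, 0.1, 0.1])
```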

The difficulty lies in the characterisation of the uncertainties, in the choice of the nuisance parameters, and in assessing the robustness of the procedure. We start by characterising the impact of cross-section uncertainties on the B/C ratio (Sect. 4.1). We then present two different strategies for the choice of nuisance parameters (Sect. 4.2). To assess the success of these two strategies, we rely on the analysis of mock data for many configurations (Sect. 4.3). We then discuss how well these configurations capture and propagate all cross-section uncertainties to the transport-parameter level (Sect. 4.4).

4.1. Quantifying the impact on B/C ratio

Cross-section data uncertainties are typically at the ∼5−10% level for inelastic cross sections, and at the 15−25% level for production cross sections (Génolini et al. 2018). However, because the data are sometimes scarce, old, not always consistent with one another, and sometimes even missing for some reactions, several parametrisations of the whole network of reactions exist. A conservative estimate of the impact of cross-section uncertainties on the B/C calculation can be based on the scatter observed from using several of these parametrisations (see Génolini et al. 2018 for more details).

For inelastic cross sections, σinel, we use below B94 (Barashenkov & Polanski 1994), W96 (Wellisch & Axen 1996), T99 (Tripathi et al. 1996, 1999), and W03 (Webber et al. 2003). Except for T99, the scaling σHe/σH is taken from Ferrando et al. (1988). For production cross sections, σprod, we use W98 (Webber et al. 1998a,b,c), S01, W03 (Webber et al. 2003), and G17 (Moskalenko et al. 2001; Moskalenko & Mashnik 2003).

For a given set of propagation parameters, we calculate the B/C ratio for the various parametrisations, and we plot in Fig. 4 the relative variation with respect to a reference (T99 for σinel and W03 for σprod). The maximum impact of the inelastic cross sections is ≲3% at ∼5 GV (left panel). It slowly decreases to zero at higher R, because the escape time from the Galaxy decreases with R while the destruction time remains constant (see, e.g. Fig. D.1). The maximum impact of the production cross sections is ≲10%, and it is seen equally at low and high rigidity (right panel), because the boron flux is directly related to its production cross sections. These results are similar for Models A and B (not shown) and are independent of the solar modulation level (compare the black and grey lines in Fig. 4).

Fig. 4.

Impact of different cross-section parametrisations on the B/C calculation as a function of rigidity, w.r.t. a reference (denoted Ref. in the legend). The left and right panels are for inelastic and production cross sections, respectively (see Sect. 4.1). The thick (resp. thin) lines are for the interstellar (resp. solar-modulated at ϕForce-Field = 800 MV) calculation. The red vertical line highlights the first rigidity point of the AMS-02 B/C data.


Actually, a huge network of reactions is involved in the calculation of any given secondary cosmic ray, making the modelling of uncertainties for each individual reaction a daunting task. Obviously, this full network is taken into account to calculate B/C, but we model and incorporate cross-section uncertainties for the most relevant reactions only: for production, 16O and 12C yield ∼70% of the boron flux via 10B and 11B (Génolini et al. 2018); for inelastic interactions, 16O, 12C, and 11B are the most relevant. The results of Appendix E show that the variation seen in the B/C ratio from cross-section uncertainties originates almost entirely from these dominant reactions.

4.2. From uncertainties to nuisance parameters

To estimate uncertainties on selected reactions, one could use a parametric formula to fit the cross-section data, and then extract the best-fit parameters and uncertainties to propagate as nuisance parameters. This is the strategy followed by Reinert & Winkler (2018) for production cross sections. However, as noted above, the reliability of the data is not always clear, with many inconsistent data points from a mixture of very old and more recent experiments with probably underestimated systematics (Génolini et al. 2018). All these unknowns most certainly break the statistical meaning of the χ2 values and uncertainty determination for these nuclear data. We assume here instead that the cross-section uncertainties are fully captured by existing parametrisations (Sect. 4.1). It is difficult to argue which of the two approaches gives the most realistic description. Indeed, the values and uncertainties of the cross-section data and models could be fully, partly, or not at all correlated, which would increase or decrease the uncertainty on the calculated B/C ratio (see the discussion in Génolini et al. 2018). Without new cross-section data, the degree of belief one can have in the modelling of cross-section uncertainties can hardly be improved.

Our approach probably allows for larger uncertainties than those taken in Tomassetti (2017) and Reinert & Winkler (2018), which also focus on a subset of reactions only. However, we recall that the uncertainties on this subset must ‘emulate’ the total uncertainties from the whole network of reactions (see the discussion in Appendix E). To go beyond qualitative arguments, we inspect in Sect. 4.3 whether the degrees of freedom used to model cross-section uncertainties are conservative enough not to bias the determination of the transport parameters. Before doing so, the next two paragraphs discuss two ways to model cross-section uncertainties as nuisance parameters.

4.2.1. Normalisation, scale, and slope (NSS)

Technically, how should the nuisance parameters be chosen so that they allow one to move from one parametrisation to another? The parametrisations are shown (solid lines) in Fig. 5, with σinel (resp. σprod) in the left (resp. right) panels, and there is no obvious scaling formula between the curves. A possibility is to start from a reference cross section and apply several simple transformations (uncorrelated, though some do not commute):

σNorm(E) = Norm × σref(E),   (9)

σScale(E) = σref(Scale × E),   (10)

σSlope(E) = σref(E) × (E/Et)^Slope for E ≤ Et (unchanged above Et).   (11)

Fig. 5.

Models (as listed in Sect. 4.1) and range of cross sections (quantiles corresponding to the median, 68%, and 95% CLs) generated from a reference cross section biased by Gaussian distributed nuisance parameters (as gathered in Table 1). The reactions shown correspond to the dominant ones discussed in Sect. 4.1, for inelastic (left column) and production (right column) cross sections.


Table 1.

Values of μ and σ for the Gaussian distributed nuisance parameters of the cross sections listed.

This set of transformations is denoted NSS in the following. It is our first option to generate nuisance parameters for cross-section uncertainties. To better visualise how the NSS transformations change cross sections, we draw 1000 values for each of the three uncorrelated NSS parameters (Norm., Scale, and Slope), and show in Fig. 6 the median, 1σ, and 2σ contours from the associated 1000 realisations of σNSS/σref. Whereas the normalisation and low-energy slope changes are independent of the reference cross section (left and right panels), the energy scale bias depends strongly on it (middle panel). Indeed, the reference in Fig. 6 is an inelastic cross section whose energy dependence has a low-energy peak, a dip, and a second smaller peak (see Fig. 5): shifted to the right-hand side and divided by the unscaled one, it gives a series of three bumps.

Fig. 6.

Median, 1σ, and 2σ contours for the distribution of σNSS/σref(Ek/n) values, from 1000 Gaussian distributed normalisations (9), energy scales (10), and slopes (11), with respective mean and variance (μ, σ)Norm = (1, 0.05), (μ, σ)Scale = (1, 0.2), and (μ, σ)Slope = (0, 0.02).


For each reaction, the NSS nuisance parameters are chosen so that σNSS/σref ± 1σ – calculated from Gaussian distributed samples of (μ, σ)Norm, Scale, Slope – encompasses the various cross-section parametrisations. This is shown in Fig. 5 for inelastic (left panels) and production (right panels) cross sections, with grey lines showing the median (dashed), 1σ (dotted), and 2σ (dash-dotted) envelopes of σNSS/σref; solid coloured lines are the literature parametrisations. The corresponding NSS parameters are gathered in Table 1 and serve as nuisance parameters in the analysis below. To keep the number of nuisance parameters as small as possible (so as not to slow down the minimisation procedure too much), only a normalisation and an energy scale are applied to inelastic cross sections, whereas a normalisation and a slope suffice to capture the range covered by production cross-section parametrisations.
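A toy implementation of the NSS transformations (a sketch of our own: the tabulated reference cross section, the pivot energy e_t below which the slope tilt is applied, and all names are illustrative assumptions):

```python
import numpy as np

def nss_transform(e, sigma_ref, norm=1.0, scale=1.0, slope=0.0, e_t=1.0):
    """Apply the Normalisation, energy-Scale, and low-energy Slope
    transformations to a reference cross section sigma_ref(E).
    sigma_ref must be a callable; e is an array of energies (GeV/n)."""
    e = np.asarray(e, dtype=float)
    tilt = np.where(e < e_t, (e / e_t) ** slope, 1.0)  # low-energy tilt only
    return norm * sigma_ref(scale * e) * tilt

# Toy reference cross section (mb), loosely mimicking a smooth sigma(E)
sigma_ref = lambda e: 250.0 * (1.0 + 0.3 * np.exp(-e))

e_grid = np.logspace(-2, 2, 5)
unchanged = nss_transform(e_grid, sigma_ref)           # identity transformation
shifted = nss_transform(e_grid, sigma_ref, norm=1.05)  # 5% normalisation shift
```

Drawing `norm`, `scale`, and `slope` from the Gaussians of Table 1 and collecting quantiles of the resulting curves reproduces envelopes of the kind shown in Figs. 5 and 6.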

4.2.2. Linear combination (LC)

Our second and more straightforward option is to define cross sections as a linear combination of the available cross-section parametrisations,

σLC(E) = Σ_i Ci σi(E),   (12)

where the index i runs over the parametrisations shown in Fig. 5. The sum of the Ci coefficients must be close to 1, so that each parametrisation is naturally recovered when a single Ci is non-null. This LC allows us to combine the different shapes (energy dependences) of the various cross sections, which is key for the determination of the transport parameters (see next section). To allow for possible normalisation systematics, we apply a loose constraint on the sum of the Ci coefficients,

Σ_i Ci ∼ 𝒢(μC, σC).   (13)

To be able to compare this approach with the NSS approach above, we set (μC, σC)inel = (1, 0.04) and (μC, σC)prod = (1, 0.15) to match the spread set on the normalisation parameters in Table 1. The constraint (13) is accounted for as a penalty in the minimisation, that is, an additional term in the χ2,

χ²LC = ((Σ_i Ci − μC)/σC)².   (14)

The Ci parameters are given flat priors in [−0.5, 1.5] and are not allowed to wander outside this range.
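A minimal sketch of the LC construction and its penalty term (toy cross-section tables and names of our own):

```python
import numpy as np

def sigma_lc(coeffs, sigma_tables):
    """Linear combination of cross-section parametrisations, as in Eq. (12):
    sigma_tables has one row per parametrisation, on a common energy grid."""
    coeffs = np.asarray(coeffs)[:, None]
    return np.sum(coeffs * np.asarray(sigma_tables), axis=0)

def chi2_lc_penalty(coeffs, mu_c=1.0, sigma_c=0.15):
    """Gaussian penalty on the sum of the coefficients, as in Eq. (14)."""
    return ((np.sum(coeffs) - mu_c) / sigma_c) ** 2

# Toy tables: three parametrisations of sigma(E) on a 3-point energy grid
tables = np.array([[100.0, 90.0, 80.0],
                   [110.0, 95.0, 85.0],
                   [105.0, 92.0, 82.0]])
# A single non-null coefficient recovers the corresponding parametrisation
recovered = sigma_lc([0.0, 1.0, 0.0], tables)
```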

4.3. Mock data: generation and configurations

We are now almost ready to address some important questions related to cross-section uncertainties. How do they propagate to the transport-parameter uncertainties? Can we recover the true values of the transport parameters when using “wrong” values for the cross sections? The only way to answer these questions is to analyse controlled data, whose input ingredients and parameters are known.

4.3.1. Mock data generation

To generate simulated data as close as possible to real data, we proceed as follows. Firstly, select the model (e.g. the 1D model here) and its input ingredients (e.g. a specific cross-section dataset). Secondly, select a dataset to fit (e.g. AMS-02 B/C data) and perform the fit. Thirdly, use the best-fit model to simulate data close to the real data in the following way: (i) interpolate the model values at the data energies Ei; (ii) at each Ei, draw a value gi from a normal distribution 𝒢(0, 1), and use it together with the data error σi to form the mock data value di = model(Ei) + gi σi; (iii) repeat as many times as necessary to obtain the desired number of simulated data. Lastly, analyse the mock data using the same setup or varying the input ingredients (depending on what is studied, see below).
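Step (iii) amounts to Gaussian smearing of the best-fit model at the data points; a minimal sketch with illustrative names and toy numbers:

```python
import numpy as np

def generate_mock_data(model_values, data_errors, n_sets, seed=0):
    """Generate n_sets mock datasets d_i = model_i + g_i * sigma_i,
    with g_i drawn from a standard normal distribution."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_sets, len(model_values)))
    return model_values + g * data_errors

model = np.array([3.0, 2.5, 2.0, 1.5, 1.0])  # e.g. best-fit B/C at the data energies
errors = 0.03 * model                         # e.g. 3% data uncertainties
mock = generate_mock_data(model, errors, n_sets=20000)
```

Averaged over many realisations, the mock data recover the input model and the input error bars by construction.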

4.3.2. Mock data configurations

We list and label the many cases considered below; in the forthcoming figures, each configuration is associated with a unique colour and line style, as summarised in Table 2.

Table 2.

Summary of mock data configurations used to test cross-section nuisance parameters in Sect. 4.3.

Propagation parameters (×2): each analysis is repeated for Model A (free parameters K0, δ, ηt, Va, and Vc) and Model B (free parameters K0, δ, Rl, and δl), see Sect. 2.2. We do so because the two models have different correlations between their parameters, and cross-section uncertainties may impact them differently. For instance, reacceleration smooths spectral features, and it is present in Model A only.

Cross sections to generate and fit mock data (×2): in the unbiased case, the cross sections in the propagation model used to generate and fit mock data are the same (T99 for σinel and W03 for σprod). In the biased case, mock data are still fit with T99 and W03, but they were generated from a different set of cross sections (W96 for σinel and G17 for σprod).

Type of nuisance parameters to fit mock data (×2): we either use “normalisation, scale, and slope” (NSS) nuisance parameters, that is the μ and σ values for Norm, Scale, and Slope prescribed in Table 1, or linear combination (LC) nuisance parameters, that is Ci coefficients weighting the various cross-section parametrisations, with normalisation uncertainties σCi = σNorm (see previous section and Eq. (12)).

Nuisance parameters to fit mock data (×4): we have four types of runs to assess the impact of adding more and more nuisance parameters in the analysis, labelled No nuis. (transport parameters only), Inel. (free transport parameters and nuisance for σinel), Prod. (same but σprod instead), and Inel.+Prod. (combined).
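The two nuisance schemes can be sketched as follows. The exact functional forms of Eqs. (9)–(12) are not reproduced in this section, so the pivot energy E_low and the shapes below are illustrative assumptions, not the paper's exact prescriptions.

```python
import numpy as np

def nss(sigma, E, norm=1.0, scale=1.0, slope=0.0, E_low=1.0):
    """NSS: rescale the normalisation and energy scale of a cross section
    sigma(E), and tilt its slope below a low-energy pivot E_low (assumed)."""
    out = norm * sigma(E * scale)
    low = E < E_low
    out[low] *= (E[low] / E_low) ** slope
    return out

def lc(parametrisations, coeffs, E):
    """LC: linear combination sum_i C_i * sigma_i(E) of existing
    parametrisations, the C_i being the nuisance parameters."""
    return sum(c * p(E) for c, p in zip(coeffs, parametrisations))

E = np.geomspace(0.1, 100.0, 50)          # toy kinetic-energy grid
sig = lambda e: 200.0 * np.ones_like(e)   # toy flat cross section (mb)
sigma_nss = nss(sig, E, norm=1.1, slope=0.5)
sigma_lc = lc([sig, sig], [0.5, 0.5], E)
```

In a fit, `norm`, `scale`, `slope` (NSS) or the `coeffs` (LC) would be the free nuisance parameters, penalised by their Gaussian priors.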

4.4. Results of the mock data analysis

The analysis starts with the generation of 1000 mock data sets (see above), based on given values of the transport parameters and a given choice of cross-section parametrisations. We then perform a χ2 analysis on each mock data set and store the best-fit parameters and the associated χ2min value. By construction, χ2min/d.o.f. ≈ 1 for our mock data, so that from the one-dimensional distribution of the parameter values and their correlations, we can reconstruct 1σ and 2σ confidence intervals. Comparing the parameter distributions with their “true” input values also allows us to assess the success of our procedure.
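The interval reconstruction from the 1000 best fits reduces to taking percentiles of the best-fit distribution; the Gaussian sample below is a toy stand-in for the fitted values of one parameter (e.g. δ).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for 1000 best-fit values of one transport parameter:
best_fit = rng.normal(loc=0.50, scale=0.03, size=1000)

def interval(samples, level):
    """Central interval enclosing `level` percent of the distribution."""
    return np.percentile(samples, [50 - level / 2, 50 + level / 2])

lo1, hi1 = interval(best_fit, 68.27)   # ~ [mu - sigma, mu + sigma]
lo2, hi2 = interval(best_fit, 95.45)   # ~ [mu - 2 sigma, mu + 2 sigma]
```

For correlated parameters, the same ranking is applied to the 2D cloud of best-fit values to draw the contours of the next section.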

4.4.1. Unbiased case: Sanity check

Figure 7 shows the 1σ contours (68% confidence level) from the 2D probability distribution functions of the transport parameters, with and without nuisance parameters in the fit. We underline a few features in these plots:

Fig. 7.

Reconstruction of 1σ contours (68% confidence level) from the analysis of 1000 mock data sets for Model A (top panel) and Model B (bottom panel), for the unbiased case (cross sections for the analysis are the same as the cross sections used to generate the mock data). For display purposes, the 2D probability distribution functions are estimated using a Gaussian kernel (default bandwidth method of gaussian_kde in the scipy Python library); the irregular shapes are related to statistical fluctuations. The colour code is related to the nuisance parameters used and the line style to the type of nuisance parameters (see Table 2). The “+” symbols represent true values. See text for discussion.


Contours from statistical errors only: black solid lines are 1σ contours from the analysis without nuisance parameters. Their size and shape depend on the level and energy dependence of the data statistical error (blue line of Fig. 10). Tight correlations are seen on K0 and δ for both models. As expected for the unbiased analysis (i.e. same cross sections used to generate and fit mock data), the contours encompass the true value of the parameters (“+” symbols).

Fit with σinel and σprod nuisance parameters: blue lines (resp. orange lines) show the 1σ contours from the fit with σinel (resp. σprod) as nuisance parameters. The size of the contours is not too strongly impacted by the cross-section nuisance parameters, because the minimum of χ2 is left unchanged (not shown) and the additional degrees of freedom provided by the nuisance parameters are “unused”. Nevertheless, the contours are deformed differently, because inelastic and production cross sections impact the B/C ratio differently – see for instance the blue (σinel) and orange (σprod) contours for K0 vs. Rl in Model B. The fit with combined nuisance parameters (green lines) gives contours that encompass both previous ones. The fine details depend on the model (A or B) and the type of nuisance (NSS in solid and LC in dashed lines) used.
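The Fig. 7 caption notes that the contours are estimated with scipy's gaussian_kde. A minimal sketch of that estimation is given below; the correlated sample is a toy stand-in for the cloud of best-fit (K0, δ) values, and the covariance numbers are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Correlated toy sample mimicking the tight K0-delta correlation:
cov = [[0.04, 0.018], [0.018, 0.01]]
k0, delta = rng.multivariate_normal([8.0, 0.5], cov, size=1000).T

kde = gaussian_kde(np.vstack([k0, delta]))  # default bandwidth, as in Fig. 7
# The 68% contour is the density level enclosing 68% of the probability
# mass, found here by ranking the density at the sample points themselves:
dens = kde(np.vstack([k0, delta]))
level_68 = np.percentile(dens, 100 - 68.27)
inside = dens >= level_68
```

Plotting the density on a grid and drawing the `level_68` iso-contour reproduces the kind of (statistically wiggly) contours shown in Fig. 7.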

From these results alone (unbiased case), one cannot conclude on the impact of cross-section uncertainties on the transport parameters. In fact, the unbiased case is just an elaborate sanity check. It confirms that, in a scenario where cross sections are perfectly known, adding cross-section uncertainties has only a marginal effect.

4.4.2. Biased case: uncertainties and biases on transport parameters

To go further, we repeat the analysis fitting the mock data with cross sections that differ from the ones used to generate them, that is the biased case (see Table 2). We show in Fig. 8, from top to bottom, the distribution of χ2min/d.o.f. and the 1D probability distribution functions of all transport parameters. For readability, we only show the results for the No nuis. (black lines) and Inel.+Prod. (green lines) cases. In this figure, solid (dashed) lines correspond to the NSS (LC) nuisance type.

Fig. 8.

Distribution of best-fit values (χ2min/d.o.f., then transport parameters) from the analysis of 1000 mock data sets for Model A (left panel) and B (right panel), for the biased case (cross sections for the analysis differ from those used to generate the mock data). For display purposes, 1D probability distribution functions are estimated using a Gaussian kernel. The line style and colours indicate the type and configuration of nuisance parameters used (see Table 2). The vertical dashed lines represent true values. See text for discussion.


Impact on goodness of fit (top panels): the black lines, which correspond to a fit with the transport parameters only (No nuis.), show that using wrong cross sections can lead to χ2min/d.o.f. values larger than one. Taken at face value, one would conclude that the model is excluded. Adding cross-section nuisance parameters – which encompass the true cross-section values at 1σ – allows χ2min/d.o.f. ≈ 1 to be recovered (green lines). The LC nuisance parameters (green dashed lines) fare slightly better than the NSS ones (green solid lines): this is understood as the “true” cross-section values can be reached in the LC case, whereas they can only be approached in the NSS case.

Biased transport parameters (remaining panels): without nuisance parameters (black lines), the transport parameters are strongly biased, up to several σ away from their true value (vertical dashed line). Using nuisance parameters (black vs. green lines) has two effects: it enlarges the probability distribution function of the transport parameters, and it shifts the distribution towards the true value. Overall, the two schemes allow us to recover unbiased parameters. A mismatch is observed for the strongly correlated δ and K0 parameters when using NSS in Model B. The latter is particularly sensitive to any small energy-dependent difference in the cross-section values, as it directly reflects on the calculated B/C. In Model A, on the other hand, this difference can be smoothed out by reacceleration.

We finally comment on the fact that the LC case does not fully recover unbiased transport parameters. Whereas the nuisance parameters enable the cross sections to match their “true” values (the ones used to generate the data), they can only do so for the selected four production cross sections and three inelastic reactions. The remaining ones differ from those used to generate the mock data. This “reaction network” effect explains the observed residual biases.

4.4.3. Biased case: posterior on nuisance parameters

We show in Fig. 9 violin plots for the nuisance parameters obtained after the fit. We only show these parameters for the Inel.+Prod. configuration, that is the analysis in which we fit mock data with transport parameters plus production and inelastic cross-section nuisance parameters (see Table 2). The rows in Fig. 9 show, for the four production and three inelastic reactions (see Sect. 4.1), the values of the associated nuisance parameters (two per reaction). It is interesting to show the results for the unbiased (resp. biased) case, in red (resp. blue), corresponding to the use of the same (resp. different) cross sections to generate and fit mock data. Let us comment on these distributions.

NSS analysis (left panels). The nuisance parameters for the NSS analysis are a normalisation and an energy-scale parameter for the production cross sections, and a normalisation and a low-energy slope for the inelastic cross sections (see Eqs. (9)–(11) and Table 1). The parameters are centred and normalised (to their σ value), so that on the x-axis, unbiased parameters are expected to be centred on zero and to lie between −1 and +1 (1σ range): (i) in the Unbiased analysis (red), as expected, the distributions are centred on zero and have overall a very small width; (ii) in the Biased analysis (blue), the distributions are now offset, because the nuisance parameters are used to improve the fit. Almost all parameters fall within their 1σ value, indicating that the range of variation for these nuisance parameters was well calibrated in Sect. 4.2.

Actually, we tried different ranges for the nuisance parameters (not shown): with a smaller range, the “true” cross sections cannot be recovered, as the nuisance parameters would need to be several σ away from their central value, which would penalise the χ2. In turn, this biases the transport parameters (discussed in the previous section): the mild bias that was observed for the NSS case (w.r.t. the LC case) is related to the fact that μSlope is almost 2σ away in Fig. 9.
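The penalisation mechanism invoked here can be made explicit: each Gaussian nuisance parameter adds a pull term ((θ − μ)/σ)² to the χ2, so a too-narrow prior range forces several-σ pulls and degrades the fit. A minimal sketch with invented numbers:

```python
import numpy as np

def chi2_total(residuals, errors, pulls):
    """chi2 of the data term plus one Gaussian pull per nuisance parameter.
    residuals/errors: per-bin (data - model) and data errors;
    pulls: list of (theta, mu, sigma) tuples, one per nuisance parameter."""
    chi2_data = np.sum((np.asarray(residuals) / np.asarray(errors)) ** 2)
    chi2_nuis = sum(((t - m) / s) ** 2 for t, m, s in pulls)
    return float(chi2_data + chi2_nuis)

# Same data term, with and without a 2-sigma pull on one nuisance parameter:
free = chi2_total([0.1, -0.2], [0.1, 0.1], pulls=[])             # 1 + 4 = 5
pulled = chi2_total([0.1, -0.2], [0.1, 0.1], [(0.2, 0.0, 0.1)])  # 5 + 4 = 9
```

In the fit, the minimiser trades the data term against the pull terms, which is why a well-calibrated prior width matters.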

LC analysis (right panels). The nuisance parameters for the LC analysis are the coefficients Cij (see Eq. (12)), where the index i runs over the cross-section parametrisations enabled in the analysis and j over the various reactions considered. These coefficients typically vary from 0 (if the parametrisation is unused) to 1 (if the cross section dominates the LC). In principle, we should have as many coefficients as available cross-section parametrisations. However, the inspection of Fig. 4 shows that some parametrisations are, up to a normalisation, very close to others. As a result, when more than two parametrisations are considered as nuisance (not shown), several combinations can reproduce the original cross section (used to generate the mock data): the χ2 function has several local minima and MINUIT has difficulties finding the true minimum. That is why we chose only two parametrisations in our analyses (as listed in Table 2); the distributions of the nuisance parameter values are shown in the right panels of Fig. 9:

In the Unbiased analysis (red), the cross sections used to generate and analyse mock data are T99 for σinel (resp. W03 for σprod), so that we should recover CT99 ≈ 1 (resp. CW03 ≈ 1) for all inelastic (resp. production) cross sections and 0 for the other Ci. This is what is observed for all reactions. The parameters for inelastic cross sections (three bottom rows) display overall very broad distributions: we recall that the latter only have a small impact on the B/C calculation (≲3%), so that a fit to data with similar uncertainties will not be sensitive to them. In the Biased analysis (blue), the starting cross sections to generate mock data, for all reactions in the network, are W96 for σinel and G17 for σprod, and we observe CW96 ≈ 1, CG17 ≈ 1, and Cothers ≈ 0, as expected.
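The degeneracy problem noted for the LC coefficients can be illustrated with toy numbers: if two parametrisations are nearly proportional, the least-squares normal matrix for the coefficients is almost singular, so many (C1, C2) combinations reproduce the same cross section and the minimiser can get stuck in local minima. All shapes below are invented.

```python
import numpy as np

E = np.geomspace(1.0, 100.0, 40)
sigma_a = 100.0 * E ** -0.1     # one parametrisation (toy shape)
sigma_b = 1.05 * sigma_a        # a second one, identical up to normalisation
sigma_c = 100.0 * E ** -0.3     # a genuinely different shape

A_deg = np.vstack([sigma_a, sigma_b]).T   # design matrix for the C_i
A_ok = np.vstack([sigma_a, sigma_c]).T

cond_deg = np.linalg.cond(A_deg.T @ A_deg)  # enormous: flat chi2 direction
cond_ok = np.linalg.cond(A_ok.T @ A_ok)     # modest: well-defined minimum
```

A huge condition number signals a flat χ2 valley along C1 σa + C2 σb = const, which is exactly the situation the text avoids by keeping only two sufficiently distinct parametrisations.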

4.4.4. Conclusions on the impact of cross-section uncertainties

We have seen that assuming wrong cross sections can strongly bias the model fit, and thus the deduced transport parameters. Starting from the wrong cross-section values, we showed that nuisance parameters on a limited number of reactions allow us to mostly recover the true values of the transport parameters. However, the procedure is not perfect owing to “reaction network” effects, that is the fact that we only use as nuisance a small, though representative, sample of all the reactions involved. The LC parameters fare slightly better than the NSS parameters, but only because the LC parameters always contain the “true” cross sections of the analysis. In real life, we do not know what the real cross sections are, and there is no guarantee that the LC approach would still fare better than the NSS one.

We want to stress that the above procedure is not even as straightforward as presented. There was, to some extent, some fine-tuning of the ranges chosen for the nuisance parameters to best recover our mock data. Although we were guided by the spread between the cross sections (see Fig. 5), we had to extend the range of some parameters in successive tests. In particular, for NSS, the low-energy slope was taken larger than strictly required from the inspection of the spread (see Fig. 5). A posteriori, however, this made sense: not only the normalisation but also the slope of the cross section matters in the analysis (especially for Model B), and a larger slope parameter was needed to reconcile the energy dependences of the W03 and G17 parametrisations. Even for LC, which looks less problematic, there were some issues. We already underlined the pitfalls (in terms of minimisation with MINUIT) of having too-similar cross-section parametrisations in the linear combination. It was not mentioned earlier, but the allowed range set for the LC coefficients also matters: using [0, 1.5] instead of [−0.5, 1.5] affects the distribution of the nuisance parameters shown in Fig. 9, although the transport parameters are only very mildly affected.

Furthermore, the reader should keep in mind that, regarding (i) the importance of cross-section uncertainties and (ii) which reactions should be used as nuisance, the conclusions strongly depend on the data uncertainties assumed. Indeed, the above analysis was based on AMS-02-like statistical uncertainties only, that is an extreme and overly conservative situation for the data, chosen to provide a proof of principle of our approach. Adding systematics, which are dominant over most of the energy range in AMS-02 data, will obviously reduce the impact of cross-section uncertainties and make the residual biases on the transport parameters less severe. Accordingly, with larger uncertainties, the number of reactions to include as nuisance also decreases: there is no gain in adding cross sections whose impact on B/C is smaller than the data uncertainties, only issues. Indeed, unnecessary reactions increase the run time of the minimisation and, worse, create multiple minima that are harder to deal with. Part of these issues would be alleviated by using more evolved sampling engines, such as a Markov chain Monte Carlo (e.g. Putze & Derome 2014), but it remains better to use as few reactions as possible.

To assess the realistic impact of cross sections on the B/C analysis, we could repeat the above analysis with mock data accounting for statistical and systematic uncertainties. However, it is more interesting to illustrate how real data analysis should proceed. This is presented in Sect. 6, but before doing so, we have to discuss how to handle systematic uncertainties in the B/C analysis.

5. Handling systematics from experimental data

Almost all, if not all, CR phenomenological studies account for data uncertainties as the quadratic sum of statistical and systematic uncertainties. Doing so ignores any possible energy correlations in the systematic errors, with two important consequences for the model best-fit analysis. Consider two extreme cases, fully uncorrelated and fully correlated uncertainties: they correspond, respectively, to adding the uncertainties quadratically or to allowing for a global normalisation of the data (or, more precisely, an energy-dependent normalisation related to the energy dependence of the uncertainty). Starting from the same uncertainties, a χ2 analysis of the two cases would lead to a smaller χ2min in the former than in the latter case, and possibly to different values for the best-fit parameters of the model.
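A toy numerical illustration of these two extremes (all numbers invented): a pure shape deviation between data and model is cheap when errors are added in quadrature, but heavily penalised when the systematic is treated as fully correlated, so the two treatments can indeed yield very different χ2 values and best fits.

```python
import numpy as np

resid = np.array([0.5, 0.0, -0.5])   # a shape (tilt) deviation, data - model
stat = np.array([0.2, 0.2, 0.2])     # statistical errors per bin
syst = np.array([0.5, 0.5, 0.5])     # systematic errors per bin

# Extreme 1 - fully uncorrelated: quadratic sum, diagonal covariance
C_uncorr = np.diag(stat**2 + syst**2)
# Extreme 2 - fully correlated systematic: rank-one block on top of Stat.
C_corr = np.diag(stat**2) + np.outer(syst, syst)

def chi2(r, C):
    return float(r @ np.linalg.solve(C, r))

chi2_uncorr = chi2(resid, C_uncorr)  # 0.5 / 0.29 ~ 1.7
chi2_corr = chi2(resid, C_corr)      # 0.5 / 0.04 = 12.5: shape costs dearly
```

Conversely, a coherent offset resid = [0.5, 0.5, 0.5] would give about 0.95 in the correlated case: correlations make global normalisation shifts cheap and shape deviations expensive.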

A better approach is to use the correlation matrix of errors in the χ2 analysis (see Appendix B). However, the AMS-02 collaboration does not provide this matrix, and we have to rely on the available information to build one. We then inspect how sensitive the analysis is to our choices.

5.1. Origin of B/C systematic errors

The errors on the B/C ratio measured by AMS-02 are described in Aguilar et al. (2018b). The different contributions obtained from table VI of the Supplemental Material of Aguilar et al. (2018b) are shown in Fig. 10 as thick solid lines.

As explained in Aguilar et al. (2018b), the unfolding error (Unf.) corresponds to the contribution coming from the uncertainty on the rigidity resolution function and the unfolding procedure. The rigidity scale error (Scale) is the sum of the contribution from residual tracker misalignment and from the uncertainty on the magnetic field map measurement and its temperature- and time-dependent correction. The acceptance error (Acc.) is the sum of different contributions: survival probability of boron and carbon in the detector, boron contamination from heavier nuclei fragmentation (mainly carbon), and the uncertainty on the “data/Monte Carlo” corrections to the boron and carbon acceptances.

5.2. Building the covariance matrix: correlation length

To properly take into account the AMS-02 data uncertainties, one needs to define the covariance matrices Cα for α = (Stat., Unf., Scale, Acc.), and minimise the χ2 defined by Eqs. (B.1) and (B.2). As these covariance matrices are not provided explicitly in Aguilar et al. (2018b), we start from the covariance matrices of relative errors Cαrel, estimated from the following expression:

$(C^{\alpha}_{\rm rel})_{ij} = \sigma^{\alpha}_{i}\,\sigma^{\alpha}_{j}\,\exp\!\left[-\frac{\big(\log_{10}(R_i/R_j)\big)^2}{2\,(l_{\alpha})^2}\right]$,(15)

with the ij-th element built from the relative errors σαi and σαj at rigidity bins Ri and Rj, and where the parameter lα is the correlation length associated with the error α (in units of decades of rigidity).

For this study, we set the covariance matrix of the analysis to be the sum of the Cα matrices (see Appendix B for a justification), and we set the correlation lengths as follows: (i) a vanishing correlation length for Stat., because the numbers of events in each bin are independent; (ii) a correlation length spanning the whole rigidity range for Scale, since the uncertainty on the rigidity scale affects all rigidities similarly; (iii) an intermediate correlation length for Unf., because errors from the unfolding procedure and from the rigidity response function affect intermediate scales. As seen in Fig. 10, this error is sub-dominant compared to the Stat. and Acc. errors, and we checked that the results are not affected by our choice for the Unf. correlation length; (iv) the value of the correlation length for the Acc. error is more critical, because this error dominates the systematic error and its correlation length cannot be easily defined. The dependence of χ2min/d.o.f. and of the fitted parameters on this correlation length is studied below for different values of lAcc, covering the range from below the bin size (fully uncorrelated) to the full rigidity range (fully correlated).
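The covariance of Eq. (15) – relative errors damped by a Gaussian in log10-rigidity distance with a correlation length in decades – translates directly into code. The rigidity grid and the 3% relative error below are placeholders, and the two correlation lengths are only meant to contrast a short and a long lα.

```python
import numpy as np

def cov_relative(R, sig_rel, l_alpha):
    """(C_rel)_ij = sig_i * sig_j * exp(-0.5 * (log10(Ri/Rj) / l_alpha)^2),
    with l_alpha the correlation length in decades of rigidity."""
    logR = np.log10(np.asarray(R))
    d = logR[:, None] - logR[None, :]          # pairwise distances (decades)
    rho = np.exp(-0.5 * (d / l_alpha) ** 2)    # Gaussian correlation
    return np.outer(sig_rel, sig_rel) * rho

R = np.geomspace(2.0, 2000.0, 30)       # toy rigidity bins (GV)
sig_acc = 0.03 * np.ones_like(R)        # toy 3% acceptance-like error

C_short = cov_relative(R, sig_acc, l_alpha=0.05)  # nearly diagonal
C_long = cov_relative(R, sig_acc, l_alpha=10.0)   # nearly fully correlated
```

Summing such blocks over the error types α (with their respective lα) gives the total covariance used in the χ2 of Appendix B.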

As the acceptance error is a combination of errors which are expected to have a rather small correlation length (“data/Monte Carlo” corrections) and others which are expected to have a large correlation length (cross-section normalisation), one can try to decompose this error into different contributions with different correlation lengths. In particular, the rise of the acceptance error at low rigidity is not expected to be correlated with larger rigidities: it is related to the rapid change of the acceptance at low energy, mostly because of energy losses in the detector. One can therefore construct a better description of the covariance matrix by splitting the acceptance error into three independent parts: (i) a normalisation error, Acc. norm. (dash-dotted orange line in Fig. 10), with a large correlation length (lρ ∼ 1.0); (ii) a rise at low rigidity, Acc. LE (dotted orange line), with an intermediate correlation length (lρ ∼ 0.3); (iii) a residual error, Acc. res. (dashed orange line), defined so that the quadratic sum of the three contributions equals the full acceptance error. This last part corresponds mainly to “data/Monte Carlo” corrections and the rigidity-dependent parts of other acceptance errors. Its correlation length is not well defined and is left free in the following.

Near the completion of this article, we found out that Cuoco et al. (2019) also proposed to use a correlation matrix of errors to analyse AMS-02 data. However, while these authors rely on a single correlation length fitted to the data, our analysis relies on several correlation lengths whose values are motivated by physics processes in the AMS-02 detector.

5.3. Parameter and goodness-of-fit dependence on correlation length

We have built two different covariance matrices, which partly depend on an unknown correlation length lρ. We can now perform a χ2 analysis with MINOS to extract robust errors on the parameters. The analysis is repeated on the two propagation model configurations A and B introduced in Sect. 2.1.

The top panels of Fig. 11 show the values obtained for χ2min/d.o.f. and the best-fit parameters as a function of lρ for models A (left) and B (right), and for the full acceptance error (blue circles) and the split acceptance error (orange circles). The B/C from the best-fit model along with the AMS-02 data are shown in the bottom panels of Fig. 11 for the same models, that is for the full (blue lines, top) and split (orange lines, bottom) acceptance errors. As expected, χ2min/d.o.f. strongly depends on lρ for both models. The best-fit parameters are stable (i.e. fluctuate within the errors estimated from the fit) for low and large lρ, but undergo a rapid jump at intermediate lρ for model A when one uses the full acceptance error description. These features are problematic, since they mean that the best-fit parameters are very sensitive to the choice of lρ. In addition, with the full acceptance error, the best fit obtained for model A at large lρ does not pass through the data points, as featured in the upper-left plot of the bottom panels of Fig. 11. This is explained as follows: with a large correlation, the cost on the χ2 of a global deviation between data and model is moderate and thus accepted, and would correspond to a global bias in the measured B/C. Although correct from a mathematical standpoint, this interpretation is disputable given our crude modelling of the systematic errors. This unwanted behaviour is absent when we use the split acceptance error modelling.

Fig. 9.

Nuisance parameters from the analysis of 1000 mock data for Model A (top panels) and Model B (bottom panels). The nuisance parameters are shown reaction by reaction (rows: production then inelastic cross sections), each reaction having two nuisance parameters (see list in Table 2). The number in parenthesis beside the reaction corresponds to its overall impact on the B/C calculation, as reported in Table 1. We show for the NSS analysis (left panels) normalised and centred parameters, whereas LC analysis (right panels) are shown between zero and one. We use violin plots to highlight the probability density of the parameter (y-axis) at different values (x-axis), and our violin plots include a marker for the median (black “+” symbol) along with the 1σ range of the parameter (thick black line). In order to directly compare the distribution of the nuisance parameters for the unbiased and biased analyses, we superimpose them on the same line in red and blue respectively. See text for discussion.


Fig. 10.

AMS-02 errors for B/C data. Solid lines correspond to the errors provided in Aguilar et al. (2018b), namely statistical, acceptance, scale, and unfolding (the step-like evolution is artificial and related to the rounding of the values provided in the table). The orange lines correspond to a further split of the acceptance errors: normalisation (norm.), low energy (LE), and residual (res.). See text for details.


Fig. 11.

Top panels: values obtained for χ2min/d.o.f. and the fit parameters (and uncertainties) as a function of lρ for propagation models A (left) and B (right), and for the full acceptance error (blue) and the split acceptance error (orange). In the case of the split acceptance error, lρ corresponds to the correlation length of the Acc. res. contribution only. Bottom panels: comparison of the best-fit B/C (lines) and AMS-02 data (symbols) for the above models and configurations, i.e. full (blue lines, top) vs. split (orange lines, bottom) acceptance errors. Only a sample of correlation lengths is shown (0.015, 0.1, and 1 decade).


From the above results, we conclude that the best way to handle the systematic errors is the split acceptance error approach. Indeed, not only does it provide a more realistic description of the acceptance systematic error, but it also leads to more stable results w.r.t. the values taken for lρ. In this approach, an intermediate correlation length for the Acc. res. contribution is a reasonable choice, giving χ2min/d.o.f. ∼ 1 and conservative errors on the fit parameters.

6. Joint impact of cross-section uncertainties and data systematics

The previous section provides us with a realistic treatment of the data errors. In this light, we can revisit the impact of cross-section uncertainties on the best-fit parameters and errors (Sect. 4.2). Indeed, we showed in the most challenging case (statistical errors only in the data) that NSS and LC nuisance parameters enable us to recover the correct transport parameters when starting from the wrong cross sections, whereas systematic errors dominate over a wide dynamical range (see Fig. 10). Nevertheless, because neither the NSS nor the LC approach is perfect (see discussion in Sect. 4.4.4), it is important to test both to ensure, as a minimal consistency check, that consistent values of the transport parameters (within their uncertainties) are obtained in both approaches. For this purpose, we analyse the real B/C data using the covariance matrix of errors (see previous section), with both the NSS and LC approaches, and varying the number of reactions used as nuisance parameters. This allows us to validate our strategy for the actual data, as used in the companion paper (Génolini et al. 2019) for the B/C physics analysis. It also exemplifies how an analysis should be carried out in the methodology we propose.

Our results are presented in Fig. 12, in which we display the best-fit parameters and errors of Model A (top) and Model B (bottom) for an increasing number of reactions used as nuisance (from left to right). The four production cross sections are introduced in order of importance with respect to their contribution to the secondary boron. We then introduce the inelastic cross section of 11B; we have checked that the impact of the other inelastic cross sections is negligible. We recall that we can start from different parametrisations of the full network of cross sections: for the production cross sections, we either start from G17 or W03, whereas we only consider T99 for the inelastic ones (their impact is negligible, see below). Several comments are in order about these results.

Fig. 12.

Evolution of the best-fit parameters and asymmetric 1σ errors (extracted from MINOS), increasing the number of nuclear cross-section reactions used as nuisance parameters (from left to right on the x-axis); Models A (top) and B (bottom) are shown. For every additional reaction used as nuisance, four different fits are performed: starting either with the G17 or the W03 parametrisation for the production cross sections, and using either LC or NSS for the implementation of the nuisance parameters. The black dashed line in the χ2min/d.o.f. panel corresponds to the threshold value. See text for discussion.


Firstly, for both Models A and B, even without nuisance parameters (“no nuisance” on the x-axis), the best-fit parameters for the two cross-section cases considered (W03 and G17) are consistent within errors. This means that the covariance matrix mitigates most of the errors coming from using the wrong cross sections (see discussion in Sect. 4.3). This would not have been the case (not shown) had we simply added the statistical and systematic uncertainties in quadrature, which further demonstrates the importance of having the correct covariance matrix of errors. It also emphasises the fact that energy correlations on intermediate scales (one decade) in the data uncertainties could relax, to some extent, the need for very accurate cross sections.

Secondly, when adding nuisance parameters (NSS or LC), the consistency between the parameters improves, in particular for δ (in both Models A and B). With more degrees of freedom, this translates into better χ2min values and larger uncertainties on the parameters. The latter is only significant for Model B; the lack of increase in Model A probably comes from the fact that it has many degenerate “low-energy” degrees of freedom (Va, Vc, ηt), which already make the uncertainties maximal. We stress that the improvement depends on the initial set of cross sections and the method used, but overall, for this specific analysis, with the covariance matrix of errors dominating the error budget, only a few reactions need to be taken into account.

Thirdly, focusing on the fit quality (χ2min values), we see that G17 systematically gives a better fit than W03. In the LC approach, the production cross sections initially set to W03 also migrate towards G17. This is probably not too surprising, as G17 is expected to better match existing cross-section data, whereas W03 is based on a global semi-empirical fit to these same data (see Sect. 4 and Génolini et al. 2018).

Based on these results, the recommendation for the B/C analysis is to start from the G17 production cross sections and to use the NSS nuisance parameters. The latter allow for more freedom than the LC ones, so that slight improvements are still possible when including several cross-section reactions as nuisance (though these improvements are not statistically significant in this case). These recommendations are followed to define the benchmark models in our companion paper (Génolini et al. 2019), in which the values of the parameters shown in Fig. 12 are discussed and interpreted.

7. Recommendations and conclusions

Faced with the challenges of interpreting cosmic-ray data of unprecedented accuracy, we have refined the methodology to properly account for all uncertainties (model, ingredient, and data) in the fit to the data. The proposed methodology was exemplified on the analysis of the AMS-02 B/C ratio.

The first step was to ensure a model precision higher than the data uncertainty: we inspected in detail the numerical stability of the model and the impact of energy boundary conditions. Some low-energy boundary conditions fare better than others, but setting any of them to MeV values ensures a good precision of the model calculation above a few hundred MeV n−1. When using a Crank–Nicolson approach to solve the second-order differential diffusion equation in energy, we checked its precision and provided a criterion to numerically converge to the correct solution when Va → 0 (i.e. no reacceleration, corresponding to a first-order differential equation). We also quantified the systematics from using the point-estimate calculation of a flux or ratio compared to the correct calculation integrating the model over the energy bin. While the discussion is partly specific to the model used and the species inspected, our considerations are generic. Obviously, the precision tests should always be repeated and compared to the data uncertainty for other species, models, and data considered.

The second step was to properly handle cross-section uncertainties. We detailed the impact of the most important reactions on the B/C ratio (≲3% for inelastic cross sections and ∼10% for production cross sections). We then proposed an approach to account for these uncertainties via Gaussian-distributed nuisance parameters, based on a combination of normalisation, scale, and low-energy slope cross-section modifications (NSS), or based on linear combinations (LC) of existing parametrisations. We validated this choice on simulated data, showing that the degrees of freedom enabled by these nuisance parameters allow us to recover the true parameters (when starting from a set of cross sections different from the one the simulated data were generated with). The simulated data also show that, starting from the wrong cross-section values, valid propagation models would be excluded on a statistical criterion (too large χ2min/d.o.f.). The nuisance parameters we propose also cure this problem.

The third step was to handle data uncertainties as well as possible. We accounted for possible energy correlations in the AMS-02 data via a covariance matrix of errors. As the AMS-02 collaboration does not provide such a matrix, we proposed a best-guess model based on the information available in Aguilar et al. (2018b) and its supplemental material. The crucial parameters are the correlation lengths associated with the various systematics, which correlate the energy bins more or less strongly. The dominant effect is from the acceptance systematics, and we showed an unphysical dependence of the transport parameters on the acceptance correlation length. After a more careful inspection, we argued that the acceptance systematics is actually a mix of several systematics with very different correlation lengths. Splitting the acceptance systematics into three parts stabilised the dependence of the transport parameters on the correlation length. To fully solve this issue, the publication of the correlation matrix along with the data by the AMS-02 collaboration is necessary. This is likely a very difficult task, and while waiting for its completion, further information on the various systematics, further splits, and indications of each systematic's correlation length would already be extremely useful. Indeed, the assumed correlation length not only possibly biases the transport parameters fit to the data, but it also has a huge impact on the statistical interpretation of the model inspected: depending on the correlation length assumed, we can either conclude that the model fits the data perfectly or exclude it.
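A best-guess covariance built from a correlation length can be sketched as follows. The Gaussian decay of the correlation over log10-rigidity and the function name are our own illustrative ansatz, not the collaboration's (unpublished) matrix.

```python
import numpy as np

def covariance_from_corr_length(rigidity, rel_err, ell):
    """Best-guess covariance matrix of relative errors for one systematic,
    assuming the correlation decays as a Gaussian in log10(rigidity)
    with correlation length ell (in decades)."""
    logr = np.log10(np.asarray(rigidity, dtype=float))
    dist = logr[:, None] - logr[None, :]
    corr = np.exp(-0.5 * (dist / ell) ** 2)
    return np.outer(rel_err, rel_err) * corr

r = np.logspace(0, 3, 4)        # 4 rigidity bins from 1 GV to 1 TV
sys_rel = np.full(4, 0.03)      # a 3% acceptance-like systematic
cov = covariance_from_corr_length(r, sys_rel, ell=1.0)
# ell -> 0 gives a diagonal (uncorrelated) matrix;
# ell -> infinity fully correlates all bins (a pure normalisation error).
```

With several systematics, the global matrix is the sum of the individual covariance matrices (footnote 16), so each systematic can carry its own correlation length.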

The fourth and last step was to consider a realistic analysis, applying the method developed to handle cross-section uncertainties (second step) together with the full data uncertainties, that is, accounting for the covariance matrix of errors (third step). Because of energy correlations in the systematics, the impact of cross-section uncertainties can be lessened. In the analysis of the AMS-02 B/C data, the impact of data systematics was found to dominate over that of cross-section uncertainties. In any case, for any analysis, we recommend implementing the dominant nuclear reactions as nuisance parameters, checking the results against various choices of production cross sections and the two possible strategies for the nuisance parameters (NSS or LC).

Our methodology can be used for any CR species, but the most important cross sections and their uncertainties depend on the species (e.g. Génolini et al. 2018), so that specific nuisance parameters need to be adapted for both the NSS and LC methods. Moreover, data uncertainties generally have different origins for different species, with different sub-detectors and selection cuts applied. For these reasons, the conclusion as to which effect dominates (cross sections or data systematics) can differ from one species to another, and should be carefully inspected for each species and dataset considered.

Further results based on this methodology are presented in two companion papers: Génolini et al. (2019) present the interpretation of the B/C ratio data and constraints on the transport parameters, and Boudaud et al. (2019) use these transport parameters and their uncertainties to calculate the astrophysical flux of antiprotons. We emphasise that all the results presented here and in the companion papers are based on USINE v3.5 (Maurin 2018), available at https://lpsc.in2p3.fr/usine.


Footnotes

1. A specific release, USINE v3.5 (to appear), was developed for this analysis, improving on the first public version v3.4 by allowing for more nuisance parameters, more displays, etc.

2. For instance, in the case of the 1D diffusion model described in Sect. 2.1, the above coefficients are given by Putze et al. (2010): α(x) = 2h/(Ek𝒜), β(x) = blosses(E), γ(x) = β²Kpp/Ek, and u0(x) = 2hq/𝒜, with 𝒜 = 2h nvσinel + Vc + KS coth (SL/2) and S defined therein.

3. The reasoning would be the same if we were to use Ek/n instead.

4. In experimental data, larger bins are used to limit statistical uncertainties whenever fewer events are measured in the detectors. For AMS-02 data, this happens at high energy (because of the power-law behaviour) but also at low energy (because of the geomagnetic rigidity cut-off).

5. In USINE, the keyword IsUseBinRange in the parameter file allows one to calculate the full bin content assuming a power law for each isotope within the bin range.

6. Same dataset as in Webber et al. (2003), but fitted by Aimé Soutoul (priv. comm.).

7. We wrongly reported a 10% impact in Génolini et al. (2018) because of an error in the inelastic cross section on He for two parametrisations (the faulty files have been corrected in USINE v3.5).

8. These parametrisations are based on fits to the same inhomogeneous sets of cross-section data, but the authors used different approaches and assumptions to fit them. For instance, the GALPROP parametrisation is renormalised to data whenever available, whereas the other parametrisations rely on semi-empirical formulae designed to give a good fit over all reactions. The former is expected to better represent the data, but by construction it sometimes shows non-physical energy dependences (step-like behaviour), whereas the other parametrisations do not.

9. See the USINE v3.5 documentation for the syntax of the NSS parameters.

10. Reference values are those of Sect. 5. They are used for all mock configurations to allow for a more compact presentation of the results in Figs. 7 and 8. Thus, mock B/C data do not always reproduce the measured ones (not shown), but this does not impact our conclusions.

11. We emphasise that cross-section values for all reactions, not just the dominant ones (see Sect. 4.1), are taken from the other parametrisations.

12. Statistical uncertainties only (taken from real data) are used in our analysis, in order to disentangle the issue of cross-section uncertainties from the more involved data uncertainties (discussed in Sect. 5).

13. There are more involved methods to find it, but these would further complicate the analysis for no obvious gain in the results. What matters is the capability to recover the true cross section with one combination of Ci, not with several.

14. The current release is v3.4, but v3.5, specifically developed for this analysis, will be online soon.

16. If several systematics α are present, the global covariance matrix is given by the sum of all associated covariance matrices, 𝒞 = ∑α 𝒞α.

17. The following definition, taken from Maurin et al. (2001), is similar to that of Eq. (3) since KEE = β²Kpp, albeit with a slightly larger coefficient of 2/9 ∼ 0.22 instead of 0.17.

18. The source term has been set proportional to a power law in order to grossly match the proton data. Notice that we have not performed a fit, since Fig. D.2 is only meant to illustrate how the various solutions u0, uloss, and u behave with kinetic energy Ek.

Acknowledgments

We thank our “cosmic-ray” colleagues at LAPTh, LAPP, LUPM, and Universidade de São Paulo, and in particular P. Serpico for stimulating discussions and feedback that helped us clarify and improve the presentation of the methodology presented in this paper. This work has been supported by the “Investissements d’avenir, Labex ENIGMASS” and by the French ANR, Project DMAstro-LHC, ANR-12-BS05-0006. The work of Y.G. is supported by the IISN, the FNRS-FRS and a ULB ARC. The work of M.B. is supported by the European Research Council (ERC) under the EU Seventh Framework Programme (FP7/2007-2013)/ERC Starting Grant (Agreement No 278234 – “NewDark” project).

References

  1. Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2011, Science, 332, 69
  2. Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2014, ApJ, 791, 93
  3. Aguilar, M., Alberti, G., Alpat, B., et al. 2013, Phys. Rev. Lett., 110, 141102
  4. Aguilar, M., Aisa, D., Alpat, B., et al. 2014a, Phys. Rev. Lett., 113, 221102
  5. Aguilar, M., Aisa, D., Alvino, A., et al. 2014b, Phys. Rev. Lett., 113, 121102
  6. Aguilar, M., Aisa, D., Alpat, B., et al. 2015a, Phys. Rev. Lett., 114, 171103
  7. Aguilar, M., Aisa, D., Alpat, B., et al. 2015b, Phys. Rev. Lett., 115, 211101
  8. Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2016, Phys. Rev. Lett., 117, 231102
  9. Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2018a, Phys. Rev. Lett., 121, 051101
  10. Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2018b, Phys. Rev. Lett., 120, 021101
  11. Barashenkov, V. S., & Polanski, A. 1994, Electronic Guide for Nuclear Cross Sections, Tech. Rep. E2-94-417 (Dubna: Comm. JINR)
  12. Blobel, V. 2006, Statistical Inference Problems in High Energy Physics (06w5054), Banff International Research Station
  13. Boudaud, M., Génolini, Y., Derome, L., et al. 2019, ArXiv e-prints [arXiv:1906.07119]
  14. Coste, B., Derome, L., Maurin, D., & Putze, A. 2012, A&A, 539, A88
  15. Cuoco, A., Heisig, J., Klamt, L., Korsmeier, M., & Krämer, M. 2019, Phys. Rev. D, 99, 103014
  16. D’Agostini, G. 1994, Nucl. Instrum. Methods Phys. Res. A, 346, 306
  17. Donato, F., Maurin, D., Salati, P., et al. 2001, ApJ, 563, 172
  18. Evoli, C., Gaggero, D., Vittino, A., et al. 2017, JCAP, 2, 015
  19. Ferrando, P., Webber, W. R., Goret, P., et al. 1988, Phys. Rev. C, 37, 1490
  20. Génolini, Y., Boudaud, M., Batista, P. I., et al. 2019, Phys. Rev. D, 99, 123028
  21. Génolini, Y., Maurin, D., Moskalenko, I. V., & Unger, M. 2018, Phys. Rev. C, 98, 034611
  22. Génolini, Y., Putze, A., Salati, P., & Serpico, P. D. 2015, A&A, 580, A9
  23. Génolini, Y., Serpico, P. D., Boudaud, M., et al. 2017, Phys. Rev. Lett., 119, 241101
  24. Giesen, G., Boudaud, M., Génolini, Y., et al. 2015, JCAP, 9, 23
  25. James, F., & Roos, M. 1975, Comput. Phys. Commun., 10, 343
  26. Kissmann, R. 2014, Astropart. Phys., 55, 37
  27. Lafferty, G. D., & Wyatt, T. R. 1995, Nucl. Instrum. Methods Phys. Res. A, 355, 541
  28. LeVeque, R. J. 1998, SIAM, 585, 112
  29. Maurin, D. 2018, ArXiv e-prints [arXiv:1807.02968]
  30. Maurin, D., Donato, F., Taillet, R., & Salati, P. 2001, ApJ, 555, 585
  31. Maurin, D., Taillet, R., & Donato, F. 2002, A&A, 394, 1039
  32. Maurin, D., Putze, A., & Derome, L. 2010, A&A, 516, A67
  33. Maurin, D., Melot, F., & Taillet, R. 2014, A&A, 569, A32
  34. Moskalenko, I. V., Mashnik, S. G., & Strong, A. W. 2001, ICRC, 5, 1836
  35. Moskalenko, I. V., & Mashnik, S. G. 2003, ICRC, 4, 1969
  36. Putze, A., & Derome, L. 2014, Phys. Dark Universe, 5, 29
  37. Putze, A., Derome, L., & Maurin, D. 2010, A&A, 516, A66
  38. Putze, A., Maurin, D., & Donato, F. 2011, A&A, 526, A101
  39. Reinert, A., & Winkler, M. W. 2018, JCAP, 1, 055
  40. Schlickeiser, R. 2002, Cosmic Ray Astrophysics (Berlin, Heidelberg: Springer-Verlag)
  41. Seo, E. S., & Ptuskin, V. S. 1994, ApJ, 431, 705
  42. Stone, E. C., Cummings, A. C., McDonald, F. B., et al. 2013, Science, 341, 150
  43. Strong, A. W., Moskalenko, I. V., Porter, T. A., et al. 2011, http://galprop.stanford.edu/download/manuals/galprop_explanatory_supplement.pdf
  44. Tomassetti, N. 2017, Phys. Rev. D, 96, 103005
  45. Tripathi, R. K., Cucinotta, F. A., & Wilson, J. W. 1996, Nucl. Instrum. Methods Phys. Res. B, 117, 347
  46. Tripathi, R. K., Cucinotta, F. A., & Wilson, J. W. 1999, Nucl. Instrum. Methods Phys. Res. B, 155, 349
  47. Webber, W. R., Kish, J. C., Rockstroh, J. M., et al. 1998a, ApJ, 508, 940
  48. Webber, W. R., Kish, J. C., Rockstroh, J. M., et al. 1998b, ApJ, 508, 949
  49. Webber, W. R., Soutoul, A., Kish, J. C., & Rockstroh, J. M. 2003, ApJS, 144, 153
  50. Webber, W. R., Soutoul, A., Kish, J. C., et al. 1998c, Phys. Rev. C, 58, 3539
  51. Wellisch, H. P., & Axen, D. 1996, Phys. Rev. C, 54, 1329

Appendix A: Systematics from R to Ek/n approximate conversion

CR data are mostly published and analysed as a function of the kinetic energy per nucleon (see, e.g. the data collected in the Cosmic-Ray Data Base, Maurin et al. 2014). This quantity is conserved in nuclear reactions (in the straight-ahead approximation), and propagation codes usually solve the transport equation in terms of Ek/n. However, it is not the quantity CR detectors measure; for instance, hadronic calorimeters provide the total energy, whereas spectrometers like AMS-02 provide the rigidity. Conversion from one energy unit to another is only exact if the nucleus (m, A, Z) is identified. For elements, unless the isotopic content is known, the conversion is approximate.
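For an identified nucleus, the exact conversion follows from standard relativistic kinematics. The sketch below is our own implementation (with the nuclear mass approximated as A atomic mass units), not code from the paper:

```python
import numpy as np

M_AMU = 0.9315  # atomic mass unit in GeV

def rigidity_to_ekn(R, Z, A):
    """Exact R (GV) -> Ek/n (GeV/n) conversion for an identified nucleus,
    approximating the nuclear mass as A atomic mass units."""
    m = A * M_AMU            # total mass in GeV
    p = Z * R                # total momentum in GeV
    return (np.sqrt(p**2 + m**2) - m) / A

def ekn_to_rigidity(ekn, Z, A):
    """Inverse conversion: Ek/n (GeV/n) -> R (GV)."""
    m = A * M_AMU
    etot = A * ekn + m       # total energy in GeV
    return np.sqrt(etot**2 - m**2) / Z

# The conversion depends on A/Z: at the same rigidity, 10B and 11B map to
# different Ek/n, which is why a constant <A> biases elemental data.
print(rigidity_to_ekn(10.0, 5, 10), rigidity_to_ekn(10.0, 5, 11))
```

The isotope dependence of this mapping is precisely what makes the element-level conversion approximate whenever the isotopic fractions are energy dependent.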

The uncertainty introduced by energy-unit conversions was neglected in the past because other uncertainties were larger, but this is no longer possible for modern data. For instance, the conversion from R to Ek/n in an experimental context in which only elemental fluxes are measured is discussed by the PAMELA collaboration in Appendix B of Adriani et al. (2014), and by the AMS collaboration for the B/C ratio in the Supplemental Material of Aguilar et al. (2016). As fitting data as a function of Ek/n remains common practice in the field, we argue below that this is not a good procedure.

To convert B/C data from R to Ek/n, the AMS-02 collaboration relies on an average mass number ⟨A⟩ of 12 for carbon and 10.7 for boron. Figure A.1 shows these values compared to the theoretical calculation from a model reproducing B/C data. In the top panel, the varying ⟨A⟩C for the model results from the fact that carbon contains mostly primary 12C and a small fraction of secondary 13C (≲15% at ≲1 GV, steadily vanishing at higher rigidities). For boron (bottom panel), both 10B and 11B are of secondary origin, but their rigidity evolution results from two subtle effects: (i) ≈15% of 10B comes from the decay of 10Be (Génolini et al. 2018), and as the effective lifetime increases with energy, the fraction of 10B with respect to that of 11B decreases with rigidity; (ii) 10B has a larger fraction originating from 2-step reactions (w.r.t. direct “1-step” production) than 11B has (see Table 1 and Fig. 3 of Génolini et al. 2018), and as 2-step reactions have a steeper rigidity dependence than 1-step ones, again the fraction of 10B decreases with rigidity (w.r.t. that of 11B).

Fig. A.1.

Average mass number for carbon (top) and boron (bottom) as a function of rigidity. The dashed blue lines correspond to the value calculated from the model (with the isotopic content known), whereas the solid black lines correspond to the choice made in Aguilar et al. (2016). See text for details.


In Fig. A.2, we show the impact of these two choices when converting B/C from R to Ek/n. The top panel shows the residuals of the B/C data vs. R (w.r.t. our best-fit model), to give a visual reference for the difference between model and data. The bottom panel shows residuals of B/C vs. Ek/n for different conversions (w.r.t. our best-fit model converted without approximation). The black curve shows the conversion bias when assuming a constant ⟨A⟩ for boron and carbon, where part of the bias comes from assuming ⟨A⟩B = 10.7 (red dotted line) and part from assuming ⟨A⟩C = 12 (blue dashed line): the bias is positive and largest at the lowest Ek/n (∼3%), null at 1 GeV n−1, then negative, reaching ∼2% at ∼4 GeV n−1 before decreasing. The pattern of the AMS-02 data (converted using the same approximation) with respect to the solid black line is similar to the one seen in the top panel, indicating that the origin of the discrepancy with the exact model calculation is the wrong assumption made for ⟨A⟩.

Fig. A.2.

Top panel: B/C residual (vs. R) calculated from the difference between AMS-02 data and our reference model (best-fit model). Bottom panel: B/C residual (vs. Ek/n) calculated from the difference between various R-to-Ek/n-converted B/C (red, blue and black lines) and the “exact” B/C (reference model converted with correct isotopic content). The AMS-02 data vs. Ek/n are taken from Aguilar et al. (2016), and result from the same conversion as done for the solid black line. See text for discussion.


We can summarise this somewhat subtle discussion as follows: if we fit a model on AMS-02 B/C data as a function of rigidity, this model will be offset from the converted AMS-02 B/C data as a function of Ek/n. The offset is not a simple scaling; it is energy dependent, because the isotopic content of the B and C elements is energy dependent (in a non-trivial way). Whereas the maximum bias is “only” ∼3%, this is already significant compared to other AMS-02 uncertainties. Moreover, in the AMS publication, the uncertainty associated with the conversion is estimated to range from 1% (lowest energy) to 4% (highest energy), which does not reflect our results. For all these reasons, we conclude that AMS-02 data, and in general all data, should be fit in their native energy scale, in order to avoid an unnecessary bias introduced by converting the data to another scale. We stress that the USINE package allows one to fit any combination of data in their native energy scale.

Appendix B: χ2 with covariance or nuisance

To characterise the impact of the uncertainties on the model parameters, we rely on the χ² analysis implemented in USINE and described in Maurin (2018), using the MINUIT package (James & Roos 1975) for minimisation. In particular, the MINOS option in MINUIT allows one to reliably reconstruct asymmetric error bars on the parameters, taking into account both parameter correlations and non-linearities. The generic form of the χ² we use is

$$\chi^2 = \sum_t \sum_q \mathcal{D}_{\rm cov}^{t,q} + \sum \mathcal{N}, \qquad {\rm (B.1)}$$

where we loop over all time periods t (corresponding to different modulation levels) and all quantities q selected in the minimisation. The quantities 𝒟cov and 𝒩 are detailed below.

Covariance. The quadratic distance 𝒟cov measures the distance between the data and the model, accounting for a covariance matrix 𝒞16,

$$\mathcal{D}_{\rm cov} = \sum_{i,j=1}^{n_E} \left({\rm data}_i - {\rm model}_i\right) \left(\mathcal{C}^{-1}\right)_{ij} \left({\rm data}_j - {\rm model}_j\right), \qquad {\rm (B.2)}$$

which correlates ij energy bins (nE bins in total). Without correlations, 𝒞 is diagonal (with errors σk on the data), and we recover the standard expression

$$\mathcal{D} = \sum_{k=1}^{n_E} \left(\frac{{\rm data}_k - {\rm model}_k}{\sigma_k}\right)^2. \qquad {\rm (B.3)}$$
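The reduction of the covariance form to the standard expression can be checked numerically. The sketch below (our own toy numbers, not the paper's data) evaluates the quadratic distance of Eq. (B.2) without forming an explicit matrix inverse:

```python
import numpy as np

def d_cov(data, model, cov):
    """Quadratic distance of Eq. (B.2): r^T C^{-1} r with r = data - model.
    Solving the linear system is preferable to inverting cov explicitly."""
    r = data - model
    return float(r @ np.linalg.solve(cov, r))

data = np.array([1.00, 0.95, 0.90])
model = np.array([0.98, 0.96, 0.91])
sigma = np.array([0.02, 0.02, 0.02])

# With a diagonal covariance, Eq. (B.2) reduces to the standard chi2
# of Eq. (B.3): here both evaluate to 1.5.
chi2_diag = d_cov(data, model, np.diag(sigma**2))
chi2_std = np.sum(((data - model) / sigma) ** 2)
```

Off-diagonal entries of `cov` then implement the energy correlations discussed in Sect. 5.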

Covariance from relative errors. As discussed in Sect. 5, we built for each AMS-02 systematic α the covariance matrix of relative errors 𝒞αrel. The latter can be related to the covariance 𝒞α required for Eq. (B.2) in two different ways:

$$\mathcal{C}^{\rm data}_{ij} = \mathcal{C}^{\rm rel}_{ij} \times {\rm data}_i \times {\rm data}_j, \qquad {\rm (B.4)}$$

$$\mathcal{C}^{\rm model}_{ij} = \mathcal{C}^{\rm rel}_{ij} \times {\rm model}_i \times {\rm model}_j. \qquad {\rm (B.5)}$$

While using 𝒞data may seem more natural, Blobel (2006) showed that if an overall normalisation factor is present in the data, it should be included in the fit via a factor in the model, not in the data; otherwise, the reconstructed model parameters are biased (see also D’Agostini 1994). For this reason we decided to use 𝒞model in our analysis. We note, however, that a global normalisation factor corresponds to a situation in which the correlation length is infinite, which is not the case for the data we consider (see Sect. 5). To ensure that using either Eq. (B.4) or (B.5) does not affect our conclusions, we fit 10 000 mock B/C data under these two assumptions and checked that (i) the input model parameters were recovered in both cases, and (ii) up to the level of precision reached, potential biases were much smaller than the 1σ uncertainties on the reconstructed parameters.

Nuisance parameters. Nuisance parameters are parameters contingent on the analysis performed, but whose values can affect its result. An example is given by CR cross sections, which are instrumental to the model calculation, but whose values and uncertainties were determined by “external” experiments. Nuisance parameters can appear at various levels of the modelling: (i) global nuisance parameters 𝒩 related to the model and thus independent of the data (e.g. cross sections); (ii) time-dependent nuisance parameters 𝒩t (e.g. the modulation parameter for a specific data-taking period); (iii) data-dependent nuisance parameters 𝒩t,q (e.g. systematic errors on data, as an alternative to using a covariance matrix).

In principle, any probability distribution function is possible, as determined from the auxiliary experiment. However, in USINE, only Gaussian-distributed nuisance parameters are enabled, so that each adds to the χ² of Eq. (B.1) a contribution

$$\mathcal{N} = \left(\frac{y - \bar{y}}{\sigma_y}\right)^2, \qquad {\rm (B.6)}$$

where ȳ and σy² are the mean and variance of the parameter, and y the tested value.
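This Gaussian penalty is just the squared pull of the tested value against its prior, as the following one-liner (our own illustration) makes explicit:

```python
def nuisance_penalty(y, y_mean, y_sigma):
    """Gaussian nuisance contribution of Eq. (B.6): the squared pull of the
    tested value y against its prior mean y_mean and width y_sigma."""
    return ((y - y_mean) / y_sigma) ** 2

# A cross-section normalisation with a 3% prior, tested at 1.05,
# contributes ((1.05 - 1.0) / 0.03)**2 ~ 2.78 to the chi2.
pull2 = nuisance_penalty(1.05, 1.0, 0.03)
```

Summing one such term per nuisance parameter, on top of the covariance term, assembles the total χ² of Eq. (B.1).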

Appendix C: Coefficients for boundary conditions

The discretisation of Eq. (4) on a grid over x ≡ ln Ek/n gives

(C.1)

with the current defined to be

(C.2)

This equation can readily be written as a matrix equation

(C.3)

(C.4)

𝕄 is a tridiagonal matrix defined by its coefficients

(C.5)

(C.6)

(C.7)

Solving this system requires fixing the boundary conditions. Several possibilities are tested in Appendix D.
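Since 𝕄 is tridiagonal, the system can be solved in O(n) operations with the standard Thomas algorithm. The sketch below is a generic textbook implementation (not the USINE solver); the boundary conditions enter only through the first and last rows, i.e. the coefficients b0, c0 and aK, bK of Table C.1.

```python
import numpy as np

def solve_tridiagonal(a, b, c, s):
    """Thomas algorithm for M u = s, with sub-, main- and super-diagonals
    a, b, c (a[0] and c[-1] are unused)."""
    n = len(b)
    cp = np.empty(n)
    sp = np.empty(n)
    # Forward sweep: eliminate the sub-diagonal.
    cp[0] = c[0] / b[0]
    sp[0] = s[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        sp[i] = (s[i] - a[i] * sp[i - 1]) / denom
    # Back substitution.
    u = np.empty(n)
    u[-1] = sp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = sp[i] - cp[i] * u[i + 1]
    return u
```

The forward sweep assumes the matrix is well conditioned (e.g. diagonally dominant); for the energy grids considered here, a dense solver would give the same result at higher cost.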

As regards the low-energy boundary at xmin ≡ x0, we list several suitable prescriptions and report the corresponding b0 and c0 in the upper half of Table C.1: (i) no energy flow (L1) means JE = 0 at x0, which translates into a condition on b0 and c0 for the defined grid steps; (ii) no curvature in the spectrum (L2 and L3) is obtained by using the prescription of LeVeque (1998) for a second-order accurate method,

(C.8)

which implies that u−1 = 2u0 − u1, and we compute the coefficients b0 and c0 for a first- (L2) and second-order (L3) accurate method – the former was used in Donato et al. (2001); (iii) no phase-space density gradient (L4) means that the phase-space distribution f is flat at CR momentum p = 0. We implement this condition at x0. Requiring that ∂f/∂p vanishes translates into

(C.9)

If written in terms of x = ln Ek, it reads

(C.10)

Table C.1.

Matrix coefficients for several boundary conditions of the transport equation.

If we discretise this condition, we are led to

(C.11)

and get, according to LeVeque (1998), a second-order accurate method. Injecting this condition into the differentiation scheme lets us define b0 and c0.

At high energy (i.e. at the highest point xmax ≡ xK of the grid), several conditions can be implemented along the same lines. The resulting coefficients aK and bK are listed in the lower half of Table C.1: (i) no energy flow (H1), i.e. JE = 0 at xK, which translates into a condition on aK and bK; (ii) no curvature in the spectrum (H2 and H3), for which we require that

(C.12)

implying uK + 1 = 2uK − uK − 1; (iii) no energy losses nor diffusive reacceleration (H4): as discussed in Appendix D, the CR density u is given by u0 at high energy, insofar as energy losses and diffusive reacceleration do not play any role in this regime. We require that u(xK) = u0(xK), hence the condition aK = 0 and bK = 1 of Table C.1.

Appendix D: Stability of the numerical solution

The numerical solution of Eq. (4) might exhibit instabilities when diffusive reacceleration vanishes with the Alfvénic speed Va. To explore the onset of these instabilities and to remedy them, we consider the same 1D geometry as discussed in Sect. 2.1, that is, a thin disc of half-thickness h and a magnetic halo of half-thickness L. We focus on Model A, presented in Sect. 2.2, that is, a standard diffusion/convection/reacceleration transport model. For definiteness, all figures presented in this appendix are based on the following values of the transport parameters: K0 = 0.030 kpc² Myr−1, δ = 0.65, ηt = −0.49, L = 10 kpc, Vc = 15.1 km s−1, and Va = 74.6 km s−1. These values are among those that give a good fit to the B/C ratio and as such are sufficient to illustrate our discussion.

D.1. Simplified 1D model and solutions

Transport equation. The CR density in space and energy u(z, E) fulfils the transport equation

(D.1)

where z is the vertical coordinate. The source term qacc denotes the rate at which CRs are injected. CRs diffuse with a spatial diffusion coefficient K that follows Eq. (1), are convected away from the disc at velocity Vc, suffer energy losses at a rate b(E) < 0 (ionisation, Coulomb friction, and adiabatic expansion in the wind), or gain energy via an energy diffusion coefficient17. CR injection, as well as energy losses and energy diffusion, is localised in the disc.

From now on, the Galactic disc is treated in the infinitely thin limit, and all nuclear interactions on the interstellar medium (ISM) are discarded. Requiring that the CR density vanishes at the boundaries z = ±L of the magnetic halo allows one to straightforwardly express u(z, E) as a function of its value u(0, E) inside the Galactic disc.

Timescales. Transport inside the magnetic halo, energy losses, and diffusive reacceleration are the three processes at play in the transport of CR nuclei. To determine which of these processes is dominant and in which energy range it prevails, we can calculate the associated timescales. Galactic diffusion and convection can be combined into the residence time of CRs inside the Galactic disc

$$\tau_{\rm disc} = \left(\frac{1}{\tau_{\rm conv}} + \frac{1}{\tau_{\rm diff}}\right)^{-1}, \qquad {\rm (D.2)}$$

where τconv = h/Vc and τdiff = hL/K are the convection and diffusion timescales. For energy losses, we define τloss as the ratio −Ek/b(E), where Ek is the CR kinetic energy. Diffusive reacceleration occurs over a timescale τDR. These timescales are plotted as a function of kinetic energy Ek in Fig. D.1. The behaviour is fairly generic, and the same trends appear for heavier nuclei as well as for secondary species.

Fig. D.1.

Timescales associated with the various processes at play in the Galactic transport of CR protons, plotted as a function of kinetic energy Ek. The diffusion τdiff and convection τconv timescales are respectively featured by the long dashed and short dashed-dotted black curves. They are combined into the disc residence time τdisc, plotted as the solid black line. The energy loss and diffusive reacceleration timescales correspond to the solid purple and orange curves. The red solid line features the combined timescale τeff as given in Eq. (D.3).

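The disc residence time can be sketched numerically, assuming (as is standard) that diffusive and convective escape rates add in parallel. The disc half-thickness h is not quoted in the text, so a typical placeholder of 0.1 kpc is used, and the low-rigidity β and ηt factors of Eq. (1) are ignored for simplicity:

```python
import numpy as np

KPC_PER_KM = 1.0 / 3.0857e16   # km -> kpc
S_PER_MYR = 3.156e13           # seconds per Myr

# Model A parameters from the text; h = 0.1 kpc is our placeholder value.
h, L = 0.1, 10.0               # kpc
K0, delta = 0.030, 0.65        # diffusion normalisation (kpc^2/Myr) and slope
Vc = 15.1 * KPC_PER_KM * S_PER_MYR   # km/s -> kpc/Myr

def tau_disc(R):
    """Disc residence time (Myr): diffusion and convection act in
    parallel, so their rates (inverse timescales) add."""
    K = K0 * R**delta          # simplified K(R), R in GV
    tau_conv = h / Vc
    tau_diff = h * L / K
    return 1.0 / (1.0 / tau_conv + 1.0 / tau_diff)

print(tau_disc(1.0), tau_disc(10.0))   # residence time shrinks with R
```

At low rigidity the residence time saturates at the convection timescale h/Vc, while at high rigidity diffusion takes over, reproducing the generic behaviour of Fig. D.1.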

All these processes can be combined through the effective timescale τeff which we define as

$$\frac{1}{\tau_{\rm eff}} = \frac{1}{\tau_{\rm disc}} + \frac{1}{\tau_{\rm loss}} + \frac{1}{\tau_{\rm DR}}. \qquad {\rm (D.3)}$$

High- and low-energy analytical solutions. We are only interested in the solution in the disc, u(z = 0, E) ≡ u; using the above timescales, the transport Eq. (D.1) boils down to the differential equation

(D.4)

High-energy limit: space diffusion dominates over the other processes. As energy losses and diffusive reacceleration do not play any role in this regime, the solution to the CR transport Eq. (D.4) is

(D.5)

Low-energy limit: energy losses dominate and diffusive reacceleration can be neglected. The transport Eq. (D.4) has an analytic solution which can be cast into the form

(D.6)

where the pseudo-time is defined as

(D.7)

For the reasons mentioned above, the analytic solutions u0 and uloss become equal in the high-energy limit. We have set them equal at the highest energy point Ek, max of our analysis, for which a value of 10³ GeV is assumed.

These solutions are used below to check the precision of the numerical calculation.

Numerical solution. The numerical solution of Eq. (D.1) is obtained as follows. First, the equation can be recast into Eq. (4), where the coefficients α, β, and γ of our simplified model are

(D.8)

As explained in Appendix C, this Eq. (4) can be put on a grid in x ≡ lnEk/n to numerically solve the equation. The solution depends on the boundary conditions implemented, the impact of which is studied below.

D.2. Impact of boundary conditions

Various boundary conditions can be implemented at the lowest xmin ≡ x0 and highest xmax ≡ xK energy points of the x-grid as shown in Table C.1.

Reference boundary conditions vs. analytical solutions. At xmax, the four boundary conditions H of Table C.1 yield the same CR proton flux, and we always find that u is close to u0 above a few tens of GeV. We decided to implement prescription H4, which is the most natural condition given that τeff ≃ τdisc at high energy. At xmin, the prescription that yields the most stable behaviour is L3, with ∂²u/∂x² = 0 up to second order. Our fiducial conditions are therefore L3 and H4.

For illustrative purposes, Fig. D.2 shows a comparison of the analytical and numerical solutions18. The long dashed-dotted black curve stands for the approximation u0, for which energy losses and diffusive reacceleration are switched off. The solid black curve features the solution uloss, where diffusive reacceleration alone is suppressed. The exact solution u is featured by the short dashed purple and long dashed orange curves. The former is obtained through the direct inversion of Eq. (C.3), while for the latter, a Crank–Nicolson recursion is used to make uCN converge from u0 to the exact solution u. The two results agree to a precision better than 10−10.

Fig. D.2.

CR proton flux Φp as a function of kinetic energy Ek. The long dashed-dotted and solid black curves stand respectively for the approximations u0 and uloss. The exact solution u is derived numerically assuming boundary conditions L3 and H4. The long dashed orange and short dashed purple curves correspond to the different methods used to solve the transport Eq. (4). Protons are injected with a power-law rate of normalisation Np = 5.8 × 10−11 protons cm−3 GeV−1 Myr−1 and index α = 2.3. With these values, Φp is in rough agreement with the Voyager 1 (Stone et al. 2013), PAMELA (Adriani et al. 2011), and AMS-02 (Aguilar et al. 2015a) data.

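The agreement between direct inversion and pseudo-time relaxation can be mimicked on a toy steady-state problem. The sketch below (our own toy equation and grid, not the USINE solver) relaxes a Crank–Nicolson recursion from the source-only solution and compares it with a direct matrix solve:

```python
import numpy as np

# Toy steady-state problem on a grid: alpha*u - gamma*u'' = q, with u = 0
# at both boundaries, solved directly and by Crank-Nicolson relaxation.
n = 50
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
alpha, gamma = 1.0, 0.05
q = np.exp(-((x - 0.5) / 0.1) ** 2)   # localised source

# Second-derivative operator (implicit zero ghost points at the edges).
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
A = alpha * np.eye(n) - gamma * D2

u_direct = np.linalg.solve(A, q)

# Crank-Nicolson recursion for du/dt = -A u + q:
# (I + dt/2 A) u_new = (I - dt/2 A) u_old + dt q.
dt = 0.1
I = np.eye(n)
lhs = I + 0.5 * dt * A
rhs = I - 0.5 * dt * A
u = q / alpha                          # start from the source-only solution
for _ in range(2000):
    u = np.linalg.solve(lhs, rhs @ u + dt * q)
```

Because the scheme is unconditionally stable and the steady state is unique, the relaxed solution converges to the direct inversion to machine precision, mirroring the 10−10 agreement quoted above for the actual transport equation.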

Varying low-energy boundary conditions. In Fig. D.3, we plot the relative difference induced on the fiducial flux of Fig. D.2 when prescription L3 is replaced by conditions L1 (solid red), L2 (long dashed orange), or L4 (short dashed-dotted green) of Table C.1. Above 3 MeV, all fluxes agree up to double precision. Below that energy, differences appear. Condition L2 always yields a flux that agrees with the fiducial result to a precision better than 10−4. Condition L4 is associated with moderate wiggles, with a relative difference that nevertheless reaches 10% at 1 MeV. The worst prescription is L1, which generates very large instabilities exceeding 100% below 2 MeV. It is remarkable that, in spite of these instabilities, the fiducial result is recovered above 3 MeV.

Fig. D.3.

Changing the low-energy boundary conditions of Table C.1 modifies the numerical result obtained for u. In this plot, the variations of the proton flux relative to the fiducial case of Fig. D.2 are displayed as a function of kinetic energy Ek. Notice that all conditions yield the same flux above 3 MeV. Close to the boundary, prescriptions L1 and L4 generate wiggles and the flux becomes inaccurate. Prescriptions L2 and L3 (fiducial) yield the same result.


In the regime where diffusive reacceleration is switched off, the numerical solution u of Eq. (D.4) is given by the analytic solution uloss displayed in relation (D.6). This situation offers a unique opportunity to investigate how low-energy boundary conditions affect the stability and precision of the numerical solution. For this purpose, we have used the cosmic-ray transport parameters of Model A, with the exception of a vanishing Alfvénic speed Va. The relative difference between u and uloss is plotted as a function of proton kinetic energy in Fig. D.4. The numerical solution is derived by direct inversion of Eq. (C.3). Depending on the prescription used at xmin, u can be very close to the actual result uloss or completely off. As featured by the short dashed purple curve, the most precise condition is L3, with a precision of 10−4. Condition L2 also yields a very accurate solution u, with a relative error of at most 10−3 at 100 MeV. As could have been anticipated from Fig. D.3, setting JE = 0 yields the worst numerical result, which is orders of magnitude larger than the correct solution below 30 GeV. Finally, condition L4 only yields the correct result above 70 GeV. We find that the relative difference between u and uloss even exceeds 100% below 2 GeV, as exhibited by the short dashed-dotted green curve of Fig. D.4.

Fig. D.4.

Cosmic-ray transport parameters of Model A are used, except for the Alfvénic speed Va set equal to 1 m s−1. In this regime, the numerical result u of Eq. (D.4) is equal to the analytic solution uloss. The relative difference between u and uloss is plotted as a function of kinetic energy Ek. The low-energy boundary prescriptions of Table C.1 are respectively featured by the solid red (L1), long dashed orange (L2), short dashed purple (L3) and short dashed-dotted green (L4) curves. The most precise condition is L3 while the worst one is L1.


D.3. Numerical stability when Va → 0

Although condition L3 yields the most precise solution, the short dashed purple curve of Fig. D.4 exhibits wiggles for vanishing Alfvénic speed. In our example, these instabilities never exceed a level of 10−4 and have no practical effect on the numerical solution u. In some configurations though, in particular those with a coarser x-grid, they could reach a level at which they would impair the fitting routine used in the B/C analysis. It is therefore important to understand the reason for these instabilities and to remedy them.

Is it specific to the solver used? To begin with, we have investigated whether these instabilities are related to the method used to derive the numerical solution u. In the direct inversion procedure, the matrix 𝕄 defined in Eq. (C.4) is inverted through a fast recursion that takes advantage of its tridiagonal nature. One may wonder whether this procedure generates numerical errors insofar as matrix 𝕄 can be far from the identity matrix 𝕀. Complementarily, the Crank–Nicolson procedure makes use of the matrix 𝕀 + 𝕄Δt/2, which is arbitrarily close to the identity if the time step Δt of the recursion is small enough. We find that deriving u through a Crank–Nicolson recursion yields a relative change with respect to the direct inversion result of Fig. D.4 (short dashed purple curve) which is always less than 10−9 above 10 MeV, while reaching a maximum of 4 × 10−6 at Ek = 2 MeV. In these calculations, the Alfvénic speed is Va = 1 m s−1. Increasing the speed further reduces the discrepancy between the two numerical results. The difference is at most 2 × 10−6 for a velocity of 1 km s−1 and decreases below 4 × 10−7 at 1.75 km s−1. We conclude that the same wiggles appear whether u is derived by directly inverting Eq. (C.3) or by letting u evolve à la Crank–Nicolson. The origin of the instabilities must be sought elsewhere.
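The cross-check described above can be sketched as follows: the same steady state is reached either by solving 𝕄u = q directly, or by iterating the Crank–Nicolson recursion (𝕀 + 𝕄Δt/2) u_new = (𝕀 − 𝕄Δt/2) u_old + Δt q until convergence. The toy matrix below merely stands in for the transport operator; it is not the actual operator of Eq. (C.4).

```python
import numpy as np

# Toy steady-state problem M u = q; the diagonally dominant matrix below
# is a stand-in for the transport operator, not the real Eq. (C.4).
rng = np.random.default_rng(0)
n = 20
M = (np.diag(3.0 + rng.random(n))
     + np.diag(-rng.random(n - 1), -1)
     + np.diag(-rng.random(n - 1), 1))
q = 1.0 + rng.random(n)

# Route 1: direct inversion of the linear system.
u_direct = np.linalg.solve(M, q)

# Route 2: Crank-Nicolson recursion for du/dt = q - M u,
#   (I + M dt/2) u_new = (I - M dt/2) u_old + dt q,
# which relaxes towards the same steady state M u = q.
dt = 0.1
I = np.eye(n)
u = np.zeros(n)
for _ in range(1000):
    u = np.linalg.solve(I + 0.5 * dt * M, (I - 0.5 * dt * M) @ u + dt * q)

# Both routes agree to high precision.
rel_diff = np.max(np.abs(u - u_direct) / np.abs(u_direct))
```

Any instability shared by the two routes, as found in the text, must therefore come from the discretised operator itself rather than from the solver.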

Failure of 2nd-order schemes to solve 1st-order equations. While performing the previous check, we have serendipitously observed that the wiggles disappear as Va increases. In the panels of Fig. D.5, four different values have been assumed for the Alfvénic speed and the relative difference |u − uloss|/uloss is plotted as a function of Ek for each of them. In the upper panel, the numerical solution u exhibits instabilities which are no longer visible in the lower panel. The solution becomes smooth for a critical value of order 1.75 km s−1. Above that speed, u slowly departs from uloss as diffusive reacceleration starts to be felt by the proton flux.

Fig. D.5.

In these two panels, the proton flux is plotted as in Fig. D.2 with the exception of the Alfvénic speed Va for which different values are assumed as indicated. When diffusive reacceleration vanishes, we expect the proton flux to be given by the analytic solution uloss. The relative difference between the numerical result u (direct inversion) and uloss is calculated with the low-energy boundary prescription L3. As Va increases from 1 m s−1 to 2.5 km s−1, u becomes more stable and wiggles disappear. The transition occurs for a critical value of Va  ∼  1.75 km s−1. The number of bins per decade of energy is Ndec = 50.


A tentative explanation for the presence of instabilities is that when Va is vanishingly small, the numerical result u is not derived with the appropriate method. The transport Eq. (D.4) becomes first order in this regime, and a single boundary condition suffices to determine its solution. We have actually obtained uloss by requiring that it be equal to u0 at the highest energy point. Strictly speaking, a low-energy boundary condition is no longer necessary. Of course, matters are not that simple, since we proceed numerically through the inversion of matrix 𝕄, whose elements ak, bk and ck depend on the functions α, β and γ as discussed in Appendix C. Energy losses and diffusive reacceleration respectively enter the definition of matrix 𝕄 through the combinations αβΔx and αγΔx², where Δx is the spacing of the energy grid.
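The failure of a centred second-order scheme on an essentially first-order equation can be reproduced with a textbook analogue: the steady advection-diffusion problem −εu″ + u′ = 0 on [0, 1] with u(0) = 0 and u(1) = 1, where advection plays the role of energy losses and the small diffusion that of diffusive reacceleration. Central differences develop oscillations once the cell Péclet number Δx/(2ε) exceeds unity, mirroring the regime in which the second-order term αγΔx² becomes negligible. This sketch is only an illustration of the mechanism, not the transport Eq. (D.4).

```python
import numpy as np

def solve_central(eps, n):
    """Central-difference solution of -eps u'' + u' = 0 on n interior
    points of [0, 1], with u(0) = 0 and u(1) = 1.
    Row k: -eps*(u[k-1]-2u[k]+u[k+1])/dx^2 + (u[k+1]-u[k-1])/(2dx) = 0."""
    dx = 1.0 / (n + 1)
    lower = -eps / dx**2 - 1.0 / (2 * dx)
    diag = 2 * eps / dx**2
    upper = -eps / dx**2 + 1.0 / (2 * dx)
    M = (np.diag(np.full(n, diag))
         + np.diag(np.full(n - 1, lower), -1)
         + np.diag(np.full(n - 1, upper), 1))
    rhs = np.zeros(n)
    rhs[-1] = -upper * 1.0          # boundary value u(1) = 1
    return np.linalg.solve(M, rhs)

def has_wiggles(u):
    """The exact solution is monotonic; sign changes in the increments
    flag the numerical instability."""
    du = np.diff(u)
    return bool(np.any(du[:-1] * du[1:] < 0))

n = 50                              # dx ~ 0.02
wiggly = has_wiggles(solve_central(1e-4, n))   # cell Peclet >> 1
smooth = not has_wiggles(solve_central(0.1, n))  # cell Peclet < 1
```

The transition at cell Péclet number of order unity is the analogue of the critical Alfvénic speed found in the text: refining the grid (smaller Δx) or strengthening the second-order term (larger ε, i.e. larger Va) both restore a smooth solution.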

Regularisation of the 2nd-order scheme. From a numerical perspective, the Alfvénic speed vanishes when αγΔx² is negligible with respect to 1, as is clear from Eq. (C.6). To explore this regime, we define the numerical strength of diffusive reacceleration through the ratio

τdisc / (τDR Δx²),(D.9)

which we have plotted as a function of kinetic energy Ek in Fig. D.6. Model A has been assumed, with the exception of Va. The solid black curve corresponds to an Alfvénic speed of 1.75 km s−1. At that critical value, the instabilities of the numerical result u start to recede, as featured in the lower panel of Fig. D.5 by the smoothness of the short dashed purple curve above 300 MeV. Concomitantly, the ratio τdisc/(τDR Δx²) reaches a maximum of 0.5. Notice that the onset of stability is an ill-defined process which takes place for Alfvénic speeds between 1.5 and 2 km s−1, hence the yellow band of Fig. D.6.

Fig. D.6.

The ratio τdisc/(τDR Δx²) is plotted as a function of kinetic energy Ek for three different configurations. In all cases, the parameters of Model A have been assumed, with the exception of the Alfvénic speed. The solid black curve corresponds to the critical value Va = 1.75 km s−1 above which the instabilities affecting the numerical solution u in Fig. D.5 are noticeably reduced. For that velocity, the ratio τdisc/(τDR Δx²) reaches a maximum of 0.5. Enlarging the range of Alfvénic speeds for which the onset of stability occurs yields the yellow band. The long dashed red and short dashed-dotted purple curves both correspond to Va = 0.5 km s−1. In the former case, the number of energy bins per decade Ndec is 25, whereas it is 300 in the latter.


The prescription which we propose for getting rid of the numerical perturbations of u is to require the ratio τdisc/(τDR Δx²) to overshoot that band, with a maximum exceeding a benchmark value of 0.7. As the energy grid is made coarser, with fewer bins per decade, the interval Δx increases and we expect the above criterion to be fulfilled only for larger values of Va. This is actually what we observe. So far, all the results presented in Appendix D are based on 50 bins per decade. If we degrade the resolution of the x-grid and use 25 bins per decade, we find that the wiggles start to recede when the Alfvénic speed exceeds a critical value of 1.75 × 2 = 3.5 km s−1. Conversely, if the x-grid is refined to 100 bins per decade, the transition occurs at 1.75/2 = 0.875 km s−1.
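The scaling just described can be encoded in a one-line helper. Since Δx ∝ 1/Ndec and, assuming τDR ∝ Va−2, the ratio τdisc/(τDR Δx²) scales as (Va Ndec)², the critical speed rescales in inverse proportion to the number of bins per decade. The reference point is the Model A value quoted in the text; the function name is ours.

```python
def critical_va(n_dec, va_ref=1.75, n_ref=50):
    """Critical Alfvenic speed (km/s) below which wiggles appear, for a
    grid with n_dec bins per decade. Assumes dx ∝ 1/Ndec and
    tau_DR ∝ Va^-2, so that tau_disc/(tau_DR dx^2) ∝ (Va * Ndec)^2
    stays constant along the stability boundary.
    Reference point: 1.75 km/s at 50 bins per decade (Model A)."""
    return va_ref * n_ref / n_dec
```

With this scaling, critical_va(25) reproduces the 3.5 km s−1 quoted for the coarse grid and critical_va(100) the 0.875 km s−1 of the refined one.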

As a final consistency check, we have derived u with an Alfvénic speed Va of 0.5 km s−1 and two different values of the number of bins per decade Ndec. Our results are featured in Fig. D.7, where the same line patterns and colour codes as in Fig. D.6 have been used for the two lower curves. The long dashed red line has been derived with 25 bins per decade. Notice how the corresponding curve of Fig. D.6 does not overshoot the yellow band and hence does not fulfil the stability criterion. We thus expect u to exhibit numerical instabilities, which we actually observe in Fig. D.7. In the case of the short dashed-dotted purple curve, a more refined grid is used, with Ndec = 300, and the numerical solution u is smooth. The stability condition is now satisfied, as shown by the corresponding curve of Fig. D.6.

Fig. D.7.

Same as in Fig. D.5 but now with an Alfvénic speed Va of 0.5 km s−1 and two different values for Ndec, the number of bins per decade of energy. The line patterns and colour codes of the two lower curves are the same as in Fig. D.6, i.e. the long dashed red and short dashed-dotted purple lines respectively stand for Ndec = 25 and 300. Only in the latter case is the criterion for stability fulfilled.


Recommendation and conclusion. To conclude this Appendix, let us discuss how to implement in practice the criterion which we have found. To ensure the stability of the numerical result u, we require the ratio τdisc/(τDR Δx²) to reach a maximum in excess of 0.7. In the regime where the Alfvénic speed is very small, this implies a very fine structure of the energy grid. In Fig. 1 for instance, the reference B/C case is derived with 5000 energy bins per decade. Our study confirms that this procedure leads to the correct CR flux. However, the CPU time significantly increases with Ndec, and the B/C fit may become impracticable. The procedure which we suggest is to fix Ndec at a benchmark value, of 50 for instance. For a given set of CR propagation parameters, the critical value of the Alfvénic speed at which u becomes stable can be determined from the ratio τdisc/(τDR Δx²) calculated, say, at the lowest energy point. In practice, τdisc and τDR are respectively decreasing and increasing functions of Ek. This procedure ensures that the critical speed is always much smaller than the Alfvénic speed at which diffusive reacceleration becomes as important as disc transport. With Ndec = 50, the former is only 3.8% of the latter. During the B/C fit, Va is forced to be larger than the lower limit, which we find, in the case of Model A, to be 1.75 ± 0.25 km s−1. In this configuration, a very rough criterion is to require that the corresponding rate exceeds 5 × 10−2 Myr−1 or, alternatively, is larger than 10−4 Myr−1.
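The recommended check can be sketched with two small helpers: one evaluating the stability margin max[τdisc/(τDR Δx²)] over the grid, the other rescaling a trial Alfvénic speed to the smallest value exceeding the 0.7 benchmark, under the assumption that the margin scales as Va². The interface and names are illustrative, not part of the authors' code.

```python
import numpy as np

def stability_margin(tau_disc, tau_dr, dx):
    """Maximum of tau_disc / (tau_DR * dx^2) over the energy grid;
    the scheme is deemed stable when this exceeds the 0.7 benchmark."""
    return np.max(tau_disc / (tau_dr * dx**2))

def va_lower_limit(va, tau_disc, tau_dr, dx, threshold=0.7):
    """Smallest Alfvenic speed fulfilling the stability criterion,
    assuming tau_DR ∝ Va^-2 so that the margin scales as Va^2;
    tau_disc and tau_dr are evaluated at the trial speed va."""
    margin = stability_margin(tau_disc, tau_dr, dx)
    return va * np.sqrt(threshold / margin)
```

For instance, with the margin of 0.5 quoted for Model A at Va = 1.75 km s−1 and Ndec = 50, this rescaling would return a lower limit of about 2.1 km s−1, to be enforced on Va during the B/C fit.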

Appendix E: Impact of selected cross-section uncertainties on B/C

We detail the impact of the dominant reactions (see Sect. 4) on the B/C ratio. Figures E.1 and E.2 show, as black lines, the impact of changing a single inelastic or production cross section (w.r.t. the calculation with a reference cross-section file); for comparison purposes, blue lines show the relative difference between the reference cross section and the new one on the same plot. The B/C variation is always smaller than that of the cross sections (see Sect. 4).

Fig. E.1.

Impact of inelastic cross-section uncertainties on the B/C ratio for specific reactions as a function of rigidity, shown for Model A and Model B discussed in Sect. 2.1. In each panel, two quantities are shown: the relative difference between a cross section (of a given reaction) and a reference parametrisation T99 in blue (the associated y-axis is on the right-hand side of the plot, also in blue), and the associated impact on the B/C ratio for this cross-section parametrisation w.r.t. the reference B/C (the associated y-axis is on the left-hand side in black). To guide the eye, the vertical red line indicates the rigidity of the first AMS-02 data point. The bottom panels show the overall impact when all reactions (i.e. all nuclei in the network) are replaced. For the latter panel, dark and grey curves correspond to the IS and modulated (ϕFF = 0.8 GV) B/C ratios, to emphasise that results are independent of the modulation level.


Fig. E.2.

Same as Fig. E.1, but for production cross sections, with reference parametrisation W03.


For inelastic cross sections (Fig. E.1), the maximum variation between different parametrisations is ∼10%, related to the different positions and levels of the peaks and dips of the cross sections (shown in blue in Fig. 5), leading to a ≲3% impact on B/C (black curves). For 16O, 11B, and 10B, the cross-section and B/C variations are anti-correlated, while they are correlated for 12C. This is explained as follows: for 16O, a larger destruction cross section means less oxygen at low energy, and hence less boron production (and less B/C); an increased destruction of 10B and 11B also leads to less B/C; the case of 12C is a trade-off, since an increased destruction means less carbon (increase in B/C) but also less boron production (decrease in B/C). If we compare the variation of the B/C relative differences when all cross sections are changed (last panel) to the case in which only one reaction is changed, we see that 11B and 12C combined are responsible for almost the whole variability. The impact of the destruction of 10B is sub-dominant, at the same level as that of 16O.

For production cross sections (Fig. E.2), the maximum variation between different parametrisations is ±20%, for 16O→11B. There is more structure in the G17 cross sections than in all other ones, because they are normalised to data whenever data exist. As for inelastic cross sections, the variation in B/C is smaller than the variation in production cross sections, but the relative variations are now correlated for all reactions, because more or less production directly reflects in the B/C ratio. The most impactful reactions also directly follow the ranking established in Génolini et al. (2018), in which 12C→11B and then 16O→11B have the strongest effect. The sum of these two accounts for most of the variation seen in the bottom panel, in which all reactions in the network were changed at once. We also plot the impact of two reactions involved in the "two-step" production of boron, where 15N and 11B are intermediate steps. As discussed in Génolini et al. (2018), the "two-step" reactions can contribute up to ∼25% of the total production, and they do not have the same energy dependence as "one-step" (or direct) production. As can be seen in the two bottom panels, these reactions are suppressed at high energy compared to the others shown, and overall their impact on B/C is ≲3%. In Sect. 4.2, we performed checks in which these dominant channels were added as nuisance parameters to allow extra degrees of freedom w.r.t. the energy dependence, but no improvement was found, so they were not considered in the main analysis.

All Tables

Table 1.

Values of μ and σ for the Gaussian distributed nuisance parameters of the cross sections listed.

Table 2.

Summary of mock data configurations used to test cross-section nuisance parameters in Sect. 4.3.

Table C.1.

Matrix coefficients for several boundary conditions of the transport equation.

All Figures

Fig. 1.

Relative difference for the B/C computed with different low-energy (LE) boundary conditions, varying the number of energy bins per decade (top panel) and the minimal energy of the grid (bottom panel). The reference B/C has been computed using 5000 energy bins per decade and setting Emin = 1 MeV.

Fig. 2.

Absolute value |ℰF| of the relative difference between the exact and approximate flux calculation in a bin, Eq. (7), as a function of the flux spectral index α. Two mean rigidity definitions are compared, in dim grey and in light grey, see Eq. (5), for different bin size ranges (from 1 decade, solid line, down to 0.1 decade, dotted line). To guide the eye, the green dash-dotted line shows when the relative difference is 3%. See text for discussion.

Fig. 3.

Colour-coded relative difference log10(|ℰR|) between the exact and approximate ratio calculation in a bin, Eq. (8), as a function of the numerator and denominator spectral difference, αden − αnum (x-axis), and as a function of the rigidity bin width, log10(Rmax/Rmin) (y-axis). For instance, a value of 2 (red) corresponds to a relative difference of 100%, and a value of −1 (blue) to a relative difference of 0.1%. The left and right panels correspond to calculations using the two mean rigidity definitions of Eq. (5), respectively. The rows correspond to three different values of the denominator spectral index αden. In each panel, the dash-dotted green contour delimits regions in which the relative difference is above or below 3%. See text for discussion.

Fig. 4.

Impact of different cross-section parametrisations on B/C flux calculation as a function of rigidity, w.r.t. a reference (denoted Ref. in the legend). The left and right panels are for inelastic and production cross sections respectively (see Sect. 4.1). The thick (resp. thin) lines are for the interstellar (resp. solar-modulated at ϕForce-Field = 800 MV) calculation. The red vertical line highlights the first rigidity point of AMS-02 B/C data.

Fig. 5.

Models (as listed in Sect. 4.1) and range of cross sections (quantiles corresponding to the median, 68%, and 95% CLs) generated from a reference cross section biased by Gaussian distributed nuisance parameters (as gathered in Table 1). The reactions shown correspond to the dominant ones discussed in Sect. 4.1, for inelastic (left column) and production (right column) cross sections.

Fig. 6.

Median, 1-σ, and 2-σ for the distribution of σNSS/σref(Ek/n) values, from 1000 Gaussian distributed normalisation values (9), energy scale (10), and slopes (11) with respective mean and variance (μ, σ)Norm = (1, 0.05), (μ, σ)Scale = (1, 0.2), and (μ, σ)Slope = (0, 0.02).

Fig. 7.

Reconstruction of 1σ contours (68% confidence level) from the analysis of 1000 mock data for Model A (top panel) and Model B (bottom panel), for the unbiased case (cross sections for the analysis are the same as the cross sections used to generate mock data). For display purpose, the 2D probability distribution functions are estimated using a Gaussian Kernel (default method to define the bandwidth of gaussian_kde in scipy python library); the irregular shapes are related to statistical fluctuations. The colour code is related to the nuisance parameters used and the line style to the type of nuisance parameters (see Table 2). The “+” symbols represent true values. See text for discussion.

Fig. 8.

Distribution of best-fit values (, then transport parameters) from the analysis of 1000 mock data for Model A (left panel) and B (right panel), for the biased case (cross sections for the analysis differ from those used to generate mock data). For display purpose, 1D probability distribution functions are estimated using a Gaussian Kernel. The line style and colours indicate the type and configuration of nuisance parameters used (see Table 2). The vertical dashed lines represent true values. See text for discussion.

Fig. 9.

Nuisance parameters from the analysis of 1000 mock data for Model A (top panels) and Model B (bottom panels). The nuisance parameters are shown reaction by reaction (rows: production then inelastic cross sections), each reaction having two nuisance parameters (see list in Table 2). The number in parentheses beside the reaction corresponds to its overall impact on the B/C calculation, as reported in Table 1. For the NSS analysis (left panels), we show normalised and centred parameters, whereas those of the LC analysis (right panels) are shown between zero and one. We use violin plots to highlight the probability density of the parameter (y-axis) at different values (x-axis), and our violin plots include a marker for the median (black "+" symbol) along with the 1σ range of the parameter (thick black line). In order to directly compare the distribution of the nuisance parameters for the unbiased and biased analyses, we superimpose them on the same line, in red and blue respectively. See text for discussion.

Fig. 10.

AMS-02 errors for B/C data. Solid lines correspond to the errors provided in Aguilar et al. (2018b), namely statistical, acceptance, scale, and unfolding (the step-like evolution is artificial and related to the rounding of the values provided in the table). The orange lines correspond to a further split of the acceptance errors: normalisation (norm.), low energy (LE), and residual (res.). See text for details.

Fig. 11.

Top panels: values obtained for the χ²/d.o.f. and the fit parameters (and their uncertainties) as a function of the correlation length for propagation models A (left) and B (right), and for the full acceptance error (blue) and the split acceptance error (orange). In the case of the split acceptance error, the correlation length corresponds to the Acc. res. contribution only. Bottom panels: comparison of best-fit B/C (lines) and AMS-02 data (symbols) for the above models and configurations, i.e. full (blue lines, top) vs. split (orange lines, bottom) acceptance errors. Only a sample of correlation lengths is shown (0.015, 0.1, and 1 decade).

Fig. 12.

Evolution of the best-fit parameters and asymmetric 1σ errors (extracted from MINOS), increasing the number of nuclear cross-section reactions used as nuisance parameters (from left to right on the x-axis); Model A (top) and B (bottom) are shown. For every additional reaction used as nuisance, four different fits are performed: starting either with the G17 or W03 parametrisation for the production cross sections, and using either CL or NSS for the implementation of nuisance parameters. The black dashed line in the xs panel corresponds to the threshold. See text for discussion.

Fig. A.1.

Average mass number for carbon (top) and boron (bottom) as a function of rigidity. The dashed blue lines correspond to the value calculated from the model (with the isotopic content known), whereas the solid black lines correspond to the choice made in Aguilar et al. (2016). See text for details.

Fig. A.2.

Top panel: B/C residual (vs. R) calculated from the difference between AMS-02 data and our reference model (best-fit model). Bottom panel: B/C residual (vs. Ek/n) calculated from the difference between various R-to-Ek/n-converted B/C (red, blue and black lines) and the "exact" B/C (reference model converted with the correct isotopic content). The AMS-02 data vs. Ek/n are taken from Aguilar et al. (2016), and result from the same conversion as done for the solid black line. See text for discussion.

Fig. D.1.

Timescales associated with the various processes at play in the Galactic transport of CR protons are plotted as a function of kinetic energy Ek. The diffusion τdiff and convection τconv timescales are respectively featured by the long dashed and short dashed-dotted black curves. They are combined into the disc residence time τdisc, plotted as the solid black line. The energy loss and diffusive reacceleration timescales correspond to the solid purple and orange curves. The red solid line features the combined timescale τeff as given in Eq. (D.3).
