Issue: A&A, Volume 627, July 2019
Article Number: A158
Number of page(s): 24
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201935717
Published online: 16 July 2019
Fitting B/C cosmic-ray data in the AMS-02 era: a cookbook
Model numerical precision, data covariance matrix of errors, cross-section nuisance parameters, and mock data
^{1} LPSC, Université Grenoble Alpes, CNRS/IN2P3, 53 Avenue des Martyrs, 38026 Grenoble, France
e-mail: david.maurin@lpsc.in2p3.fr, laurent.derome@lpsc.in2p3.fr
^{2} LAPTh, Université Savoie Mont Blanc & CNRS, 74941 Annecy Cedex, France
^{3} LPTHE, Sorbonne Université & CNRS, 4 Place Jussieu, 75252 Paris Cedex 05, France
^{4} Service de Physique Théorique, Université Libre de Bruxelles, Boulevard du Triomphe, CP225, 1050 Brussels, Belgium
e-mail: pierre.salati@lapth.cnrs.fr
Received: 17 April 2019
Accepted: 16 June 2019
Context. AMS-02 on the International Space Station has been releasing data of unprecedented accuracy. This poses new challenges for their interpretation.
Aims. We refine the methodology to obtain a statistically sound determination of the cosmic-ray propagation parameters. We inspect the numerical precision of the model calculation, nuclear cross-section uncertainties, and energy correlations in data systematic errors.
Methods. We used the 1D diffusion model in USINE. Our χ^{2} analysis includes a covariance matrix of errors for AMS-02 systematics and nuisance parameters to account for cross-section uncertainties. Mock data were used to validate some of our choices.
Results. We show that any mismodelling of nuclear cross-section values or of the energy correlation length of the covariance matrix of errors biases the analysis. It can also make good models (χ_{min}^{2}/d.o.f. ≈ 1) appear to be excluded (χ_{min}^{2}/d.o.f. ≫ 1). We provide a framework to mitigate these effects (AMS-02 data are interpreted in a companion paper).
Conclusion. New production cross-section data and the publication by the AMS-02 collaboration of a covariance matrix of errors for each dataset would be an important step towards an unbiased view of cosmic-ray propagation in the Galaxy.
Key words: astroparticle physics / diffusion / methods: miscellaneous / cosmic rays
© L. Derome et al. 2019
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Particle physics detectors in space have opened a new era for the study of Galactic cosmic rays (GCRs). The Alpha Magnetic Spectrometer (AMS-02) instrument on the International Space Station (ISS) provides the best data to date for leptons and nuclei (Aguilar et al. 2013, 2014a,b, 2015a,b, 2016, 2018a,b), with an uncertainty of a few percent over a large energy range. Its measurements will probably remain unrivalled for at least the next decade in the GeV–TeV energy range.
In principle, high-precision data can be used to constrain different propagation scenarios or candidates in the context of dark matter indirect detection. However, the promise of high-precision cosmic-ray (CR) physics can only be fulfilled if the various sources of uncertainty, from both data and model, are fully accounted for.
AMS-02 systematic uncertainties are diverse in origin and dominate the overall error budget of the measured fluxes and ratios. In experiments measuring spectra, correlations between adjacent energy bins may be introduced at the data analysis stage. These correlations could wash out or mimic spectral features in the data. In principle, the best approach to automatically account for such effects is to fold the model prediction with the full instrument response and directly compare with the number of events. However, because the AMS-02 instrument response is not available, the next best approach is to incorporate a correlation matrix of errors or use nuisance parameters when comparing a model to the data. Such a matrix is also not available; however, we can rely on educated guesses to derive it and inspect the consequences for the model parameters.
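For concreteness, a χ² built with a covariance matrix of errors takes the quadratic form r^T C^{-1} r in the residuals r = data − model. The sketch below (toy numbers and illustrative names, not the paper's code) shows the mechanics; with a diagonal C it reduces to the familiar sum of squared pulls.

```python
import numpy as np

def chi2_with_covariance(model, data, cov):
    """Chi-square of the (data - model) residuals using a full covariance
    matrix of errors: chi2 = r^T C^{-1} r.  Illustrative sketch only."""
    r = np.asarray(data) - np.asarray(model)
    # Solve C x = r instead of inverting C explicitly (better conditioned)
    return float(r @ np.linalg.solve(cov, r))

# Toy example: 3 data points with uncorrelated 10% errors (diagonal C)
data = np.array([1.0, 0.9, 0.8])
model = np.array([1.0, 0.95, 0.8])
cov = np.diag((0.1 * data) ** 2)
chi2 = chi2_with_covariance(model, data, cov)
```

Off-diagonal terms of `cov` would encode the bin-to-bin correlations discussed above; the call is unchanged.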
The dominant source of uncertainty in the modelling comes from nuclear cross sections (Donato et al. 2001; Maurin et al. 2010). Propagating these uncertainties to the model parameters has already been investigated in several studies (Giesen et al. 2015; Génolini et al. 2015; Tomassetti 2017; Reinert & Winkler 2018), and we revisit this question here in more detail with the use of mock (simulated) data. Controlled data were introduced in a CR propagation context in Coste et al. (2012) to study possible biases on model parameters from using primary and secondary CR data of very different accuracy. In this study, mock data are used to characterise the bias on reconstructed model parameters when accounting for cross-section uncertainties, and more importantly, to assess how well nuisance parameters on cross sections allow one to recover unbiased values of these parameters.
To be able to test various model hypotheses at the level required by the data, the model calculation must be performed at a precision much higher than the data uncertainty. The diffusion equation is a second-order differential equation in space and momentum, and we discuss here the impact of energy boundary conditions, along with other effects that could prevent reaching the desired precision.
Except for the numerical precision, studying how to best deal with data and cross-section uncertainties is mostly independent of the propagation model and of the specific quantity studied. For practical purposes, our methodology is illustrated on the B/C ratio, which is one of the most frequently discussed quantities in the CR literature (e.g. Maurin et al. 2001).
This is a methodology paper, and the full analysis of AMS-02 data and the interpretation of the model parameters are left for the companion paper (Génolini et al. 2019). The paper is organised as follows: Sect. 2 briefly introduces the 1D model and its parameters, and details the two configurations used for the analysis. Section 3 discusses how to ensure a good precision at various stages of the calculation: boundary conditions, numerical stability, and comparison of the model to the data. Section 4 shows two different parametrisations of nuisance parameters for nuclear cross sections, and mock data are used to fully characterise the impact of cross-section uncertainties. In Sect. 5, we detail the systematics of AMS-02 data – from which a covariance matrix of errors is built – and we show the impact of the correlation length (of the covariance matrix) on the best-fit results and errors on the model parameters. Cross-section and covariance matrix effects are then dealt with together in Sect. 6, to assess which one dominantly impacts the analysis of AMS-02 B/C data. We summarise our findings and give some recommendations in Sect. 7.
For the sake of readability, some more technical and detailed discussions are deferred to the appendices: Appendix A shows that the AMS-02 B/C data conversion from R to E_{k/n} as presented in Aguilar et al. (2016) induces an energy-dependent bias (∼3%); Appendix B details how the covariance matrix of errors and nuisance parameters are included in the χ^{2} analysis; Appendix C gathers several boundary coefficients that can be implemented for the numerical solution of the discretised second-order differential diffusion equation; Appendix D presents a thorough analysis of the stability of the latter solution (for the 1D model), also determining a stability criterion for the Crank–Nicolson solution with a vanishing second-order term (V_{a} → 0); Appendix E shows in more detail how specific cross-section reactions impact the calculated B/C ratio.
2. Model and parameters
Many modelling choices are possible for propagation, with different geometries, more or less involved spatial dependences, possible time dependence, etc. For simplicity, and because most of our results and conclusions should not depend on this modelling, we use throughout the paper a 1D diffusion model, as implemented in the USINE package (Maurin 2018)^{1}. The equations and solutions can be found for instance in Putze et al. (2011), and for conciseness, we do not repeat them here. Such a model also proves useful because it can be compared, in some simple cases, to compact analytical solutions (see, e.g. Appendix D).
2.1. 1D propagation model
In this model, sources and gas are confined to a thin disc of half-height h in which energy losses and reacceleration occur. Particles diffuse in a homogeneous region of height L above and below the disc. The free parameters of the model that we vary in this study are: the rigidity dependence, with R = pc/(Ze), of the homogeneous and isotropic spatial diffusion coefficient K(R), see Sect. 2.2; the Alfvénic speed V_{a} of the scatterers, which mediates the strength of the diffusion in momentum K_{pp}(R), the latter being related to the spatial diffusion coefficient and depending on the geometry of the turbulence (Schlickeiser 2002); and the strength V_{c} of a constant Galactic wind perpendicular to the thin disc.
Other model parameters (halo size L, source spectrum) are irrelevant for this study. For instance, changing L would merely lead to a rescaling of the above parameters (e.g. Putze et al. 2011), so we fix it to 10 kpc for practical purposes (and we fix h to 100 pc). The B/C value is insensitive to the value of the universal spectral index of sources (e.g. Maurin et al. 2002; Génolini et al. 2015), so it is fixed in the analysis.
2.2. Parameters for Model A and Model B
The impact of various uncertainties on transport parameters depends on the exact number and nature of the free parameters considered. We rely on two configurations, which correspond to two extreme cases of a more generic parametrisation of the diffusion coefficient, further detailed in the companion paper. They are denoted Model A and Model B in the rest of the paper.
Model A is a diffusion-convection-reacceleration model with

K(R) = β^{η_t} K_{0} (R/1 GV)^{δ} × K_{HE}(R),    (1)

where η_{t} allows for a sub-relativistic change of the diffusion coefficient as parametrised in Maurin et al. (2010), and where K_{HE}(R) = [1 + (R/R_{h})^{δ_h/s_h}]^{−s_h} is a high-energy break whose parameter values are taken from Génolini et al. (2017). This configuration has five free transport parameters: K_{0}, δ, η_{t}, V_{a}, and V_{c}.
Model B is a pure diffusion model (no V_{c}, no V_{a}) with a double broken power law, at both high and low energy:

K(R) = K_{0} (R/1 GV)^{δ} × [1 + (R_{l}/R)^{(δ−δ_l)/s_l}]^{s_l} × K_{HE}(R).    (2)

This configuration has four free parameters: K_{0}, δ, R_{l}, and δ_{l}. The smoothness parameter s_{l} has only a minor impact on the results, so it is fixed to 0.05 (quick transition) to speed up the fitting procedure.
Following Seo & Ptuskin (1994), we take, for both models, the diffusion coefficient in momentum

K_{pp}(R) = (4/3) V_{a}^{2} p^{2} / [δ (4 − δ^{2})(4 − δ) K(R)].    (3)
3. Model precision: general considerations
The requirements for any analysis are (i) that the model calculation can be performed at a precision much better than that of the data which will constrain it, and (ii) that the model and the data being compared refer to exactly the same quantity.
We focus on the model calculation in this section, but at the data level, biases can also sometimes be introduced, as exemplified in Appendix A with the conversion of B/C data from R to E_{k/n} proposed in Aguilar et al. (2016).
3.1. Energy boundary conditions and numerical stability
The transport equation of CRs (e.g. Maurin 2018) is a second-order differential equation in space and in momentum (if reacceleration is present), whose solution depends on the boundary conditions. These conditions should in principle be fixed by physics requirements, but there is no consensus as to what they are. We underline that spatial and momentum boundary conditions are generally not dealt with on the same footing: while the former are usually clearly indicated in publications and are recognised as being part of the properties of a model, this is not the case for the latter (which are discussed below).
The transport equation can be cast into a generic conservation equation on the energy coordinate x = ln E_{k/n}:

α u + ∂J_{E}/∂x = u^{0},  with  J_{E} = β u − γ ∂u/∂x,    (4)

where u is the cosmic-ray differential density in energy, u^{0} a source term, J_{E} a current (convection and diffusion), and α, β, and γ coefficients related to energy losses, diffusion in momentum, etc.^{2}
Boundary conditions. At very high energy, for nuclei, the timescale of energy losses and gains becomes very large compared to the escape time, suggesting the condition u(x_{max}) = u^{0}(x_{max}), which is seemingly used in all propagation codes.
At low energy, different boundary conditions have been used in the literature, corresponding to different physical situations: (i) no curvature in the spectrum, as introduced in Donato et al. (2001) for antiprotons: ∂^{2}u/∂x^{2}|_{x_min} = 0; (ii) no energy flow, that is, a vanishing current J_{E}(x_{min}) = 0. Physically this means that, at x_{min}, the outward current from energy losses exactly balances the inward reacceleration current. This condition thus depends on the coefficients α, β, and γ of the equation. From Eq. (4), one can infer that for very small values of γ, this will create a strong gradient of u to maintain J_{E} = 0; (iii) no density gradient in momentum, ∂f/∂p|_{x_min} = 0 (Evoli et al. 2017), which translates into ∂/∂p [u/(pE)]|_{E_min} = 0.
Numerical solution and precision. In the semi-analytical models implemented in USINE and used here, as in most propagation codes, the second-order differential equation in energy is solved numerically on a grid. In practice, we make use of the semi-implicit Crank–Nicolson method, and the resolution proceeds via the inversion of a tridiagonal matrix (see Appendix C). The first and last energy bins are set by the boundary conditions, as reported in Table C.1.
The precision of the numerical inversion depends on the number of points on the grid. We find that the boundary conditions at low energy described above yield similar results, provided that the number of energy bins is large enough and that the lower energy E_{min} is far below the range of interest. However, they differ with respect to the minimum number of energy bins required to reach the same level of precision. The B/C ratios obtained from each of these conditions are compared in Fig. 1 to a reference ratio obtained setting E_{min} = 1 MeV n^{−1} and using 5000 points per energy decade. We checked that the reference ratio does not depend, at the per mille level, on the chosen boundary condition. The red, blue, and green curves are obtained for the first, second, and third boundary conditions at low energy, respectively. In the top panel, we change the number of points in the energy grid (10, 50, and 100 bins per decade), whereas in the bottom panel we move the low-energy bound E_{min} from 100 MeV n^{−1} down to 1 MeV n^{−1}. We obtain similar results for each low-energy boundary condition, provided that the number of points per energy decade is larger than 10 and E_{min} is much lower than the energy range of interest. This is probably the reason why no boundary condition at low energy is specified in the numerical codes GALPROP (Strong et al. 2011, p. 35) and PICARD (Kissmann 2014, p. 5). However, the choice of the boundary condition is important for the convergence. We find that the first condition (no curvature in the spectrum) converges better than the others and yields a precision at the percent level even using only 10 bins per decade when starting from E_{min} = 1 MeV n^{−1}. We therefore recommend using the no-curvature boundary condition at low energy, with E_{min} = 1 MeV n^{−1} and 50 bins per decade, to ensure numerical systematics below the percent level.
Fig. 1. Relative difference for the B/C computed with different boundary conditions at low energy, varying the number of energy bins per decade (top panel) and the minimal energy of the grid (bottom panel). The reference B/C has been computed using 5000 energy bins per decade and setting E_{min} = 1 MeV n^{−1}.
Finally, this method can be unstable for γ → 0, in which case Eq. (4) boils down to a first-order differential equation. A simple prescription to obtain the solution while keeping the same solver is to enforce a non-null but small γ, that is, a non-vanishing yet small V_{a}. We checked that setting a lower limit on the Alfvénic speed, dependent on the spacing Δx of the energy grid, stabilises the V_{a} = 0 solution at the per mille level.
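The tridiagonal inversion at the heart of the Crank–Nicolson step can be done with the standard Thomas algorithm. The sketch below is a textbook version of that routine (illustrative, not the USINE implementation):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm).
    a[0] and c[-1] are unused.  Assumes the system is well conditioned
    (e.g. diagonally dominant, as for a Crank-Nicolson matrix)."""
    n = len(d)
    cp = np.zeros(n)  # modified super-diagonal
    dp = np.zeros(n)  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy 3x3 system: [[4,1,0],[1,4,1],[0,1,4]] x = [5,6,5]
x = thomas_solve(np.array([0.0, 1.0, 1.0]), np.array([4.0, 4.0, 4.0]),
                 np.array([1.0, 1.0, 0.0]), np.array([5.0, 6.0, 5.0]))
```

The first and last rows of the matrix are where the boundary-condition coefficients of Table C.1 would enter.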
3.2. “Model vs. data” error using R_{mean} or a bin-range estimate
To our knowledge, models are always calculated at a single point and then compared to data measured over a bin range. This can lead to a systematic error that we quantify here.
Meaning of R_{mean} in data. All data measurements are based on a number of events in the detector (corrected for the acceptance, efficiencies, etc.) per unit area, solid angle, time, and energy bin. To be concrete, let us assume that, like AMS-02, the experiment provides data per rigidity bin [R_{min}, R_{max}]^{3}. Usually, for display purposes, the central bin value reported by experiments is the geometric or, more rarely, the arithmetic mean:

⟨R⟩_{geo} = √(R_{min} R_{max})  or  ⟨R⟩_{arith} = (R_{min} + R_{max})/2.    (5)
Neither of these means is satisfactory to represent the measured flux F, because the correct central bin value should be calculated from the condition

F(R_{mean}) = 1/(R_{max} − R_{min}) ∫_{R_min}^{R_max} F(R) dR.    (6)
This is a well-known issue (Lafferty & Wyatt 1995): providing the exact R_{mean} from the data requires knowledge of the spectral shape of the flux that the experiment is actually trying to measure!
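For a pure power law, the condition above can be inverted in closed form. The sketch below (illustrative helper names, not from the paper) computes the two display means together with the Lafferty–Wyatt rigidity at which a power-law flux equals its bin average:

```python
import numpy as np

def bin_means(r_min, r_max):
    """Geometric and arithmetic bin means commonly reported by experiments."""
    return np.sqrt(r_min * r_max), 0.5 * (r_min + r_max)

def lafferty_wyatt_mean(r_min, r_max, alpha):
    """Rigidity at which a power-law flux F = F0 * R**(-alpha) equals its
    exact bin average (Lafferty & Wyatt 1995).  Assumes alpha not in {0, 1}."""
    avg = ((r_max ** (1 - alpha) - r_min ** (1 - alpha))
           / ((1 - alpha) * (r_max - r_min)))   # <R^-alpha> over the bin
    return avg ** (-1.0 / alpha)
```

One can verify that for a spectral index α = 2 the Lafferty–Wyatt mean coincides with the geometric mean, consistent with the exactness of ⟨R⟩_geo for that slope discussed below.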
F(⟨R⟩) vs. ⟨F⟩ for fluxes and ratios. In the context of minimisation studies, the above discussion should be irrelevant as the theoretical flux is known: it can be integrated over the bin range and compared with the data without approximation. However, standard practice in the literature is to fit the data using F(⟨R⟩). We quantify below the relative difference between the exact and approximate calculation for a flux,

ℰ_{F} = (⟨F⟩ − F(⟨R⟩)) / ⟨F⟩,  with  ⟨F⟩ = 1/(R_{max} − R_{min}) ∫_{R_min}^{R_max} F(R) dR.    (7)
For practical calculations, we assume power-law fluxes F = F_{0}R^{−α}, so that the normalisation F_{0} cancels in Eq. (7). We can also estimate the relative difference for B/C, taking the ratio of two power-law fluxes of indices α_{num} and α_{den}:

ℰ_{R} = 1 − [F_{num}(⟨R⟩)/F_{den}(⟨R⟩)] / [⟨F_{num}⟩/⟨F_{den}⟩].    (8)
Model error on fluxes. We show in Fig. 2 the modulus ℰ_{F} as a function of α. The different line styles correspond to different bin widths, and the larger the bin width, the larger ℰ_{F}. Also, ⟨R⟩_{geo} is a better approximation than ⟨R⟩_{arith}, except for α ∼ 0 (thick dim vs. thin light grey lines). F(⟨R⟩_{geo}) is exact for α = 0 and α = 2, and the relative difference is positive above α = 2 and negative below α = 2. To guide the eye, the green dash-dotted line shows the typical 3% uncertainty on the AMS-02 data: for all values of α and ΔR above this line, using F(⟨R⟩) in model calculations produces a bias larger than 3%. We do not study fluxes here, but primary and secondary fluxes have slopes of ∼2.8−3.1, for which the bias is maximal. Nevertheless, the maximal rigidity bin width in AMS-02 data for the proton and He fluxes is R_{max}/R_{min} ≈ 1.6, for which the approximation error is smaller than the data uncertainty.
Fig. 2. Absolute value ℰ_{F} of the relative difference between the exact and approximate flux calculation in a bin, Eq. (7), as a function of the flux spectral index α. Two mean rigidity definitions are compared, ⟨R⟩_{geo} in dim grey and ⟨R⟩_{arith} in light grey, see Eq. (5), for different bin size ranges (from 1 decade, solid line, down to 0.1 decade, dotted line). To guide the eye, the green dash-dotted line shows when the relative difference is 3%. See text for discussion.
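The behaviour described above can be reproduced with a few lines. The sketch below (our notation: a power-law flux F ∝ R^{−α}, error defined as exact-minus-approximate relative to the exact bin average) evaluates the bias from using the geometric or arithmetic mean rigidity:

```python
import numpy as np

def rel_diff_flux(alpha, r_min, r_max, mean="geo"):
    """Relative difference 1 - F(R_mean)/<F> for a power-law flux
    F = R**(-alpha), with <F> the exact average over [r_min, r_max].
    Sign convention (exact minus approximate) is an assumption here."""
    if mean == "geo":
        r_mean = np.sqrt(r_min * r_max)
    else:
        r_mean = 0.5 * (r_min + r_max)
    if abs(alpha - 1.0) < 1e-12:   # special case: integral of 1/R
        f_avg = np.log(r_max / r_min) / (r_max - r_min)
    else:
        f_avg = ((r_max ** (1 - alpha) - r_min ** (1 - alpha))
                 / ((1 - alpha) * (r_max - r_min)))
    return 1.0 - r_mean ** (-alpha) / f_avg
```

For a one-decade bin, this reproduces the exactness of the geometric mean at α = 0 and α = 2, and the sign change of the bias across α = 2.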
Model error on ratios. We repeat the analysis for the ratio of two power laws of slopes α_{num} and α_{den}, see Eq. (8). Figure 3 shows colour-coded values of log_{10}(ℰ_{R}). The x-axis is α_{den} − α_{num}: for the B/C ratio, this difference ranges from ∼0.2 to 0.7 at high energy, while negative values mimic the decreasing ratio below a few GV. We show our results for three α_{den} values, where the lower value (top panels) mimics the flattening of the carbon spectrum at low energy, while the higher value (bottom panels) corresponds to the high-energy slope: ℰ_{R} grows with increasing α_{den} − α_{num} and with the data bin size (y-axis); the three rows show a growing relative difference (from top to bottom) with α_{den}. Also, as already observed for fluxes, the approximation ⟨R⟩_{geo} is always better than ⟨R⟩_{arith}: for any given x − y position in Fig. 3, ℰ_{R} is always smaller in the left panel than in the right panel. To guide the eye, the green dashed line delimits the contour for which ℰ_{R} ≡ 3%, with larger uncertainties in reddish regions and smaller uncertainties in blueish ones.
Fig. 3. Colour-coded relative difference log_{10}(ℰ_{R}) between the exact and approximate ratio calculation in a bin, Eq. (8), as a function of the numerator and denominator spectral difference, α_{den} − α_{num} (x-axis), and as a function of the rigidity bin width, log_{10}(R_{max}/R_{min}) (y-axis). For instance, a value of 2 (red) corresponds to a relative difference of 100%, and a value of −1 (blue) to a relative difference of 0.1%. The left and right panels correspond to calculations using ⟨R⟩_{geo} and ⟨R⟩_{arith} respectively, see Eq. (5). The rows correspond to three different values of the denominator spectral index α_{den}. In each panel, the dash-dotted green contour delimits regions in which the relative difference is above or below 3%. See text for discussion.
For B/C from AMS-02 data, the bin range size goes from 0.08 decade at low rigidity to 0.3 decade for the last few rigidity bins where the systematic uncertainty reaches 10% (Aguilar et al. 2016, 2018b)^{4}. From Fig. 3, this translates into negligible values of ℰ_{R} for almost all rigidities. However, for the highest rigidity bins, the model error from using ⟨R⟩_{geo} is systematic (same sign) and reaches a maximum of −3% for α_{den} − α_{num} ≳ 0.7. In a region where power-law breaks are usually fitted, the exact calculation is recommended^{5}.
4. Handling crosssection uncertainties
Nuclear cross sections are measured by “external” experiments, and these measurements can be incorporated as a probability distribution in the χ^{2} minimisation via nuisance parameters (see Appendix B): cross sections far from their most probable values must be penalised in the minimisation, see Eq. (B.6).
The difficulty lies in characterising the uncertainties, choosing the nuisance parameters, and assessing the robustness of the procedure. We start by characterising the impact of cross-section uncertainties on the B/C ratio (Sect. 4.1). We then present two different strategies for the choice of nuisance parameters (Sect. 4.2). To assess the success of these two strategies, we have to rely on the analysis of mock data for many configurations (Sect. 4.3). We then discuss how well these configurations capture and propagate all cross-section uncertainties to the transport parameter level (Sect. 4.4).
4.1. Quantifying the impact on B/C ratio
Cross-section data uncertainties are typically at the ∼5−10% level for inelastic cross sections, and at the 15−25% level for production cross sections (Génolini et al. 2018). However, because the data are sometimes scarce, old, not always consistent with one another, and sometimes even missing for some reactions, several parametrisations of the whole network of reactions exist. A conservative estimate of the impact of cross-section uncertainties on the B/C calculation can be based on the scatter observed between several of these parametrisations (see Génolini et al. 2018 for more details).
For inelastic cross sections, σ_{inel}, we use below B94 (Barashenkov & Polanski 1994), W96 (Wellisch & Axen 1996), T99 (Tripathi et al. 1996, 1999), and W03 (Webber et al. 2003). Except for T99, the scaling σ_{He}/σ_{H} is taken from Ferrando et al. (1988). For production cross sections, σ_{prod}, we use W98 (Webber et al. 1998a,b,c), S01^{6}, W03 (Webber et al. 2003), and G17 (Moskalenko et al. 2001; Moskalenko & Mashnik 2003).
For a given set of propagation parameters, we calculate the B/C ratio for various parametrisations, and we plot in Fig. 4 the relative variation with respect to a reference T99 for σ_{inel} and W03 for σ_{prod}. The maximum impact of inelastic cross sections is ≲3%^{7} at ∼5 GV (left panel). It slowly decreases to zero at higher R, because the escape time from the Galaxy decreases with R while the destruction time remains constant (see, e.g. Fig. D.1). The maximum impact of the production cross sections is ≲10%, and it is equally seen at low and high rigidity (right panel) because the Boron flux is directly related to its production cross section. These results are similar for Models A and B (not shown) and are independent of the solar modulation level (compare the black and grey lines in Fig. 4).
Fig. 4. Impact of different cross-section parametrisations on the B/C calculation as a function of rigidity, with respect to a reference (denoted Ref. in the legend). The left and right panels are for inelastic and production cross sections, respectively (see Sect. 4.1). The thick (resp. thin) lines are for the interstellar (resp. solar-modulated at ϕ_{ForceField} = 800 MV) calculation. The red vertical line highlights the first rigidity point of AMS-02 B/C data.
Actually, a huge network of reactions is involved in the calculation of any given secondary cosmic ray, making the modelling of uncertainties for each individual reaction a daunting task. Obviously, this full network is taken into account to calculate B/C, but we model and incorporate cross-section uncertainties for the most relevant reactions only: for production, ^{16}O and ^{12}C make up ∼70% of the boron flux via ^{10, 11}B (Génolini et al. 2018), and for inelastic interactions, ^{16}O, ^{12}C, and ^{11}B are the most relevant. The results of Appendix E show that the variations seen in the B/C ratio from cross-section uncertainties almost completely originate from these dominant reactions.
4.2. From uncertainties to nuisance parameters
To estimate uncertainties on selected reactions, one could use a parametric formula to fit the cross-section data and extract the best-fit parameters and uncertainties to propagate as nuisance parameters. This is the strategy followed by Reinert & Winkler (2018) for production cross sections. However, as noted above, the reliability of the data is not always clear, with many inconsistent data points from a mixture of very old and more recent experiments with probably underestimated systematics (Génolini et al. 2018). All these unknowns most certainly break the statistical meaning of the χ^{2} values and uncertainty determination of these nuclear data. We assume here that the cross-section uncertainties are fully captured by existing parametrisations (Sect. 4.1)^{8}. It is difficult to argue which of the two above approaches gives the most realistic description. Indeed, the values and uncertainties of the cross-section data and models could be fully, partly, or not at all correlated, which would increase or decrease the uncertainty on the calculated B/C ratio (see the discussion in Génolini et al. 2018). Without new cross-section data, the degree of belief one can have in the modelling of cross-section uncertainties can hardly be improved.
Our approach probably allows for larger uncertainties than those taken in Tomassetti (2017) and Reinert & Winkler (2018), which also focus on a subset of reactions only. However, we recall that the uncertainties on this subset must ‘emulate’ the total uncertainties from the whole network of reactions (see the discussion in Appendix E). To go beyond qualitative arguments, we inspect in Sect. 4.3 whether the degrees of freedom used to model cross-section uncertainties are conservative enough not to bias the determination of the transport parameters. Before doing so, the next two paragraphs discuss two ways to model cross-section uncertainties as nuisance parameters.
4.2.1. Normalisation, scale, and slope (NSS)
Technically, how should nuisance parameters be chosen so that they enable one to move from one parametrisation to another? The parametrisations are shown (solid lines) in Fig. 5, with σ^{inel} (resp. σ^{prod}) in the left (resp. right) panels, and there is no obvious scaling formula between the curves. A possibility is to start from a reference cross section and apply several simple (uncorrelated, though non-commutative) transformations:

Norm.: σ(E) → 𝒩 σ(E),    (9)
Scale: σ(E) → σ(𝒮 E),    (10)
Slope: σ(E) → σ(E) (E/E_{ref})^{𝒮_l},    (11)

with 𝒩 a normalisation factor, 𝒮 an energy-scale factor, 𝒮_{l} a low-energy slope, and E_{ref} a reference energy.
Fig. 5. Models (as listed in Sect. 4.1) and range of cross sections (quantiles corresponding to the median, 68%, and 95% CLs) generated from a reference cross section biased by Gaussian-distributed nuisance parameters (as gathered in Table 1). The reactions shown correspond to the dominant ones discussed in Sect. 4.1, for inelastic (left column) and production (right column) cross sections.
Table 1. Values of μ and σ for the Gaussian-distributed nuisance parameters of the cross sections listed.
This set of transformations is denoted NSS in the following. It is our first option to generate nuisance parameters for cross-section uncertainties. To better visualise how the NSS transformations change the cross sections, we draw 1000 values for each of the three uncorrelated NSS parameters (Norm., Scale, and Slope), and then show in Fig. 6 the median, 1σ, and 2σ contours from the associated 1000 realisations of σ^{NSS}/σ^{ref}. Whereas the normalisation and low-energy slope changes are independent of the reference cross section (left and right panels), the energy-scale bias depends strongly on it (middle panel). Indeed, the reference in Fig. 6 is an inelastic cross section whose energy dependence has a low-energy peak, a dip, and a second smaller peak (see Fig. 5): shifted to the right-hand side and divided by the unscaled one, it gives a series of three bumps.
Fig. 6. Median, 1σ, and 2σ contours for the distribution of σ^{NSS}/σ_{ref}(E_{k/n}) values, from 1000 Gaussian-distributed normalisation (9), energy-scale (10), and slope (11) values with respective means and variances (μ, σ)^{Norm} = (1, 0.05), (μ, σ)^{Scale} = (1, 0.2), and (μ, σ)^{Slope} = (0, 0.02).
For each reaction, the NSS nuisance parameters are chosen so that σ^{NSS}/σ^{ref} ± 1σ – calculated from Gaussian-distributed samples of (μ, σ)^{Norm, Scale, Slope} – encompasses the various cross-section parametrisations. This is shown in Fig. 5 for inelastic (left panels) and production (right panels) cross sections, with grey lines showing the median (dashed), 1σ (dotted), and 2σ (dash-dotted) envelopes of σ^{NSS}/σ^{ref}; solid coloured lines are the literature parametrisations. The corresponding NSS parameters are gathered in Table 1, and they serve as nuisance parameters in the analysis below^{9}. To keep as few nuisance parameters as possible (so as not to slow down the minimisation procedure too much), only a normalisation and an energy scale are applied to inelastic cross sections, whereas a normalisation and a slope suffice to capture the range covered by the production cross-section parametrisations.
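The NSS sampling procedure can be sketched as follows (a toy reference cross-section shape; the slope pivot energy and the interpretation of the scale as a multiplicative energy shift are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reference cross section (mb) on an energy grid (GeV/n): a smooth
# bump, standing in for a tabulated parametrisation.
e = np.logspace(-1, 2, 60)
sigma_ref = 200.0 * (1.0 + 0.5 * np.exp(-(np.log(e) - 0.5) ** 2))

# 1000 Gaussian draws of the three NSS nuisance parameters
# (means/widths as quoted for Fig. 6)
norm = rng.normal(1.0, 0.05, 1000)    # normalisation
scale = rng.normal(1.0, 0.20, 1000)   # energy-scale factor
slope = rng.normal(0.0, 0.02, 1000)   # slope tilt (pivot e[0] assumed)

real = np.empty((1000, e.size))
for k in range(1000):
    # energy-scale bias: evaluate the reference at shifted energies
    # (np.interp clamps outside the grid, acceptable for a sketch)
    shifted = np.interp(e * scale[k], e, sigma_ref)
    real[k] = norm[k] * shifted * (e / e[0]) ** slope[k]

# Median and 68% envelope of sigma_NSS / sigma_ref, as in Fig. 6
med, q_lo, q_hi = np.percentile(real / sigma_ref, [50, 16, 84], axis=0)
```

The quantiles `med`, `q_lo`, `q_hi` are the analogues of the dashed and dotted envelopes shown in Fig. 5.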
4.2.2. Linear combination (LC)
Our second and more straightforward option is to define cross sections as a linear combination of the available cross-section parametrisations,

σ^{LC}(E) = Σ_{i} C_{i} σ_{i}(E),    (12)
where the index i runs over the parametrisations shown in Fig. 5. The sum of the C_{i} coefficients must be close to 1, so that we naturally recover each parametrisation when only one C_{i} is non-null. This LC allows us to combine the different shapes (energy dependences) of the various cross sections, which is key for the determination of the transport parameters (see next section). To allow for possible normalisation systematics, we apply a loose constraint on the sum of the C_{i} coefficients,

Σ_{i} C_{i} ∼ 𝒢(μ_{C}, σ_{C}).    (13)
To be able to compare this approach to the NSS approach above, we set (μ_{C}, σ_{C})^{inel} = (1, 0.04) and (μ_{C}, σ_{C})^{prod} = (1, 0.15) to match the spread set on the normalisation parameters in Table 1. The constraint (13) is accounted for as a penalty in the minimisation, that is, an additional term in the χ^{2},

χ^{2}_{LC} = [(Σ_{i} C_{i} − μ_{C}) / σ_{C}]^{2}.    (14)
The C_{i} parameters are taken to be flat in [−0.5, 1.5] and are forbidden to wander outside this range.
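The LC mechanics reduce to a weighted sum of tabulated parametrisations plus a Gaussian penalty on the sum of the weights. A minimal sketch (illustrative names, not the USINE interface):

```python
import numpy as np

def sigma_lc(coeffs, sigma_tables):
    """Cross section as a linear combination of tabulated parametrisations:
    sigma_LC(E) = sum_i C_i * sigma_i(E).  sigma_tables has shape
    (n_parametrisations, n_energies)."""
    return np.tensordot(coeffs, sigma_tables, axes=1)

def chi2_lc_penalty(coeffs, mu_c, sigma_c):
    """Gaussian penalty on the sum of the C_i coefficients:
    ((sum_i C_i - mu_C) / sigma_C)**2, added to the chi2."""
    return ((np.sum(coeffs) - mu_c) / sigma_c) ** 2

# Toy example with two parametrisations on a 2-point energy grid
tables = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
combo = sigma_lc(np.array([0.5, 0.5]), tables)   # midway between the two
penalty = chi2_lc_penalty([0.5, 0.5], mu_c=1.0, sigma_c=0.04)
```

During the fit, the coefficients would additionally be clipped to the flat prior range [−0.5, 1.5] mentioned above.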
4.3. Mock data: generation and configurations
We are almost ready to address some important questions related to cross-section uncertainties. How do they propagate to transport parameter uncertainties? Can we recover the true values of the transport parameters when using “wrong” values for the cross sections? The only way to answer these questions is to analyse controlled data, whose input ingredients and parameters are known.
4.3.1. Mock data generation
To generate simulated data as close as possible to real data, we proceed as follows. Firstly, select the model (e.g. the 1D model here) and its input ingredients (e.g. a specific cross-section dataset). Secondly, select a dataset to fit (e.g. B/C AMS02 data) and perform the fit. Thirdly, use the best-fit model to simulate data close to the real data in the following way: (i) interpolate model values at the data rigidities R_{k}; (ii) at each R_{k}, draw a value g_{k} from a normal distribution 𝒢(0,1), and combine it with the data error σ_{k} to form the mock data values,

y^{mock}(R_{k}) = y^{model}(R_{k}) + g_{k} σ_{k};

(iii) repeat as many times as necessary to obtain the desired number of simulated data sets. Lastly, analyse the mock data using the same setup or varying the input ingredients (depending on what is studied, see below).
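The generation steps (i)–(iii) above can be sketched as follows; the helper name `make_mock` and the toy power-law model in the test are illustrative choices, not the USINE implementation.

```python
import numpy as np

rng = np.random.default_rng(20190716)  # fixed seed for reproducibility

def make_mock(r_data, sigma_data, model, n_sets=1000):
    """Generate mock data sets: evaluate the best-fit model at the data
    rigidities (step i), add Gaussian noise scaled by the reported data
    errors (step ii), one row per simulated data set (step iii)."""
    y_model = model(r_data)                              # (i) model at data points
    g = rng.standard_normal((n_sets, np.size(r_data)))   # (ii) draws from G(0, 1)
    return y_model + g * sigma_data                      # mock values
```

By construction, the mock values scatter around the model with exactly the data errors, so a fit with the same ingredients should return χ²_min/d.o.f. ≈ 1.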
4.3.2. Mock data configurations
We list and label the many cases considered below: in the forthcoming figures, each configuration is associated with a unique colour and line style, as summarised in Table 2.
Propagation parameters (×2)^{10}: each analysis is repeated for Model A (free parameters K_{0}, δ, η_{t}, V_{a}, and V_{c}) and Model B (free parameters K_{0}, δ, R_{l}, and δ_{l}), see Sect. 2.2. We do so because the two models have different correlations between their parameters and crosssection uncertainties may impact them differently. For instance, reacceleration smooths spectral features, and it is present in Model A only.
Cross sections to generate and fit mock data (×2): In the unbiased case, the cross sections in the propagation model used to generate and fit mock data are the same (T99 for σ^{inel} and W03 for σ^{prod}). In the biased case, mock data are still fit with T99 and W03, but they were generated from a different set of cross sections (W96 for σ^{inel} and G17 for σ^{prod})^{11}.
Type of nuisance parameters to fit mock data (×2): we either use “normalisation, scale, and slope” (NSS) nuisance parameters, that is, the (μ, σ)^{Norm, Scale, Slope} values prescribed in Table 1, or linear combination (LC) nuisance parameters, that is, C_{i} coefficients weighting the various cross-section parametrisations with normalisation uncertainties σ_{C_i} = σ^{Norm} (see previous section and Eq. (12)).
Nuisance parameters to fit mock data (×4): we have four types of runs to assess the impact of adding more and more nuisance parameters in the analysis, labelled No nuis. (transport parameters only), Inel. (free transport parameters and nuisance for σ^{inel}), Prod. (same but σ^{prod} instead), and Inel.+Prod. (combined).
4.4. Results of the mock data analysis
The analysis starts with the generation of 1000 mock data sets (see above), based on given values of the transport parameters and a given choice of cross-section parametrisations. We then perform a χ^{2} analysis on each mock data set and store the best-fit parameters and the associated χ^{2}_{min} value^{12}. By construction, χ^{2}_{min}/d.o.f. ≈ 1 for our mock data, so that from the one-dimensional distribution of the parameter values and their correlations, we can reconstruct 1σ and 2σ confidence intervals. Comparing the parameter distributions with their “true” input values also allows us to assess the success of our procedure.
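Extracting a 1σ interval from the stored best-fit values amounts to taking the central 68% of their distribution; a minimal sketch (the helper name and the Gaussian stand-in for a transport parameter are assumptions for illustration):

```python
import numpy as np

def confidence_interval(best_fits, level=0.68):
    """Interval containing `level` of the best-fit values from the mock
    analyses, e.g. the central 68% range quoted as the 1-sigma interval."""
    lo_q = 100.0 * 0.5 * (1.0 - level)   # e.g. 16th percentile
    hi_q = 100.0 * 0.5 * (1.0 + level)   # e.g. 84th percentile
    return np.percentile(best_fits, [lo_q, hi_q])
```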
4.4.1. Unbiased case: sanity check
Figure 7 shows the 1σ contours (68% confidence level) from the 2D probability distribution functions of the transport parameters, with and without nuisance parameters in the fit. We underline a few features in these plots:
Fig. 7. Reconstruction of 1σ contours (68% confidence level) from the analysis of 1000 mock data sets for Model A (top panel) and Model B (bottom panel), for the unbiased case (cross sections for the analysis are the same as those used to generate mock data). For display purposes, the 2D probability distribution functions are estimated using a Gaussian kernel (default bandwidth of gaussian_kde in the scipy python library); the irregular shapes are related to statistical fluctuations. The colour code indicates the nuisance parameters used and the line style the type of nuisance parameters (see Table 2). The “+” symbols represent true values. See text for discussion. 
Contours from statistical errors only: black solid lines are 1σ contours from the analysis without nuisance parameters. Their size and shape depend on the level and energy dependence of the data statistical error (blue line of Fig. 10). Tight correlations are seen on K_{0} and δ for both models. As expected for the unbiased analysis (i.e. same cross sections used to generate and fit mock data), the contours encompass the true value of the parameters (“+” symbols).
Fit with σ^{inel} and σ^{prod} nuisance parameters: blue lines (resp. orange lines) show the 1σ contours from the fit with σ^{inel} (resp. σ^{prod}) as nuisance parameters. The size of the contours is not too strongly impacted by the cross-section nuisance parameters, because the minimum of the χ^{2} is left unchanged (not shown) and the additional degrees of freedom provided by the nuisance parameters are “unused”. Nevertheless, the contours are deformed differently, because inelastic and production cross sections impact the B/C ratio differently – see for instance the blue (σ^{inel}) and orange (σ^{prod}) contours for K_{0} vs. R_{l} in Model B. The fit with combined nuisance parameters (green lines) gives contours that encompass both the previous ones. The fine details depend on the model (A or B) and the type of nuisance (NSS in solid and LC in dashed lines) used.
From these results alone (unbiased case), one cannot draw conclusions about the impact of cross-section uncertainties on the transport parameters. In fact, the unbiased case is just an elaborate sanity check. It confirms that, in a scenario where cross sections are perfectly known, adding cross-section uncertainties has only a marginal effect.
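The 2D distributions behind the contours of Fig. 7 are smoothed with scipy's gaussian_kde, as stated in the caption. The sketch below shows one way to extract the density level whose iso-contour encloses roughly 68% of the best-fit points; the correlated Gaussian samples stand in for the real (K_0, δ) best fits and the 32nd-percentile trick is our assumption about how such a level can be chosen, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Stand-ins for 1000 correlated best-fit values of (K_0, delta)
k0 = rng.normal(1.0, 0.10, 1000)
delta = 0.5 + 0.6 * (k0 - 1.0) + rng.normal(0.0, 0.03, 1000)

kde = gaussian_kde(np.vstack([k0, delta]))  # default (Scott) bandwidth
dens = kde(np.vstack([k0, delta]))          # density evaluated at each sample

# Density level whose iso-contour encloses ~68% of the samples
level_68 = np.percentile(dens, 32.0)
```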
4.4.2. Biased case: uncertainties and biases on transport parameters
To go further, we repeat the analysis fitting mock data with cross sections that differ from the ones used to generate them, that is, the biased case (see Table 2). We show in Fig. 8, from top to bottom, the χ^{2}_{min}/d.o.f. distribution and the 1D probability distribution functions of all transport parameters. For readability, we only show the results for the No nuis. (black lines) and Inel.+Prod. (green lines) cases. In this figure, solid (dashed) lines correspond to the NSS (LC) nuisance type.
Fig. 8. Distribution of best-fit values (χ^{2}_{min}/d.o.f., then transport parameters) from the analysis of 1000 mock data sets for Model A (left panel) and B (right panel), for the biased case (cross sections for the analysis differ from those used to generate mock data). For display purposes, 1D probability distribution functions are estimated using a Gaussian kernel. The line style and colours indicate the type and configuration of nuisance parameters used (see Table 2). The vertical dashed lines represent true values. See text for discussion. 
Impact on goodness of fit (top panels): the black lines, which correspond to a fit with the transport parameters only (No nuis.), show that using wrong cross sections can lead to χ^{2}_{min}/d.o.f. values larger than one. Taken at face value, one would conclude that the model is excluded. Adding cross-section nuisance parameters – which encompass the true cross-section values at 1σ – allows χ^{2}_{min}/d.o.f. ≈ 1 to be recovered (green lines). The LC nuisance parameters (green dashed lines) fare slightly better than the NSS ones (green solid lines): this is understood as the “true” cross-section values can be reached in the LC case, whereas they can only be approached in the NSS case.
Biased transport parameters (remaining panels): without nuisance parameters (black lines), the transport parameters are strongly biased, up to several σ away from their true value (vertical dashed line). Using nuisance parameters (black vs. green lines) has two effects: it enlarges the probability distribution function of the transport parameters, and it shifts the distribution towards the true value. Overall, the two schemes allow us to recover unbiased parameters. A mismatch is observed for the strongly correlated δ and K_{0} parameters when using NSS in Model B. The latter is particularly sensitive to any small energy-dependent difference in the cross-section values, as it directly reflects on the calculated B/C. In Model A, on the other hand, this difference can be smoothed out by reacceleration.
We finally comment on the fact that the LC case does not recover fully unbiased transport parameters. Whereas nuisance parameters enable the cross sections to match their “true” values (the ones used to generate the data), they can only do so for the selected four production cross sections and three inelastic reactions. The remaining ones differ from those used to generate the mock data. This “reaction network” effect explains the observed residual biases.
4.4.3. Biased case: posterior on nuisance parameters
We show in Fig. 9 violin plots for the nuisance parameters obtained after the fit. We only show these parameters for the Inel.+Prod. configuration, that is the analysis in which we fit mock data with transport parameters plus production and inelastic crosssection nuisance parameters (see Table 2). The rows in Fig. 9 show, for the four production and three inelastic reactions (see Sect. 4.1), the values of the associated nuisance parameters (two per reaction). It is interesting to show the results for the unbiased (resp. biased) case, in red (resp. blue), corresponding to the use of the same (resp. different) cross sections to generate and fit mock data. Let us comment on these distributions.
NSS analysis (left panels). The nuisance parameters for the NSS analysis are a normalisation and an energy-scale parameter for inelastic cross sections, and a normalisation and a low-energy slope for production cross sections (see Eqs. (9)–(11) and Table 1). The parameters are centred and normalised (to their σ value) so that, on the x-axis, unbiased parameters are expected to be centred on zero and to lie between −1 and +1 (1σ range): (i) in the Unbiased analysis (red), as expected, the distributions are centred on zero and have overall a very small width; (ii) in the Biased analysis (blue), the distributions are now offset, because the nuisance parameters are used to improve the fit. Almost all parameters fall within their 1σ value, indicating that the range of variation for these nuisance parameters was well calibrated in Sect. 4.2.
We actually tried different ranges for the nuisance parameters (not shown): with a smaller range, the “true” cross sections cannot be recovered, as the nuisance parameters would need to be several σ away from their central value, which would penalise the χ^{2}. In turn, this biases the transport parameters (discussed in the previous section): the mild bias that was observed for the NSS case (w.r.t. the LC case) is related to the fact that μ^{slope} is almost 2σ away in Fig. 9.
LC analysis (right panels). The nuisance parameters for the LC analysis are the coefficients C_{i}^{j} (see Eq. (12)), where the index i runs over the cross-section parametrisations enabled in the analysis, and j runs over the various reactions considered. These coefficients typically vary from 0 (if the parametrisation is unused) to 1 (if the cross section is dominant in the LC). In principle, we should have as many coefficients as available cross-section parametrisations. However, the inspection of Fig. 4 shows that some parametrisations are, up to a normalisation, very close to others. As a result, when more than two parametrisations are considered as nuisance (not shown), there are several possible combinations that reproduce the original cross section (used to generate the mock data): the χ^{2} function has several local minima and MINUIT has difficulties finding the true minimum^{13}. That is why we chose only two parametrisations in our analyses (as listed in Table 2); the distributions of the nuisance parameter values are shown in the right panels of Fig. 9:
In the Unbiased analysis (red), the cross sections used to generate and analyse mock data are T99 for σ^{inel} (resp. W03 for σ^{prod}), so that we should recover C_{T99} ≈ 1 (resp. C_{W03} ≈ 1) for all inelastic (resp. production) cross sections and 0 for the other C_{i}. This is what is observed for all reactions. The parameters for inelastic cross sections (three bottom rows) display overall very broad distributions: we recall that the latter only have a small impact on the B/C calculation (≲3%), so that a fit to data with similar uncertainties will not be sensitive to them. In the Biased analysis (blue), the starting cross sections to generate mock data, for all reactions in the network, are W96 for σ^{inel} and G17 for σ^{prod}, and we observe C_{W96} ≈ 1, C_{G17} ≈ 1, and C_{others} ≈ 0, as expected.
4.4.4. Conclusions on the impact of crosssection uncertainties
We have seen that assuming wrong cross sections can strongly bias the model fit, and thus bias the deduced transport parameters. Starting from the wrong cross-section values, we showed that nuisance parameters on a limited number of reactions allow us to mostly recover the true values of the transport parameters. However, the procedure is not perfect owing to “reaction network” effects, that is, the fact that we only use as nuisance a small, though representative, sample of all the reactions involved. The LC parameters fare slightly better than the NSS parameters, but this is only true because the LC parameters always contain the “true” cross sections of the analysis. In real life, we do not know what the real cross sections are, and there is no guarantee that the LC approach would still fare better than the NSS one.
We want to stress that the above procedure is not as straightforward as presented. There was, to some extent, some fine-tuning done on the range chosen for the nuisance parameters to best recover our mock data. Although we were guided by the spread between the cross sections (see Fig. 5), we had to somewhat extend the range of some parameters in successive tests. In particular, for NSS, the low-energy slope was taken larger than what was strictly required from the inspection of the spread (see Fig. 5). A posteriori, however, this made sense, because not only the normalisation but also the slope of the cross-section reaction matters in the analysis (especially for Model B), and a larger slope parameter was needed to reconcile the energy dependences of the W03 and G17 parametrisations. Even for LC, which looks less problematic, there were some issues. We already underlined the pitfalls (in terms of minimisation with MINUIT) of having too similar cross-section parametrisations in the linear combination. It was not mentioned earlier, but the allowed range set for the LC coefficients also matters: using [0, 1.5] instead of [−0.5, 1.5] affects the distribution of the nuisance parameters shown in Fig. 9, although the transport parameters are only very mildly affected.
Furthermore, the reader should keep in mind that, regarding (i) the importance of cross-section uncertainties and (ii) which reactions should be used as nuisance, the conclusions strongly depend on the data uncertainties assumed. Indeed, the above analysis was based on AMS02-like statistical uncertainties, that is, an extreme and too conservative situation for the data. This was chosen in order to demonstrate the proof of principle of our approach. Adding systematics, which are dominant over most of the energy range in AMS02 data, will obviously reduce the impact of cross-section uncertainties and make the residual biases on transport parameters less severe. Accordingly, with larger uncertainties, the number of reactions to include as nuisance also decreases: there is no gain in adding cross sections whose impact on B/C is smaller than the data uncertainties, only issues. Indeed, unnecessary reactions increase the run time of minimisations and, worse, create multiple minima that are harder to deal with. Part of these issues would be alleviated by using more advanced sampling engines, such as a Markov chain Monte Carlo (e.g. Putze & Derome 2014), but it remains better to use as few reactions as possible.
To assess the realistic impact of cross sections on the B/C analysis, we could repeat the above analysis with mock data accounting for statistical and systematic uncertainties. However, it is more interesting to illustrate how real data analysis should proceed. This is presented in Sect. 6, but before doing so, we have to discuss how to handle systematic uncertainties in the B/C analysis.
5. Handling systematics from experimental data
Almost all, if not all, CR phenomenological studies account for data uncertainties as the quadratic sum of statistical and systematic uncertainties. Doing so ignores any possible energy correlations in the systematic errors, which has important consequences for the model best-fit analysis. Consider two extreme cases, fully uncorrelated and fully correlated uncertainties: they correspond, respectively, to adding the uncertainties quadratically, or to allowing for a global normalisation of the data (or, more precisely, an energy-dependent normalisation related to the energy dependence of the uncertainty). Starting from the same uncertainties, a χ^{2} analysis of the two cases would lead to a smaller χ^{2}_{min} in the former than in the latter case, and possibly to different values for the best-fit parameters of the model.
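A toy illustration of how correlations reweigh the same residuals: a coherent one-σ offset of the model at every point costs n units of χ² with uncorrelated errors, but only about one unit when the errors are nearly fully correlated, because the offset then acts as the single normalisation shift described above (ρ = 1 itself would make the matrix singular, hence ρ = 0.99 here).

```python
import numpy as np

def chi2(residual, cov):
    """Generalised chi2: r^T C^{-1} r."""
    return float(residual @ np.linalg.solve(cov, residual))

n = 5
sigma = np.full(n, 0.1)
residual = sigma.copy()          # model offset from data by 1 sigma everywhere

cov_uncorr = np.diag(sigma**2)   # quadratic-sum case (no correlations)
rho = 0.99                       # near-fully-correlated case
cov_corr = rho * np.outer(sigma, sigma) + (1.0 - rho) * np.diag(sigma**2)

chi2_u = chi2(residual, cov_uncorr)  # = n: one unit per data point
chi2_c = chi2(residual, cov_corr)    # ~1: the shift counts once
```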
A better approach is to use the correlation matrix of errors in the χ^{2} analysis (see Appendix B). However, the AMS02 collaboration does not provide this matrix, and we have to rely on the information provided to build one. We then inspect how sensitive the analysis is to our choices.
5.1. Origin of B/C systematic errors
The errors on the B/C ratio measured by AMS02 are described in Aguilar et al. (2018b). The different contributions obtained from table VI of the Supplemental Material of Aguilar et al. (2018b) are shown in Fig. 10 as thick solid lines.
As explained in Aguilar et al. (2018b), the unfolding error (Unf.) corresponds to the contribution coming from the uncertainty on the rigidity resolution function and the unfolding procedure. The rigidity scale error (Scale) is the sum of the contribution from residual tracker misalignment and from the uncertainty on the magnetic field map measurement and its temperature timedependent correction. The acceptance error (Acc.) is the sum of different contributions: survival probability of Boron and Carbon in the detector, Boron contamination from heavier nuclei fragmentation (mainly carbon), and uncertainty on the “data/Monte Carlo” corrections to the Boron and Carbon acceptances.
5.2. Building the covariance matrix: correlation length
To properly take into account AMS02 data uncertainties, one needs to define the covariance matrices C_{α} for α = (Stat., Unf., Scale, Acc.), and minimise the χ^{2} defined by Eqs. (B.1)–(B.2). As these covariance matrices are not provided explicitly in Aguilar et al. (2018b), we start from the covariance matrices of relative errors C^{rel}_{α}, estimated from the following expression:

(C^{rel}_{α})_{ij} = σ^{rel}_{α,i} σ^{rel}_{α,j} exp( −(1/2) [log(R_{i}/R_{j}) / l^{α}_{ρ}]^{2} ),

with the ij-th element built from the relative errors σ^{rel}_{α,i} and σ^{rel}_{α,j} at rigidity bins R_{i} and R_{j}, and where the parameter l^{α}_{ρ} is the correlation length associated with the error α (in units of decades of rigidity).
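A sketch of such a covariance builder, assuming a Gaussian fall-off of the correlation in log10-rigidity over l_ρ decades (consistent with a correlation length expressed in decades of rigidity); the helper name `cov_rel` and the toy numbers in the test are illustrative.

```python
import numpy as np

def cov_rel(rigidity, sigma_rel, corr_len):
    """Relative covariance matrix for one error type: variances on the
    diagonal, with a correlation decaying as a Gaussian in
    log10(rigidity) over `corr_len` decades."""
    logr = np.log10(np.asarray(rigidity, dtype=float))
    d = logr[:, None] - logr[None, :]          # pairwise distance in decades
    corr = np.exp(-0.5 * (d / corr_len) ** 2)  # correlation matrix
    return np.outer(sigma_rel, sigma_rel) * corr
```

In the limits corr_len → 0 and corr_len → ∞, this reproduces the two extreme cases of the previous section (diagonal matrix vs. fully correlated errors).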
For this study, we build the covariance matrix of errors from these relative covariance matrices (see Appendix B for a justification), and we set the correlation lengths to the following values: (i) the Stat. error is taken as uncorrelated (vanishing correlation length), because the numbers of events in each bin are independent; (ii) the Scale error is taken as fully correlated (correlation length spanning the whole rigidity range), since the uncertainty on the rigidity scale affects all rigidities similarly; (iii) the Unf. error is given an intermediate correlation length, because errors from the unfolding procedure and from the rigidity response function affect intermediate scales. As seen in Fig. 10, this error is subdominant compared to the Stat. and Acc. errors, and we checked that the results are not affected by this choice; (iv) the value of the correlation length for the Acc. error is more critical, because this error dominates the systematic error and its correlation length cannot be easily defined. The dependence of χ^{2}_{min}/d.o.f. and of the fitted parameters on this correlation length is studied below for several values, covering the range from below the bin size (fully uncorrelated) to the full rigidity range (fully correlated).
As the acceptance error is a combination of errors which are expected to have a rather small correlation length (“data/Monte Carlo” corrections) and others which are expected to have a large correlation length (cross-section normalisation), one can try to decompose this error into different contributions with different correlation lengths. In particular, the rise of the acceptance error at low rigidity is not expected to be correlated with larger rigidities: it is related to the rapid change of the acceptance at low energy, mostly because of energy losses in the detector. One can therefore construct a better description of the covariance matrix by splitting acceptance errors into three independent parts: (i) a normalisation error, Acc. norm. (dash-dotted orange line in Fig. 10), with a large correlation length (l_{ρ} ∼ 1.0); (ii) a rise at low rigidity, Acc. LE (dotted orange line), with an intermediate correlation length (l_{ρ} ∼ 0.3); (iii) a residual error, Acc. res. (dashed orange line), defined so that the quadratic sum of the three contributions equals the full acceptance error. This last part corresponds mainly to “data/Monte Carlo” corrections and the rigidity-dependent parts of other acceptance errors. Its correlation length is not well defined and left free in the following.
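The residual part (iii) is defined by quadratic subtraction, so that the three contributions recombine into the full acceptance error; a one-line sketch (the helper name is an assumption):

```python
import numpy as np

def acc_residual(acc_total, acc_norm, acc_le):
    """Residual acceptance error (Acc. res.), defined so that the
    quadratic sum of the three parts recovers the full Acc. error."""
    return np.sqrt(acc_total**2 - acc_norm**2 - acc_le**2)
```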
Near the completion of this article, we found out that Cuoco et al. (2019) also proposed to use a correlation matrix of errors to analyse AMS02 data. However, while these authors use a single correlation length fitted to the data, our analysis relies on several correlation lengths whose values are motivated by physics processes in the AMS02 detector.
5.3. Parameter and goodnessoffit dependence on correlation length
We have built two different covariance matrices, which partly depend on an unknown correlation length. We can now perform a χ^{2} analysis with MINOS to extract robust errors on the parameters. The analysis is repeated on the two propagation model configurations A and B introduced in Sect. 2.1.
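For reference, the 1σ errors MINOS reports come from the Δχ² = 1 crossings of the profiled χ² along each parameter. A minimal grid-based sketch of that criterion (MINOS itself searches iteratively rather than on a precomputed grid):

```python
import numpy as np

def delta_chi2_interval(par_grid, chi2_profile):
    """1-sigma interval from the delta-chi2 = 1 crossings of a 1D chi2
    profile, the criterion MINOS applies along each parameter."""
    inside = par_grid[chi2_profile <= chi2_profile.min() + 1.0]
    return inside.min(), inside.max()
```

For an asymmetric profile, the two crossings give asymmetric errors, as displayed later in Fig. 12.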
The top panels of Fig. 11 show the values obtained for χ^{2}_{min}/d.o.f. and the best-fit parameters as a function of the correlation length for models A (left) and B (right), for the full acceptance error (blue circles) and the split acceptance error (orange circles). The B/C from the best-fit model along with AMS02 data are shown in the bottom panels of Fig. 11 for the same models, that is for the full (blue lines, top) and split (orange lines, bottom) acceptance errors. As expected, χ^{2}_{min}/d.o.f. strongly depends on the correlation length for both models. The best-fit parameters are stable (i.e. they fluctuate within the errors estimated from the fit) for small and large correlation lengths, but undergo a rapid jump at intermediate values for model A when one uses the full acceptance error description. These features are problematic, since they mean that the best-fit parameters are very sensitive to the choice of correlation length. In addition, with the full acceptance error, the best fit obtained for model A at large correlation length does not pass through the data points, as featured in the upper-left plot of the bottom panels in Fig. 11. This is explained as follows: with a large correlation, the cost on the χ^{2} of a global deviation between data and model is moderate and thus accepted, and would correspond to a global bias in the measured B/C. Although correct from a mathematical standpoint, this interpretation is disputable given our crude modelling of the systematic errors. This unwanted behaviour is absent when we use the split acceptance error modelling.
Fig. 9. Nuisance parameters from the analysis of 1000 mock data for Model A (top panels) and Model B (bottom panels). The nuisance parameters are shown reaction by reaction (rows: production then inelastic cross sections), each reaction having two nuisance parameters (see list in Table 2). The number in parenthesis beside the reaction corresponds to its overall impact on the B/C calculation, as reported in Table 1. We show for the NSS analysis (left panels) normalised and centred parameters, whereas LC analysis (right panels) are shown between zero and one. We use violin plots to highlight the probability density of the parameter (yaxis) at different values (xaxis), and our violin plots include a marker for the median (black “+” symbol) along with the 1σ range of the parameter (thick black line). In order to directly compare the distribution of the nuisance parameters for the unbiased and biased analyses, we superimpose them on the same line in red and blue respectively. See text for discussion. 
Fig. 10. AMS02 errors for B/C data. Solid lines correspond to the errors provided in Aguilar et al. (2018b), namely statistical, acceptance, scale, and unfolding (the steplike evolution is artificial and related to the rounding of the values provided in the table). The orange lines correspond to a further split of the acceptance errors: normalisation (norm.), low energy (LE), and residual (res.). See text for details. 
Fig. 11. Top panels: values obtained for χ^{2}_{min}/d.o.f. and the fit parameters (and uncertainties) as a function of the correlation length for propagation models A (left) and B (right), for the full acceptance error (blue) and the split acceptance error (orange). In the case of the split acceptance error, the correlation length shown corresponds to that of the Acc. res. contribution only. Bottom panels: comparison of the best-fit B/C (lines) and AMS02 data (symbols) for the above models and configurations, i.e. full (blue lines, top) vs. split (orange lines, bottom) acceptance errors. Only a sample of correlation lengths is shown (0.015, 0.1, and 1 decade). 
From the above results, we conclude that the best way to handle the systematic errors is to use the split acceptance errors approach. Indeed, not only does it provide a more realistic description of the acceptance systematic error, but it also leads to more stable results w.r.t. the values taken for the correlation length. In this approach, an intermediate correlation length is a reasonable choice, giving χ^{2}_{min}/d.o.f. ∼ 1 and conservative errors for the fit parameters.
6. Joint impact of crosssection uncertainties and data systematics
The previous section provides us with a realistic treatment of the data errors. In this light, we can revisit the impact of cross-section uncertainties on the best-fit parameters and errors (Sect. 4.2). Indeed, we showed in the most challenging case (statistical errors only in the data) that NSS and LC nuisance parameters enable us to recover the correct transport parameters when starting from the wrong cross sections, whereas systematic errors actually dominate over a wide dynamical range (see Fig. 10). Nevertheless, because neither the NSS nor the LC approach is perfect (see discussion in Sect. 4.4.4), it is important to test both to ensure, as a minimal consistency check, that consistent values of the transport parameters (within their uncertainties) are obtained in both approaches. For this purpose, we analyse the “real” B/C data using the covariance matrix of errors (see previous section), with both the NSS and LC approaches, and varying the number of reactions used as nuisance parameters. This allows us to validate our strategy for the actual data, as used in the companion paper (Génolini et al. 2019) for the B/C physics analysis. It also exemplifies how an analysis should be carried out in the methodology we propose.
Our results are presented in Fig. 12, in which we display the best-fit parameters and errors of Model A (top) and Model B (bottom) for an increasing number of reactions used as nuisance (from left to right). The four production cross sections are introduced in order of importance with respect to their contribution to the secondary boron. We then introduce the inelastic cross sections of ^{11}B; we have checked that the impact of other inelastic cross sections is negligible. We recall that we can start from different parametrisations of the full network of cross sections: for production cross sections, we either start from G17 or W03, whereas we only consider T99 for inelastic ones (their impact is negligible, see below). Several comments are in order about these results.
Fig. 12. Evolution of the best-fit parameters and asymmetric 1σ errors (extracted from MINOS), increasing the number of nuclear cross-section reactions used as nuisance parameters (from left to right on the x-axis); Model A (top) and B (bottom) are shown. For every additional reaction as nuisance, four different fits are performed: starting either with the G17 or the W03 parametrisation for the production cross sections, and using either LC or NSS for the implementation of the nuisance parameters. The black dashed line in the χ^{2}_{min}/d.o.f. panel indicates the goodness-of-fit threshold. See text for discussion. 
Firstly, for both Model A and B, even without nuisance parameters (“no nuisance” on the x-axis), the best-fit parameters for the two cross-section cases considered (W03 and G17) are consistent within errors. This means that the covariance matrix mitigates most of the errors coming from using the wrong cross sections (see discussion in Sect. 4.3). This would not have been the case (not shown) had we simply added the statistical and systematic uncertainties in quadrature. This further demonstrates the importance of having the correct covariance matrix of errors. It also emphasises the fact that energy correlations on intermediate scales (one decade) in data uncertainties could relax, to some extent, the need for very accurate cross sections.
Secondly, when adding nuisance parameters (NSS or LC), the consistency between the parameters is improved, in particular for δ (in both models A and B). With more degrees of freedom, this translates into better χ^{2}_{min} values and larger uncertainties on the parameters. The latter is only significant for Model B; the lack of increase in Model A probably comes from the fact that it has many degenerate “low-energy” degrees of freedom (V_{a}, V_{c}, η_{t}) which already make the uncertainties maximal. We stress that the improvement depends on the initial set of cross sections and the method used, but overall, for this specific analysis, in which the covariance matrix of errors dominates the error budget, only a few reactions need to be taken into account.
Thirdly, focusing on the fit quality (χ^{2}_{min} values), we see that G17 systematically gives a better fit than W03. In the LC approach, the production cross sections initially set to W03 also migrate towards G17. This is probably not too surprising, as G17 is expected to better match existing cross-section data, whereas W03 is based on a global semi-empirical fit to these same data (see Sect. 4 and Génolini et al. 2018).
Based on these results, the recommendation for the B/C analysis is to start from the G17 production cross sections and to use the NSS nuisance parameters. The latter allow for more freedom than the LC ones, so that slight improvements are still possible when including several cross-section reactions as nuisance (though these improvements are not statistically very important in this case). These recommendations are followed to define benchmark models in our companion paper (Génolini et al. 2019), in which the values of the parameters shown in Fig. 12 are discussed and interpreted.
7. Recommendations and conclusions
Faced with the challenges of interpreting cosmicray data of unprecedented accuracy, we have refined the methodology to properly account for all uncertainties (model, ingredient, and data) in the fit to the data. The proposed methodology was exemplified on the analysis of the AMS02 B/C ratio.
The first step was to ensure a model precision higher than the data accuracy: we inspected in detail the numerical stability of the model and the impact of energy boundary conditions. Some low-energy boundary conditions fare better than others, but setting any of them at MeV energies ensures a good precision of the model calculation above a few hundred MeV n^{−1}. When using a Crank–Nicolson approach to solve the second-order differential diffusion equation in energy, we checked its precision and provided a criterion to numerically converge to the correct solution when V_{a} → 0 (i.e. no reacceleration, for which the equation becomes first order). We also quantified the systematics induced by taking a point estimate of a flux or ratio instead of the correct calculation, in which the model is integrated over the energy bin. While the discussion is partly specific to the model used and the species inspected, our considerations are generic. Obviously, the precision tests should always be repeated and compared to the data uncertainty for other species, models, and data sets.
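The last point can be illustrated with a short Python sketch (our own illustration, not USINE code): for a power-law flux, the point estimate at the geometric bin centre differs from the bin-averaged flux at the few-percent level for wide bins.

```python
import math

def bin_average_powerlaw(e_lo, e_hi, gamma, n_steps=1000):
    """Average of a power-law flux E**(-gamma) over [e_lo, e_hi],
    computed by trapezoidal integration in log-energy."""
    xs = [math.log(e_lo) + (math.log(e_hi) - math.log(e_lo)) * i / n_steps
          for i in range(n_steps + 1)]
    # integrand in x = ln E: f(E) dE = E**(1-gamma) dx
    ys = [math.exp((1.0 - gamma) * x) for x in xs]
    dx = xs[1] - xs[0]
    integral = dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return integral / (e_hi - e_lo)

def point_estimate_bias(e_lo, e_hi, gamma):
    """Relative difference between the flux evaluated at the geometric
    bin centre and the bin-averaged flux."""
    e_c = math.sqrt(e_lo * e_hi)
    return e_c ** (-gamma) / bin_average_powerlaw(e_lo, e_hi, gamma) - 1.0
```

For a spectral index of 2.7 and a bin spanning a factor of two in energy, the point estimate undershoots the bin average by a few percent, comparable to AMS-02 error bars.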
The second step was to handle cross-section uncertainties properly. We detailed the impact of the most important reactions on the B/C ratio (≲3% for inelastic cross sections and ∼10% for production cross sections). We then proposed an approach to account for these uncertainties via Gaussian-distributed nuisance parameters, based either on a combination of Normalisation, Scale, and low-energy Slope cross-section modifications (NSS), or on linear combinations (LC) of existing parametrisations. We validated this choice on simulated data, showing that the degrees of freedom enabled by these nuisance parameters allow one to recover the true parameters (when starting from a set of cross sections different from the one used to generate the simulated data). Simulated data also show that, starting from the wrong cross-section values, valid propagation models would be excluded on a statistical criterion (χ_{min}^{2}/d.o.f. ≫ 1). The nuisance parameters we propose also cure this problem.
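To fix ideas, a toy version of an NSS-like cross-section modification can be sketched as follows. The functional form, the threshold `e_thresh`, and the way the slope is applied are our guesses at the spirit of the scheme, not USINE's exact implementation.

```python
def nss_modify(sigma, e, norm=1.0, scale=1.0, slope=0.0, e_thresh=2.0):
    """Toy NSS modification of a cross section sigma(E) [E in GeV/n]:
    a global Normalisation 'norm', an energy-Scale factor 'scale',
    and an extra low-energy Slope applied below e_thresh.
    Illustrative form only (not USINE's exact parametrisation)."""
    val = norm * sigma(e * scale)
    if e < e_thresh:
        # tilt the low-energy part without touching the high-energy one
        val *= (e / e_thresh) ** slope
    return val
```

In a fit, (norm, scale, slope) would be nuisance parameters with Gaussian priors centred on (1, 1, 0).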
The third step was to handle data uncertainties as best as possible. We accounted for possible energy correlations in the AMS-02 data via a covariance matrix of errors. As the AMS-02 collaboration does not provide such a matrix, we proposed a best-guess model based on the information available in Aguilar et al. (2018b) and its supplemental material. The crucial parameters are the correlation lengths associated with the various systematics, which correlate the energy bins more or less strongly. The dominant effect comes from the acceptance systematics, and we showed an unphysical dependence of the transport parameters on the acceptance correlation length. After a more careful inspection, we argued that the acceptance systematics is actually a mix of several systematics with very different correlation lengths: splitting it into three parts stabilised the dependence of the transport parameters on the correlation length. To fully solve this issue, the publication, along with the data, of the correlation matrix by the AMS-02 collaboration is necessary. This is likely a very difficult task, and while waiting for its completion, further information on the various systematics, with finer splits and an indication of each systematic's correlation length, would already be extremely useful. Indeed, not only can the correlation length bias the transport parameters fit to the data, but it also has a huge impact on the statistical interpretation of the model inspected: depending on the correlation length assumed, we can either conclude that the data are perfectly fit (χ_{min}^{2}/d.o.f. ≈ 1) or exclude the model (χ_{min}^{2}/d.o.f. ≫ 1).
The fourth and last step was to consider a realistic analysis, applying the method developed to handle cross-section uncertainties (second step) together with the full data uncertainties, that is, accounting for the covariance matrix of errors (third step). Because of energy correlations in the systematics, the impact of cross-section uncertainties can be lessened. In the analysis of the AMS-02 B/C data, the impact of data systematics was found to dominate over that of cross-section uncertainties. In any case, for any analysis, we recommend implementing the dominant nuclear reactions as nuisance parameters, checking the results against various choices of production cross sections and the two possible strategies for the nuisance parameters (NSS or LC).
Our methodology can be used for any CR species, but the most important cross sections and their uncertainties depend on the species (e.g. Génolini et al. 2018), so that specific nuisance parameters need to be adapted for both the NSS and LC methods. Moreover, data uncertainties generally have different origins for different species, with different sub-detectors and selection cuts involved. For these reasons, the conclusion as to which effect dominates (cross sections or data systematics) can differ from one species to another, and it should be carefully re-examined for each species and data set considered.
Further results based on this methodology are presented in two companion papers: Génolini et al. (2019) present the interpretation of the B/C ratio data and constraints on the transport parameters. Boudaud et al. (2019) use these transport parameters and their uncertainties to calculate the astrophysical flux of antiprotons. We emphasise that all the results presented here and in the companion papers are based on USINE v3.5 (Maurin 2018), available at https://lpsc.in2p3.fr/usine^{14}.
For instance, in the case of the 1D diffusion model described in Sect. 2.1, the above coefficients are given by Putze et al. (2010): α(x) = 2h/(E_{k}𝒜), β(x) = b_{losses}(E), γ(x) = β^{2}K_{pp}/E_{k}, and u^{0}(x) = 2hq/𝒜, with 𝒜 = 2h nvσ_{inel} + V_{c} + KS coth (SL/2), where S depends on V_{c}, K, and the rate of radioactive decay (S = V_{c}/K for stable species).
Same dataset as in Webber et al. (2003), but fitted by Aimé Soutoul (priv. comm.).
We wrongly reported a 10% impact in Génolini et al. (2018) because of an error in the inelastic cross section on He for two parametrisations (the faulty files have been corrected in USINE v3.5).
These parametrisations are based on fits to the same inhomogeneous sets of cross-section data, but the authors used different approaches and assumptions. For instance, the GALPROP parametrisation is renormalised to data whenever available, whereas the other parametrisations are based on semi-empirical formulae designed to give a good fit over all reactions. The former is expected to better represent the data, but by construction it sometimes shows non-physical energy dependences (step-like behaviour), whereas the other parametrisations do not.
We emphasise that cross-section values for all reactions, not just the dominant ones (see Sect. 4.1), are taken from the other parametrisations.
Statistical uncertainties only (taken from real data) are used in our analysis, in order to disentangle the issue of cross-section uncertainties from the more involved data uncertainties (discussed in Sect. 5).
The following definition, taken from Maurin et al. (2001), is similar to that of Eq. (3) since K_{EE} = β^{2}K_{pp}, albeit with a slightly larger coefficient of 2/9 ∼ 0.22 instead of 0.17.
The source term has been set proportional to a power law (see the caption of Fig. D.2) to grossly match the proton data. Notice that we have not performed a fit, since Fig. D.2 is meant as an illustration of how the various solutions u^{0}, u_{loss}, and u behave with kinetic energy E_{k}.
Acknowledgments
We thank our “cosmic-ray” colleagues at LAPTh, LAPP, LUPM, and Universidade de São Paulo, and in particular P. Serpico for stimulating discussions and feedback that helped us clarify and improve the presentation of the methodology presented in this paper. This work has been supported by the “Investissements d’avenir, Labex ENIGMASS” and by the French ANR, Project DMAstroLHC, ANR-12-BS05-0006. The work of Y.G. is supported by the IISN, the FNRS-FRS and a ULB ARC. The work of M.B. is supported by the European Research Council (ERC) under the EU Seventh Framework Programme (FP7/2007-2013)/ERC Starting Grant (Agreement No 278234 – “NewDark” project).
References
Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2011, Science, 332, 69
Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2014, ApJ, 791, 93
Aguilar, M., Alberti, G., Alpat, B., et al. 2013, Phys. Rev. Lett., 110, 141102
Aguilar, M., Aisa, D., Alpat, B., et al. 2014a, Phys. Rev. Lett., 113, 221102
Aguilar, M., Aisa, D., Alvino, A., et al. 2014b, Phys. Rev. Lett., 113, 121102
Aguilar, M., Aisa, D., Alpat, B., et al. 2015a, Phys. Rev. Lett., 114, 171103
Aguilar, M., Aisa, D., Alpat, B., et al. 2015b, Phys. Rev. Lett., 115, 211101
Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2016, Phys. Rev. Lett., 117, 231102
Aguilar, M., Ali Cavasonza, L., Alpat, B., et al. 2018a, Phys. Rev. Lett., 121, 051101
Aguilar, M., Ali Cavasonza, L., Ambrosi, G., et al. 2018b, Phys. Rev. Lett., 120, 021101
Barashenkov, V. S., & Polanski, A. 1994, Electronic Guide for Nuclear Cross Sections, Tech. Rep. E2-94417 (Dubna: Comm. JINR)
Blobel, V. 2006, Statistical Inference Problems in High Energy Physics (06w5054), Banff International Research Station
Boudaud, M., Génolini, Y., Derome, L., et al. 2019, ArXiv e-prints [arXiv:1906.07119]
Coste, B., Derome, L., Maurin, D., & Putze, A. 2012, A&A, 539, A88
Cuoco, A., Heisig, J., Klamt, L., Korsmeier, M., & Krämer, M. 2019, Phys. Rev. D, 99, 103014
D’Agostini, G. 1994, Nucl. Instrum. Methods Phys. Res. A, 346, 306
Donato, F., Maurin, D., Salati, P., et al. 2001, ApJ, 563, 172
Evoli, C., Gaggero, D., Vittino, A., et al. 2017, JCAP, 2, 015
Ferrando, P., Webber, W. R., Goret, P., et al. 1988, Phys. Rev. C, 37, 1490
Génolini, Y., Boudaud, M., Batista, P. I., et al. 2019, Phys. Rev. D, 99, 123028
Génolini, Y., Maurin, D., Moskalenko, I. V., & Unger, M. 2018, Phys. Rev. C, 98, 034611
Génolini, Y., Putze, A., Salati, P., & Serpico, P. D. 2015, A&A, 580, A9
Génolini, Y., Serpico, P. D., Boudaud, M., et al. 2017, Phys. Rev. Lett., 119, 241101
Giesen, G., Boudaud, M., Génolini, Y., et al. 2015, JCAP, 9, 23
James, F., & Roos, M. 1975, Comput. Phys. Commun., 10, 343
Kissmann, R. 2014, Astropart. Phys., 55, 37
Lafferty, G. D., & Wyatt, T. R. 1995, Nucl. Instrum. Methods Phys. Res. A, 355, 541
LeVeque, R. J. 1998, SIAM, 585, 112
Maurin, D. 2018, ArXiv e-prints [arXiv:1807.02968]
Maurin, D., Donato, F., Taillet, R., & Salati, P. 2001, ApJ, 555, 585
Maurin, D., Taillet, R., & Donato, F. 2002, A&A, 394, 1039
Maurin, D., Putze, A., & Derome, L. 2010, A&A, 516, A67
Maurin, D., Melot, F., & Taillet, R. 2014, A&A, 569, A32
Moskalenko, I. V., Mashnik, S. G., & Strong, A. W. 2001, ICRC, 5, 1836
Moskalenko, I. V., & Mashnik, S. G. 2003, ICRC, 4, 1969
Putze, A., & Derome, L. 2014, Phys. Dark Universe, 5, 29
Putze, A., Derome, L., & Maurin, D. 2010, A&A, 516, A66
Putze, A., Maurin, D., & Donato, F. 2011, A&A, 526, A101
Reinert, A., & Winkler, M. W. 2018, JCAP, 1, 055
Schlickeiser, R. 2002, Cosmic Ray Astrophysics (Berlin, Heidelberg: Springer-Verlag)
Seo, E. S., & Ptuskin, V. S. 1994, ApJ, 431, 705
Stone, E. C., Cummings, A. C., McDonald, F. B., et al. 2013, Science, 341, 150
Strong, A. W., Moskalenko, I. V., Porter, T. A., et al. 2011, http://galprop.stanford.edu/download/manuals/galprop_explanatory_supplement.pdf
Tomassetti, N. 2017, Phys. Rev. D, 96, 103005
Tripathi, R. K., Cucinotta, F. A., & Wilson, J. W. 1996, Nucl. Instrum. Methods Phys. Res. B, 117, 347
Tripathi, R. K., Cucinotta, F. A., & Wilson, J. W. 1999, Nucl. Instrum. Methods Phys. Res. B, 155, 349
Webber, W. R., Kish, J. C., Rockstroh, J. M., et al. 1998a, ApJ, 508, 940
Webber, W. R., Kish, J. C., Rockstroh, J. M., et al. 1998b, ApJ, 508, 949
Webber, W. R., Soutoul, A., Kish, J. C., & Rockstroh, J. M. 2003, ApJS, 144, 153
Webber, W. R., Soutoul, A., Kish, J. C., et al. 1998c, Phys. Rev. C, 58, 3539
Wellisch, H. P., & Axen, D. 1996, Phys. Rev. C, 54, 1329
Appendix A: Systematics from R to E_{k/n} approximate conversion
CR data are mostly published and analysed as a function of kinetic energy per nucleon (see, e.g. the data collected in the Cosmic-Ray Data Base^{15}, Maurin et al. 2014). The latter quantity is conserved in nuclear reactions (in the straight-ahead approximation), and propagation codes usually solve the transport equation in E_{k/n}. However, it is not the quantity CR detectors measure; for instance, hadronic calorimeters provide the total energy, whereas spectrometers like AMS-02 provide the rigidity. Conversion from one energy unit to another is only exact if the nucleus (m, A, Z) is identified. For elements, unless the isotopic content is known, the conversion is approximate.
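For a fully identified nucleus, the exact conversion follows from p = ZR and E_{tot} = √(p² + m²). A minimal Python sketch (our illustration, not USINE code; the atomic mass unit value is approximate and isotope masses are taken as A times that unit):

```python
import math

M_U = 0.9315  # atomic mass unit in GeV/c^2 (approximate)

def rigidity_to_ekn(rigidity_gv, z, a):
    """Exact R -> E_k/n conversion for an identified nucleus of charge z
    and mass number a, assuming m = a * M_U."""
    p = z * rigidity_gv                  # momentum in GeV/c
    m = a * M_U                          # nucleus mass in GeV/c^2
    e_tot = math.sqrt(p * p + m * m)
    return (e_tot - m) / a               # kinetic energy per nucleon

def ekn_to_rigidity(ekn, z, a):
    """Inverse conversion, E_k/n -> R."""
    e_tot = a * (ekn + M_U)
    p = math.sqrt(e_tot * e_tot - (a * M_U) ** 2)
    return p / z

def mean_mass_number(mass_numbers, fractions):
    """Flux-weighted <A> of an element from its isotopic fractions."""
    tot = sum(fractions)
    return sum(a * f for a, f in zip(mass_numbers, fractions)) / tot
```

For an element, replacing A by a constant ⟨A⟩ in these formulae is only approximate, since the isotopic fractions entering `mean_mass_number` vary with rigidity.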
The uncertainty introduced by energy unit conversions was neglected in the past, when other uncertainties were dominant, but this is no longer possible for modern data. For instance, the conversion from R to E_{k/n} in an experimental context where only elemental fluxes are measured is discussed by the PAMELA collaboration in Appendix B of Adriani et al. (2014), and by the AMS collaboration for the B/C ratio in the Supplemental Material of Aguilar et al. (2016). As the common practice in the field remains to fit data as a function of E_{k/n}, we argue below that this is not a good procedure.
To convert B/C data from R to E_{k/n}, the AMS-02 collaboration relies on an average mass number ⟨A⟩
of 12 for carbon and 10.7 for boron. Figure A.1 compares these values to the theoretical calculation from a model reproducing the B/C data. In the top panel, the varying ⟨A⟩_{C} of the model results from the fact that carbon contains mostly primary ^{12}C and a small fraction of secondary ^{13}C (≲15% at ≲1 GV, steadily vanishing at higher rigidities). For boron (bottom panel), both ^{10}B and ^{11}B are of secondary origin, but their rigidity evolution involves two subtle effects: (i) ≈15% of ^{10}B comes from the decay of ^{10}Be (Génolini et al. 2018), and as the effective lifetime increases with energy, the fraction of ^{10}B with respect to ^{11}B decreases with rigidity; (ii) ^{10}B has a larger fraction originating from two-step reactions (w.r.t. direct “one-step” production) than ^{11}B has (see Table 1 and Fig. 3 of Génolini et al. 2018), and as two-step reactions have a steeper rigidity dependence than one-step ones, the fraction of ^{10}B again decreases with rigidity (w.r.t. that of ^{11}B).
Fig. A.1. Average mass number for carbon (top) and boron (bottom) as a function of rigidity. The dashed blue lines correspond to the value calculated from the model (with the isotopic content known), whereas the solid black lines correspond to the choice made in Aguilar et al. (2016). See text for details.
In Fig. A.2, we show the impact of these two choices when converting B/C from R to E_{k/n}. The top panel shows the residuals of the B/C data vs. R (w.r.t. our best-fit model), to give a visual reference for the difference between model and data. The bottom panel shows residuals of B/C vs. E_{k/n} for different conversions (w.r.t. our best-fit model converted without approximation). The black curve shows the conversion bias when assuming a constant ⟨A⟩ for boron and carbon, with part of the bias coming from assuming ⟨A⟩_{B} = 10.7 (red dotted line) and part from assuming ⟨A⟩_{C} = 12 (blue dashed line): the bias is positive and largest at the lowest E_{k/n} (∼3%), null at 1 GeV n^{−1}, then negative, reaching ∼2% at ∼4 GeV n^{−1} before decreasing. The pattern of the AMS-02 data (converted using the same approximation) with respect to the solid black line is similar to the one seen in the top panel, indicating that the origin of the discrepancy with the exact model calculation is the incorrect assumption made for ⟨A⟩.
Fig. A.2. Top panel: B/C residual (vs. R) calculated from the difference between the AMS-02 data and our reference model (best-fit model). Bottom panel: B/C residual (vs. E_{k/n}) calculated from the difference between various R-to-E_{k/n}-converted B/C (red, blue, and black lines) and the “exact” B/C (reference model converted with the correct isotopic content). The AMS-02 data vs. E_{k/n} are taken from Aguilar et al. (2016) and result from the same conversion as done for the solid black line. See text for discussion.
We can summarise the above discussion as follows: if we fit a model to the AMS-02 B/C data as a function of rigidity, this model will be offset from the converted AMS-02 B/C data as a function of E_{k/n}. The offset is not a simple scaling; it is energy dependent, because the isotopic content of the B and C elements is energy dependent (in a non-trivial way). Whereas the maximum bias is “only” ∼3%, this is already significant compared to other AMS-02 uncertainties. Moreover, in the AMS-02 publication, the uncertainty associated with the conversion is estimated to range from 1% (lowest energy) to 4% (highest energy), which does not reflect our results. For all these reasons, we conclude that AMS-02 data, and in general all data, should be fitted in their native energy scale, in order to avoid an unnecessary bias introduced by converting the data to another energy scale. We stress that the USINE package allows one to fit any combination of data in their native energy scales.
Appendix B: χ^{2} with covariance or nuisance
To characterise the impact of the uncertainties on the model parameters, we rely on the χ^{2} analysis implemented in USINE and described in Maurin (2018), using the MINUIT package (James & Roos 1975) for minimisation. In particular, the MINOS option in MINUIT makes it possible to reliably reconstruct asymmetric error bars on the parameters, taking into account both parameter correlations and non-linearities. The generic form of the χ^{2} we use is
χ^{2} = Σ_{t} Σ_{q} 𝒟_{cov}^{t,q} + Σ 𝒩 ,   (B.1)
where we loop over all time periods t (corresponding to different modulation levels) and all quantities q selected in the minimisation, and where the last sum runs over the nuisance parameters. The quantities 𝒟_{cov} and 𝒩 are detailed below.
Covariance. The quadratic distance 𝒟_{cov} measures the distance between the data and the model, accounting for a covariance matrix 𝒞^{16},
𝒟_{cov} = Σ_{i,j=1}^{n_E} (data_{i} − model_{i}) (𝒞^{−1})_{ij} (data_{j} − model_{j}) ,   (B.2)
which correlates the ij energy bins (n_{E} bins in total). Without correlations, 𝒞 is diagonal (systematic errors σ_{k} on the data), and we recover the standard expression
𝒟_{cov} = Σ_{k=1}^{n_E} (data_{k} − model_{k})^{2} / σ_{k}^{2} .   (B.3)
Covariance from relative errors. As discussed in Sect. 5, we built for each AMS-02 systematic α the covariance matrix of relative errors 𝒞_{rel}^{α}. The latter can be related to the covariance 𝒞_{α} required for Eq. (B.2) in two different ways:
(𝒞_{data}^{α})_{ij} = (𝒞_{rel}^{α})_{ij} × data_{i} × data_{j} ,   (B.4)
(𝒞_{model}^{α})_{ij} = (𝒞_{rel}^{α})_{ij} × model_{i} × model_{j} .   (B.5)
While using 𝒞_{data} may seem more natural, Blobel (2006) showed that if an overall normalisation factor is present in the data, it should be included in the fit via a factor in the model, not in the data; otherwise, the reconstructed model parameters are biased (see also D’Agostini 1994). For this reason we decided to use 𝒞_{model} in our analysis. We note, however, that a global normalisation factor corresponds to a situation in which the correlation length is infinite, which is not the case for the data we consider (see Sect. 5). To ensure that using either Eq. (B.4) or Eq. (B.5) does not affect our conclusions, we fitted 10 000 mock B/C data sets under these two assumptions, and checked that (i) the input model parameters were recovered in both cases, and (ii) up to the level of precision reached, potential biases were much smaller than the 1σ uncertainties on the reconstructed parameters.
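As an illustration, a possible construction of these matrices can be sketched as follows. The Gaussian decorrelation in log-energy, with a correlation length expressed in decades, follows the spirit of Sect. 5; the exact functional form used in the analysis may differ.

```python
import math

def relative_covariance(rel_err, energies, corr_length):
    """Covariance matrix of relative errors for one systematic:
    (C_rel)_ij = s_i s_j exp(-(log10(E_i/E_j))^2 / (2 l^2)),
    with s_i the relative error in bin i and l the correlation
    length in decades of energy (illustrative form)."""
    n = len(energies)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = math.log10(energies[i] / energies[j])
            c[i][j] = (rel_err[i] * rel_err[j]
                       * math.exp(-d * d / (2.0 * corr_length ** 2)))
    return c

def scale_covariance(c_rel, reference):
    """Turn the relative covariance into an absolute one, using either
    the data (Eq. B.4) or the model (Eq. B.5) as reference values."""
    n = len(reference)
    return [[c_rel[i][j] * reference[i] * reference[j] for j in range(n)]
            for i in range(n)]
```

A large `corr_length` mimics a global normalisation error (fully correlated bins), while a small one decouples the bins.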
Nuisance parameters. Nuisance parameters are parameters contingent on the analysis performed, but whose values can affect its result. An example is given by CR cross sections, which are instrumental for the model calculation, but whose values and uncertainties were determined by “external” experiments. Nuisance parameters can appear at various levels of the modelling: (i) global nuisance parameters 𝒩 related to the model and thus independent of the data (e.g. cross sections); (ii) time-dependent nuisance parameters 𝒩^{t} (e.g. the modulation parameter for a specific data-taking period); (iii) data-dependent nuisance parameters 𝒩^{t,q} (e.g. systematic errors on the data, as an alternative to using a covariance matrix).
In principle, any probability distribution function is possible, its form being determined by the auxiliary experiment. However, in USINE, only Gaussian-distributed nuisance parameters are enabled, so that each adds to the χ^{2} of Eq. (B.1) a contribution
𝒩 = (y − μ)^{2} / σ^{2} ,
where μ and σ^{2} are the mean and variance of the parameter, and y the tested value.
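The full χ² of this appendix can be sketched numerically as follows (a minimal numpy illustration of the covariance and nuisance terms, not the USINE/MINUIT implementation):

```python
import numpy as np

def chi2_cov(data, model, cov):
    """Quadratic distance r^T C^-1 r with r = data - model; reduces to
    sum(r_k^2 / sigma_k^2) when C is diagonal."""
    r = np.asarray(data, dtype=float) - np.asarray(model, dtype=float)
    return float(r @ np.linalg.solve(np.asarray(cov, dtype=float), r))

def chi2_nuisance(values, means, sigmas):
    """Gaussian penalty ((y - mu)/sigma)^2 summed over the
    nuisance parameters (e.g. cross-section modifications)."""
    return float(sum(((y - m) / s) ** 2
                     for y, m, s in zip(values, means, sigmas)))

def chi2_total(data, model, cov, nuis_values, nuis_means, nuis_sigmas):
    """Covariance term plus nuisance penalties."""
    return (chi2_cov(data, model, cov)
            + chi2_nuisance(nuis_values, nuis_means, nuis_sigmas))
```

In a real fit, `model` and `nuis_values` would be re-evaluated by the minimiser at each step.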
Appendix C: Coefficients for boundary conditions
The discretisation of Eq. (4) on a grid over x ≡ ln E_{k/n} gives, at each node k, a three-point relation between u_{k − 1}, u_{k}, and u_{k + 1}, with the current J_{E} built from the energy loss and energy diffusion terms. This system can readily be written as a matrix equation, Eq. (C.3), where 𝕄 is a tridiagonal matrix whose coefficients (a_{k}, b_{k}, c_{k}) are set by the discretisation scheme.
Solving this system requires fixing the boundary conditions; several possibilities are tested in Appendix D.
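Since 𝕄 is tridiagonal, the system can be inverted in O(K) operations with the standard Thomas algorithm. A generic sketch follows (the actual coefficients a_k, b_k, c_k depend on the discretisation scheme and are not reproduced here):

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for M u = d, where M has sub-diagonal a
    (a[0] unused), diagonal b, and super-diagonal c (c[-1] unused).
    Boundary conditions enter only through the first and last rows."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for k in range(1, n):
        denom = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / denom if k < n - 1 else 0.0
        dp[k] = (d[k] - a[k] * dp[k - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for k in range(n - 2, -1, -1):  # back substitution
        u[k] = dp[k] - cp[k] * u[k + 1]
    return u
```

The forward sweep assumes the matrix is diagonally dominant (no pivoting), which is usually the case for diffusion-type discretisations.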
As regards the low-energy boundary at x_{min} ≡ x_{0}, we list several suitable prescriptions and report the corresponding b_{0} and c_{0} in the upper half of Table C.1: (i) no energy flow (L1) means J_{E} = 0 at x_{0}, translating into a relation between u_{0} and u_{1} on the defined grid; (ii) no curvature in the spectrum (L2 and L3) is obtained by using the prescription of LeVeque (1998) for a second-order accurate method, ∂^{2}u/∂x^{2} = 0 at x_{0}, which implies that u_{−1} = 2_{ }u_{0} − u_{1}; we compute the coefficients b_{0} and c_{0} for a first-order (L2) and second-order (L3) accurate method – the former was used in Donato et al. (2001); (iii) no phase-space density gradient (L4) means that the phase-space distribution f is flat at CR momentum p = 0. We implement this condition at x_{0}: requiring that ∂f/∂p vanishes translates into a condition on u, which can be rewritten in terms of x = ln E_{k}.
Matrix coefficients for several boundary conditions of the transport equation.
Discretising this condition, we obtain, according to LeVeque (1998), a second-order accurate method. Injecting it into the differentiation scheme lets us define b_{0} and c_{0}.
At high energy (i.e. at the highest point x_{max} ≡ x_{K} of the grid), several conditions can be implemented along the same lines. The resulting coefficients a_{K} and b_{K} are listed in the lower half of Table C.1: (i) no energy flow (H1), i.e. J_{E} = 0 at x_{K}, translating into a relation between u_{K − 1} and u_{K}; (ii) no curvature in the spectrum (H2 and H3), for which we require ∂^{2}u/∂x^{2} = 0 at x_{K}, implying u_{K + 1} = 2_{ }u_{K} − u_{K − 1}; (iii) no energy losses nor diffusive reacceleration (H4): as discussed in Appendix D, the CR density u is given by u^{0} at high energy, insofar as energy losses and diffusive reacceleration do not play any role in this regime. We require that u(x_{K}) = u^{0}(x_{K}), hence the conditions a_{K} = 0 and b_{K} = 1 of Table C.1.
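As an illustration of how such prescriptions are folded into the matrix, eliminating the ghost point implied by the zero-curvature condition from a generic interior stencil gives the modified first- and last-row coefficients (a sketch of the elimination step only; Table C.1 lists the actual values for each prescription):

```python
def fold_no_curvature_low(a_int, b_int, c_int):
    """Eliminate the ghost point u_{-1} = 2 u_0 - u_1 (zero-curvature
    boundary) from the interior stencil a*u_{k-1} + b*u_k + c*u_{k+1},
    yielding the first-row coefficients (b_0, c_0)."""
    return (b_int + 2.0 * a_int, c_int - a_int)

def fold_no_curvature_high(a_int, b_int, c_int):
    """Same at the high-energy end, with ghost point
    u_{K+1} = 2 u_K - u_{K-1}, yielding (a_K, b_K)."""
    return (a_int - c_int, b_int + 2.0 * c_int)
```

The H4 prescription is even simpler: the last row is replaced by (a_K, b_K) = (0, 1) with u^0(x_K) on the right-hand side.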
Appendix D: Stability of the numerical solution
The numerical solution of Eq. (4) might exhibit instabilities when diffusive reacceleration vanishes with the Alfvénic speed V_{a}. To explore the onset of these instabilities and to remedy them, we consider the same 1D geometry as in Sect. 2.1, that is, a thin disc of half-thickness h and a magnetic halo of half-thickness L. We focus on Model A, presented in Sect. 2.2, that is, a standard diffusion/convection/reacceleration transport model. For definiteness, all figures presented in this appendix are based on the following values of the transport parameters: K_{0} = 0.030 kpc^{2} Myr^{−1}, δ = 0.65, η_{t} = −0.49, L = 10 kpc, V_{c} = 15.1 km s^{−1}, and V_{a} = 74.6 km s^{−1}. These values are among those that give a good fit to the B/C ratio and as such are sufficient to illustrate our discussion.
D.1. Simplified 1D model and solutions
Transport equation. The CR density in space and energy, u(z, E), fulfils the transport equation Eq. (D.1), where z is the vertical coordinate. The source term q_{acc} denotes the rate at which CRs are injected. CRs diffuse with a spatial diffusion coefficient K that follows Eq. (1), are convected away from the disc at velocity V_{c}, suffer energy losses at a rate b(E) < 0 (ionisation, Coulomb friction, and adiabatic expansion in the wind), or gain energy via an energy diffusion coefficient^{17}. CR injection, as well as energy losses and energy diffusion, is localised in the disc.
From now on, the Galactic disc is treated in the infinitely thin limit, and all nuclear interactions with the interstellar medium (ISM) are discarded. Requiring that the CR density vanishes at the boundaries z = ±L of the magnetic halo allows one to straightforwardly express u(z, E) as a function of its value inside the Galactic disc, u(0, E).
Timescales. Transport in the magnetic halo, energy losses, and diffusive reacceleration are the three processes at play in the transport of CR nuclei. To determine which of these processes dominates and in which energy range it prevails, we can compare the associated timescales. Galactic diffusion and convection can be combined into the residence time of CRs inside the Galactic disc, τ_{disc}^{−1} = τ_{conv}^{−1} + τ_{diff}^{−1}, where τ_{conv} = h/V_{c} and τ_{diff} = hL/K are the convection and diffusion timescales. For energy losses, we define τ_{loss} as the ratio −E_{k}/b(E), where E_{k} is the CR kinetic energy. Diffusive reacceleration occurs over a timescale τ_{reac} set by the energy diffusion coefficient K_{pp}. These timescales are plotted as a function of kinetic energy E_{k} in Fig. D.1. The behaviour is fairly generic, and the same trends appear for heavier nuclei as well as for secondary species.
Fig. D.1. Timescales associated with the various processes at play in the Galactic transport of CR protons, as a function of kinetic energy E_{k}. The diffusion τ_{diff} and convection τ_{conv} timescales are respectively featured by the long dashed and short dash-dotted black curves. They are combined into the disc residence time τ_{disc}, plotted as the solid black line. The energy loss and diffusive reacceleration timescales correspond to the solid purple and orange curves. The red solid line features the combined timescale τ_{eff} as given in Eq. (D.3).
All these processes can be combined through the effective timescale τ_{eff}, which we define in Eq. (D.3) by summing the rates associated with the individual timescales above.
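The escape timescales can be evaluated with a few lines of Python (a rough sketch: the half-thickness h = 0.1 kpc, the β factor, the omission of the η_t term, and the rate-summed combination are our assumptions here, not values quoted in this appendix):

```python
KPC_KM = 3.0857e16  # km per kpc
MYR_S = 3.156e13    # seconds per Myr

def tau_conv(h_kpc, vc_kms):
    """Convection timescale h / V_c, in Myr."""
    return h_kpc * KPC_KM / vc_kms / MYR_S

def tau_diff(h_kpc, l_kpc, k0, delta, rigidity_gv, beta=1.0):
    """Diffusion timescale h L / K, in Myr, with a simplified
    K = beta * K0 * R^delta (kpc^2/Myr); the low-rigidity
    eta_t factor of Eq. (1) is ignored here."""
    k = beta * k0 * rigidity_gv ** delta
    return h_kpc * l_kpc / k

def tau_disc(h_kpc, l_kpc, k0, delta, rigidity_gv, vc_kms, beta=1.0):
    """Disc residence time, assuming the rates add."""
    return 1.0 / (1.0 / tau_conv(h_kpc, vc_kms)
                  + 1.0 / tau_diff(h_kpc, l_kpc, k0, delta, rigidity_gv, beta))
```

With the Model A values quoted above (K_0 = 0.030 kpc² Myr⁻¹, δ = 0.65, L = 10 kpc, V_c = 15.1 km s⁻¹), convection and diffusion give comparable Myr-scale residence times around 10 GV, diffusion taking over at higher rigidity.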
High- and low-energy analytical solutions. We are only interested in the solution in the disc, u(z = 0, E) ≡ u; using the above timescales, the transport Eq. (D.1) boils down to an equation in energy alone, hereafter Eq. (D.4).
High-energy limit: space diffusion dominates over the other processes. As energy losses and diffusive reacceleration do not play any role in this regime, the solution to the CR transport Eq. (D.4) reduces to u^{0}, set by the source term and the residence time in the disc.
Low-energy limit: energy losses dominate and diffusive reacceleration can be neglected. The transport Eq. (D.4) then has an analytic solution, u_{loss}, which can be cast into the form of Eq. (D.6) by introducing a pseudo-time variable built from τ_{disc} and τ_{loss}. For the above-mentioned reasons, the analytic solutions u^{0} and u_{loss} become equal in the high-energy limit. We set them equal at the highest energy point E_{k, max} of our analysis, for which a value of 10^{3} GeV is assumed.
These solutions are used below to check the precision of the numerical calculation.
Numerical solution. The numerical solution of Eq. (D.1) is obtained as follows. First, the equation is recast into Eq. (4), the coefficients α, β, and γ of our simplified model being those of the full 1D model with nuclear interactions discarded. As explained in Appendix C, Eq. (4) can then be put on a grid in x ≡ ln E_{k/n} and solved numerically. The solution depends on the boundary conditions implemented, the impact of which is studied below.
D.2. Impact of boundary conditions
Various boundary conditions can be implemented at the lowest x_{min} ≡ x_{0} and highest x_{max} ≡ x_{K} energy points of the x-grid, as shown in Table C.1.
Reference boundary conditions vs. analytical solutions. At x_{max}, the four H boundary conditions of Table C.1 yield the same CR proton flux, and we always find that u is close to u^{0} above a few tens of GeV. We decided to implement prescription H4, the most natural condition given that τ_{eff} ≃ τ_{disc} at high energy. At x_{min}, the prescription that yields the most stable behaviour is L3, with ∂^{2}u/∂x^{2} = 0 up to second order. Our fiducial conditions are therefore L3 and H4.
For illustrative purposes, Fig. D.2 compares the analytical and numerical solutions^{18}. The long dash-dotted black curve stands for the approximation u^{0}, for which energy losses and diffusive reacceleration are switched off. The solid black curve features the solution u_{loss}, where diffusive reacceleration alone is suppressed. The exact solution u is featured by the short dashed purple and long dashed orange curves. The former is obtained through the direct inversion of Eq. (C.3), while for the latter, a Crank–Nicolson recursion is used to make u_{CN} converge from u^{0} to the exact solution u. Both results agree to a precision better than 10^{−10}.
Fig. D.2. CR proton flux Φ_{p} as a function of kinetic energy E_{k}. The long dash-dotted and solid black curves stand respectively for the approximations u^{0} and u_{loss}. The exact solution u is derived numerically assuming boundary conditions L3 and H4. The long dashed orange and short dashed purple curves correspond to the different methods used to solve transport Eq. (4). Protons are injected at a rate set by N_{p} = 5.8 × 10^{−11} protons cm^{−3} GeV^{−1} Myr^{−1} and a spectral index α = 2.3. With these values, Φ_{p} is in rough agreement with the Voyager 1 (Stone et al. 2013), PAMELA (Adriani et al. 2011), and AMS-02 (Aguilar et al. 2015a) data.
Varying low-energy boundary conditions. In Fig. D.3, we plot the relative difference induced on the fiducial flux of Fig. D.2 when prescription L3 is replaced by conditions L1 (solid red), L2 (long dashed orange), and L4 (short dash-dotted green) of Table C.1. Above 3 MeV, all fluxes agree to double precision. Below that energy, differences appear. Condition L2 always yields a flux that agrees with the fiducial result to better than 10^{−4}. Condition L4 is associated with moderate wiggles, with a relative difference that nevertheless reaches 10% at 1 MeV. The worst prescription is L1, which generates very large instabilities, exceeding 100% below 2 MeV. It is remarkable that, in spite of these, the fiducial result is recovered above 3 MeV.
Fig. D.3. Changing the low-energy boundary conditions of Table C.1 modifies the numerical result obtained for u. In this plot, the variations of the proton flux relative to the fiducial case of Fig. D.2 are displayed as a function of kinetic energy E_{k}. Notice that all conditions yield the same flux above 3 MeV. Close to the boundary, prescriptions L1 and L4 generate wiggles and the flux becomes inaccurate. Prescriptions L2 and L3 (fiducial) yield the same result. 
In the regime where diffusive reacceleration is switched off, the numerical solution u of Eq. (D.4) is given by the analytic solution u_{loss} displayed in relation (D.6). This situation offers a unique opportunity to investigate how low-energy boundary conditions affect the stability and precision of the numerical solution. For this purpose, we have used the cosmic-ray transport parameters of Model A with the exception of a vanishing Alfvénic speed V_{a}. The relative difference between u and u_{loss} is plotted as a function of proton kinetic energy in Fig. D.4. The numerical solution is derived by direct inversion of Eq. (C.3). Depending on the prescription used at x_{min}, u can be very close to the actual result u_{loss} or completely out of range. As featured by the short dashed purple curve, the most precise condition is L3, with a precision of 10^{−4}. Condition L2 also yields a very accurate solution u, with a relative error of at most 10^{−3} at 100 MeV. As could have been anticipated from Fig. D.3, setting J_{E} = 0 yields the worst numerical result, which is orders of magnitude larger than the correct solution below 30 GeV. Finally, condition L4 only yields the correct result above 70 GeV. We find that the relative difference between u and u_{loss} even exceeds 100% below 2 GeV, as exhibited by the short dash-dotted green curve of Fig. D.4.
Fig. D.4. Cosmic-ray transport parameters of Model A are used, except for the Alfvénic speed V_{a} set equal to 1 m s^{−1}. In this regime, the numerical result u of Eq. (D.4) is equal to the analytic solution u_{loss}. The relative difference between u and u_{loss} is plotted as a function of kinetic energy E_{k}. The low-energy boundary prescriptions of Table C.1 are respectively featured by the solid red (L1), long dashed orange (L2), short dashed purple (L3) and short dash-dotted green (L4) curves. The most precise condition is L3 while the worst one is L1. 
D.3. Numerical stability when V_{a} → 0
Although condition L3 yields the most precise solution, the short dashed purple curve of Fig. D.4 exhibits wiggles for vanishing Alfvénic speed. In our example, these instabilities never exceed a level of 10^{−4} and have no effect on the numerical solution u. In some configurations though, in particular those with a less fine-grained x-grid, they could reach a level where they would impair the fitting routine used in the B/C analysis. It is therefore important to understand the origin of these instabilities and to remedy them.
Is it specific to the solver used? To begin with, we have investigated whether these instabilities are related to the method used to derive the numerical solution u. In the direct inversion procedure, the matrix 𝕄 defined in Eq. (C.4) is inverted through a fast recursion that takes advantage of its tridiagonal nature. One may wonder whether this procedure generates numerical errors insofar as matrix 𝕄 can be far from the identity matrix 𝕀. Complementarily, the Crank–Nicolson procedure makes use of the matrix 𝕀 + 𝕄 Δt/2, which is arbitrarily close to the identity if the time step Δt of the recursion is small enough. We find that deriving u through a Crank–Nicolson recursion yields a relative change with respect to the direct inversion result of Fig. D.4 (short dashed purple curve) which is always less than 10^{−9} above 10 MeV, reaching a maximum of 4 × 10^{−6} at E_{k} = 2 MeV. In these calculations, the Alfvénic speed is V_{a} = 1 m s^{−1}. Increasing the speed further reduces the discrepancy between the two numerical results. The difference is at most 2 × 10^{−6} for a velocity of 1 km s^{−1} and decreases below 4 × 10^{−7} at 1.75 km s^{−1}. We conclude that the same wiggles appear whether u is derived by directly inverting Eq. (C.3) or by letting it evolve à la Crank–Nicolson. The origin of the instabilities must be sought elsewhere.
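The two solvers compared above can be sketched in a few lines. The following minimal Python snippet is our own illustration (not the USINE implementation; function and array names are ours): it shows the Thomas algorithm used to invert a tridiagonal matrix 𝕄, together with one Crank–Nicolson relaxation step built on the matrix 𝕀 + 𝕄 Δt/2, whose fixed point is the direct-inversion solution.

```python
import numpy as np

def solve_tridiagonal(a, b, c, r):
    """Thomas algorithm: solve M u = r, with M tridiagonal.
    a, b, c are the sub-, main and super-diagonals (all length n;
    a[0] and c[-1] are unused)."""
    n = len(b)
    cp, rp = np.empty(n), np.empty(n)
    cp[0], rp[0] = c[0] / b[0], r[0] / b[0]
    for k in range(1, n):            # forward sweep
        m = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / m
        rp[k] = (r[k] - a[k] * rp[k - 1]) / m
    u = np.empty(n)
    u[-1] = rp[-1]
    for k in range(n - 2, -1, -1):   # back substitution
        u[k] = rp[k] - cp[k] * u[k + 1]
    return u

def crank_nicolson_step(a, b, c, u, r, dt):
    """One Crank-Nicolson step for du/dt = r - M u:
    (I + M dt/2) u_new = (I - M dt/2) u_old + dt r.
    The fixed point satisfies M u = r, i.e. the direct-inversion result."""
    Mu = b * u                        # tridiagonal product M u
    Mu[1:] += a[1:] * u[:-1]
    Mu[:-1] += c[:-1] * u[1:]
    rhs = u - 0.5 * dt * Mu + dt * r
    return solve_tridiagonal(0.5 * dt * a,
                             1.0 + 0.5 * dt * b,
                             0.5 * dt * c, rhs)
```

Iterating `crank_nicolson_step` from an initial guess converges toward the solution obtained by a single direct inversion, which is how an agreement such as the 10^{−10} figure quoted above can be checked.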
Failure of second-order schemes to solve first-order equations. While performing the previous check, we serendipitously observed that the wiggles disappear as V_{a} increases. In the panels of Fig. D.5, four different values have been assumed for the Alfvénic speed, and the relative difference (u − u_{loss})/u_{loss} is plotted as a function of E_{k} for each of them. In the upper panel, the numerical solution u exhibits instabilities which are no longer visible in the lower panel. The solution becomes smooth for a critical value of order 1.75 km s^{−1}. Above that speed, u slowly departs from u_{loss} as diffusive reacceleration starts to be felt by the proton flux.
Fig. D.5. In these two panels, the proton flux is plotted as in Fig. D.2 with the exception of the Alfvénic speed V_{a}, for which different values are assumed as indicated. When diffusive reacceleration vanishes, we expect the proton flux to be given by the analytic solution u_{loss}. The relative difference between the numerical result u (direct inversion) and u_{loss} is calculated with the low-energy boundary prescription L3. As V_{a} increases from 1 m s^{−1} to 2.5 km s^{−1}, u becomes more stable and wiggles disappear. The transition occurs for a critical value of V_{a} ∼ 1.75 km s^{−1}. The number of bins per decade of energy is N_{dec} = 50. 
A tentative explanation for the presence of instabilities is that when V_{a} is vanishingly small, the numerical result u is not derived with the appropriate method. The transport Eq. (D.4) becomes first order in this regime, and a single boundary condition suffices to determine its solution. We have actually obtained u_{loss} by requiring that it be equal to u^{0} at the highest energy point. Strictly speaking, a low-energy boundary condition is no longer necessary. Of course, things are not that simple, since we proceed numerically through the inversion of matrix 𝕄, whose elements a_{k}, b_{k} and c_{k} depend on the functions α, β and γ, as discussed in Appendix C. Energy losses and diffusive reacceleration respectively enter the definition of matrix 𝕄 through the combinations αβ/Δx and αγ/Δx^{2}, with Δx the spacing of the energy grid.
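The breakdown of a second-order discretisation applied to an effectively first-order equation is a textbook phenomenon that can be reproduced on a toy problem. The sketch below is our own illustration, unrelated to the USINE code: it solves ε u″ − u′ = 0 on [0, 1] with central differences. When the cell Péclet number h/(2ε) exceeds unity, the scheme develops node-to-node oscillations of the same nature as the wiggles discussed above, and they disappear once the second-order term is numerically resolved.

```python
import numpy as np

def steady_advection_diffusion(eps, n=50):
    """Central-difference solution of eps*u'' - u' = 0 on [0, 1] with
    u(0) = 0 and u(1) = 1.  The solution oscillates when the cell
    Peclet number Pe = h/(2*eps) exceeds 1, mimicking the wiggles
    that appear as V_a -> 0."""
    h = 1.0 / n
    # coefficients of u_{k-1}, u_k, u_{k+1} for the interior unknowns
    lower = eps / h**2 + 1.0 / (2 * h)
    diag = -2.0 * eps / h**2
    upper = eps / h**2 - 1.0 / (2 * h)
    A = (np.diag(np.full(n - 2, lower), -1)
         + np.diag(np.full(n - 1, diag))
         + np.diag(np.full(n - 2, upper), 1))
    rhs = np.zeros(n - 1)
    rhs[-1] = -upper * 1.0            # boundary value u(1) = 1
    u = np.linalg.solve(A, rhs)
    return np.concatenate(([0.0], u, [1.0]))
```

With eps = 0.5 (Pe ≪ 1) the numerical solution is monotone, as the exact one; with eps = 0.001 (Pe = 10 on a 50-point grid) it alternates in sign near the boundary layer, exactly the regime where one boundary condition too many is being imposed on a quasi-first-order problem.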
Regularisation of the second-order scheme. From a numerical perspective, the Alfvénic speed vanishes when αγ/Δx^{2} is negligible with respect to 1, as is clear from Eq. (C.6). To explore this regime, we define the numerical strength of diffusive reacceleration through the ratio τ_{disc}/(τ_{DR} Δx^{2}), which we have plotted as a function of kinetic energy E_{k} in Fig. D.6. Model A has been assumed with the exception of V_{a}. The solid black curve corresponds to an Alfvénic speed of 1.75 km s^{−1}. At that critical value, the instabilities of the numerical result u start to recede, as featured in the lower panel of Fig. D.5 by the smoothness of the short dashed purple curve above 300 MeV. Concomitantly, the ratio τ_{disc}/(τ_{DR} Δx^{2}) reaches a maximum of 0.5. Notice that the onset of stability is an ill-defined process which takes place for Alfvénic speeds between 1.5 and 2 km s^{−1}, hence the yellow strip of Fig. D.6.
Fig. D.6. Ratio τ_{disc}/(τ_{DR} Δx^{2}) is plotted as a function of kinetic energy E_{k} for three different configurations. In all cases, the parameters of Model A have been assumed with the exception of the Alfvénic speed. The solid black curve corresponds to the critical value V_{a} = 1.75 km s^{−1} above which the instabilities which affect the numerical solution u in Fig. D.5 are noticeably reduced. For that velocity, the ratio τ_{disc}/(τ_{DR} Δx^{2}) reaches a maximum of 0.5. Enlarging the range of Alfvénic speeds for which the onset of stability occurs yields the yellow band. The long dashed red and short dash-dotted purple curves both correspond to V_{a} = 0.5 km s^{−1}. In the former case, the number of bins per decade N_{dec} of the energy grid is 25, whereas it is 300 in the latter. 
The prescription which we propose for getting rid of the numerical perturbations of u is to require the ratio τ_{disc}/(τ_{DR} Δx^{2}) to overshoot that band, with a maximum exceeding a benchmark value of 0.7. When the energy grid is made coarser, with fewer bins per decade, the interval Δx increases and we expect the above criterion to be fulfilled for larger values of V_{a}. This is actually what we observe. So far, all the results presented in Appendix D are based on 50 bins per decade. If we degrade the resolution of the x-grid and use 25 bins per decade, we find that the wiggles start to recede when the Alfvénic speed exceeds a critical value of 1.75 × 2 = 3.5 km s^{−1}. Conversely, if the x-grid is refined with 100 bins per decade, the transition occurs at 1.75/2 = 0.875 km s^{−1}.
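The factor-of-two scalings quoted above amount to a critical Alfvénic speed proportional to the grid spacing, that is, inversely proportional to N_{dec}. A one-line helper (hypothetical, anchored to the Model A reference value of 1.75 km s^{−1} at 50 bins per decade) encodes this rule of thumb:

```python
def critical_alfven_speed(n_dec, v_ref_kms=1.75, n_dec_ref=50):
    """Rough critical Alfvenic speed (km/s) below which wiggles appear,
    assuming it scales like the grid spacing Delta x, i.e. as 1/N_dec
    (reference value: 1.75 km/s for 50 bins per decade, Model A)."""
    return v_ref_kms * n_dec_ref / n_dec
```

This reproduces the values given in the text: 3.5 km s^{−1} for 25 bins per decade and 0.875 km s^{−1} for 100 bins per decade.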
As a final consistency check, we have derived u with an Alfvénic speed V_{a} of 0.5 km s^{−1} and two different values of the number of bins per decade N_{dec}. Our results are featured in Fig. D.7, where the same line patterns and colour codes as in Fig. D.6 have been used for the two lower curves. The long dashed red line has been derived with 25 bins per decade. Notice how the corresponding curve of Fig. D.6 does not overshoot the yellow band and hence does not fulfil the stability criterion. We expect u to exhibit numerical instabilities, which we actually observe in Fig. D.7. In the case of the short dash-dotted purple curve, a more refined grid is used, with N_{dec} = 300, and the numerical solution u is smooth. The stability condition is now satisfied, as shown by the corresponding curve of Fig. D.6.
Fig. D.7. Same as in Fig. D.5 but now with an Alfvénic speed V_{a} of 0.5 km s^{−1} and two different values for N_{dec}, the number of bins per decade of energy. The line patterns and colour codes of the two lower curves are the same as in Fig. D.6, i.e. the long dashed red and short dash-dotted purple lines respectively stand for N_{dec} = 25 and 300. Only in the latter case is the criterion for stability fulfilled. 
Recommendation and conclusion. To conclude this Appendix, let us discuss how to practically implement the criterion which we have found. To ensure the stability of the numerical result u, we require the ratio τ_{disc}/(τ_{DR} Δx^{2}) to reach a maximum in excess of 0.7. In the regime where the Alfvénic speed is very small, this implies a very fine structure of the energy grid. In Fig. 1 for instance, the reference B/C case is derived with 5000 energy bins per decade. Our study confirms that this procedure leads to the correct CR flux. However, the CPU time significantly increases with N_{dec} and the B/C fit may become impracticable. The procedure which we suggest is to fix N_{dec} at a benchmark value, of 50 for instance. For a given set of CR propagation parameters, the critical value of the Alfvénic speed at which u becomes stable can be determined from the ratio τ_{disc}/(τ_{DR} Δx^{2}) calculated, say, at the lowest energy point. In practice, τ_{disc} and τ_{DR} are respectively decreasing and increasing functions of E_{k}. This procedure ensures that this critical speed is always much smaller than the Alfvénic speed at which diffusive reacceleration becomes as important as disc transport. With N_{dec} = 50, the former is only 3.8% of the latter. During the B/C fit, V_{a} is forced to be larger than the lower limit which we find, in the case of Model A, to be 1.75 ± 0.25 km s^{−1}. In this configuration, a very rough criterion is to require that the ratio exceeds 5 × 10^{−2} Myr^{−1} or, alternatively, that is larger than 10^{−4} Myr^{−1}.
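The recommended check can be summarised in a short sketch, written under our own conventions: the timescales τ_{disc} and τ_{DR} are user-supplied callables (not provided here), and we assume a logarithmic energy grid so that Δx = ln(10)/N_{dec}; both assumptions should be adapted to the actual implementation.

```python
import numpy as np

def reacceleration_is_resolved(tau_disc, tau_dr, e_min, n_dec=50, threshold=0.7):
    """Stability criterion of Appendix D: the ratio tau_disc/(tau_DR * Dx^2),
    evaluated at the lowest energy point (tau_disc decreases and tau_DR
    increases with E_k, so the maximum sits there), must exceed the
    benchmark value of 0.7.  tau_disc and tau_dr are callables returning
    timescales in Myr; Dx assumes a logarithmic grid in energy."""
    dx = np.log(10.0) / n_dec
    ratio = tau_disc(e_min) / (tau_dr(e_min) * dx ** 2)
    return ratio > threshold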
Appendix E: Impact of selected cross-section uncertainties on B/C
We detail the impact of the dominant reactions (see Sect. 4) on the B/C ratio. Figures E.1 and E.2 show, as black lines, the impact of changing a single inelastic or production cross section (w.r.t. the calculation with a reference cross-section file); for comparison purposes, blue lines show on the same plot the relative difference between the reference cross section and the new one. The B/C variation is always smaller than that of the cross sections (see Sect. 4).
Fig. E.1. Impact of inelastic cross-section uncertainties on the B/C ratio for specific reactions as a function of rigidity, shown for Model A and Model B discussed in Sect. 2.1. In each panel, two quantities are shown: the relative difference between a cross section (of a given reaction) and a reference parametrisation T99 in blue (the associated y-axis is on the right-hand side of the plot, also in blue), and the associated impact on the B/C ratio for this cross-section parametrisation w.r.t. the reference B/C (the associated y-axis is on the left-hand side, in black). To guide the eye, the vertical red line indicates the rigidity of the first AMS-02 data point. The bottom panels show the overall impact when all reactions (i.e. all nuclei in the network) are replaced. In the latter panels, dark and grey curves correspond to the IS and modulated (ϕ_{FF} = 0.8 GV) B/C ratio, to emphasise that results are independent of the modulation level. 
For inelastic cross sections (Fig. E.1), the maximum variation between different parametrisations is ∼10% – related to the different positions and levels of the peaks and dips of the cross sections (shown in blue in Fig. 5) –, leading to a ≲3% impact on B/C (black curves). For ^{16}O, ^{11}B, and ^{10}B, the cross-section and B/C variations are anticorrelated, whereas they are correlated for ^{12}C. This is explained as follows: for ^{16}O, a larger destruction cross section means less oxygen at low energy, hence less boron production (and less B/C); an increased destruction of ^{10}B and ^{11}B also leads to less B/C; the case of ^{12}C is a trade-off, since an increased destruction means less carbon (an increase in B/C) but also less boron production (a decrease in B/C). If we compare the variation of the B/C relative differences when all cross sections are changed (last panel) to the case in which only one reaction is changed, we see that ^{11}B and ^{12}C combined are responsible for almost the whole variability. The impact of the destruction of ^{10}B is subdominant, and at the same level as that of ^{16}O.
For production cross sections (Fig. E.2), the maximum variation between different parametrisations is ±20%, for ^{16}O→^{11}B. There is more structure in the G17 cross sections than in all the other ones, because they are normalised to data whenever these exist. As for inelastic cross sections, the variation in B/C is smaller than the variation in production cross sections, but the relative variations are now correlated for all reactions, because more or less production directly reflects on the B/C ratio. The most impactful reactions also follow the ranking established in Génolini et al. (2018), in which ^{12}C→^{11}B and then ^{16}O→^{11}B have the strongest effect. The sum of these two accounts for most of the variation seen in the bottom panel, in which all reactions in the network were changed at once. We also plot the impact of two reactions involved in the “two-step” production of boron, where ^{15}N and ^{11}B are intermediate steps. As discussed in Génolini et al. (2018), the “two-step” reactions can contribute up to ∼25% of the total production, and they do not have the same energy dependence as the “one-step” (or direct) production. As can be seen in the two bottom panels, these reactions are suppressed at high energy compared to the others shown and, overall, their impact on B/C is ≲3%. In Sect. 4.2, we performed checks adding these dominant channels as nuisance parameters to allow extra degrees of freedom in the energy dependence, but no improvement was found, so they were not considered in the main analysis.
All Tables
Values of μ and σ for Gaussian distributed nuisance parameters for cross sections listed.
Matrix coefficients for several boundary conditions of the transport equation.
All Figures
Fig. 1. Relative difference for the B/C computed with different boundary conditions at LE, varying the number of energy bins per decade (top panel) and the minimal energy of the grid (bottom panel). The reference B/C has been computed using 5000 energy bins per decade and setting E_{min} = 1 MeV. 
Fig. 2. Absolute value ℰ_{F} of the relative difference between the exact and approximate flux calculation in a bin, Eq. (7), as a function of the flux spectral index α. Two mean rigidity definitions are compared, in dim grey and in light grey, see Eq. (5), for different bin size ranges (from 1 decade, solid line, down to 0.1 decade, dotted line). To guide the eye, the green dash-dotted line shows when the relative difference is 3%. See text for discussion. 
Fig. 3. Colour-coded relative difference log_{10}(ℰ_{R}) between the exact and approximate ratio calculation in a bin, Eq. (8), as a function of the numerator and denominator spectral difference, α_{den} − α_{num} (x-axis), and as a function of the rigidity bin width, log_{10}(R_{max}/R_{min}) (y-axis). For instance, a value of 2 (red) corresponds to a relative difference of 100%, and a value of −1 (blue) to a relative difference of 0.1%. The left and right panels correspond to calculations using and respectively, see Eq. (5). The rows correspond to three different values of the denominator spectral index α_{den}. In each panel, the dash-dotted green contour delimits regions in which the relative difference is above or below 3%. See text for discussion. 
Fig. 4. Impact of different cross-section parametrisations on the B/C flux calculation as a function of rigidity, w.r.t. a reference (denoted Ref. in the legend). The left and right panels are for inelastic and production cross sections respectively (see Sect. 4.1). The thick (resp. thin) lines are for the interstellar (resp. solar-modulated at ϕ_{ForceField} = 800 MV) calculation. The red vertical line highlights the first rigidity point of AMS-02 B/C data. 
Fig. 5. Models (as listed in Sect. 4.1) and range of cross sections (quantiles corresponding to the median, 68%, and 95% CLs) generated from a reference cross section biased by Gaussian-distributed nuisance parameters (as gathered in Table 1). The reactions shown correspond to the dominant ones discussed in Sect. 4.1, for inelastic (left column) and production (right column) cross sections. 
Fig. 6. Median, 1σ, and 2σ for the distribution of σ^{NSS}/σ_{ref}(E_{k/n}) values, from 1000 Gaussian distributed normalisation values (9), energy scale (10), and slopes (11) with respective mean and variance (μ, σ)^{Norm} = (1, 0.05), (μ, σ)^{Scale} = (1, 0.2), and (μ, σ)^{Slope} = (0, 0.02). 
Fig. 7. Reconstruction of 1σ contours (68% confidence level) from the analysis of 1000 mock data for Model A (top panel) and Model B (bottom panel), for the unbiased case (cross sections for the analysis are the same as the cross sections used to generate mock data). For display purposes, the 2D probability distribution functions are estimated using a Gaussian kernel (default method to define the bandwidth of gaussian_kde in the scipy Python library); the irregular shapes are related to statistical fluctuations. The colour code is related to the nuisance parameters used and the line style to the type of nuisance parameters (see Table 2). The “+” symbols represent true values. See text for discussion. 
Fig. 8. Distribution of best-fit values (, then transport parameters) from the analysis of 1000 mock data for Model A (left panel) and B (right panel), for the biased case (cross sections for the analysis differ from those used to generate mock data). For display purposes, 1D probability distribution functions are estimated using a Gaussian kernel. The line style and colours indicate the type and configuration of nuisance parameters used (see Table 2). The vertical dashed lines represent true values. See text for discussion. 
Fig. 9. Nuisance parameters from the analysis of 1000 mock data for Model A (top panels) and Model B (bottom panels). The nuisance parameters are shown reaction by reaction (rows: production then inelastic cross sections), each reaction having two nuisance parameters (see list in Table 2). The number in parentheses beside the reaction corresponds to its overall impact on the B/C calculation, as reported in Table 1. For the NSS analysis (left panels) we show normalised and centred parameters, whereas the LC analysis parameters (right panels) are shown between zero and one. We use violin plots to highlight the probability density of the parameter (y-axis) at different values (x-axis), and our violin plots include a marker for the median (black “+” symbol) along with the 1σ range of the parameter (thick black line). In order to directly compare the distributions of the nuisance parameters for the unbiased and biased analyses, we superimpose them on the same line in red and blue respectively. See text for discussion. 
Fig. 10. AMS-02 errors for B/C data. Solid lines correspond to the errors provided in Aguilar et al. (2018b), namely statistical, acceptance, scale, and unfolding (the step-like evolution is artificial and related to the rounding of the values provided in the table). The orange lines correspond to a further split of the acceptance errors: normalisation (norm.), low energy (LE), and residual (res.). See text for details. 
Fig. 11. Top panels: values obtained for the /d.o.f. and the fit parameters (and uncertainties) as a function of for propagation models A (left) and B (right), and for the full acceptance error (blue) and the split acceptance error (orange). In the case of the split acceptance error, corresponds to the correlation length of the Acc. res. contribution only. Bottom panels: comparison of the best-fit B/C (lines) and AMS-02 data (symbols) for the above models and configurations, i.e. full (blue lines, top) vs. split (orange lines, bottom) acceptance errors. Only a sample of correlation lengths is shown (0.015, 0.1, and 1 decade). 
Fig. 12. Evolution of the best-fit parameters and asymmetric 1σ errors (extracted from MINOS), increasing the number of nuclear cross-section reactions used as nuisance parameters (from left to right on the x-axis); Model A (top) and B (bottom) are shown. For every additional reaction as nuisance, four different fits are performed: starting with either the G17 or the W03 parametrisation for the production cross sections, and using either LC or NSS for the implementation of nuisance parameters. The black dashed line in the xs panel corresponds to the threshold . See text for discussion. 
Fig. A.1. Average mass number for carbon (top) and boron (bottom) as a function of rigidity. The dashed blue lines correspond to the value calculated from the model (with the isotopic content known), whereas the solid black lines correspond to the choice made in Aguilar et al. (2016). See text for details. 
Fig. A.2. Top panel: B/C residual (vs. R) calculated from the difference between AMS-02 data and our reference model (best-fit model). Bottom panel: B/C residual (vs. E_{k/n}) calculated from the difference between various R-to-E_{k/n}-converted B/C (red, blue and black lines) and the “exact” B/C (reference model converted with the correct isotopic content). The AMS-02 data vs. E_{k/n} are taken from Aguilar et al. (2016), and result from the same conversion as done for the solid black line. See text for discussion. 
Fig. D.1. Timescales associated with the various processes at play in the Galactic transport of CR protons are plotted as a function of kinetic energy E_{k}. The diffusion τ_{diff} and convection τ_{conv} timescales are respectively featured by the long dashed and short dash-dotted black curves. They are combined into the disc residence time τ_{disc}, plotted as the solid black line. The energy loss and diffusive reacceleration timescales correspond to the solid purple and orange curves. The red solid line features the combined timescale τ_{eff} as given in Eq. (D.3). 
Fig. E.2. Same as Fig. E.1 but for production and reference W03. 