LeHaMoC: a versatile time-dependent lepto-hadronic modeling code for high-energy astrophysical sources

Recent associations of high-energy neutrinos with active galactic nuclei (AGN) have revived the interest in leptohadronic models of radiation from astrophysical sources. The rapid increase in the amount of acquired multi-messenger data will soon require fast numerical models that can be applied to large source samples. We develop a time-dependent leptohadronic code, LeHaMoC, that offers several notable benefits compared to other existing codes, such as versatility and speed. LeHaMoC solves the Fokker-Planck equations of photons and relativistic particles (i.e. electrons, positrons, protons, and neutrinos) produced in a homogeneous magnetized source that may also be expanding. The code utilizes a fully implicit difference scheme that allows fast computation of steady-state and dynamically evolving physical problems. We first present test cases where we compare the numerical results obtained with LeHaMoC against exact analytical solutions and against numerical results computed with ATHEνA, a well-tested code of similar philosophy but different numerical implementation. We find good agreement (within 10-30%) with the results of ATHEνA, without evidence of systematic differences. We then demonstrate the capabilities of the code through illustrative examples. First, we fit the spectral energy distribution of a jetted AGN in the context of a synchrotron self-Compton model and a proton-synchrotron model using Bayesian inference. Second, we compute the high-energy neutrino signal and the electromagnetic cascade induced by hadronic interactions in the corona of NGC 1068. LeHaMoC is easily customized to model a variety of high-energy astrophysical sources and has the potential to become a widely utilized tool in multi-messenger astrophysics.


Introduction
A wide variety of astrophysical sources, including supernova remnants (SNRs), microquasars, active galactic nuclei (AGN), and gamma-ray bursts (GRBs), display nonthermal spectral energy distributions (SEDs). This emission, which, depending on the source type, may span from radio wavelengths to γ-ray energies, requires the presence of relativistic particles with extended energy distributions. Modeling of the broadband emission has been a widely used method for indirectly probing the physical conditions in these astrophysical accelerators, such as the particle density, magnetic field strength, and bulk motion speed.
Since the discovery of γ-ray emission from AGN (see the reviews by Dermer & Giebels 2016 and Madejski & Sikora 2016, and references therein), the production mechanism of such energetic photons has been a matter of debate. Two schools of thought have developed over the years in attempts to explain the emission properties of γ-ray emitting sources. The first one postulates that the broadband emission can be fully attributed to energetic electrons accelerated in the source, with some contribution from secondary electron-positron pairs produced through photon-photon pair production (leptonic models). According to the second school, accelerated protons in the source could also make a significant contribution to the high-energy emission (leptohadronic models), either directly via proton synchrotron radiation (Aharonian 2000; Mücke & Protheroe 2001), or indirectly via the radiation of secondary pairs produced in proton-photon interactions (Mannheim 1993), or via neutral pion decay (Reimer et al. 2005). High-energy neutrinos, produced via the pion decay chain, are a natural outcome of this model class.
The recent discovery by the IceCube Collaboration of high-energy neutrino emission (at a significance of ∼4.2σ) associated with the prototype Seyfert II galaxy NGC 1068 (IceCube Collaboration et al. 2022) presents a new set of challenges for understanding the underlying physics of these sources. Notably, the observed GeV γ-ray emission from NGC 1068 is relatively low compared to the neutrino luminosity (IceCube Collaboration et al. 2022), which raises intriguing questions about the mechanisms responsible for this disparity and the potential production sites (see e.g., Murase et al. 2020; Inoue et al. 2022; Eichmann et al. 2022, for different explanations). The other astronomical source that has been associated with IceCube neutrinos at a significance larger than ∼3σ is the blazar TXS 0506+056 (IceCube Collaboration et al. 2018; IceCube Collaboration 2018). These observations, in addition to the discovery of the diffuse astrophysical neutrino flux by IceCube (IceCube Collaboration 2013), have revived the interest in leptohadronic source models.
Leptohadronic radiative models are intrinsically more complex than leptonic ones. The main radiative processes in the latter model class are synchrotron radiation, inverse Compton scattering, photon-photon pair production, and relativistic bremsstrahlung (which is relevant for sources with dense ionized gas). In leptohadronic models, one also has to take into account the interactions of relativistic protons with low-energy photons and/or gas. Both processes lead to the production of pions that decay into photons, pairs, and neutrinos. Photons and pairs can then initiate electromagnetic cascades in the source via photon-photon pair production, thus producing more targets for proton-photon interactions. The complexity of leptohadronic models lies in (i) the non-linear coupling of the various particle populations (through the so-called kinetic equations, i.e., partial differential equations with respect to time and energy), (ii) the wide range of timescales probed by the physical processes, and (iii) the numerical modeling of the secondary particle production spectra. The adoption of different approaches for the modeling of secondary particle injection may yield 10-30% differences in the numerical results obtained with different radiative transfer codes (Cerruti et al. 2022; Cerruti et al., in prep.).
Various radiative transfer codes have been employed to study the complex emission processes and to interpret the diverse observational properties of non-thermal emitting sources, including recent neutrino observations (e.g. Dimitrakoudis, S. et al. 2012; Böttcher et al. 2013; Cerruti et al. 2015; Gao et al. 2017; Gasparyan et al. 2022). The rapid increase in multi-messenger observations and the modeling of larger and more diverse source samples require high computational efficiency and code adaptability. In this work, we present LeHaMoC (Lepto-Hadronic Modeling Code), a novel radiative transfer code for non-thermal emitting sources. While LeHaMoC is inspired by the modeling code ATHEνA (Mastichiadis & Kirk 1995; Dimitrakoudis, S. et al. 2012), it offers several new key features: significantly shorter execution times for steady-state problems thanks to an implicit finite-difference scheme (Chang & Cooper 1970), modeling of expanding sources with varying physical conditions, the inclusion of proton-proton inelastic collisions, and fitting capabilities. Our primary goal is to provide a versatile and efficient numerical code to the community.
This paper is structured as follows. In Sec. 2 we outline the range of processes incorporated into LeHaMoC and describe the methodology employed to solve the kinetic equations for each particle species in the source. In Sec. 3.1 we provide an assessment of the code's performance and accuracy. In Sec. 3.2 we present indicative cases that are compared against analytical results and against numerical results obtained with the well-tested code ATHEνA (Dimitrakoudis, S. et al. 2012). In Sec. 4 we showcase two astrophysical applications of LeHaMoC. First, we fit the SED of a blazar that has been proposed as a potential high-energy neutrino emitter using a Bayesian inference approach. Second, we employ LeHaMoC to compute the high-energy neutrino signal and derive the electromagnetic cascade resulting from hadronic interactions within the corona of NGC 1068. By examining these specific cases, we demonstrate the versatility and utility of LeHaMoC in describing different astrophysical sources. Finally, we present the conclusions of this work in Sec. 5.

Code description
We consider a scenario in which relativistic charged particles (electrons and/or protons) are injected into a spherical blob that contains a magnetic field and may also adiabatically expand. Charged particles are subject to various physical processes that lead to the production of photons and secondary particles, such as relativistic electron-positron pairs, neutrons, and neutrinos. The processes we consider are
- Electron and positron synchrotron radiation (e, syn),
- Proton synchrotron radiation (p, syn),
- Electron and positron inverse Compton scattering (ics),
- Synchrotron self-absorption (ssa),
- Photon-photon pair creation (γγ),
- Proton-photon pion production (pγ, π),
- Proton-photon (Bethe-Heitler) pair production (bh),
- Proton-proton collisions (pp),
- Adiabatic losses (ad).
All particle species are assumed to be isotropically distributed inside the source and to be fully described by the relevant distribution function N_i at time t, which is a function of the charged-particle Lorentz factor γ_i, or of the frequency ν for photons and neutrinos. We trace the evolution of charged particles and photons inside the source as a function of energy and time by numerically solving a system of coupled integro-differential (kinetic) equations of the schematic form

∂N_i/∂t + N_i/τ_i,esc + Σ_j L^j_i = Σ_j Q^j_i + Q^inj_i.

In the equations above, Q^j_i expresses the production rate of particle species i due to the process j, while the term designated with the superscript "inj" indicates the injection rate of accelerated particles (primaries) into the source. Similarly, the L^j_i term represents the net loss rate of species i due to the physical process j. We have also introduced τ_i,esc, which is the physical escape timescale of species i from the source. In most cases, this timescale is taken to be a multiple of the source's light-crossing time t_cr = R(t)/c, with R(t) being the radius of the spherical blob. Alternatively, if the escape probability is related to the gyroradius of the particle in the source, τ_i,esc may depend on the particle energy. We allow for both a time-dependent and an energy-dependent escape timescale in our code.
We neglect the neutron component and assume that the decay of pions is instantaneous; thus, neither neutrons nor pions are treated with their own kinetic equations. Detailed expressions of the operators used in the kinetic equations can be found in Appendix A.
To solve the kinetic equations presented in the previous sections we use the fully implicit difference scheme proposed by Chang & Cooper (1970). This numerical scheme is particularly well suited to the stiff differential equations that arise in the context of astrophysical jets, where the timescales associated with the various physical processes can span many orders of magnitude. The scheme involves the discretization of the kinetic equations in time and in Lorentz-factor (or frequency) space. To discretize the kinetic equations in time, we use a time step that is related to the characteristic timescale of the system, which depends on the physical scenario studied. The use of an implicit scheme ensures that the solution at any time step is stable, which is useful since the timescales of the different physical processes can vary widely. The discretization in energy is achieved by using a logarithmic energy grid that allows us to accurately capture the behavior of the distribution over a wide range of energies. The energy grid size is a parameter that can be defined by the user. Each discretized equation forms a tridiagonal matrix, which we solve using the Thomas algorithm (Thomas 1949).
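As an illustration of this matrix-inversion step, a minimal sketch of the Thomas algorithm for a tridiagonal system is shown below (an illustrative implementation, not LeHaMoC's actual routine):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d, where A has sub-diagonal a,
    main diagonal b, and super-diagonal c (Thomas 1949).
    a[0] and c[-1] are ignored; the whole sweep is O(n)."""
    n = len(d)
    cp = np.empty(n)   # modified super-diagonal
    dp = np.empty(n)   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each implicit time step reduces to one such O(n) sweep per particle species, the cost per step scales linearly with the number of grid points.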
To illustrate how the code works, we demonstrate the discretization in time t_i and Lorentz factor γ_j of the kinetic equation for the electron species (Eq. 1). Adopting fully implicit upwind differencing for the energy-loss term, the scheme can be written as

N_e(γ_j, t_{i+1}) [1 + Δt/τ_e,esc + Δt (dγ/dt)_j/Δγ_j] − N_e(γ_{j+1}, t_{i+1}) Δt (dγ/dt)_{j+1}/Δγ_j = N_e(γ_j, t_i) + Δt Q_e(γ_j, t_i),

where (dγ/dt)_j = Σ_p (dγ/dt)_p is the total energy-loss rate of an electron with Lorentz factor γ_j, and (dγ/dt)_p are the energy losses to an electron due to the process p.
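To make the scheme concrete, the sketch below applies such a fully implicit update to a toy problem of continuously injected electrons with synchrotron-like losses and escape; all normalizations are illustrative and this is not LeHaMoC's actual routine. In the fast-cooling regime the steady state should approach N_e ∝ γ^−(s_e+1):

```python
import numpy as np

def implicit_step(N, gamma, dt, gdot, Q, tau_esc):
    """One fully implicit upwind step of dN/dt = d/dγ(γ̇ N) + Q − N/τ_esc.
    For pure losses the matrix is upper bidiagonal, so it is solved
    directly by backward substitution (a special case of the Thomas sweep)."""
    dgam = np.diff(gamma)
    N_new = np.empty_like(N)
    N_new[-1] = (N[-1] + dt * Q[-1]) / (1.0 + dt / tau_esc + dt * gdot[-1] / dgam[-1])
    for j in range(len(N) - 2, -1, -1):
        rhs = N[j] + dt * Q[j] + dt * gdot[j + 1] * N_new[j + 1] / dgam[j]
        N_new[j] = rhs / (1.0 + dt / tau_esc + dt * gdot[j] / dgam[j])
    return N_new

gamma = np.logspace(0, 6, 181)          # 30 grid points per decade
gdot = gamma**2                         # synchrotron-like |dγ/dt| (normalization b = 1)
Q = np.where((gamma >= 1e2) & (gamma <= 1e5), gamma**-2.0, 0.0)  # injection, s_e = 2
N = np.zeros_like(gamma)
for _ in range(2000):                   # relax to a steady state
    N = implicit_step(N, gamma, dt=0.1, gdot=gdot, Q=Q, tau_esc=1.0)
```

Being implicit, the update remains stable even when the cooling time at the highest energies is orders of magnitude shorter than the chosen time step.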

Code performance
We demonstrate the performance and accuracy of the numerical code through three steady-state leptonic runs. In all cases, electrons are injected into the source with a power-law distribution, N_e(γ_e) ∝ γ_e^{−s_e}, for γ_e,min ≤ γ_e ≤ γ_e,max. For the adopted parameters, electrons, which are continuously injected with γ_e ≫ 1, cool completely down to γ_e ≳ 1 (i.e. the fast-cooling regime) via synchrotron radiation (SYN) or inverse Compton (IC) scattering on a fixed blackbody radiation field. A synchrotron self-Compton (SSC) scenario, in which electrons again cool via synchrotron losses, is also considered. In all tests, we examine how the Lorentz-factor grid resolution relates to the code execution time and to the ratio of the bolometric photon luminosity L_γ^bol to the relativistic electron luminosity at injection, L_e^inj. In the fast-cooling regime, all energy injected into relativistic electrons should be radiated away, hence the two luminosities should be equal. The parameters used for the performance test are displayed in Table 1. It is important to note that our code is not parallelized and utilizes a single CPU core for computations. All the tests were performed on a 12th Gen Intel® Core™ i5-1235U processor.
The number of grid points per logarithmic decade (n_ppd) directly affects the execution time in all three scenarios. The synchrotron scenario is the least impacted, because the synchrotron emissivity (see Eq. A.3) is a simpler expression than the inverse Compton scattering emissivity; the latter involves an integral over the target photon field (see Eq. A.11), which may also evolve with time. The top panel of Fig. 1 shows that increasing the number of grid points per logarithmic decade increases the code execution time in all scenarios. This trend is generally expected, as more grid points require the computation of the emissivity at each point, resulting in longer execution times. However, the relationship between the number of grid points and the execution time is not linear. In the complete synchrotron cooling scenario, the execution time increases more rapidly when n_ppd ≳ 20. In the complete inverse Compton cooling scenario, the execution time scales quadratically with the number of grid points, because the emissivity involves an integral over the target photon field. The SSC scenario follows the same scaling as the inverse Compton cooling scenario, since the computation of the inverse Compton emissivity dominates the execution time.
For a typical choice of n_ppd = 30, the code requires approximately 10 s to reach a steady state for synchro-Compton scenarios, using a time step dt = R_0/c. Shorter running times can be achieved by increasing the time step. We verified that a steady-state solution can be accurately obtained with larger time steps. For example, a time step of 3 R_0/c can yield accurate steady-state solutions for an SSC model, thus speeding up the code execution by a factor of ∼3. Similar conclusions are expected for steady-state calculations including proton synchrotron radiation. The inclusion of photo-hadronic and pp interactions may raise the computation time to 5-10 minutes for the default energy resolution (n_ppd = 20-30).
We also evaluated the accuracy of the newly developed code by analyzing the ratio of the bolometric photon luminosity to the electron injection luminosity as a function of n_ppd (see bottom panel of Fig. 1). As electrons are fast cooling due to synchrotron radiation in all three cases, the two luminosities should be equal. The results show that in the synchrotron and SSC scenarios the luminosity ratio is about 4 for n_ppd = 3, but then decreases gradually to unity (as it should) as the number of grid points increases. The reason for this is that the increased energy resolution allows for more accurate calculations of integrals over the particle distributions.
We also examine how the number of grid points affects the spectral shape of the steady-state electron distribution in a fast-cooling synchrotron scenario. For this purpose, we compare the numerical results for three choices of the energy grid resolution (marked with symbols in the top panel of Fig. 1) against the analytical solution of Inoue & Takahara (1996) (see Eq. 2.26 therein). The steady-state electron distributions are displayed in Fig. 2, where we see that a higher resolution of the energy grid leads to better agreement with the expected (analytical) solution. The adopted numerical scheme does not accurately reproduce the location of the cooling break if a sparse energy grid is used. According to Park & Petrosian (1996) (see their Eq. 13), the magnitude of Δγ should be smaller than the typical scale of variation of the solution of the kinetic equation at Lorentz factor γ.
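For concreteness, a logarithmic grid with n_ppd points per decade can be built as in the following sketch (illustrative, not the code's exact implementation). Note that on a log grid the relative cell width Δγ/γ is constant, so the Park & Petrosian resolution requirement effectively translates into a minimum n_ppd:

```python
import numpy as np

def make_gamma_grid(g_min, g_max, n_ppd):
    """Logarithmic Lorentz-factor grid with n_ppd points per decade."""
    n = int(round(n_ppd * np.log10(g_max / g_min))) + 1
    return np.logspace(np.log10(g_min), np.log10(g_max), n)

gamma = make_gamma_grid(1.0, 1e6, 30)
# The relative cell width is constant: Δγ/γ = 10^(1/n_ppd) − 1 ≈ ln(10)/n_ppd.
rel_width = np.diff(gamma) / gamma[:-1]
```

For n_ppd = 30 this gives Δγ/γ ≈ 0.08, i.e. the grid resolves variations much narrower than a decade, which is why the cooling break converges at this resolution.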
These examples illustrate the trade-off between accuracy and efficiency. To achieve faster computations, we have to use a coarser energy grid (typically with n_ppd = 20-30) and introduce a correction factor to ensure energy balance between species. Given that the synchrotron photon field is a common target field for a variety of physical processes (e.g. Compton scattering and photo-hadronic interactions), we determine the appropriate correction factor by running a synchrotron fast-cooling case before we perform any simulation based on the energy grid selected by the user. The spectrum of the secondaries is not affected by the number of grid points.
Finally, it is worth noting that extremely high accuracy is not needed for most astrophysical applications, since fitting uncertainties are usually the limiting factor. This will become evident in Sec. 4.1, where we use the code to model the multi-wavelength spectrum of a typical jetted AGN (see also the corner plots in Appendix C). Consequently, opting for n_ppd = 10-20 strikes a balance between precision and computational efficiency.

Tests
We perform comparison tests against the numerical code ATHEνA (Mastichiadis & Kirk 1995; Dimitrakoudis, S. et al. 2012) in order to validate the accuracy and performance of the newly developed numerical code. In the following tests, we adopt n_ppd = 30, unless stated otherwise. These tests include various radiative processes that are relevant for astrophysical jets, such as synchrotron radiation, inverse Compton scattering, and proton interactions with photons. We also compare our code results with analytical calculations. In the following sections, we provide a few examples of these comparison tests and discuss the results in detail.

Test 1: A steady-state SSC model with electron cooling
In our first comparison test, we study a synchrotron self-Compton (SSC) model with γγ absorption. The aim is to compare the photon spectrum and the pair injection rate between the ATHEνA code and LeHaMoC. Electrons are injected with a power-law distribution into a spherical blob with radius R_0 that contains a magnetic field of strength B_0, and are allowed to escape on a timescale equal to the light-crossing time of the source, i.e. t_cr = R_0/c (for the parameter values see Table 2). We include synchrotron emission and self-absorption, inverse Compton scattering, and γγ pair production.
We present our results in Fig. 3. The upper panel displays the steady-state electron distributions obtained with the two codes. Both solutions exhibit a smooth break due to synchrotron losses, and both numerical codes place the cooling break in the electron distribution at the same location. Beyond the cooling break, the cooled distribution displays the power-law behavior expected from synchrotron theory, with a power-law index s_e + 1. In the middle panel, we show the comoving photon spectra. We observe that the two codes yield comparable results (see ratio plot) over a broad range of frequencies, both before and after applying the correction for energy balance (see Sec. 3.1). The two codes predict different locations for the synchrotron self-absorption frequency (0.5 in logarithm), which causes a large difference between the two results at frequencies ≲ 100 GHz. This difference can be attributed to the simplified calculation of the synchrotron self-absorption coefficient in ATHEνA (see Eq. 40 in Mastichiadis & Kirk 1995); there it is assumed that the electron emits all of its power near the critical synchrotron frequency, while LeHaMoC uses the full synchrotron power spectrum instead of this approximation (see Eq. A.8). When we implement the delta-function approximation, we find that the difference is reduced to less than 0.2 in logarithm. Furthermore, we compare the energy spectra of pairs injected via γγ pair production in the bottom panel of Fig. 3. The results are in excellent agreement up to the highest Lorentz-factor values.

Fig. 4. Comparison of the injection spectra of secondary particles computed with ATHEνA and LeHaMoC for the case of interactions of relativistic protons with a power-law photon field. From top left and in clockwise order: γ-ray spectra from neutral pion decay, pair spectra from charged pion decays, all-flavor neutrino spectra from charged pion decays, and pair spectra from the Bethe-Heitler process. The parameters used for this test can be found in Table 2 under the column Test 2.

Test 2: Production rates of secondaries in proton-photon interactions with extended energy distributions
We next consider a generic case of proton-photon interactions, in which relativistic protons interact with a power-law target photon field. This choice is particularly relevant for astrophysical applications, since power-law photon fields are expected to arise from non-thermal emission processes such as synchrotron radiation and inverse Compton scattering. We are interested in comparing the injection spectra of secondaries produced via the Bethe-Heitler and photopion production processes on a fixed target photon field. Therefore, we only account for the output of the emissivity functions described in Appendix A and neglect all emission produced by the secondaries. We also note that protons do not cool via photo-hadronic interactions, owing to the low target photon density.
The numerical results obtained with the two codes are shown in Fig. 4. Starting from the top left panel, we find good agreement in the π0-decay γ-ray emissivity computed with the two codes, as indicated by the ratio χ in the lower panel; this remains close to unity over a large range of energies, except close to the peak of the spectrum, where the ratio reaches about 1.4. The pair spectra from pion decays are in very good agreement (top right panel). The all-flavor neutrino spectra (bottom left panel) have very similar shapes (the ratio is flat across a wide range of energies). Finally, the spectra of Bethe-Heitler pairs are displayed in the bottom right panel. The spectral shapes up to γ_e ∼ 10^10 are similar, and there is good agreement in the maximum injected energy (which corresponds to the peak of the spectrum). Nonetheless, the spectrum obtained with ATHEνA appears to be shifted toward higher Lorentz factors by a factor of ∼1.5. The sharp cutoff of the ATHEνA spectrum at γ_e ∼ 10^10 is artificial, because the code considers interaction energies only up to 10^4 above the threshold.

Test 3: A time-dependent adiabatic expansion leptonic model
We aim to compare the numerical solutions obtained with LeHaMoC with the exact analytical solution of the electron kinetic equation in a time-dependent problem that has no steady-state solution. The primary objective is to investigate the ability of the code to solve time-dependent problems and to describe the evolution of the particle distribution with time.
For this purpose, we assume an expanding blob where electrons are injected once at the beginning of the simulation with a power-law distribution of slope s_e and are left to cool only via adiabatic losses. The blob has an initial radius R_0 and expands with a constant velocity V_exp. The magnetic field B_0 is considered constant throughout the simulation. Under these assumptions, the evolution of the electron distribution admits an analytical solution that reads (Kardashev 1962)

N_e(γ_e, t) = K γ_e^{−s_e} (1 + β_0 t)^{1−s_e} Θ(γ_e − γ_e,min) Θ(γ_e,max − γ_e),    (10)

where β_0 ≡ V_exp/R_0 and Θ(y) is the Heaviside function. Moreover, K is a normalization constant, γ_e,min = γ_e,min,0/(1 + β_0 t) and γ_e,max = γ_e,max,0/(1 + β_0 t) are the minimum and maximum Lorentz factors of the electron distribution at time t, while γ_e,min,0 and γ_e,max,0 are the minimum and maximum Lorentz factors at the moment of injection. Fig. 5 shows six snapshots of the particle distribution computed analytically using Eq. 10 (solid lines) and numerically with LeHaMoC. To highlight the impact of the time step dt on the numerical solutions, we performed runs with different choices (see inset legend). The numerical solution captures the decrease of γ_e,min and γ_e,max over time due to the adiabatic losses and matches the analytical solution in normalization as long as dt ≤ R_0/c ≡ t_cr,0. In general, choices of dt > t_cr,0 lead to diffusion of the numerical solution and do not capture the shape of the distribution with precision.
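As a cross-check of this behavior, a Kardashev-type solution of this form can be evaluated directly; the sketch below (with illustrative parameter values, K, s_e, β_0 and the boundary Lorentz factors being assumptions for the example) verifies that the total electron number is conserved under the adiabatic evolution:

```python
import numpy as np

def N_e_analytic(gamma, t, K=1.0, s_e=2.0, beta0=0.1,
                 g_min0=1e2, g_max0=1e5):
    """Power law cooled adiabatically in a blob expanding at constant speed."""
    x = 1.0 + beta0 * t                      # expansion factor R(t)/R0
    g_min, g_max = g_min0 / x, g_max0 / x    # boundaries drop as γ ∝ 1/x
    N = K * gamma**(-s_e) * x**(1.0 - s_e)
    return np.where((gamma >= g_min) & (gamma <= g_max), N, 0.0)

gamma = np.logspace(0, 6, 6001)

def total_number(t):
    """Trapezoidal integral of the distribution over the full grid."""
    N = N_e_analytic(gamma, t)
    return np.sum(0.5 * (N[1:] + N[:-1]) * np.diff(gamma))
```

Since no escape is allowed, total_number(t) should be independent of t, mirroring the conservation test discussed next.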
To further test the accuracy and reliability of our code, we investigate the conservation of the electron number throughout the simulation for the numerical results obtained with different time steps. Since electrons are not allowed to escape the system, the total number of electrons, N_e,tot(t) = V(t) ∫ n_e(γ_e, t) dγ_e, should remain constant over time.
We find that N_e,tot remains constant for all values of dt as long as the integration is performed over a wide range of Lorentz factors (see black lines in Fig. 6). If, however, the integration is limited to the range between the analytically expected γ_e,min(t) and γ_e,max(t), we find particle conservation only for dt < R_0/c, as demonstrated by the blue lines in Fig. 6. We further comment on the impact of dt on the numerical solution of dynamically evolving systems through another illustrative example in Appendix B.
By utilizing the analytical solution given by Eq. 10 for an expanding blob subject to adiabatic losses, we can also determine the evolution of the synchrotron self-absorption frequency ν_ssa over time, assuming that synchrotron radiation is the only emission mechanism of the electrons (see also Boula & Mastichiadis 2022, for leptonic models with expansion). We compare the numerical results for ν_ssa with the analytical scaling relation in Fig. 7. The solid line has the same slope as the one predicted analytically for β_0 t ≫ 1, while the numerical results for different time steps are depicted with dashed, solid, and dashed-dotted lines. We observe that for time steps dt ≤ t_cr,0 the evolution of the numerically derived ν_ssa follows the expected time dependence at late times. However, for dt > t_cr,0, the numerical evolution of ν_ssa with time is slower than predicted (see the difference in slopes).

Test 4: A steady-state leptohadronic model
Finally, we present a generic steady-state leptohadronic scenario. The objective of this test is to compare the broadband photon and neutrino spectra obtained from ATHEνA and LeHaMoC, considering the inclusion of all physical processes (except for pp collisions, which are not included in ATHEνA). To initiate the simulation, we assume that relativistic electrons and protons are injected with a distribution dN_i/dγ = K γ^{−s_i} e^{−γ/γ_i,coff}, for γ ≥ γ_i,min, into a spherical blob with radius R_0 and magnetic field strength B_0. Moreover, relativistic pairs are injected in the source through Bethe-Heitler pair production, pion production through proton-photon interactions, and pair creation via γγ interactions. Particles and photons are allowed to escape on a timescale equal to the light-crossing time of the source, i.e. t_cr = R_0/c. The parameter values, which are presented in Table 2, constitute one of the parameter sets used in the Hadronic Code Comparison Project (Cerruti et al., in prep.), where four radiative transfer codes, including ATHEνA, are compared.
Both codes yield remarkably similar broadband photon spectra (thick lines), as shown in the top panel of Fig. 8; see also the ratio plot in the bottom panel. We also present a decomposition of the total photon emission computed with LeHaMoC into its different spectral components. For the selected parameter values, the electron synchrotron spectrum contributes significantly to the X-ray range (up to ∼ 10^17 Hz). These synchrotron photons are energetic enough to serve as targets for both Bethe-Heitler and pion production interactions with the protons in the system. Both interactions lead to the injection of high-energy pairs with extended energy distributions (see e.g. Fig. 4) that radiate via the synchrotron and inverse Compton processes. In particular, the broad component peaking at ∼ 10^19 Hz corresponds to the emission of synchrotron-cooled Bethe-Heitler pairs (see also Petropoulou et al. 2015).
Additionally, the component peaking at ∼ 10^24 Hz arises from the synchrotron emission of pairs injected via charged pion decays. Inspection of the ratio plot suggests that both codes produce similar results across the wide range of photon energies where the secondary pair emission dominates. Finally, the spectrum exhibits a peak resulting from the direct decay of π0 mesons, which generates very high-energy γ-rays (∼ 10^29 Hz). The observed decrease in luminosity (compare the thick and thin green lines), resulting from the absorption of high-energy γ-ray photons by GHz photons, is consistently reproduced by both codes. Similarly, there is good agreement in the all-flavor neutrino spectra computed with the two codes. It is noteworthy that the implementation used in LeHaMoC speeds up the computation of the multi-messenger emission by a factor of ∼3 compared to ATHEνA, bringing it down to ∼12 min. An even faster computation is possible, but at the cost of reduced accuracy.

Astrophysical applications
To demonstrate the fitting capabilities of the newly developed code, we first model a blazar SED using emcee (Foreman-Mackey et al. 2013), a Python implementation of the affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler. This allows us to better estimate the uncertainties in the model parameters and to explore possible degeneracies in this multi-parameter problem. To illustrate the capabilities of the code in computing hadronically initiated electromagnetic cascade spectra, we then present a pp-interaction model for the neutrino signal from the nearby Seyfert galaxy NGC 1068.

Blazar SED fitting
For this application, we choose 3HSP J095507.9+355101 (Giommi et al. 2020; Paliya et al. 2020), a BL Lac object at redshift z = 0.557 (Paiano et al. 2020; Paliya et al. 2020) that belongs to the rare class of extreme blazars (Biteau et al. 2020). This blazar has been detected in GeV γ-rays by the Fermi Large Area Telescope (LAT) and is part of the 4FGL catalog (Abdollahi et al. 2020). It has also been tentatively associated with the high-energy neutrino IceCube-200107A, detected by the IceCube Neutrino Observatory (IceCube Collaboration 2020) one day before the blazar was found to undergo a hard X-ray flare.
Various emission models of the 3-day X-ray flare, including one-zone leptohadronic scenarios, were presented in Petropoulou et al. (2020). In all cases, the parameter values were selected based on physical arguments instead of being derived from a fitting procedure. The long execution time of the radiative code ATHEνA, even for a purely leptonic emission model, made this "eyeball" description of the SED necessary (see also Petropoulou et al. 2015, 2017). Here, we demonstrate that SED fitting using the newly developed code and a Bayesian inference approach is feasible with limited computational resources. 3HSP J095507.9+355101 is ideal for this application. First, our results can be compared to SED models that are available in the literature. Second, we can investigate how the different spectral coverage of the low- and high-energy SED humps (see below for more details) affects our results.
We use the multi-wavelength data of January 11, 2020 (4 days after the neutrino detection), when the source was observed for the first time in hard X-rays by NuSTAR (Harrison et al. 2013). The quasi-simultaneous observations in the UV, soft X-rays, and hard X-rays provide a detailed picture of the low-energy part of the spectrum (see black symbols in Fig. 10). On the contrary, the high-energy part of the spectrum is less constrained observationally. Given that 3HSP J095507.9+355101 is a faint γ-ray source, the Fermi data need to be averaged over a long period (250 days) to allow for a significant detection of the source in at least two energy bins (see magenta symbols in Fig. 10). For details about the data selection and analysis, we refer the reader to Giommi et al. (2020).
The statistical uncertainties of the flux measurements typically underestimate uncertainties that stem from the non-simultaneity of data used in the SED, since blazar emission is variable on timescales of days to months (e.g. the X-ray data are obtained from ∼ks observations, while the γ-ray data are averaged over 250 days). We therefore introduce a parameter ln f in the likelihood function to account for any source of uncertainty that is not included in the statistical uncertainties of the measurements (for a similar application, see Karaferias et al. 2023),

ln L = −(1/2) Σ_i [ (f_i − f_m,i)² / σ²_tot,i + ln(2π σ²_tot,i) ].

Here, f_m,i stands for the model flux (in logarithm) evaluated at the frequency of the i-th data point, f_i is the measured flux, and σ²_tot,i is the total variance, which is defined as

σ²_tot,i = σ²_i + exp(2 ln f) f²_m,i,

with σ_i being the errors of the flux measurements. We also note that the upper limits are included in the fit by adding the following term to the likelihood function (see Sawicki 2012; Yang et al. 2020):

ln L_lim = Σ_j ln { (1/2) [ 1 + erf( (f_lim,j − f_m,j) / (√2 σ_j) ) ] },

where f_lim,j is the j-th flux upper limit (in logarithm), f_m,j is the corresponding model flux, and σ_j is the respective uncertainty.
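For illustration, the likelihood terms above can be sketched in a few lines of Python. This is our own minimal sketch (array names are illustrative, not LeHaMoC's API), combining the Gaussian term with inflated variance and the Sawicki (2012) treatment of upper limits:

```python
import numpy as np
from scipy.special import erf

def log_likelihood(f_model, f_obs, sigma, log_f,
                   f_model_lim=None, f_lim=None, sigma_lim=None):
    # Detections: Gaussian likelihood with variance inflated by exp(2 ln f)
    sigma_tot2 = sigma**2 + np.exp(2.0 * log_f) * f_model**2
    lnL = -0.5 * np.sum((f_obs - f_model)**2 / sigma_tot2
                        + np.log(2.0 * np.pi * sigma_tot2))
    # Upper limits: integrate the Gaussian up to the limiting flux
    if f_lim is not None:
        arg = (f_lim - f_model_lim) / (np.sqrt(2.0) * sigma_lim)
        lnL += np.sum(np.log(0.5 * (1.0 + erf(arg))))
    return lnL
```

A model passing close to the data points yields a higher log-likelihood than one offset from them, and an upper limit lying far above the model contributes essentially nothing, as expected.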
We first apply an SSC model with synchrotron self-absorption and intrinsic γγ pair production included. The electron distribution is modeled as a power law of slope s_e that starts from γ_e,min and extends to a sharp high-energy cutoff at γ_e,max. In the MCMC fitting, all parameters are sampled from uniform distributions in logarithmic space, except for the power-law slope of the particle distribution, which is sampled from a uniform distribution in linear space. We produce a chain with 48 walkers that are propagated for 50,000 steps each and discard the first 5,000 steps of each chain as burn-in. The corner plot showing the posterior distributions of the parameters is presented in Fig. C.1, and a random selection of SSC spectra computed using 100 posterior values is shown in Fig. 10. For comparison purposes, we include the SSC spectrum used in Petropoulou et al. (2020) as an "eyeball" description of the SED (model D). The displayed SSC spectra highlight the importance of obtaining flux measurements with small uncertainties. In particular, the UV and X-ray measurements act as nodes for the model and help decrease the spread in the predicted synchrotron component of the SED. On the contrary, the high-energy hump of the SED is less constrained (due to upper limits and large error bars in the LAT energy range), leading to a larger spread in SSC spectra (and physical conditions) that are statistically compatible with the observations. While model D of Petropoulou et al. (2020) falls within the range of SSC spectra found by the Bayesian inference, it is only one of the possible realizations of the emitting region (see also the corner plot in Fig. C.1).
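The sampling setup described above can be sketched as follows. The parameter names and bounds below are illustrative placeholders, not the values used in the fit; the resulting prior and initial positions are what one would pass to emcee's EnsembleSampler:

```python
import numpy as np

# Hypothetical parameter bounds: all sampled uniformly in log10,
# except the power-law slope s_e, which is sampled in linear space
bounds = {
    "log10_B":      (-2.0, 1.0),   # magnetic field [G]
    "log10_R":      (14.0, 17.0),  # source radius [cm]
    "log10_ge_max": (4.0, 7.0),    # maximum electron Lorentz factor
    "s_e":          (1.5, 3.5),    # power-law slope (linear space)
}

def log_prior(theta):
    # Flat prior inside the bounds, -inf outside
    for value, (lo, hi) in zip(theta, bounds.values()):
        if not (lo <= value <= hi):
            return -np.inf
    return 0.0

def init_walkers(n_walkers=48, seed=0):
    # Start each walker at a random position inside the bounds
    rng = np.random.default_rng(seed)
    lows = np.array([lo for lo, _ in bounds.values()])
    highs = np.array([hi for _, hi in bounds.values()])
    return rng.uniform(lows, highs, size=(n_walkers, len(bounds)))

# p0 = init_walkers()
# sampler = emcee.EnsembleSampler(48, p0.shape[1], log_posterior)
# sampler.run_mcmc(p0, 50_000)  # then discard the first 5,000 steps as burn-in
```

The log-posterior passed to the sampler is the sum of this prior and the data likelihood evaluated on the radiative model.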
We next consider the simplest lepto-hadronic scenario, in which relativistic protons are injected into the source with a power-law distribution and produce photons only via synchrotron radiation. This is known as the proton-synchrotron (PS) model for γ-ray blazar emission (Aharonian 2000; Mücke & Protheroe 2001). For this illustrative example, we neglect photo-hadronic interactions but note that these may not be negligible for all the parameters explored by the walkers in the chain. We defer a full lepto-hadronic MCMC modeling to future work. Without imposing any constraints on the parameters (e.g. same power-law slopes or maximum energies for the electron and proton distributions), the low- and high-energy humps of the SED are decoupled. This, in combination with the γ-ray upper limits, leads to solutions with a spread in the predicted γ-ray peak energy. This spread in solutions is also reflected in the broad posterior distributions of the parameters describing the proton distribution. For example, the minimum proton energy and power-law slope cannot be constrained, thus justifying the simplifying assumption of fixing these parameters to default values (see Petropoulou et al. 2020).

Electromagnetic cascades in high-energy neutrino sources
NGC 1068 is a prototypical Seyfert 2 galaxy at d_L ≃ 14.4 Mpc (Meyer et al. 2004) exhibiting AGN and starburst activity in its central region (Neff et al. 1994). Due to its proximity, it is the best-studied galaxy of this type. The estimated mass of the supermassive black hole (SMBH) in NGC 1068 is M ≃ 10^7.3 M_⊙ (Panessa et al. 2006), i.e. approximately 20 million solar masses, and the corresponding Schwarzschild radius is R_s ≃ 6 × 10^12 cm. It is a well-known bright X-ray source with an estimated intrinsic X-ray luminosity (in the 0.1-200 keV range) of ∼4.6 × 10^43 erg s^−1 (Bauer et al. 2015). NGC 1068 is also detected in GeV γ-rays by Fermi-LAT (Lenain et al. 2010; Abdollahi et al. 2020), but upper limits are set at even higher energies by MAGIC (Acciari et al. 2019). Recently, the IceCube Collaboration reported a high-energy neutrino excess (at 4.2σ confidence level) associated with NGC 1068 (IceCube Collaboration et al. 2022), strengthening previous reports of a 2.9σ excess in the 10-year time-integrated search (Aartsen et al. 2020).
Many models for the neutrino and/or γ-ray emission of NGC 1068 can be found in the literature (e.g. Murase et al. 2020; Kheirandish et al. 2021; Inoue et al. 2022; Eichmann et al. 2022). We do not attempt to offer a new interpretation but rather present the capabilities of our code using one of the published one-zone models for NGC 1068. In particular, we adopt the AGN corona scenario of Murase (2022), in which neutrinos are produced via pp interactions in a magnetized corona and γ-rays are attenuated by the disk-corona radiation field. For simplicity, we ignore the emission from accelerated (primary) electrons in the corona and focus on the electromagnetic cascade developed by secondary pairs. While details of the model can be found in Murase (2022), we outline below the basic model ingredients for completeness.
We assume that protons are accelerated in the corona into a power-law distribution, i.e. dN/dE_p ∝ E_p^−s_p for E_p ≥ E_p,min, where E_p = m_p γ_p c². Relativistic protons can interact with the cold (i.e. non-relativistic) protons present in the corona. The number density of the latter is approximated as n_p,c ∼ √3 τ_T/(ζ_e σ_T R), where τ_T represents the coronal optical depth to Thomson scattering, ζ_e is the pair loading factor (i.e. the number of pairs per proton), and R is the characteristic size of the corona. Introducing the normalized radius R̃ ≡ R/R_s, the coronal cold proton density is estimated as n_p,c ≃ 10^9 cm^−3 τ_T,−1 ζ_e,0^−1 R̃_1.5^−1, where we use the notation q_x = q/10^x.
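This scaling is easy to verify numerically. The sketch below (our own check, with rounded constants and the fiducial parameter values of the scaling relation) reproduces the quoted ∼10^9 cm^−3:

```python
import numpy as np

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
R_S = 6.0e12          # Schwarzschild radius of NGC 1068 [cm], for M ~ 10^7.3 Msun

def coronal_proton_density(tau_T=0.1, zeta_e=1.0, R_tilde=10**1.5):
    """n_p,c ~ sqrt(3) tau_T / (zeta_e sigma_T R), with R = R_tilde * R_s."""
    R = R_tilde * R_S
    return np.sqrt(3.0) * tau_T / (zeta_e * SIGMA_T * R)
```

For τ_T = 0.1, ζ_e = 1, and R̃ = 10^1.5 this gives roughly 1.4 × 10^9 cm^−3, consistent with the estimate in the text.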
Given that the produced neutrino has a typical energy E_ν ∼ E_p/20, we require E_p,min ∼ 20 TeV to reproduce the low-energy end of the neutrino spectrum observed by IceCube. Because the spectrum of secondary particles produced in pp collisions closely follows the proton spectrum (Kelner et al. 2006a), we adopt for the parent proton distribution the same slope as the best-fit spectral index of the neutrino spectrum determined by IceCube (i.e. s_p = s_ν = 3.2). We assume that the luminosity of the accelerated protons is comparable to the intrinsic X-ray luminosity of the corona, namely L_p = 10^43.8 erg s^−1.
Following Murase (2022), we introduce the parameter ξ_B, defined as the ratio of the magnetic field energy density, U_B = B²/(8π), to the bolometric energy density of photons, U_ph = L_bol/(4πR²c). Here, B represents the magnetic field strength, and the photon field is assumed to be distributed within a spherical source of radius R. Given these considerations, the magnetic field strength can be expressed as B = [2 ξ_B L_bol/(R² c)]^{1/2}. Photons from neutral pion decay may pair-produce on the disk-corona photon field. For our example, we adopt the spectral template for L_bol = 10^45 erg s^−1 from Murase et al. (2020) (see Fig. 2 therein). Pairs produced via γγ pair production and charged pion decays emit synchrotron radiation and inverse Compton scatter photons to high energies, thus leading to the development of an electromagnetic cascade in the coronal region. Finally, all charged particles are assumed to escape from the corona on a timescale t_esc = 100 R/c, while photons and neutrinos escape on a timescale R/c.
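The implied field strength for the fiducial parameters can be checked with a few lines (our own sketch; R_s rounded to 6 × 10^12 cm as above):

```python
import numpy as np

C = 2.998e10    # speed of light [cm/s]
R_S = 6.0e12    # Schwarzschild radius [cm]

def coronal_B(xi_B=1.0, L_bol=1e45, R_tilde=100.0):
    """B from U_B = B^2/(8 pi) = xi_B * L_bol / (4 pi R^2 c)."""
    R = R_tilde * R_S
    return np.sqrt(2.0 * xi_B * L_bol / (R**2 * C))
```

For ξ_B = 1, L_bol = 10^45 erg s^−1, and R = 100 R_s this yields a field of a few hundred gauss; note that B ∝ 1/R, so more compact coronae host stronger fields, as discussed for Fig. 13.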
In Fig. 12 we present the broadband SED of our model for the case of an extended corona with radius R = 100 R_s and ξ_B = 1. Neutral pions decay into γ-ray photons with a spectrum similar to the neutrino spectrum (solid magenta line). These energetic photons are attenuated by the low-energy photons present (e.g. attenuation on the disk thermal emission is evident as the dip at energies E/(m_e c²) ≈ 10^6), leading to the production of secondary pairs that emit synchrotron radiation (solid blue line) and inverse Compton scatter photons from the disk-corona system (solid green line). For reference, we also plot the differential proton luminosity injected into the corona (dashed black line).
The high-energy photon and neutrino spectra of the model are compared to multi-messenger data of NGC 1068 in Fig. 13 for ξ_B = 1 and different values of the source radius, to illustrate the effects of γ-ray attenuation. For comparison, we also show the photon spectrum without accounting for γγ absorption (dashed colored lines). Our results reveal that the emission within the 0.1-10 GeV range is primarily generated through synchrotron radiation. In this particular model, the magnetic field strength is inversely proportional to the radius of the source. As a result, stronger magnetic fields are expected in more compact regions (with smaller radii), pushing the synchrotron spectrum of secondaries to higher energies. As anticipated, we observe a greater degree of γγ absorption in more compact regions, but in all cases the expected TeV emission from neutral pion decays is attenuated; hence, all models are consistent with the MAGIC upper limits. We note that we have not included other potential sources of attenuation, such as the thermal emission from the dust torus. The model neutrino spectrum falls within the IceCube uncertainty band (95% confidence level), as expected. It is worth noting that the computed neutrino spectrum exhibits a spectral break at 0.1 TeV. This feature arises from the adoption of the δ-function approximation for pion production at energies below this limit, as described in Kelner et al. (2006a); see also Appendix A. Because of this assumption, the neutrino spectrum below 0.1 TeV is harder than the one presented in Murase (2022), where the full neutrino spectrum produced by the minimum-energy pions in the system was computed.

Conclusions
In this work, we have presented our newly developed code, LeHaMoC, which aims to describe the multi-messenger emission of non-thermal astrophysical sources. LeHaMoC solves the kinetic equations using an implicit scheme, assuming that the particles occupy a homogeneous spherical region. The highlights of our work include the comprehensive treatment of various non-thermal processes, the efficient computational times achieved for different scenarios, the inclusion of energy-conserving schemes, and the adiabatic expansion of the source. These features make LeHaMoC a valuable tool for studying non-thermal sources and their associated emission mechanisms.
One of the notable achievements of LeHaMoC is the significant reduction in computation times compared to the ATHEνA code. We have demonstrated that LeHaMoC can solve steady-state problems involving leptons inside the emission region in approximately 2-3 seconds, which is a factor of 30-100 shorter than ATHEνA. For proton-proton steady-state scenarios, LeHaMoC achieves computation times on the order of 20 seconds, and for photohadronic steady-state scenarios it takes approximately 10 minutes (a factor of 3 faster than ATHEνA). The improvement in execution time can be attributed to the following factors. In contrast to LeHaMoC, the time discretization in ATHEνA is not user-configurable; instead, it is predefined within the algorithm. It is established based on the most significant derivative encountered, considering all processes and grid points. As a result, if the initial conditions produce substantial derivatives, the time step becomes quite small, often extending the computation required to reach a steady state over a considerable time, sometimes spanning the first few light-crossing times. On the contrary, the steady-state solutions are not sensitive to the choice of time step within the Chang & Cooper scheme employed in LeHaMoC; hence, larger time steps can be employed, leading to faster computations for steady-state problems. Besides the different numerical schemes, there are discrepancies in the way certain processes are incorporated within each code. A more detailed explanation of the process implementation in LeHaMoC can be found in Appendix A. Notable distinctions are the injection rates of the Bethe-Heitler and photopion processes. In the ATHEνA code, the Bethe-Heitler pair production rates and photopion production rates are based on tabulated Monte Carlo results from Protheroe & Johnson (1996) and Mücke et al.
(2000), respectively (for the implementation in ATHEνA, see Mastichiadis et al. 2005; Dimitrakoudis et al. 2012). In LeHaMoC we instead use the analytical parametrizations of Kelner & Aharonian (2008) for the energy distributions of secondary particles produced in photopion interactions, and we simplify the calculation of the pair distribution from Bethe-Heitler production as described in Appendix A. The improved efficiency of LeHaMoC allows for more extensive simulations and a better understanding of high-energy astrophysical sources.
The small computational time of LeHaMoC enables the application of advanced data-fitting techniques, such as Markov chain Monte Carlo (MCMC) methods. Using emcee, we have demonstrated the capability of LeHaMoC to fit the SED of a blazar in two different scenarios, taking into account the observations in the X-rays and γ-rays. By employing MCMC techniques, we gain a deeper understanding of the underlying physical parameters and their associated uncertainties, enabling us to effectively reproduce the observed spectrum. Furthermore, our analysis revealed that the spread in fluxes across the electromagnetic spectrum can be larger than the systematic differences of 10% between radiative transfer codes, including the ATHEνA code. Additionally, our results emphasize the importance of dense sampling of the SED through multi-wavelength observing campaigns to capture the full complexity of non-thermal sources.
Other significant aspects of the code are the inclusion of an energy-conserving treatment between the particles and the photons, and the consideration of photons produced by the secondary particles as targets for particle-photon interactions. The latter feature allows the study of optically thick environments where feedback effects are important for driving the oscillatory (limit-cycle) behavior found in hadronic supercriticalities (Petropoulou & Mastichiadis 2012; Mastichiadis et al. 2005, 2020). By accurately accounting for these interactions, LeHaMoC may provide a more comprehensive description of the photon spectra escaping such environments.
Despite the advancements achieved in LeHaMoC, there are still challenges to be addressed. The computational time needed for photohadronic scenarios currently prohibits MCMC fitting applied to high-energy astrophysical sources. Furthermore, small changes in the physical parameters may lead to different spectral characteristics due to the interplay of the physical processes. To overcome these challenges, one potential way forward is to explore the use of neural networks as an alternative to numerical codes. Neural networks have gained traction in various fields, such as astrophysics (Hezaveh et al. 2017), space weather forecasting (Camporeale 2019), and biology (Wang et al. 2019). Implementing neural networks in the analysis of high-energy astrophysical emission could provide a more efficient approach to fitting big data sets by reducing significantly the computation time while provid-

Photon-photon pair production
A process that acts as a sink of high-energy photons, and as a source term for relativistic electrons, is photon-photon (γγ) pair creation. We introduce the absorption coefficient for a high-energy photon of frequency ν₁ as

α_γγ(ν₁) = ∫ dϵ n_γ(ϵ) σ_γγ(y), (A.16)

where σ_γγ is the cross-section of the process and y = ϵϵ₁, with ϵ and ϵ₁ being the target and γ-ray photon energies, respectively, normalized to the electron rest-mass energy. We use an approximation for the cross-section from Coppi & Blandford (1990):

σ_γγ(y) ≃ 0.652 σ_T [(y² − 1)/y³] ln(y) Θ(y − 1), (A.17)

where Θ(y) is the Heaviside step function. We therefore introduce a γγ absorption loss term in Eq. 3 that reads

L_γγ(ν₁) = c α_γγ(ν₁) N_γ(ν₁). (A.18)

The volumetric production rate of relativistic electrons (or positrons) of Lorentz factor γ_e is given by (Mastichiadis & Kirk 1995)

j_γγ(γ_e) = 4 n_γ(2γ_e) ∫ dϵ n_γ(ϵ) R_γγ(2γ_e ϵ), (A.19)

where R_γγ is an approximation to the reaction rate of the process (Mastichiadis & Kirk 1995). In the previous equation, all angular effects are neglected, and we have assumed that the emerging pair carries the energy of the absorbed γ-ray (i.e. each lepton has γ_e = ϵ₁/2). This leads to the following source term in Eq. 1:

dN/(dt dγ_e) = ∫_V j_γγ(γ_e) dV. (A.20)
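The approximate cross-section of Eq. A.17 is straightforward to code. The sketch below is our own illustration (not taken from LeHaMoC); it returns the cross-section in cm² for the dimensionless invariant y = ϵϵ₁:

```python
import numpy as np

SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]

def sigma_gg(y):
    """Coppi & Blandford (1990) approximation to the gamma-gamma
    pair-production cross-section; zero below the threshold y = 1."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    out = np.zeros_like(y)
    m = y > 1.0
    out[m] = 0.652 * SIGMA_T * (y[m]**2 - 1.0) / y[m]**3 * np.log(y[m])
    return out if out.size > 1 else out[0]
```

The cross-section vanishes below threshold, peaks at y of a few, and declines as ln(y)/y at large y, which is why a γ-ray is absorbed mostly by target photons just above the pair-production threshold.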

Bethe-Heitler pair production
Bethe-Heitler pair production refers to the creation of an electron-positron pair by a relativistic proton interacting with a low-energy photon. For ultra-relativistic protons (γ_p ≫ 1), the pair spectrum is given by a triple integral over the target photon distribution and the Blumenthal (1970) cross-section (Eq. A.21; see Kelner & Aharonian 2008), whose innermost integration runs from E_−,min (a function of γ_p and γ_e) to E_−,max ≡ ω − 1, where ω is the photon energy in the proton rest frame. The upper panel of Fig. A.1 depicts the value of this integral when γ_p > γ_e, whereas the lower panel represents the scenario with the reverse condition. Notably, we observe a division of the integral into two distinct regions at the plane where γ_p = γ_e. By using interpolation, the desired integral values can be obtained efficiently, reducing the computational time by a factor of ∼20-30. In order to calculate the volumetric production rate of pairs from a proton distribution n_p(γ_p) ≡ N_p(γ_p)/V, we convolve the single-proton pair spectrum of Eq. A.21 with n_p(γ_p). The energy loss rate of a high-energy proton with Lorentz factor γ_p is given by (Blumenthal 1970)

dγ_p/dt = −(α_f r_e² c m_e/m_p) ∫_2^∞ dξ n_γ(ξ/(2γ_p)) ϕ(ξ)/ξ²,

where α_f is the fine-structure constant, r_e the classical electron radius, and the function ϕ(ξ) is adopted from Fig. 2 of Blumenthal (1970). The energy loss term in Eq. 2 then follows directly.
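The precompute-then-interpolate strategy described above can be sketched as follows. The integrand here is a cheap, smooth stand-in (the real one involves the Blumenthal 1970 cross-section); the point is only to illustrate tabulating an expensive integral on a grid once and replacing subsequent evaluations with interpolation:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import RegularGridInterpolator

def slow_integral(log_gp, log_ge):
    # Toy stand-in for the expensive innermost Bethe-Heitler integral
    val, _ = quad(lambda e: np.exp(-e) * (1.0 + 0.1 * log_gp * log_ge + e)**0.5,
                  0.0, 10.0)
    return val

# Precompute on a grid in (log gamma_p, log gamma_e) once per simulation...
log_gp_grid = np.linspace(1.0, 6.0, 40)
log_ge_grid = np.linspace(1.0, 6.0, 40)
table = np.array([[slow_integral(gp, ge) for ge in log_ge_grid]
                  for gp in log_gp_grid])
interp = RegularGridInterpolator((log_gp_grid, log_ge_grid), table)

# ...then evaluate by interpolation instead of re-integrating
approx = interp([[3.3, 4.7]])[0]
exact = slow_integral(3.3, 4.7)
```

Because the tabulated function varies smoothly across the grid, linear interpolation reproduces the direct integration to well within a percent here, at a small fraction of the cost once the table amortizes over many evaluations.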

Photomeson production process
Photomeson interactions refer to the process by which low-energy photons interact with high-energy protons, producing mainly π⁰ and π± mesons. The produced pions decay into γ-rays and leptons. The production rate of a given particle species with energy E_l, where l can be γ, e⁺, e⁻, ν_μ, ν̄_μ, ν_e, or ν̄_e, from a proton distribution n_p(E_p) ≡ N_p(E_p)/V is given by (Kelner & Aharonian 2008)

Q_l(E_l) = ∫ (dE_p/E_p) n_p(E_p) ∫ dϵ n_γ(ϵ) Φ_l(η, x), (A.26)

where E_p = γ_p m_p c², ϵ is the photon energy normalized to m_e c², x is the ratio of the energy of the produced particle to the proton energy, η = 4ϵ̃E_p/(m_p c²)² (with ϵ̃ the photon energy), and Φ_l(η, x) is the energy distribution of species l; see Eq. (31) and Tables I-III in Kelner & Aharonian (2008). Therefore, for the source term of relativistic particles with energy E_l in Equations 1, 3, and 4 we use Q_l(E_l). The energy loss rate of a high-energy proton with Lorentz factor γ_p is given by (Begelman et al. 1990)

dγ_p/dt = −(c/(2γ_p)) ∫_{ϵ̄_th}^∞ dϵ̄ σ_pπ(ϵ̄) k_p(ϵ̄) ϵ̄ ∫_{ϵ̄/(2γ_p)}^∞ dν′ n′_γ(ν′)/ν′², (A.28)

where n′_γ(ν′) ≡ dN_γ/(dV dν′) denotes the density of photons in the emission region's frame of reference, while all barred quantities are measured in the proton's rest frame. The energy-dependent cross-section and the inelasticity of the process, σ_pπ and k_p, are adopted from Morejon et al. (2019) and Stecker (1968), respectively.
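A quick numerical illustration of the photomeson kinematics follows from the interaction parameter η: production requires η above the threshold value η₀ ≈ 0.313 (Kelner & Aharonian 2008), which sets the minimum proton energy for a given target photon energy. This is our own back-of-the-envelope sketch:

```python
M_P = 938.272e6   # proton rest energy [eV]
ETA0 = 0.313      # threshold of eta = 4 eps E_p / (m_p c^2)^2 (Kelner & Aharonian 2008)

def E_p_threshold(eps_eV):
    """Minimum proton energy [eV] for photomeson production
    on a target photon of energy eps_eV."""
    return ETA0 * M_P**2 / (4.0 * eps_eV)
```

For an optical target photon of ∼1 eV this gives a threshold near 7 × 10^16 eV, while X-ray targets (as in the corona of NGC 1068) lower the threshold by several orders of magnitude.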
The energy loss term in Eq. 2 then follows directly from this loss rate.

High-energy protons may also interact with cold protons of number density n_g in the ambient gas. The production rate of a particle species l with energy E_l from such inelastic pp collisions is (Kelner et al. 2006b)

Q_l(E_l) = c n_g ∫₀¹ (dx/x) σ_pp,inel(E_l/x) n_p(E_l/x) F_l(x, E_l/x),

where E_p = γ_p m_p c², x = E_l/E_p, and σ_pp,inel is the cross-section of inelastic pp interactions. The function F_l(x, E_l/x) is related to the energy distribution of particle species l. We use the empirical functions described in Kelner et al. (2006b) for E_p > 0.1 TeV and x_l = E_l/E_p ≥ 10^−3, while for E_p < 0.1 TeV the spectra of γ-ray photons and other secondaries are extended to lower energies using the δ-function approximation for pion production described in the same work. Therefore, the source term of relativistic particles with energy E_l in Eqs. 1, 3, and 4 is given by Q_l(E_l). The energy loss rate of a high-energy proton with Lorentz factor γ_p ≫ 1 is given by Eq. 4.11 in Mannheim & Schlickeiser (1994); see Eq. A.32.

As a final test (Appendix B), we consider the constant injection of electrons with a power-law distribution between γ_min and γ_max into a source expanding with velocity V_exp = 0.1c. This problem has no steady-state solution and is therefore ideal for examining the performance of a numerical solver in time-dependent scenarios. The analytical solution to the kinetic equation is given by

N_e(γ, t) = K [(1 + β₀t)/(β₀ s_e)] [1 − (1 + β₀t)^(−s_e)] γ^(−s_e), γ_min ≤ γ ≤ γ_max, (B.9)

where β₀ ≡ V_exp/R₀ and K is a normalization constant. We discretize Eq. B.1 in time t_i and Lorentz factor γ_j in terms of the particle number,

V_{1,j} N^{i+1}_{e,j−1} + V_{2,j} N^{i+1}_{e,j} + V_{3,j} N^{i+1}_{e,j+1} = N^i_{e,j} + Q^i_j Δt, (B.10)

where the coefficients are given by Eqs. B.2, B.3, and B.4. On the other hand, when we discretize the equation written in terms of the number density, we find

V_{1,j} n^{i+1}_{e,j−1} + V_{2,j} n^{i+1}_{e,j} + V_{3,j} n^{i+1}_{e,j+1} = n^i_{e,j} + Q^i_j Δt, (B.11)

where the coefficients are given by Eqs. B.6, B.7, and B.8. The only difference between the coefficients of the two equations lies in V_{2,j}: when solving the kinetic equation for the number density, an additional term, 3Δt V_exp/R^{i+1}, arises from the time derivative of the volume. In Figs. B.2 and B.3 we display both the numerical and analytical solutions for the two cases, considering two different time steps for each scenario. Our findings reveal that with the particle-number approach the numerical and analytical solutions exhibit the same behavior for both choices of Δt. However, when solving the kinetic equation for the number density of particles, we observe a discrepancy between the analytical and numerical solutions for larger time steps. This discrepancy arises when the chosen time step Δt > R₀/(3V_exp). Consequently, we choose to solve the kinetic equation for the particle number rather than the number density, as it yields accurate numerical solutions without the need for very small time steps.
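The tridiagonal structure of Eq. B.10 means that each implicit step reduces to a single Thomas-algorithm solve. The sketch below is a toy version with illustrative coefficients (pure advective losses γ̇ ∝ γ plus injection, on an upwind implicit stencil), not the actual V_{1,j}-V_{3,j} of LeHaMoC:

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system; lower[0] and upper[-1] are unused."""
    n = len(rhs)
    c = np.empty(n); d = np.empty(n)
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for j in range(1, n):
        denom = diag[j] - lower[j] * c[j - 1]
        c[j] = upper[j] / denom
        d[j] = (rhs[j] - lower[j] * d[j - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for j in range(n - 2, -1, -1):
        x[j] = d[j] - c[j] * x[j + 1]
    return x

# Implicit steps of dN/dt = d(gdot N)/dgamma + Q with gdot = beta0 * gamma
g = np.logspace(1, 5, 200)                       # Lorentz-factor grid
dg = np.gradient(g)
beta0, dt = 0.1, 0.5
Q = np.where((g > 1e2) & (g < 1e4), 1.0, 0.0)    # power-law band injection
N = np.zeros_like(g)
for _ in range(100):
    lower = np.zeros_like(g)                     # upwind: no coupling to j-1
    diag = 1.0 + dt * beta0 * g / dg
    upper = -dt * beta0 * np.roll(g, -1) / dg    # flux arriving from bin j+1
    upper[-1] = 0.0
    N = thomas_solve(lower, diag, upper, N + Q * dt)
```

Because the implicit matrix is diagonally dominant with non-positive off-diagonals, the solution stays non-negative for any Δt, which is the property that lets LeHaMoC take large time steps toward steady state.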

Fig. 1 .
Fig. 1. Impact of grid resolution on code execution time and luminosity balance. Top panel: relation between the code execution time and the number of grid points per logarithmic decade in particle energy for the three scenarios of fast-cooling electrons described in the text (see Sec. 3.1). Colored triangles indicate three choices for the number of grid points per decade that are used for the results shown in Fig. 2. Bottom panel: ratio of bolometric photon luminosity to electron injection luminosity versus the number of grid points per logarithmic decade. A ratio of one indicates the expected energy balance between electrons and photons.

Fig. 2 .
Fig. 2. Analytical and numerical comparison of a synchrotron cooling problem and the impact of grid resolution. Top panel: comparison of numerical (solid colored lines) and analytical (dashed black line) solutions for synchrotron-cooling electrons in a spherical blob with a constant magnetic field. Each numerical result was calculated using a different number of grid points per decade in electron Lorentz factor (n_ppd), as indicated in the legend (see also markers in the top panel of Fig. 1). Bottom panel: ratio χ between the numerical and analytical solutions.

Fig. 3 .
Fig. 3. Steady-state SSC model computed with the ATHEνA and LeHaMoC codes. We show a comparison of (from top to bottom): the steady-state electron spectra, photon spectra, and production rate of pairs due to γγ absorption. The bottom panel of each plot shows the ratio χ of the spectra computed with ATHEνA and LeHaMoC. All displayed spectra are measured in the comoving frame of the blob. The parameters used for this test can be found in Table 2 under column Test 1.

Fig. 5 .
Fig. 5. Comparison of numerical (dashed, dotted, and dash-dotted lines) and analytical (solid lines) solutions for adiabatically cooling electrons in an expanding blob with a constant magnetic field. Each numerical result was calculated using a different time step, as indicated in the legend. The parameters used for this test can be found in Table 2 under column Test 3.

Fig. 6 .
Fig. 6. Evolution of the total number of particles N_e with time for the same case shown in Fig. 5. Different line types show N_e as derived from the numerical solutions for different choices of the time step (see inset legend). The black lines represent the number of particles calculated by considering all grid points, while the blue lines represent the integration of the particle distribution within the analytically expected range of Lorentz factors, i.e. γ_e,min(t) to γ_e,max(t). For comparison, the analytical solution yields N_e,tot = 4.9 × 10^50 electrons.

Fig. 7 .
Fig. 7. Evolution of the synchrotron self-absorption frequency ν_ssa with time for the same case shown in Fig. 5. Different line types show ν_ssa as derived from the numerical solutions for different choices of the time step (see inset legend). For comparison, we show the analytical prediction for the late-time (t ≫ R₀/V_exp) evolution of ν_ssa.

Fig. 8 .
Fig. 8. Comparison of the steady-state photon spectra computed with the ATHEνA code and LeHaMoC for a leptohadronic blazar emission scenario. All displayed spectra are measured in the comoving frame of the spherical blob. The colored dashed lines indicate the contributions to the photon spectrum from pairs (blue and red) and from the decay of neutral pions to γ-rays before attenuation (green). The parameters used for this test can be found in Table 2 under column Test 4.

Fig. 9 .
Fig. 9. Comparison of the steady-state all-flavor neutrino spectra computed with ATHEνA and LeHaMoC codes for Test 4.

Fig. 10 .
Fig. 10. SED of 3HSP J095507.9+355101 using data from Giommi et al. (2020). Black symbols indicate observations taken on January 11, 2020 (i.e. soon after the arrival of the neutrino alert). Magenta symbols show the time-integrated Fermi-LAT data over a period of 250 days prior to the neutrino alert. SSC spectra computed for a random sample of 100 points from the posterior distributions are overplotted (solid orange lines). For comparison, the SSC model from Petropoulou et al. (2020) is also shown (dashed blue line). Photon attenuation by the extragalactic background light (EBL) is not taken into account.
Fig. 11 shows 100 SEDs (solid orange lines) obtained by randomly selecting 100 parameter sets from the posterior distributions shown in Fig. C.2. The proton-synchrotron model discussed in Petropoulou et al. (2020) is also plotted for comparison purposes (dashed blue line).

Fig. 11 .
Fig. 11. Same as Fig. 10 but for a proton-synchrotron (PS) model of the SED. For comparison, the PS model from Petropoulou et al. (2020) is also shown (dashed blue line).

Fig. 12 .
Fig. 12. Spectral energy distribution of NGC 1068 for an emitting region with R = 100 R_s. The total photon spectrum originating from this region is shown by the solid black line. The solid red line represents the combined contribution of the corona and disk components (template adopted from Murase et al. 2020). Solid blue and green lines represent the synchrotron and ICS emission produced by the secondary particles, respectively. The neutrino spectrum is depicted by the purple line, and the black dashed line represents the injected proton luminosity.

Fig. 13 .
Fig. 13. γ-ray and neutrino emission of NGC 1068. Black symbols and grey arrows indicate γ-ray measurements by Fermi-LAT (Abdollahi et al. 2020) and upper limits by MAGIC (Acciari et al. 2019), respectively. Colored solid (dashed) lines represent the γ-ray spectrum produced by the electromagnetic cascade inside the corona with (without) γγ absorption, for different choices of radius and magnetic field (see legend and text for details). The solid dark line represents the muon and antimuon neutrino spectrum, and the grey-shaded region covers all power-law neutrino fluxes consistent with the IceCube data at the 95% confidence level (IceCube Collaboration et al. 2022).

Fig. A. 1 .
Fig. A.1.Third integral of Eq.A.19 for several values of the integration limits here denoted as E −,min and E −,max .The top panel shows the integral's value for γ p > γ e , while the bottom panel shows the integral's values for γ p < γ e .
Proton-proton (pp) inelastic collisions
High-energy protons may interact with non-relativistic (cold) protons in dense astrophysical environments. Inelastic proton-proton collisions lead to the production of secondary particles: γ-rays, pairs, and neutrinos. The production rate of a given particle species with energy E_l, where l can be γ, e⁺, e⁻, ν_μ, ν̄_μ, ν_e, or ν̄_e, from a proton distribution n_p(E_p) ≡ N_p(E_p)/V is given by (Kelner et al. 2006b)

Q_l(E_l) = c n_g ∫₀¹ (dx/x) σ_pp,inel(E_l/x) n_p(E_l/x) F_l(x, E_l/x),

where n_g is the number density of the cold protons.

Fig. B. 1 .
Fig. B.1. Comparison between numerical solutions (dashed and dash-dotted lines) and analytical solutions (solid lines) using the same parameters as Test 3, solving the kinetic equation for the number density. The numerical results were obtained with varying time steps, as indicated in the legend.

Fig. B. 2 .
Fig. B.2. Comparison of numerical solutions (dashed and dotted lines) obtained by solving Eq. B.1 and analytical solutions (solid lines) for adiabatically cooling electrons in an expanding blob with constant injection. Numerical solutions for two choices of the time step are shown, as indicated in the legend. The evolution of the particle distribution in time is indicated by the different colors.

Fig. B. 3 .
Fig. B.3. Comparison of numerical solutions (dashed and dotted lines) obtained by solving Eq. B.5 and analytical solutions (solid lines) for adiabatically cooling electrons in an expanding blob with constant injection. Each numerical result was calculated using a different time step, as indicated in the legend.

Fig. C. 2 .
Fig. C.2. Corner plot showing the posterior distributions of the parameters of the PS model of 3HSP J095507.9+355101. Dashed lines in the histograms indicate the median value of each parameter and the 68% range of values. The parameter values used in the PS model of Petropoulou et al. (2020) are overplotted (magenta square) for comparison.

Table 1 .
Parameter values for the code performance tests in Sec. 3.1.

Table 2 .
Parameter values for scenarios used to test the accuracy and performance of LeHaMoC.
All energies above are normalized to the electron rest-mass energy, and the Bethe-Heitler cross-section is adopted from Blumenthal (1970). The last integral in Eq. A.21, involving the cross-section, can be computationally challenging and can increase the computation time of a simulation significantly. To overcome this problem, we calculate the integral for a large sample of combinations of γ_e, γ_p, and ω, given the parameters of each simulation, and store the results in an array. In Fig. A.1 we illustrate the third integral of Eq. A.21 for several values of the integration limits E_−,min and E_−,max. The energy loss rate of protons from inelastic pp collisions is given by (Eq. 4.11 in Mannheim & Schlickeiser 1994)

dE_p/dt = −K_pp c n_g E_p σ_pp,inel(E_p) Θ(E_p − E_p,th), (A.32)

where K_pp ≈ 0.5 is the inelasticity, E_p,th = γ_p,th m_p c², and γ_p,th = 1 + (m_π/m_p)(2 + m_π/(2m_p)). The energy loss term in Eq. 2 can then be written accordingly.