A&A, Volume 635, March 2020
Article Number: A148
Number of page(s): 16
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201937355
Published online: 24 March 2020
Unbiased Monte Carlo continuum radiative transfer in optically thick regions
University of Kiel, Institute of Theoretical Physics and Astrophysics, Leibnizstrasse 15, 24118 Kiel, Germany
e-mail: akrieger@astrophysik.uni-kiel.de
Received: 19 December 2019
Accepted: 11 February 2020
Radiative transfer describes the propagation of electromagnetic radiation through an interacting medium. This process is often simulated with the Monte Carlo method, which involves the probabilistic determination and tracking of simulated photon packages. In the regime of high optical depths, this approach encounters difficulties, since a proper representation of the various physical processes can only be achieved by considering high numbers of simulated photon packages. As a consequence, the demand for computation time rises accordingly, which in practice puts a limit on the optical depth of models that can be simulated. Here we present a method that aims to solve the problem of high optical depths in dusty media and that relies solely on the use of unbiased Monte Carlo radiative transfer. To that end, we identified repeatedly occurring and simulated processes, precalculated their outcome, stored it in multidimensional cumulative distribution functions, and used that outcome to immediately replace the basic Monte Carlo transfer during a simulation. During the precalculation, we generated emission spectra as well as deposited energy distributions of photon packages traveling from the center of a sphere to its rim. We carried out a performance test of the method to confirm its validity and find a boost in computation speed of up to three orders of magnitude. We then applied the method to a simple model of a viscously heated circumstellar disk, and we discuss the necessity of solving the optical depth problem for a proper temperature calculation. We find that the impact of an incorrect treatment of photon packages in highly optically thick regions extends even to optically thin regions, thus changing the overall observational appearance of the disk.
Key words: methods: numerical / radiative transfer / scattering / polarization / protoplanetary disks
© ESO 2020
1. Introduction
Nowadays, the most widely used method to perform radiative transfer simulations in astrophysics is the Monte Carlo approach. The area of application covers a wide range of astrophysical objects, such as protoplanetary or circumbinary disks, Bok globules, filaments, or planetary atmospheres, which are subjects of studies of various codes, for example, MC3D (Wolf et al. 1999), Mol3D (Ober et al. 2015), and POLARIS (Reissl et al. 2016). During these simulations, the photon packages that are generated travel through the simulated model space and interact with matter, for example dust, by scattering, absorption, and reemission. The particular path of a photon package as well as its interactions are determined in a probabilistic way. Throughout this process, thermal energy is deposited in the dust, which increases its temperature. At every point of interaction, the photon package has a chance to be absorbed or to be scattered and thus continue its path through the model space. Therefore, every photon package has to be tracked from its first emission to the moment it leaves the simulated space. The number of interactions it undergoes increases strongly with the optical depth of the medium, resulting in a high computational demand (e.g., Baes et al. 2016; Gordon et al. 2017; Camps & Baes 2018). This problem is usually handled by imposing a maximum number of interactions whose exceedance results in the removal of the photon package. This generally leads to an underestimation of the ambient temperature and consequently to an altered simulated observational appearance. Additionally, estimating the error caused by this removal is difficult at the very least.
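As an illustration of this cost, the following toy model (our own sketch, not taken from any of the codes cited above) propagates photon packages through a homogeneous one-dimensional slab and discards any package that exceeds a maximum number of interactions; the mean interaction count grows steeply, roughly quadratically, with the slab's optical depth:

```python
import math
import random

def propagate_package(tau_slab, max_interactions, rng):
    """One photon package emitted at the centre of a homogeneous 1D slab of
    total optical depth tau_slab, with isotropic reemission at every
    interaction. Returns the number of interactions before escape, or None
    if the cap is exceeded and the package is discarded."""
    tau = 0.5 * tau_slab                      # position in optical-depth units
    mu = 2.0 * rng.random() - 1.0             # direction cosine
    n = 0
    while True:
        # free path to the next interaction: inverse transform of exp(-tau)
        tau += mu * (-math.log(1.0 - rng.random()))
        if not 0.0 < tau < tau_slab:
            return n                          # package escaped the slab
        n += 1
        if n > max_interactions:
            return None                       # package removed: its energy is lost
        mu = 2.0 * rng.random() - 1.0         # isotropic new direction

rng = random.Random(1)
mean_n = {}
for tau_slab in (1.0, 4.0, 16.0):
    counts = [propagate_package(tau_slab, 10**6, rng) for _ in range(2000)]
    mean_n[tau_slab] = sum(counts) / len(counts)   # grows roughly ~ tau_slab**2
```

Setting the cap too low would turn some of the returned counts into `None`, i.e., discarded packages whose deposited energy is never accounted for.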
Being able to simulate optically thick regions is of great relevance, in particular if the radiation stems from these regions. For example, in the case of protoplanetary disks, the heating of dust grains by deeply embedded protoplanets poses a great challenge to the Monte Carlo approach. These young accreting protoplanets are embedded in an optically thick circumplanetary disk or envelope, which effectively shields their radiation from distant regions as well as observers. In order to infer properties of these dense regions, a proper temperature calculation is crucial. Multiple approximative methods have been proposed and implemented in order to solve the problem of high optical depths, for example, the partial diffusion approximation (Min et al. 2009), the modified random walk (Min et al. 2009; Robitaille 2010), or biasing techniques (Baes et al. 2016), each of which has its merits and drawbacks.
The diffusion approximation corrects the temperature of an optically thick region after the Monte Carlo simulation. It assumes that the probability for a photon package to leave this region without an interaction is zero and it obtains a temperature by solving the timeindependent radiative diffusion equation using the temperature at the boundary of this region (Wehrse et al. 2000; Min et al. 2009). By using this method, a low number of photon packages can already lead to a smooth temperature distribution, whereas otherwise, in a regular Monte Carlo simulation a low photon count results in a high level of noise. This method gives approximated temperature values and is typically used if the smoothness of the temperature distribution is required for further analysis, for example, when iteratively solving for a vertical hydrostatic density distribution of a circumstellar disk (Min et al. 2009).
A different method to handle the problem of high optical depths is the modified random walk. In a region with constant density and optical properties, a sphere with a certain radius can be defined, with the position of the photon package marking its center. If the optical depth of the sphere is high, the photon package scatters multiple times before it leaves the sphere for the first time. The modified random walk uses the results from the work of Fleck & Canfield (1984) for a random walk procedure representing a large number of scattering, absorption, and reemission events. Thus, it is possible to immediately move the photon package to the rim of the sphere and skip the time-consuming calculation of the scattering events that the photon package would undergo on its way toward the rim. This method can greatly enhance the performance of Monte Carlo simulations (e.g., Min et al. 2009; Robitaille 2010). It was originally developed for isotropic scattering (Fleck & Canfield 1984) and can be extended to nonisotropic scattering, assuming that the phase function for scattering only depends on the angle between the incident and outgoing particle direction (Ishimaru 1978). Unfortunately, there is no way to apply this method to Mie scattering (Mie 1908), which is anisotropic and for which the outcome of a scattering event depends on the polarization state of the incident photon. Additionally, the exact properties of a photon package that is emitted at the rim of the sphere are not defined; this includes, in particular, the package's Stokes vector and direction of emission.
Lastly, the biasing technique can address the problem of high optical depths by preferentially sampling high optical depths and, in turn, reducing the weight of the photon package by a corresponding factor (Baes et al. 2016). This method is very powerful since it offers the possibility of altering the pathway of a photon package such that it caters to the individual goals of a simulation. This freedom comes with the difficulty of quantifying the actual error of the result, and the method should thus be applied with caution. If the photon package scatters multiple times, the weighting procedure has to be applied after every scattering event, which can result in particularly high weighting factors. Furthermore, this method clearly demotes certain paths that the photon package would have traveled along when using unbiased Monte Carlo radiative transfer (Baes et al. 2016).
In the following, we describe a new method to solve the problem of high optical depths, which (i) is Monte Carlo based, (ii) can be used during a regular Monte Carlo simulation, (iii) does not interfere with multiple other prominent optimization techniques (e.g., continuous absorption of photon packages, immediate reemission, peel-off), (iv) does not introduce an additional bias, and (v) is (theoretically) applicable to arbitrarily high optical depths. Section 2 describes the basic principles of a Monte Carlo radiative transfer code. This is followed by a derivation of the variables needed for the implementation of our method as well as a proof of their scalability, which is crucial since it ensures the viability of the method. We proceed to describe the method and discuss its limitations as well as upgrades before we measure its performance in a testbed. In Sect. 3 we apply our method to a simple model of a viscously heated circumstellar disk. We compare the results of the basic Monte Carlo method with the results of our method and study the impact of a limit on the number of interactions on the calculated disk temperature. We then point out the necessity of using a method that is capable of dealing with high optical depths. We discuss the short- as well as the long-range impact of an inaccurate treatment of high optical depths on the resulting disk temperature and thus on simulated observations of state-of-the-art instruments, such as ALMA (Kurz et al. 2002), PIONIER (Le Bouquin et al. 2011), and MATISSE (Lopez et al. 2014). In Sect. 4, we summarize the results of this paper.
2. Monte Carlo random walk
For the numerical implementation and performance test of the new method, we used the three-dimensional Monte Carlo radiative transfer code Mol3D (Ober et al. 2015), which is a successor of MC3D (Wolf et al. 1999; Wolf 2003). It achieves a high computational efficiency by combining a locally divergence-free continuous absorption of photon packages (Lucy 1999) with immediate reemission according to a temperature-corrected emission spectrum (Bjorkman & Wood 2001). Details about the code can be found in Ober et al. (2015).
In this work, we describe a method, implemented in Mol3D, that uses precalculated results of Monte Carlo simulations of photon packages traveling through a homogeneous sphere of constant density, temperature, and optical properties in order to solve the problem of high optical depths in a dusty medium. We discuss the applicability of the method to arbitrarily high optical depths in arbitrary media as well as the feasibility of deriving an error estimate.
The basic concept of this method is to precalculate the random walk of a photon package traveling from the center of a sphere to its rim while keeping track of essential variables that are needed during a radiative transfer simulation. Since this process is repeated multiple times during a Monte Carlo radiative transfer simulation, we can choose a randomly selected outcome from our precalculation. The difficulty with this approach is to find a meaningful and efficient way to store the possible outcomes of this process such that they are also quickly accessible. The most intuitive way is to store the energy deposited in the sphere as well as the complete list of properties of the photon package at the moment it penetrates the rim of the sphere. This would include the wavelength λ, the Stokes vector S = (I, Q, U, V)^{T}, as well as the angle of penetration θ_{out}. Certainly, the outcome of this simulation depends on the temperature as well as the radius and density of the sphere. These variables a priori depend on each other; thus, they cannot be stored for each variable separately. When binning the outcomes and storing the emerging probability distributions, the memory demand easily becomes enormous, depending on the accuracy we intend to achieve, which makes a different approach desirable.
2.1. General aspects of a Monte Carlo radiative transfer simulation
During a Monte Carlo radiative transfer simulation, photon packages travel through a grid composed of cells, each of which has constant physical properties. After the emission of a photon package, the direction and the optical depth it travels are chosen in a probabilistic manner, determining its next location of interaction. Between two events of interaction, energy is stored in every dust grain inside a traversed cell according to the absorption probability of the photon package by a single grain. This probability depends on the path length l the photon package travels inside the cell. The deposited energy is given by (Lucy 1999)

ΔE = (C_{abs} l / V_{cell}) E_{γ}, (1)
where C_{abs} is the wavelength-dependent absorption cross-section, V_{cell} is the volume of the cell, and E_{γ} is the energy the photon package carries. Throughout this process, the energy of the photon package is kept constant. When traveling through an optically thick cell, the photon package may interact multiple times before crossing one of the boundaries of the cell. Immediately after an absorption event, the photon package is reemitted isotropically with a randomly assigned direction and wavelength (λ) according to a spectrum that depends on the ambient dust temperature (T_{i}). Under the assumption of radiative equilibrium, the probability density function (PDF) P(λ | T_{i}) for the wavelength is given by

P(λ | T_{i}) = K^{−1} C_{abs}(λ) (dB_{λ}/dT)|_{T_{i}}, (2)
in which B_{λ} is the Planck function and K = ∫ C_{abs}(λ) (dB_{λ}/dT)|_{T_{i}} dλ is a normalization factor (Bjorkman & Wood 2001). For better performance during a simulation, this PDF and its corresponding cumulative distribution function (CDF) are precalculated in Mol3D for a set of wavelengths and temperatures. Despite the discrete sampling of reemission spectra, the resulting temperature of dust in a cell is a continuous function. We note that throughout the course of this paper, we refer to this general description of Monte Carlo radiative transfer as the basic method. Furthermore, since we do not make use of any of the aforementioned biasing techniques, we also refer to this method as unbiased.
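The two ingredients above, the continuous-absorption tally of Eq. (1) and inverse-transform sampling of a reemission wavelength from a precalculated, binned CDF, can be sketched as follows (a minimal illustration with our own variable names and toy tables, not Mol3D's implementation):

```python
import bisect
import random

def deposit_along_path(cell_energy, segments, E_gamma, V_cell):
    """Continuous absorption (Lucy 1999): along each path segment of length l,
    traversed at absorption cross-section C_abs, every grain in the cell gains
    E_gamma * C_abs * l / V_cell, whether or not an interaction occurs there.
    `segments` is a list of (cell_index, C_abs, l) tuples."""
    for cell, C_abs, l in segments:
        cell_energy[cell] = cell_energy.get(cell, 0.0) + E_gamma * C_abs * l / V_cell
    return cell_energy

def sample_reemission_wavelength(cdf_row, wavelengths, rng):
    """Inverse-transform draw from one precalculated CDF row (one row per
    temperature bin): binary search for the first bin whose cumulative
    probability exceeds a uniform random number."""
    return wavelengths[bisect.bisect_left(cdf_row, rng.random())]

# toy tables: three wavelength bins with probabilities 0.2, 0.5, and 0.3
wavelengths = [1.0e-6, 1.0e-5, 1.0e-4]     # m
cdf_row = [0.2, 0.7, 1.0]                  # monotonically increasing, ends at 1
rng = random.Random(0)
energy = deposit_along_path({}, [(0, 0.5, 2.0)], E_gamma=1.0, V_cell=4.0)
lam = sample_reemission_wavelength(cdf_row, wavelengths, rng)
```

The binary search is what makes the precalculated table fast to use: each reemission costs O(log M_λ) instead of a numerical integration.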
2.2. Precalculated random variables
For our method, we precalculated the outcome of a simulation of photon packages traveling through a sphere of specific (constant) properties and used it to generate multidimensional binned CDFs, which account for the different possible outcomes of the simulation. During the actual Monte Carlo radiative transfer simulation, we then made use of the precalculated CDFs in order to quickly move photon packages through optically thick regions. The quantities we tracked during the precalculation are the total absorbed energy of the sphere (E_{abs}), the location of the last event of absorption before the photon package leaves the sphere for the first time (r_{la}), as well as its wavelength when penetrating the rim of the sphere (λ). For a medium of constant and homogeneous optical properties, the distribution of these variables solely depends on the radius (r_{sph}), temperature (T), and number density (n_{dust}) of the sphere. The photon package is assumed to be unpolarized after reemission at the last location of absorption. Therefore, the distribution of polarization states is trivial and does not have to be tracked. However, over the course of our precalculations, we made use of Mie scattering including the proper treatment of the polarization states of photon packages. In the following, we introduce three random variables that are crucial for our method: the effective optical depth τ̂, the minimal missing absorption optical depth τ_{abs}^{min}, and the relative total absorbed energy X.
2.2.1. Effective optical depth τ̂
We performed the calculations for spheres of specific optical depths τ̂ and the set of temperatures that had already been used to precalculate the PDF from Eq. (2). For the simulated spheres, we chose radii that correspond to optical depths of

τ̂ = n_{dust} Ĉ_{ext} r_{sph}, (3)

where n_{dust} is the dust number density and Ĉ_{ext} is a temperature-dependent effective extinction cross-section. We define the latter by

Ĉ_{ext}(T) = K^{−1} ∫ C_{ext}(λ) C_{abs}(λ) (dB_{λ}/dT) dλ, (4)

where C_{ext} is the wavelength-dependent extinction cross-section and K = ∫ C_{abs}(λ) (dB_{λ}/dT) dλ is a normalization factor.
For illustrative purposes and to stress the meaning of the optical depth τ̂, Fig. 1 displays properties of reemitted photon packages and the optical properties they experience. The top plot displays the difference spectrum dB_{λ}/dT, normalized to a maximum value of 1 for each temperature value T_{dust}. It shows that at higher temperatures, the peak of the spectrum moves toward shorter wavelengths. In particular, the maximum of this spectrum follows a law that is similar to Wien's displacement law, given by λ_{max} T ≃ 2410 μm K; see Appendix A.3 for a short derivation. Compared to Wien's displacement law, the maximum of the difference spectrum is redshifted, which is a necessary condition for the temperature correction technique for which it is used. Additionally, in the bottom plot, we show the temperature dependence of the effective extinction cross-section Ĉ_{ext} defined in Eq. (4). Starting from T_{dust} = 2.73 K, the cross-section Ĉ_{ext} increases strongly with the dust temperature, except for a local maximum with a subsequent dip, which is a consequence of a silicate feature^{1}. Since Ĉ_{ext} is proportional to the optical depth τ̂, see Eq. (3), both quantities share the same temperature dependence. To illustrate the corresponding value of τ̂ in this plot (right axis), we chose the arbitrary values n_{dust} = 7.8 × 10^{7} m^{−3} and r_{sph} = 1 au for the dust number density and the sphere radius, respectively. Altogether, both plots show that as the dust temperature rises in a radiative transfer simulation, the probability of reemitting photon packages with shorter wavelengths increases and, as a consequence, the effective optical depth, that is, the opaqueness of the medium experienced by these photon packages, increases as well. Thus, in order to achieve a certain level of opaqueness at lower temperatures, the number density needs to be much higher compared to the case of high temperatures.
Overall, the reason for using this temperature-dependent extinction optical depth rather than an optical depth at a specific wavelength is that it is a much better measure of the sphere's optical thickness with regard to the radiation that the sphere produces at its particular temperature; that is, similar values of τ̂ at different temperatures correspond to a similar computational effort.
Fig. 1. Visualization of the temperature dependence of reemitted photon packages when using the (difference) spectrum dB_{λ}/dT. Top: difference spectrum dB_{λ}/dT, normalized for each temperature value T_{dust} to a maximum value of 1. Bottom: effective extinction cross-section Ĉ_{ext} (see Eq. (4); left axis) and the optical depth τ̂ (see Eq. (3); right axis) as a function of the dust temperature T_{dust}. For details, see Sect. 2.2.

We used 501 logarithmically sampled temperatures T ranging from 2.73 K to 3000 K and 132 wavelengths λ between 50 nm and 2 mm, and we kept the radius of the sphere fixed at the arbitrary value r_{sph} = 1 au. In Sect. 2.3 we show the scalability of our simulations with regard to the sphere radius and dust density.
2.2.2. Minimal missing absorption optical depth τ_{abs}^{min}
The radius of the last event of absorption, r_{la}, strongly depends on the optical depth τ̂ of the sphere. Higher values of τ̂ push the last location of absorption closer toward the rim of the sphere. As a measure for r_{la}, we chose the minimal missing absorption optical depth τ_{abs}^{min} that the photon package has to overcome in order to leave the sphere. It is given by

τ_{abs}^{min} = n_{dust} C_{abs}(λ) (r_{sph} − r_{la}). (5)

We note that C_{abs} depends on the wavelength λ the photon package has when escaping the sphere, which also corresponds to the wavelength it acquired at its last event of reemission. The only means of interaction between these two events is scattering, which leaves the wavelength of the photon package unaffected. We used 100 logarithmically sampled bins for τ_{abs}^{min} with a fixed maximum value. We note that the use of an analogously defined missing extinction optical depth τ_{ext}^{min} instead of the missing absorption optical depth is conceivable; however, τ_{abs}^{min} has proven to be a reliable measure and we do not expect any advantages in using τ_{ext}^{min}.
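Given the definition in Eq. (5), a sampled value of τ_{abs}^{min} maps back to the radius of the last absorption event by a one-line inversion (a sketch; the variable names are ours):

```python
def r_last_absorption(tau_abs_min, n_dust, C_abs, r_sph):
    """Invert Eq. (5): tau_abs_min = n_dust * C_abs(lambda) * (r_sph - r_la),
    so the radius of the last absorption event follows from the sampled
    optical depth and the absorption cross-section at the escape wavelength."""
    return r_sph - tau_abs_min / (n_dust * C_abs)

# a sampled tau_abs_min of 0.5 in a sphere of radius 10 (arbitrary toy units)
r_la = r_last_absorption(0.5, n_dust=2.0, C_abs=0.25, r_sph=10.0)  # -> 9.0
```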
In Fig. 2 the probability distributions of τ_{abs}^{min} (upper row) for three different temperatures, 2.73 K, 45 K, and 1508 K, and different values of τ̂ are presented. The underlying probability distributions of τ_{abs}^{min} show, in accordance with our expectation, striking similarities among all temperatures and spheres of sufficiently high values of τ̂. In the case of the lowest value of τ̂, the resulting distributions differ the most. This effect stems from the wavelength-dependent optical properties of the dust. The last location of absorption can only be as deep as the sphere's center, resulting in a strict upper limit for τ_{abs}^{min}, which is reached in certain wavelength regimes and thus causes the differences seen in Fig. 2. After performing ∼10^{11} simulations for different sphere sizes and temperatures, we find that not a single photon package originated from an optical depth beyond the largest binned value of τ_{abs}^{min}. Since the probability distribution is a bin size dependent function, we calculated its corresponding probability density, displayed in the second row of Fig. 2. Photon packages leaving the sphere have the highest probability of being absorbed and reemitted from a small optical depth τ_{abs}^{min}; the average optical depth, nonetheless, is noticeably larger. Due to the method of immediate reemission (Bjorkman & Wood 2001) and the fact that the photon package starts its path at the center of the sphere, high values of τ_{abs}^{min} are favored. A photon package may only stem from a low value of τ_{abs}^{min} if all previous reemission events from farther inside the sphere failed to transport it through the rim of the sphere. However, the fact that a photon package may scatter counteracts this trend, as it decreases the geometrical reach of a photon package from one event of emission to a subsequent event of absorption.
Fig. 2. Results for the τ_{abs}^{min} distribution using 10^{6} simulations per temperature and value of τ̂. Three different temperatures, 2.73 K, 45 K, and 1508 K, are displayed in the first, second, and third columns, respectively; the probability is shown in the first row and the probability density is illustrated in the second row. Different sphere sizes exhibit notable similarities with regard to the shape and average value of their corresponding distributions. This shows that radiation emitted by dense spheres of different sizes but identical temperatures originates from a similar distance from the rim of each sphere.

2.2.3. Relative total absorbed energy X
The total absorbed energy E_{abs} of the sphere due to a single photon package depends on the pathway segments the photon package travels along inside the sphere and is given by

E_{abs} = E_{γ} n_{dust} Σ_{i} C_{abs}(λ_{i}) l_{i}, (6)

where λ_{i} denotes the wavelength of the photon package after its ith event of interaction and l_{i} is its corresponding pathway segment.
Figure 3 shows the average relative energy ⟨E_{abs}/E_{γ}⟩ a photon package deposits in a sphere for different temperatures and sphere sizes. All depicted sphere sizes feature a similar behavior, especially toward higher values of τ̂. Furthermore, due to the temperature-dependent definition of τ̂, each curve is located at a distinct level. Denser spheres, that is, spheres with a higher value of τ̂, generally result in higher absorbed energies and show a nearly constant shift in the log-log plot. In Fig. 4 we can see that the gray area, marking the region of possible values of ⟨E_{abs}/E_{γ}⟩, exhibits a clear dependence on τ̂, in particular,

⟨E_{abs}/E_{γ}⟩ ∝ τ̂². (7)
Fig. 3. Average relative deposited energy ⟨E_{abs}/E_{γ}⟩ using 10^{4} simulations per τ̂ and temperature value.

Fig. 4. Extinction optical depth dependence of the average total relative absorbed energy ⟨E_{abs}/E_{γ}⟩ per photon package. The red line shows, as an example, the behavior for a T ≈ 1500 K sphere. The gray area marks the region of ⟨E_{abs}/E_{γ}⟩ found for the complete range of temperatures between 2.73 and 3000 K.

The relation in Eq. (7), divided by the number of dust grains in the sphere, describes the average energy a single dust grain in the sphere acquires through a single escaping photon package. In order to mitigate the problem of binning potentially arbitrarily high values of E_{abs}/E_{γ}, we instead binned the quantity X, which is defined as follows:

X = E_{abs} / (E_{γ} τ̂²). (8)

This particular definition of the quantity X serves a few purposes: (i) it is a measure of the total deposited energy in a sphere, (ii) it is independent of the photon energy E_{γ} (for a comparison, see Eq. (6)), and (iii) its factor of τ̂^{−2} results in a rather weak dependence on the sphere size (for a comparison, see Eq. (7)).
In Fig. 5 we show the result of performing 10^{6} simulations per temperature and sphere size for the normalized distribution of the quantity X. We used 400 logarithmically sampled bins for X ranging from 10^{−7} to 10, with an additional bin for X = 0. We find that in all cases, the probability density of X peaks strongly at around X ≈ 10^{−2}…10^{−1}, and only in the case of the lowest τ̂ (the gray curve) does the distribution exhibit significant contributions from particularly low values of X. In addition to that, Fig. 5 shows a lognormal fit that we performed. We find that the distribution of X is well described by a lognormal distribution for all cases of high optical depth. Additionally, toward higher values of τ̂, the distributions show indications of convergence for each individual temperature. A remarkable common property among all distributions is the lack of an X > 10 contribution; that is, for all performed simulations we find

X ≤ 10. (9)
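Lognormal fits like those shown in Fig. 5 can be obtained, for example, with a simple moment fit in log space; since the fitting procedure is not spelled out above, the sketch below, applied to synthetic lognormal samples, is only one possible approach:

```python
import math
import random

def fit_lognormal(samples):
    """Moment fit in log space: a lognormal distribution is fully specified by
    the mean and standard deviation of ln(X), so fitting reduces to computing
    these two moments over the positive samples."""
    logs = [math.log(x) for x in samples if x > 0.0]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)
    return mu, math.sqrt(var)

rng = random.Random(7)
# synthetic stand-in for binned X values peaking near 10^-2..10^-1 (cf. Fig. 5)
samples = [math.exp(rng.gauss(math.log(3.0e-2), 0.8)) for _ in range(50_000)]
mu, sigma = fit_lognormal(samples)
```

The recovered parameters (mu, sigma) define the dashed fit curves; samples in the X = 0 bin carry no information about the lognormal shape and are excluded.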
Fig. 5. Results for the distributions of the variable X using 10^{6} simulations per temperature and τ̂ value. Outcomes for six different temperatures are displayed, which correspond to the resulting distributions (histograms) as well as their lognormal distribution fits (dashed lines) in corresponding colors. For details, see Sect. 2.2.

Overall, per calculated sphere size τ̂, the PDF and the CDF each have a total of 501 × 400 × 100 × 132 (= M_{T} × M_{X} × M_{τ} × M_{λ}, where M_{x} is the number of bins for a random variable x) entries that have to be stored and quickly accessed during an actual Monte Carlo radiative transfer simulation. Using single precision (4 bytes per entry), this corresponds to ∼10 gigabytes per sphere size τ̂. In realistic simulations, the dust density of cells differs, and thus the spatial extent of spheres with equal values of τ̂ also varies on a cell basis. To avoid costly precalculations for each individual cell, which would lead to arbitrarily high memory requirements, we made use of the scalability of our simulations, which is the topic of the following subsection. Additionally, we performed a series of further tests to make sure that the number of bins, their logarithmic scaling, and their full range result in an acceptable coverage of the specifics of all distributions of X and τ_{abs}^{min}. We find that this specific sampling (i) gives reliable results and good coverage for all distributions while (ii) also keeping the memory requirements reasonably low. In particular, firstly, using a linear sampling instead increases the error of the distributions' averages; secondly, a higher number of bins does not justify the resulting memory consumption; and thirdly, a lower number of bins leads to a higher error of the average value of the distributions of both random variables X and τ_{abs}^{min}.
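The quoted storage demand follows directly from the bin counts (our arithmetic check, assuming the 4-byte single-precision entries stated above):

```python
# Bin counts quoted above: temperatures x X bins x tau_abs_min bins x wavelengths
M_T, M_X, M_tau, M_lambda = 501, 400, 100, 132
entries = M_T * M_X * M_tau * M_lambda          # per table, per sphere size
gigabytes = entries * 4 / 1.0e9                 # 4 bytes: single precision
# entries = 2_645_280_000 and gigabytes ~ 10.6, i.e., the "~10 GB" quoted above
```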
2.3. Scalability
In a Monte Carlo radiative transfer simulation, the path a photon package travels in a homogeneous environment with nonzero density is determined by, and thus equivalent to, a sequence of random numbers. The optical depth τ a photon package travels is a random variable that solely depends on one of these random numbers. During our precalculations, we kept the optical properties of the environment fixed. Therefore, the path length l that a photon package travels only depends on the random variable τ and the ambient dust number density n_{dust} according to l ∼ τ/n_{dust}. Since it does not appear elsewhere in the path determination, the dust number density only affects the path by scaling its length. Thus, given a realization of the aforementioned sequence of random numbers, the derived quantity l ⋅ n_{dust} takes the same value independent of n_{dust}. Consequently, the precalculated distributions of τ̂, τ_{abs}^{min}, and X (see Eqs. (3), (5), and (8)) are also independent of n_{dust}. This is one crucial property that makes our method viable in the first place.
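This scaling argument can be checked directly: two runs that share the same sequence of random optical depths but differ in n_{dust} produce path lengths that differ by exactly the density ratio, so l ⋅ n_{dust} is invariant. A toy check with made-up numbers:

```python
import math
import random

def free_path_lengths(n_dust, C_ext, seed, n_steps=1000):
    """Physical free-path lengths l = tau / (n_dust * C_ext) generated from a
    fixed (seeded) sequence of random optical depths tau = -ln(1 - xi)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / (n_dust * C_ext)
            for _ in range(n_steps)]

C_ext = 1.0e-13                                  # m^2, arbitrary toy value
l_lo = free_path_lengths(n_dust=1.0e7, C_ext=C_ext, seed=3)
l_hi = free_path_lengths(n_dust=1.0e9, C_ext=C_ext, seed=3)
# the 100x denser run shrinks every path by exactly 100: l * n_dust is invariant
```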
In Fig. 6 we show results of a scalability test for the distribution of X. For the calculation of the distribution, we used sphere sizes of r_{sph} = 1 au and adjusted the density such that τ̂ reached the specific values from Eq. (3). We then performed the same simulations, but we altered r_{sph} by factors of 100 and 0.01 while keeping τ̂ fixed and readjusting the number density n_{dust}. Figure 6 shows, as an example, the probability differences between simulations with a different sphere size and density for three different temperatures, 2.73 K, 45 K, and 1508 K. The first row compares the simulations using r_{sph} = 100 au with the simulations using r_{sph} = 1 au. The second and third rows compare r_{sph} = 1 au with r_{sph} = 0.01 au and r_{sph} = 100 au with r_{sph} = 0.01 au, respectively. We find that the differences are on the order of ∼1%. Additionally, for every subplot and value of τ̂, we calculated the relative differences of the average value of X. The maximum value of the relative difference is displayed in each subplot and is also on the order of ∼1%. Considering the stochastic nature of this process and the fact that the number of counts per bin follows a Poisson distribution, this value matches the expected noise level. The differently scaled models are thus in agreement with our expectation. As an additional test, we increased the number of simulated photon packages by two orders of magnitude and obtained a level of agreement for the distributions that is higher by one order of magnitude, which can be seen in Fig. A.1. This shows that the error of the precalculated distributions behaves as expected and that scalability is given.
Fig. 6. Probability differences between distributions of X using simulations with different sphere sizes r_{sph} but equal values of τ̂, for three different temperatures, 2.73 K, 45 K, and 1508 K, in the first, second, and third columns, respectively. For every temperature and sphere size, 10^{4} simulations were performed. Text boxes contain information about the compared sphere sizes. For example, "100–1" denotes the comparison of the distribution using a sphere size of r_{sph} = 100 au with the distribution using a sphere size of r_{sph} = 1 au. In addition, the relative difference of the averages of two compared distributions is given by δ_{⟨X⟩} in the upper left corner of each plot. Low values of δ_{⟨X⟩} indicate a higher level of agreement.

2.4. Method
During a Monte Carlo radiative transfer simulation, a photon package travels through a grid composed of cells with homogeneous physical properties. Following an event of emission, and given that the photon package is at a sufficiently large distance from the cell's boundaries, we can make use of the results of our precalculations. For the highest performance of this method, we have to find the largest possible precalculated sphere that (i) fits into the cell and (ii) leaves the cell's physical conditions unchanged when the photon package is transferred from the sphere's center to the last point of absorption. In order to meet the first (spatial) requirement, we calculated the minimal distance l_{min} to the next cell border. Then the corresponding optical depth was calculated as follows:

τ̂_{spatial} = n_{dust} Ĉ_{ext} l_{min}. (10)
To meet the requirements, the chosen size of the sphere has to satisfy the condition . The second condition requires the constancy of all physical properties that effect the path of a photon package during its way to the sphere’s rim. The physical conditions inside the cell (optical properties, reemission spectrum, dust number density) are homogeneous but not necessarily constant since the cell temperature changes during the run of a simulation and as do its spectrum of reemitted photon packages and effective optical properties, see for example Eq. (4). However, the effective optical properties were precalculated for each of the 501 temperature values and they only change when reaching the next temperature bin. Therefore, there is a maximum energy that can be deposited in a single dust grain before the next temperature bin is reached. Using Eqs. (1) and (8), the second (temperature) condition thus requires
Making use of the result from Eq. (9), namely that the value of X has an upper limit, this requirement implies a maximum optical depth that must not be exceeded by a chosen sphere in order to leave the cell's optical properties unaffected. The maximum optical depth is thus given by
Therefore, for the highest efficiency possible, we chose a precalculated sphere size given as follows:
If no such sphere has been calculated, it either means that the photon package is too close to a cell boundary or that the next temperature level is close. This implies that the method provides the greatest performance boost in the case of optically thick cells, that is, when the efficiency of the basic Monte Carlo method is low.
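The selection of the largest admissible precalculated sphere described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation; `tau_levels`, `tau_spatial_max`, and `tau_temp_max` are our hypothetical names for the precalculated optical-depth grid and the two limits (spatial and temperature) discussed above.

```python
import bisect

def choose_sphere(tau_levels, tau_spatial_max, tau_temp_max):
    """Pick the largest precalculated sphere optical depth that satisfies
    both the spatial limit (the sphere must fit inside the cell) and the
    temperature limit (the deposited energy must not push the cell into
    the next temperature bin).  Returns None if no sphere qualifies.
    tau_levels must be sorted in ascending order."""
    limit = min(tau_spatial_max, tau_temp_max)
    i = bisect.bisect_right(tau_levels, limit)
    return tau_levels[i - 1] if i > 0 else None
```

When `None` is returned, the photon package falls back to basic Monte Carlo transfer, which matches the text above: this happens near cell boundaries or close to the next temperature level, precisely where basic transfer is cheap.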
Given a value for the sphere's optical depth and the cell's temperature, we used the precalculated multidimensional CDF to randomly choose values for X, , and λ. The value of X was used to immediately increase the deposited energy of every dust grain in the corresponding cell. From the sampled values, the distance r_{la} can be determined. Since this method is applied after an event of isotropic reemission, the direction in which the photon package is displaced was chosen isotropically. During the precalculation, the quantity r_{la} marks the last point of absorption before the photon package left the sphere. Therefore, we had to launch the photon package at that position and transport it through the sphere's rim without another absorption on its way. In a real simulation, this is done by basic Monte Carlo transfer. This process has to be restarted until the photon package leaves the sphere without being absorbed. Only after a successful escape can the transfer of the photon package continue. If the optical density of the cell is high, this method can be used multiple times to displace the photon package before it eventually leaves the cell. The most time-consuming part of the method is its hit-or-miss nature. Even though the photon package is launched close to the rim, it can take many attempts before it manages to leave the sphere. Nonetheless, every application of the method avoids the calculation of a potentially high number of interactions and thereby saves computation time. Removing either of the method's limitations, the grid-induced or the temperature-induced one, would greatly increase its effectiveness. The grid-induced restrictions are difficult to overcome without a loss of spatial resolution of the simulation. The temperature-induced limitation, though, may potentially be removed by adjusting the reemission spectrum (see Eq. (2)) of previously absorbed photon packages during our precalculations. However, this is not part of this paper and may very well be the subject of a follow-up study.
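The hit-or-miss escape step can be illustrated with a simplified, self-contained sketch. All names are ours, and the physics is deliberately reduced: absorption only, no scattering, homogeneous extinction; the actual method performs full basic Monte Carlo transfer for each attempt.

```python
import math
import random

def try_escape(r_la, r_sph, tau_sph, rng):
    """One hit-or-miss attempt: relaunch a photon package isotropically at
    distance r_la from the sphere's center and test whether it reaches the
    rim (radius r_sph, radial extinction optical depth tau_sph) before its
    exponentially distributed free path ends."""
    kappa = tau_sph / r_sph                       # extinction per unit length
    cos_t = 1.0 - 2.0 * rng.random()              # isotropic launch direction
    # Distance from the launch point to the rim along this direction
    # (law of cosines; cos_t is measured against the outward radial direction).
    d = -r_la * cos_t + math.sqrt(r_sph**2 - r_la**2 * (1.0 - cos_t**2))
    free_path = -math.log(1.0 - rng.random()) / kappa
    return free_path > d

def escape_rate(r_la, r_sph, tau_sph, n=20000, seed=1):
    """Fraction of attempts that leave the sphere unabsorbed."""
    rng = random.Random(seed)
    return sum(try_escape(r_la, r_sph, tau_sph, rng) for _ in range(n)) / n
```

For an optically thick sphere the escape rate collapses even when the launch point sits close to the rim, which is why most attempts fail and why the optimizations of Sect. 2.5 are essential.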
2.5. Optimization – performance boosts
One step of this method involves the absorption-free transfer of a photon package from its last location of absorption through the rim of the sphere. If the photon package is absorbed before leaving the sphere, it has to be reset to the previous location of reemission. Two major issues significantly lower the chance of a successful escape of the photon package: (i) the optical depth and (ii) the initial direction of the photon package. The probability distribution of the distance a photon package travels before its absorption is given by an exponential function. It therefore becomes increasingly difficult to transport a photon package over a large distance, which can lead to a high number of failed attempts. The direction of emission of the photon package is chosen randomly according to an isotropic distribution. The chance for a successful escape, however, may strongly depend on the direction of emission. Thus, a large portion of photons are initially sent in a direction with a very low escape rate. It is therefore crucial to implement optimization techniques that alleviate this problem; in fact, the viability of our method is largely grounded in their use. Fortunately, there are simple solutions that help to speed up this costly procedure significantly. In the following, we present three different techniques that serve this purpose.
2.5.1. Optical depth problem – splitting scattering and absorption length determination
A photon package with wavelength λ is emitted and follows a path along which it may encounter several points of interaction. The type of interaction (absorption or scattering) is determined randomly by using the albedo A = C_{sca}/C_{ext}. Thus, if A ≠ 1, the photon package may be absorbed at any point of interaction along its path and fail to escape the sphere. Therefore, a large portion of photon packages may not even have a chance to escape the sphere, as their traveled distance before absorption does not reach the minimum distance between the sphere's rim and the location of emission. In order to avoid sending out these photon packages, we split the determination of scattering and absorption events. Instead of using the extinction cross-section C_{ext} as a measure for the interaction rate and subsequently deciding the type of interaction, we used the absorption and scattering cross-sections C_{abs} and C_{sca}, respectively, and determined the traveled distances that lead to events of absorption and scattering independently. A short proof of the viability of this approach can be found in Appendix A.1.
During the simulation, the photon package was emitted inside the sphere. Its location of last absorption was determined according to its absorption optical depth . Subsequently, another absorption optical depth τ_{abs} was randomly chosen according to the absorption cross-section and dust number density. Escaping photon packages with can immediately be discarded. Thus, rather than limiting the probability density of τ_{abs} to values , we instead used the sum of and τ_{abs} as a measure for the propagated path length l_{abs}. In doing so, no photon package was created at the point of last absorption that has no chance of leaving the sphere successfully. Until the distance l_{abs} is traversed, the only means of interaction is scattering. If the photon package reaches l_{abs} before leaving the sphere, it has to be reset. Additionally, whenever a photon package arrives at a scattering point and before it scatters, it is determined whether the photon package can potentially reach the sphere's rim after the forthcoming scattering event. If its remaining path length before absorption does not suffice to cover the remaining minimal distance to the rim of the sphere, the photon package is immediately reset to its point of emission. The importance of this step becomes apparent when considering a high optical depth of, for example, . In this case, the probability of drawing path lengths that certainly lead to unsuccessful attempts is ≳95%. This step of optimization is crucial for our method.
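The split sampling can be sketched in a few lines (our naming; a sketch of the statistical idea rather than the paper's implementation). Both path lengths are drawn from exponential distributions with rates set by the respective cross-sections.

```python
import math
import random

def sample_interaction_lengths(n_dust, c_abs, c_sca, rng):
    """Sample the distance to the next absorption and to the next scattering
    event independently, using the absorption and scattering cross-sections
    instead of first drawing an extinction length and then the event type."""
    l_abs = -math.log(1.0 - rng.random()) / (n_dust * c_abs)
    l_sca = -math.log(1.0 - rng.random()) / (n_dust * c_sca)
    return l_abs, l_sca
```

This is statistically equivalent to the combined approach: the minimum of two independent exponential path lengths with rates n C_{abs} and n C_{sca} is itself exponential with rate n (C_{abs} + C_{sca}) = n C_{ext}, and the first event is an absorption with probability C_{abs}/C_{ext} (cf. Appendix A.1 of the source).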
2.5.2. Initial direction problem – precalculated direction distribution
When the photon package inside the sphere is reemitted after an absorption event, the direction of emission follows an isotropic distribution. If the position of the last location of emission r_{la} coincides with the center of the sphere described by r_{c}, the escape probability p_{esc} for the photon package, which generally depends on the direction of emission, is the same in any direction. If this is not the case, that is, if r_{la} ≠ r_{c}, the setup is axially symmetric along the axis defined by r_{la} − r_{c}. Thus, the direction dependence of p_{esc} reduces to a dependence on the launching angle θ alone. The launching angle θ is the angle by which the direction of emission differs from the direction of the shortest distance to the rim of the sphere and is defined by the following equation:
where v_{γ} is the direction of emission of the photon package. When launching a photon package close to the rim of a sphere with a relatively high value of , the escape probability may peak strongly at θ = 0 since shorter distances have a higher chance of being traversed without absorption. When randomly choosing θ according to an isotropic distribution, a large portion of launched photon packages therefore have low escape probabilities. This problem can be solved by precalculating the probability distribution for a successful escape for every possible sphere size, wavelength, and launching position. We used 181 bins for the launching angle θ, ranging from 0 (forward emission) to π (backward emission), and set up simulations to determine the probability distribution. Compared to the size of the already precalculated CDF, which is described in Sect. 2.2, this CDF increases the memory consumption by ∼0.1%. In order to calculate the CDF properly, it is important to consider that the escape probability for every θ-bin has to be weighted with the emission probability of a photon in this bin. In the case of isotropic emission, the PDF for cos(θ) is constant.
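Building the weighted launching-angle CDF can be sketched as follows. Here `p_esc` is a stand-in for the escape probabilities that the paper obtains from precalculated simulations; the sin(θ) factor implements the isotropic weighting (constant PDF in cos θ) mentioned above.

```python
import math

def build_theta_cdf(p_esc, n_bins=181):
    """Tabulate a CDF over the launching angle theta in [0, pi].  Each bin's
    escape probability p_esc(theta) is weighted by the isotropic emission
    probability of that bin, which is proportional to sin(theta) because
    the PDF of cos(theta) is constant."""
    cdf, total = [], 0.0
    for i in range(n_bins):
        theta = (i + 0.5) * math.pi / n_bins     # bin center
        total += p_esc(theta) * math.sin(theta)
        cdf.append(total)
    return [c / total for c in cdf]              # normalize to [0, 1]
```

Sampling θ then reduces to drawing a uniform random number and locating it in this table, which strongly favors directions with high escape probability while preserving the correct escape statistics.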
During the radiative transfer simulation, the above distribution is used whenever the photon package is emitted at its last point of absorption r_{la}. It is important to keep the randomly chosen value of θ fixed during all escape attempts. In order to avoid performance drops, we restarted this process with newly assigned random values when a high number of attempts was exceeded. Since this limit was rarely reached, we were not able to find a measurable impact of this additional parameter on the resulting temperature or on the emission properties of the sphere. Nonetheless, the distributions that are obtained from precalculated spheres are generated entirely without it.
2.5.3. Infinite beam splitting
Whenever our method is used and a photon package is transported from the center of the sphere r_{c} to its last location of absorption r_{la}, a value for X is randomly chosen, which determines the amount of energy deposited in the corresponding cell. Instead of choosing a random value for X, it is possible to use the mean ⟨X⟩ instead. This corresponds to a scenario in which the photon package is split at the center of the sphere into an infinite number of photon packages, which all follow their individual paths through the sphere until they eventually meet and combine at the location of last absorption. Since Monte Carlo radiative transfer simulations converge to the proper solution as the number of photon packages increases, this approach decreases the level of noise of a Monte Carlo simulation. While the computation speed is not necessarily boosted significantly, the memory requirements for the results of our precalculations decrease strongly, in our case, by a factor of 400.
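The effect can be demonstrated with a toy comparison (all names are ours; `x_samples` stands in for the precalculated distribution of X): replacing per-package draws of X by the precomputed mean removes the sampling noise of this step entirely.

```python
import random
import statistics

def cell_energy_noise(x_samples, e_gamma, n_packages, n_runs=200, seed=0):
    """Compare the spread of a cell's deposited energy when every photon
    package deposits a randomly drawn X (basic approach) versus the mean
    <X> (infinite beam splitting)."""
    rng = random.Random(seed)
    mean_x = sum(x_samples) / len(x_samples)
    random_totals = [
        sum(rng.choice(x_samples) for _ in range(n_packages)) * e_gamma
        for _ in range(n_runs)
    ]
    split_total = n_packages * mean_x * e_gamma   # deterministic per package
    return statistics.pstdev(random_totals), split_total
```

The random-draw totals scatter from run to run, while the beam-splitting total is exact; in addition, only ⟨X⟩ needs to be stored per CDF bin instead of the full sampled distribution, which is the source of the memory saving quoted above.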
2.6. Performance test
In this section, we showcase the performance of this method and compare it to the basic Monte Carlo method, see Sect. 2.1, before we eventually apply it to the exemplary case of a circumstellar disk in Sect. 3. The testbed that we use consists of a dense isothermal sphere with a radius of r_{sph} = 1 au and a constant density corresponding to an effective extinction optical depth of at a temperature of T = 1500 K. The bottom plot of Fig. 1 shows the resulting temperature dependence of . We chose this particular setup for the following reasons. Firstly, the value of exceeds the optical depth of our precalculated spheres (see Eq. (3)). Secondly, in a realistic simulation, the precalculated sphere sizes ideally cover the full range of potentially encountered optical densities. Thus, this value of only exceeds the largest precalculated sphere size by half an order of magnitude.
The initial temperature of the dense sphere is 2.73 K. Throughout the simulation, photon packages were emitted isotropically at the center of the sphere. The initial wavelengths were randomly chosen according to the latest temperature during the simulation run. Launched photon packages move inside the dense sphere until they penetrate its rim. Since the temperature of the sphere is low at the start of the simulation, the spectrum of emitted photons initially peaks at a relatively long wavelength, and these photon packages thus encounter a rather low dust extinction cross-section (for a comparison, see Fig. 1). As the temperature increases with the number of photon packages that have been created, the dense sphere becomes increasingly opaque for the newly created photon packages on average. Eventually, it reaches its maximum effective optical depth at a temperature of 1500 K. During the whole process, we tracked the properties of escaping photon packages in terms of X, , and λ as we did with the precalculated spheres. Additionally, we tracked the number of photon packages needed to heat up the dense sphere from its initial temperature to any temperature up to 1500 K. The photon package energy E_{γ} together with the tracked number of photon packages corresponds to a central heating luminosity, which is required to sustain the corresponding sphere temperature. This simulation was performed using the basic Monte Carlo method with E_{γ} = 10^{−5} as well as using our method for highly optically thick regions with E_{γ} ∈ {10^{−5},10^{−7},10^{−9}}, where E_{γ} is given in units of L_{⊙} s. For the latter case, we made use of the performance boosts described in Sects. 2.5.1 and 2.5.2, involving absorption-scattering splitting and an improved initial direction, respectively. Results of our comparison are displayed in Fig. 7.
Fig. 7. Viability and performance test – comparison between the basic Monte Carlo method and our method. Setup: a dense sphere with a radial effective extinction optical depth is heated by a central source from 2.73 K to 1500 K. Upper left: heating curves of the sphere show a high level of agreement. Upper right: performance comparison. Bottom: emitted photon statistics show a high level of agreement.
We find that the heating curves in the upper left plot of Fig. 7 greatly overlap. The required heating power to reach the final temperature among all four different simulations is in good agreement and only differs by <0.1%. Generally, relative differences between the heating curves at arbitrary temperatures are within the expected level of inherent noise due to the use of the Monte Carlo method. The three lower plots show the distributions of the tracked quantities X, , and λ from left to right, respectively. Distributions with higher values for E_{γ} require a lower number of emitted photon packages to reach any temperature level, which in turn results in a higher level of noise. Nonetheless, it becomes apparent that for all tracked quantities, the distributions overlap, thus also showing agreement between our method and the basic Monte Carlo method. We note that the shape of these normalized distributions is strongly affected by the particular bin sizes that have been used. For the sampling of wavelengths, for example, we used a catalog which samples certain domains with higher density than others, resulting in this particular shape. These tests show agreement between the heating process as well as agreement between the spectrum of the spheres and the properties of emitted photon packages. We note that in both cases, we were able to simulate the full radiative transfer, including the changing polarization state of photon packages due to scattering. The only striking difference between both methods can be seen in the upper right plot of Fig. 7. This plot shows the number of photons that we were able to transport from the center of the dense sphere outward and through its rim per second of real computation time. Certainly, the specific numbers in this plot vary for different computational architectures; the overall trends, nonetheless, are representative. 
The orange curve corresponds to the basic Monte Carlo method and achieves an overall rate of ∼1 emitted photon package per second. The three other curves correspond to simulations with our newly introduced method. We find that for these three cases, the overall rate of emitted photon packages reaches ∼10^{3} photon packages per second, that is, a boost of three orders of magnitude. Beyond the overall boost in performance, we find that this method has the positive effect of stabilizing the number of transferred photon packages during the whole run. As the temperature gradually increases, the performance of the basic method decreases. The reason for this effect is the overall increase in the effective optical thickness of the sphere with temperature and the accompanying increased number of interactions. Our method, however, simply transfers the photon package close to a precalculated sphere's rim, independent of the temperature. In other words, we transport the photon package at a consistent pace in terms of spatial units, while the basic Monte Carlo method transports it in units of the extinction optical depth.
One interesting feature of this plot is the decreased efficiency at low temperatures for high values of E_{γ}. The reason for this drop is that our method is limited in its choice of usable precalculated sphere sizes, in this case by the criterion of constancy of the dense sphere's optical properties. The higher the energy of the photon package, the more energy is deposited in the cell, resulting in a quicker increase in temperature and change of the optical properties of the sphere (for a comparison, see Eq. (12)). Since the temperature levels are sampled logarithmically, this leads to a drop in efficiency at low temperatures. Nonetheless, the method outperforms the basic Monte Carlo approach significantly, especially toward higher temperatures when dust becomes increasingly opaque. Additionally, most of the computation time is usually spent at higher temperatures, that is, when our method is at its peak performance. Another feature of these curves can be seen at about 800 K, where we find a peak in performance for all three curves. This is due to the fact that, starting from this temperature, the effective extinction optical depth exceeds and thus the previously unutilized largest precalculated sphere, , is used.
3. Application
In this section we apply our method to the situation of the radiative transfer in a viscously heated circumstellar disk and compare the results to a simulation based on the basic Monte Carlo method. Subsequently, we present a study of the impact of N_{int} on Monte Carlo radiative transfer simulations of systems with high optical depth.
3.1. Model description
We use the following three-dimensional parametrized circumstellar disk density distribution (Shakura & Sunyaev 1973)
where r is the radial distance from the zaxis at the center and z is the spatial coordinate perpendicular to the midplane. The surface density distribution Σ(r) and density scale height h(r) are given by (LyndenBell & Pringle 1974; Hartmann et al. 1998)
where Σ_{0} (h_{0}) is the surface density (scale height) at the reference radius r_{0}. The dimensionless model parameters α and β shape the radial density profile and the flaring of the disk, respectively. The total mass of the disk M_{disk} is obtained by integrating Eq. (15) and is composed of gas and dust with a mass ratio of M_{gas} : M_{dust} = 100 : 1. In dense regions of a circumstellar disk, viscosity may become an important source of heating. Assuming a geometrically thin, optically thick accretion disk, the dissipated energy per unit area Ė_{ν} generated through viscosity is given by (Hartmann 1998)
where R_{*}, M_{*}, and Ṁ are the stellar radius, mass, and its accretion rate, respectively, and G is the gravitational constant. In order to show the applicability of our method, we performed radiative transfer simulations of solely viscously heated disks, meaning that the radiation of the central star does not contribute.
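For reference, the standard forms from the cited works, consistent with the parameter descriptions above, read as follows (our reconstruction; the paper's exact Eqs. (15)–(17) may differ in notational details, e.g., an additional exponential taper in Σ(r)):

```latex
\rho(r, z) \;\propto\; \frac{\Sigma(r)}{h(r)}\,
    \exp\!\left[-\frac{1}{2}\left(\frac{z}{h(r)}\right)^{2}\right],
\qquad
\Sigma(r) = \Sigma_{0}\left(\frac{r}{r_{0}}\right)^{-\alpha},
\qquad
h(r) = h_{0}\left(\frac{r}{r_{0}}\right)^{\beta},
```

with the viscous dissipation rate per unit area of a geometrically thin, optically thick accretion disk (Hartmann 1998) given by

```latex
\dot{E}_{\nu}(r) \;=\; \frac{3\,G M_{*}\dot{M}}{8\pi r^{3}}
    \left(1 - \sqrt{\frac{R_{*}}{r}}\right).
```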
We used a spherical grid with 300 logarithmically sampled cells in the r-direction and 91 linearly sampled cells in the θ-direction as well as one cell in the ϕ-direction. The disk has an inner radius r_{in}, which is defined by the sublimation of dust at its sublimation temperature T_{subl}. The outer radius r_{out} was chosen such that the disk extends to optically thin regions far from the star where the density has dropped significantly.
In our simulation, dissipated energy originates purely from the midplane cell layer and is determined by integrating Eq. (17) over the intersection of every cell with the midplane. For most model parameters, we chose commonly observed values that are typical of circumstellar disks around T Tauri stars (Galli et al. 2015; Andrews et al. 2009); for the specific values, see Table 1. To generate a model with efficient viscous heating, we chose a rather high value for the dust mass M_{dust}. This model results in an overall dissipated luminosity of ∼1.58 × 10^{−3} L_{⊙}. In our model, the sources of radiation are dust grains, whose spectrum is temperature-dependent. We assume spherical dust grains composed of 62.5% silicate and 37.5% graphite with a ratio of parallel : ortho = 1 : 2 (Draine & Malhotra 1993) and use their corresponding mixed optical properties (Draine & Lee 1984; Laor & Draine 1993; Weingartner & Draine 2001). Dust grains have a bulk density of ρ_{bulk} = 2.5 g cm^{−3} and radii a ranging from 5 nm to 250 nm, following a grain size distribution given by dn ∼ a^{−3.5} da (Mathis et al. 1977). Additionally, we make use of mean optical properties of dust grains (Wolf 2003). Scattering is treated using Mie theory.
Model parameters for the viscously heated disk.
Finally, two important parameters that limit the accuracy and have a significant effect on the computation time are the number of photon packages N_{γ} and the maximum number of interactions N_{int}. The first parameter determines the energy of a single photon package, E_{γ} = L_{*} Δt/N_{γ}, for the stellar radiation. Increasing the number of photon packages N_{γ} leads to a lower level of noise in the temperature distribution of the circumstellar disk, but this comes with an increase in computation time. The parameter N_{int} limits the number of interactions a single photon package can encounter before being removed from the model space. In the best case, this number is set to infinity, thus having no undesired impact on the simulation. We note that the code was designed to ensure energy conservation and that, in the case of a viscously heated disk, every dissipating cell emits an integer number of photon packages. Consequently, the energy per photon package depends on the emission rate of the cell from which it originates and is chosen to be as close as possible to the previously mentioned value, that is, E_{γ} ≲ L_{*} Δt/N_{γ}. To treat radiation that is produced through viscosity, we first determine the dust temperature each cell requires to provide its specific luminosity and use that temperature as the initial cell temperature. During the run of a simulation, emitted photon packages heat up the disk, and photon packages are created according to the gradually increasing latest temperature of their corresponding cells.
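The per-cell, energy-conserving discretization described above can be sketched as follows (names are ours): each cell emits an integer number of packages whose energy is as close as possible to, but not above, the global target E_{γ}, so that the cell's dissipated energy is reproduced exactly.

```python
import math

def cell_photon_packages(cell_energy, e_target):
    """Split the energy dissipated by a cell during the simulated time span
    (cell_energy = L_cell * dt) into an integer number of photon packages.
    The per-package energy is chosen as close as possible to the global
    target e_target without exceeding it, so that n * e_gamma equals
    cell_energy exactly (energy conservation)."""
    n = max(1, math.ceil(cell_energy / e_target))
    return n, cell_energy / n
```

Note that a very faint cell still emits at least one (correspondingly less energetic) package, so no dissipated energy is dropped.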
3.2. Results
We performed simulations of the viscous circumstellar disk introduced in Sect. 3.1 using our method, imposing no restriction on the maximum number of interactions, that is, N_{int} → ∞, and using N_{γ} = 10^{8} photon packages. The left plot of Fig. 8 shows the resulting vertical cut of the temperature distribution through the midplane. In this case, within the first 0.1 au, multiple cells reach temperatures above the sublimation limit. As a consequence, the dust sublimates and the inner cavity size increases. As can be seen, the midplane is heated more strongly than regions in the disk at higher scale heights. At even higher scale heights, however, the circumstellar disk becomes optically thin and is heated by the edge of the disk's innermost region, which, overall, results in the specific temperature distribution seen in Fig. 8. The setup is symmetric with regard to the midplane, with differences that arise inherently due to the stochastic nature of the Monte Carlo method.
Fig. 8. Comparison of the resulting temperature distributions of a viscously heated circumstellar disk. Left: temperature distribution for N_{γ} = 10^{8} and N_{int} → ∞ using our method. Right: relative difference between the basic Monte Carlo method and our method using N_{int} = 10^{9}. 
Since the basic Monte Carlo method relies on N_{int} < ∞ for many problems of high optical density, and in order to ensure comparability, we limit our method by using the same value of N_{int} = 10^{9} as well. To do that, we have to use an estimator for the number of interactions undergone by a photon package whenever we make use of our method, that is, when transferring a photon package to the rim of a precalculated sphere. This number is not tracked during the precalculation since tracking it would increase the required memory consumption. Furthermore, its only purpose is to make sure that the number of interactions of a photon package does not exceed the limiting value N_{int}. In Appendix A.2, we derive an estimator for the number of interactions that we use for the comparison between both methods. The results of these simulations are shown in the right plot of Fig. 8. The plotted quantity δ_{T} is the relative difference between the temperature distribution obtained by using our method, T_{MC+}, and by using the basic Monte Carlo method, T_{MC}. It is defined by
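A definition consistent with the surrounding description is (our reconstruction of the missing equation; in particular, the normalization by T_{MC+} rather than T_{MC} is our assumption):

```latex
\delta_{T} \;=\; \frac{T_{\mathrm{MC+}} - T_{\mathrm{MC}}}{T_{\mathrm{MC+}}}.
```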
We find that the difference between both methods is on the order of ∼2% for the largest part of the plot, which is represented by dark red colors. The highest differences of ∼5%, corresponding to bright yellow regions in the plot, are only found in single cells and can be explained by a low photon count in these cells. We thus find that the results of both methods are in agreement with each other. For this specific setup, we achieved a boost in computation speed by about one order of magnitude.
3.3. Impact of imposing a maximum interaction number limit
In this section we study the impact of the parameter N_{int} on the resulting temperature distribution of a circumstellar disk, as described in Sect. 3.1. Due to its reliability and the achieved boost in computation speed, the following simulations were performed using our newly developed method. In Fig. 9 we show the results of a comparison between a reference simulation with N_{int} → ∞ and N_{γ} = 10^{8} and simulations with a finite value of N_{int}. In the left (right) plot, we limited the number of interactions to N_{int} = 10^{7} (N_{int} = 10^{9}) and plotted the relative difference to the reference model. Positive values of δ_{T} correspond to bright regions in the plots where the simulation with N_{int} → ∞ reached higher temperatures. We find that the differences are larger for the lower limit N_{int} = 10^{7}, which is in accordance with our expectation. If the number of interactions of a photon package reaches the maximum N_{int}, the photon package is removed. This reduces the number of photons emitted from that cell, thus leading to an underestimation of its radiation field and consequently its dust temperature. The effect of this reduced temperature is most prominent in the midplane, resulting in relative differences of up to ∼1/3. Regions in the plot with negative values, that is, higher resulting temperatures for models with a lowered value of N_{int}, are rare. The highest differences greatly exceed the level of noise seen in the right plot of Fig. 8. The right plot of Fig. 9, which has a much higher limit of N_{int} = 10^{9}, shows the greatest differences in cells that are close to the star and slightly off the midplane cell layer. These differences stem from the fact that a high number of photon packages are created in the innermost part of the disk and, in the case of a limited number of interactions, a part of them are removed from the model space without entering the regions off the midplane.
By contrast, in the reference model, these regions are heated up partly by photon packages that have undergone a number of interactions exceeding N_{int}. Nonetheless, it can be seen that the differences in the largest part of the circumstellar disk are comparable to the noise level (see Fig. 8) and only exceed this level in the innermost region of the disk.
Fig. 9. Studying the impact of N_{int}. Comparison of temperature distributions of viscously heated circumstellar disks. The reference model uses N_{int} → ∞. All displayed results stem from simulations using our method. Left: comparison with a model using N_{int} = 10^{7}. Right: comparison with a model using N_{int} = 10^{9}. 
To discuss the consequences and highlight the significance of our results, Fig. 10 shows the radial midplane temperature profile of these disks. This layer of cells produces radiation due to viscous heating. We show results for our reference model with N_{int} → ∞ and N_{γ} = 10^{8} as well as for three models, each of which has one modified parameter. We find that our reference model, the model with a reduced number of photon packages N_{γ} = 10^{7}, and the model with a reduced value of N_{int} = 10^{9} all produce comparable results for the midplane temperature profile. Only in the case of the further restricted value N_{int} = 10^{7} do we find strong deviations from all other simulations. We identify two major differences: (i) the innermost temperature profile and the sublimation radius are both underestimated and (ii) the temperature of the outer part of the midplane is strongly affected by the radiation originating from the inner part of the disk. The first difference occurs because the density in this part of the disk is high and photon packages may reach their interaction limit even before leaving the cell, which results in an underestimation of the temperature in this region. Consequently, a finite value of N_{int} results in an underestimation of the inner cavity size. This effect may change simulated near-infrared long-baseline interferometry observations, for instance, by MATISSE and PIONIER, and therefore impacts the interpretation of protoplanetary disk observations. The second temperature difference, seen at larger radii, also arises due to the removal of photon packages originating from the inner region of the disk. In the case of stellar radiation, outer regions are effectively shielded from direct irradiation. By contrast, a viscously heated disk generates radiation throughout its extent. Thus, outer regions are shielded less efficiently from radiation generated by viscous dissipation than they are from stellar radiation.
Due to a combination of relatively high luminosity and reduced shielding, photon packages from warm regions of the midplane reach regions farther out and lead to a significant increase in their temperature. A limit on the number of interactions therefore leads to an underestimation of the resulting midplane temperature, even at larger radii. This effect may very well be crucial, for example, for simulated millimeter and submillimeter observations (e.g., by ALMA) and impact our understanding and interpretation of real observations. It is generally difficult to estimate the impact of photon packages that are removed too early, and our study shows that the impact of these photon packages may very well extend to regions of a circumstellar disk that are far away from the optically thick region from which they were emitted.
Fig. 10. Impact of N_{int} and N_{γ} on the midplane temperature of a viscously heated circumstellar disk. 
An investigation of the impact of stellar irradiation on the resulting temperature distribution shows a rather weak dependence on N_{int}; in particular, choosing N_{int} = 10^{7} already results in a midplane temperature profile that is comparable to the case of N_{int} → ∞, independent of whether the basic method or our new method is used. Furthermore, the level of noise in the optically thick regions is dominated by the inherent noise of a Monte Carlo simulation. In order to reduce the level of noise, an increased number of emitted photon packages N_{γ} is required. However, the midplane temperature in our specific setup is dominated by heating through viscous dissipation. Therefore, a proper treatment of optically thick regions is especially relevant when dealing with radiation that originates from deep inside these regions, such as in the case of viscous dissipation or young accreting protoplanets.
4. Summary
In this work, we present a newly developed method to solve the problem of high optical depths in Monte Carlo radiative transfer simulations. More precisely, this method reduces the computation time needed to transport a photon package through a dense dusty medium in order to calculate the temperature distribution. We assumed homogeneously distributed dust grains and used the radiative transfer code Mol3D as a testbed for this method. We made use of precalculated results from simulations of photon packages that travel from the center of a sphere to its rim. During the precalculation, we performed Monte Carlo radiative transfer simulations to determine properties of the emitted radiation originating from a homogeneous sphere as a function of its temperature and size. We tracked the relative total deposited energy in the sphere (X), the last location of absorption, as well as the wavelength of emitted photon packages (λ). The results are binned and stored in a multidimensional CDF, which is then used to quickly transport photon packages over long distances in a probabilistic manner. We note that the precalculation was performed using basic radiative transfer, which includes wavelength-dependent optical properties, a temperature-dependent dust emission spectrum, as well as Mie scattering with a proper treatment of the polarization state of the photon package. Therefore, the results of these simulations are unbiased. In the following, we summarize our findings as well as the key features of our method.

Precalculations were performed for a set of discrete values of the optical depth defined in Eq. (3). Apart from the wavelength λ, we chose to track the variable X and the last location of absorption, which depend on the product of the sphere radius r_{sph} and the dust density n_{dust} and not on either of these variables separately. In Sect. 2.3, we show that, as a consequence, the resulting distributions can be scaled and adjusted to the problem at hand for any value of this optical depth, which makes this method viable in the first place.
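The sampling step at the heart of the method can be sketched with a small, self-contained example. This is an illustration only, assuming a hypothetical two-dimensional probability table over wavelength and deposited-energy bins; it is not the actual table used in Mol3D:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical precalculated joint probability table over
# (wavelength bin, deposited-energy bin); values sum to 1.
pdf = np.array([[0.10, 0.05],
                [0.30, 0.25],
                [0.20, 0.10]])
cdf = np.cumsum(pdf.ravel())
cdf /= cdf[-1]  # guard against rounding; last entry is exactly 1

def draw(n):
    """Inverse-transform sampling: map uniform deviates to flat bin
    indices via the cumulative table, then unravel to (lam, X) bins."""
    u = rng.random(n)
    flat = np.searchsorted(cdf, u, side="right")
    return np.unravel_index(flat, pdf.shape)

lam_idx, x_idx = draw(100_000)
# The empirical frequency of bin (1, 0) approaches its probability, 0.30.
```

In the actual method, each drawn bin corresponds to an emitted wavelength and a deposited energy, so a single table lookup replaces the step-by-step transport through an optically thick region.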

We find that the tracked distributions show signs of convergence with increasing optical depth. Interestingly, this leads to a simple relation for the mean deposited energy per dust grain and simulated photon package. Furthermore, we find that the probability for X > 10 is very low; in fact, not a single transported photon package among all of the performed simulations reached that value (see Sect. 2.2.3). This provides a strong constraint on the maximum energy deposited per photon package, which depends on the temperature, density, and size of the spheres.

In Sect. 2.4, we derive and discuss the limitations of our method with regard to changes of the medium’s optical properties as well as spatial limits imposed by a model grid (see Eq. (13)). Subsequently, in Sect. 2.5 we present upgrades that significantly boost the effectiveness of the method: the splitting of scattering and absorption lengths (see Sect. 2.5.1), the utilization of prior knowledge of the initial direction (see Sect. 2.5.2), and infinite beam splitting (see Sect. 2.5.3).

We carried out a performance test using a dense sphere of high optical depth as a testbed and achieved a speed-up of about three orders of magnitude (see the upper right plot in Fig. 7). In even denser environments, for instance, around young planets or stars, this value is easily exceeded and our method is expected to perform even better.

Simulations for increasingly high optical depths are time-consuming when using the basic Monte Carlo method. In the bottom plots of Fig. 7, we show that the statistics of emitted photon packages as well as of the deposited energy agree remarkably well between the basic method and our method. Therefore, the calculation of larger spheres can be sped up significantly by using the set of already precalculated smaller spheres. The error of this approach has been shown to be consistent with the variance of a Poisson distribution.

We applied our method to a simple model of an optically thick, viscously heated circumstellar disk and demonstrated the validity of the method. We studied the impact that the imposed maximum number of interactions N_{int} has on the resulting temperature profile of the disk. We find that the choice of N_{int} is decisive throughout the full extent of the disk, not only in the optically thick regions where the radiation is generated (see Sect. 3.3). The impact of a value of N_{int} that is too low is generally difficult to predict.
High optical depths pose a well-known problem for Monte Carlo radiative transfer simulations. This study presents a novel method that solves this problem and thus enables a proper prediction of the temperatures of objects with optically thick regions.
For details on the optical properties, see Sect. 3.1.
Acknowledgments
We thank all members of the Astrophysics Department Kiel for helpful discussions and remarks. We acknowledge the support of the DFG priority program SPP 1992 “Exploring the Diversity of Extrasolar Planets” (WO 857/17-1).
References
 Andrews, S. M., Wilner, D. J., Hughes, A. M., Qi, C., & Dullemond, C. P. 2009, ApJ, 700, 1502
 Baes, M., Gordon, K. D., Lunttila, T., et al. 2016, A&A, 590, A55
 Bjorkman, J. E., & Wood, K. 2001, ApJ, 554, 615
 Camps, P., & Baes, M. 2018, ApJ, 861, 80
 Draine, B. T., & Lee, H. M. 1984, ApJ, 285, 89
 Draine, B. T., & Malhotra, S. 1993, ApJ, 414, 632
 Fleck, J. A. J., & Canfield, E. H. 1984, J. Comput. Phys., 54, 508
 Galli, P. A. B., Bertout, C., Teixeira, R., & Ducourant, C. 2015, A&A, 580, A26
 Gordon, K. D., Baes, M., Bianchi, S., et al. 2017, A&A, 603, A114
 Hartmann, L. 1998, Accretion Processes in Star Formation
 Hartmann, L., Calvet, N., Gullbring, E., & D’Alessio, P. 1998, ApJ, 495, 385
 Ishimaru, A. 1978, Wave Propagation and Scattering in Random Media. Volume 1: Single Scattering and Transport Theory
 Kurz, R., Guilloteau, S., & Shaver, P. 2002, The Messenger, 107, 7
 Laor, A., & Draine, B. T. 1993, ApJ, 402, 441
 Le Bouquin, J. B., Berger, J. P., Lazareff, B., et al. 2011, A&A, 535, A67
 Lopez, B., Lagarde, S., Jaffe, W., et al. 2014, The Messenger, 157, 5
 Lucy, L. B. 1999, A&A, 344, 282
 Lynden-Bell, D., & Pringle, J. E. 1974, MNRAS, 168, 603
 Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425
 Mie, G. 1908, Ann. Phys., 330, 377
 Min, M., Dullemond, C. P., Dominik, C., de Koter, A., & Hovenier, J. W. 2009, A&A, 497, 155
 Ober, F., Wolf, S., Uribe, A. L., & Klahr, H. H. 2015, A&A, 579, A105
 Reissl, S., Wolf, S., & Brauer, R. 2016, A&A, 593, A87
 Robitaille, T. P. 2010, A&A, 520, A70
 Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
 Wehrse, R., Baschek, B., & von Waldenfels, W. 2000, A&A, 359, 780
 Weingartner, J. C., & Draine, B. T. 2001, ApJ, 548, 296
 Wolf, S. 2003, ApJ, 582, 859
 Wolf, S., Henning, T., & Stecklum, B. 1999, A&A, 349, 839
Appendix A: Derivation of expected optical depths
A.1. Interactions and absorption-scattering
The following derivation shows that absorption lengths and scattering lengths can be determined independently of each other. This is equivalent to the use of extinction lengths combined with a randomly assigned type of interaction (absorption or scattering).
A.1.1. Splitting absorption and scattering
The probability density for an absorption optical depth τ_{abs} is given by f(τ_{abs}) = exp(−τ_{abs}). Substituting τ_{abs} = C̃_{abs}l, where C̃_{abs} is the specific cross-section for absorption, the probability density for traveling a distance l before absorption follows as

p(l) = C̃_{abs} exp(−C̃_{abs}l). (A.1)
A.1.2. Using interactions
The probability density for an extinction optical depth τ_{ext} is given by f(τ_{ext}) = exp(−τ_{ext}). Subsequent to a reemission event, a photon package is moved along a straight line until it reaches a randomly selected extinction optical depth. At that point, another random number determines whether the subsequent interaction is an event of absorption or scattering. The probability for a scattering event is given by C̃_{sca}/C̃_{ext}, where C̃_{sca} (C̃_{ext}) is the specific cross-section for scattering (extinction).
In the scattering-free case, the probability density for traveling the distance l after traversing the extinction optical depth τ_{ext} = C̃_{ext}l is, in analogy to the derivation of Eq. (A.1), given by C̃_{ext} exp(−C̃_{ext}l). For the probability density of a subsequent event of absorption, we thus find

p_{0}(l) = (C̃_{abs}/C̃_{ext}) C̃_{ext} exp(−C̃_{ext}l) = C̃_{abs} exp(−C̃_{ext}l). (A.2)
In order to calculate the probability for n > 0 events of scattering before the eventual event of absorption at distance l, we have to integrate over the probability density of all possible paths of this kind. Let l_{i} ≥ 0 denote the path length the photon package travels before its ith event of scattering. It follows that after its nth and final scattering event, it travels a distance of l − Σ_{i=1}^{n} l_{i} before being absorbed. For the ith event of scattering, the probability density is, in analogy to the derivation of Eq. (A.2), given by C̃_{sca} exp(−C̃_{ext}l_{i}). Since the individual path lengths are constrained by Σ_{i=1}^{n} l_{i} ≤ l, the probability density p_{n} is given by

p_{n}(l) = C̃_{abs} C̃_{sca}^{n} exp(−C̃_{ext}l) ∫_{0}^{l} dl_{1} ∫_{0}^{l−l_{1}} dl_{2} ⋯ ∫_{0}^{l−l_{1}−⋯−l_{n−1}} dl_{n}. (A.3)
Integrating Eq. (A.3), one arrives at

p_{n}(l) = C̃_{abs} exp(−C̃_{ext}l) (C̃_{sca}l)^{n}/n!.
This implies that the photon package most probably scatters n_{max} = ⌊l · C̃_{sca}⌋ times. Finally, summing p_{n} over all possible numbers of scattering events, we find for the probability density

Σ_{n=0}^{∞} p_{n}(l) = C̃_{abs} exp(−C̃_{ext}l) exp(C̃_{sca}l) = C̃_{abs} exp(−C̃_{abs}l).
Since this result coincides with Eq. (A.1), the calculation of absorption and scattering lengths can be split. ▫
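This equivalence is easy to check numerically. The following minimal sketch (our own check, not part of the original derivation; cross-section values chosen arbitrarily) compares the distance to absorption in both schemes, drawing extinction lengths with randomly assigned interaction types versus drawing the absorption length directly from an exponential with rate C̃_{abs}:

```python
import numpy as np

rng = np.random.default_rng(42)
C_abs, C_sca = 0.7, 1.8        # arbitrary specific cross-sections
C_ext = C_abs + C_sca
n = 100_000

def distance_via_interactions():
    """Scheme of Sect. A.1.2: draw extinction lengths; at each
    interaction, absorb with probability C_abs/C_ext, else scatter
    and continue (only the accumulated path length matters here)."""
    l = 0.0
    while True:
        l += rng.exponential(1.0 / C_ext)
        if rng.random() < C_abs / C_ext:
            return l

# Scheme of Sect. A.1.1: the absorption distance is exponential
# with rate C_abs, independent of scattering.
d_int = np.array([distance_via_interactions() for _ in range(n)])
d_abs = rng.exponential(1.0 / C_abs, size=n)
# Both sample means approach 1/C_abs ≈ 1.43.
```

Both samples follow the same exponential distribution with rate C̃_{abs}, in line with the derivation above.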
A.2. Estimator for N_{int}
For comparison purposes between the basic Monte Carlo method and our newly developed method, we kept the maximum number of interactions N_{int} equal in both simulations. In order to keep track of the number of interactions when using the new method, we estimated it every time the method was used. An exact estimate can be derived in the limiting case of high optical depth, where ⟨ ⋅ ⟩ denotes the average over all possible paths of the photon package:
For the latter equation, we changed the sum over interaction path lengths into a sum over absorption path lengths. This is possible since the wavelength of the photon package does not change between two consecutive absorption events. Because, in this limit, any two consecutive absorption events are independent of each other, we can exchange the average of the sum for the sum of the averages. Let ⟨⋅⟩_{τabs, λ} denote the average over all possible paths between two consecutive absorption events. It follows that
For the latter equation, we used τ_{abs} = n_{dust}C_{abs}l and replaced the sum by the total number of absorption events N_{abs}. The average is simply ⟨τ_{abs}⟩_{τabs, λ} = 1. Therefore, we arrive at
Fig. A.1. Probability differences between distributions of X obtained from simulations with different sphere sizes r_{sph} but equal optical depths, for three different temperatures (2.73 K, 45 K, and 1508 K in the first, second, and third columns, respectively). For every temperature and sphere size, 10^{6} simulations were performed. Text boxes indicate the compared sphere sizes; for example, “100–1” denotes the comparison of the distribution for a sphere size of r_{sph} = 100 au with that for r_{sph} = 1 au. In addition, the relative difference of the averages of two compared distributions is given by δ_{⟨X⟩} in the upper left corner of each plot. Low values of δ_{⟨X⟩} indicate a higher level of agreement.

In this limiting case, the total number of interactions N_{int} = N_{abs} + N_{sca} converges to the total traversed extinction optical depth since, on average, one interaction occurs per unit of τ_{ext}. Analogously to the derivation of Eq. (A.4) and using C_{ext} = C_{abs} + C_{sca}, we thus arrive at
Since λ and τ_{abs} are independent random variables, we can split the average into a product of two separate averages. Thus,
where ⟨⋅⟩_{λ} denotes the average over all possible wavelengths using the corresponding temperaturedependent PDF. With Eq. (A.4), we find that
Therefore, we estimate the number of interactions undergone during the transport of a photon package from the center of a sphere to its last location of absorption with
A.3. Displacement law of the difference spectrum
In this section, we study the difference spectrum dB_{λ}/dT introduced by Bjorkman & Wood (2001). We find that (i) dB_{λ}/dT is positive for all λ, T > 0; (ii) it is unimodal, that is, it has exactly one (global) maximum; and (iii) the maximum follows a law similar to Wien’s displacement law, that is, λ_{max}T = const. The difference spectrum is given by

dB_{λ}/dT = (2h²c³)/(λ⁶k_{B}T²) · e^{x}/(e^{x} − 1)², with x = hc/(λk_{B}T),
which is strictly positive for all λ, T > 0. For a fixed temperature T, its derivative with respect to the wavelength λ satisfies
where s(T, λ) > 0 for all λ, T > 0. Thus, an extremal point requires
This equation has exactly one positive solution x_{max} > 0 since (i) the lefthand side (l.h.s.) is smaller than the righthand side (r.h.s.) as x → 0^{+} and (ii) the slope of the l.h.s (m_{l.h.s.}) is given by m_{l.h.s.} = 6, while the slope of the r.h.s (m_{r.h.s.}) satisfies 0 < m_{r.h.s.} < 2 for all x > 0. Furthermore, we find that
thus requiring the extremal point at x_{max} to be a maximum and, consequently, dB_{λ}/dT to be unimodal. A numerical calculation shows that the maximum is located at x_{max} ≈ 5.97.
Therefore, for a given temperature T, the unique (global) maximum of the difference spectrum dB_{λ}/dT, which is located at the wavelength λ_{max}, satisfies λ_{max}T ≈ 2410 μm K.
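This displacement law can be verified numerically. At fixed T, the Planck function B_{λ} = (2hc²/λ⁵)/(e^{x} − 1) with x = hc/(λk_{B}T) yields dB_{λ}/dT ∝ x⁶e^{x}/(e^{x} − 1)² up to a factor that depends only on T, so the peak position in x is temperature independent. A short sketch (our own check, not code from the paper):

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

# Log of the temperature-independent shape x^6 e^x / (e^x - 1)^2.
x = np.linspace(1.0, 10.0, 900_001)
g = 6 * np.log(x) + x - 2 * np.log(np.expm1(x))
x_max = x[np.argmax(g)]              # location of the unique maximum

lam_T = h * c / (kB * x_max) * 1e6   # Wien-like constant in um K
# x_max ≈ 5.97, lam_T ≈ 2410 um K
```

For comparison, the ordinary Wien constant of B_{λ} itself is about 2898 μm K; the difference spectrum peaks at a somewhat shorter wavelength for a given temperature.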
All Figures
Fig. 1. Visualization of the temperature dependence of reemitted photon packages when using the difference spectrum dB_{λ}/dT. Top: difference spectrum dB_{λ}/dT, normalized for each temperature value T_{dust} to a maximum value of 1. Bottom: effective extinction cross-section Ĉ_{ext} (see Eq. (4); left axis) and the optical depth (see Eq. (3); right axis) as a function of the dust temperature T_{dust}. For details, see Sect. 2.2.

Fig. 2. Resulting distributions using 10^{6} simulations per temperature and optical depth value. Three different temperatures (2.73 K, 45 K, and 1508 K) are displayed in the first, second, and third columns, respectively; the probability is shown in the first row and the probability density in the second row. Different sphere sizes exhibit notable similarities with regard to the shape and average value of their corresponding distributions. This shows that radiation emitted by dense spheres of different sizes but identical temperatures originates from a similar distance from the rim of each sphere.

Fig. 3. Average relative deposited energy using 10^{4} simulations per temperature and optical depth value.

Fig. 4. Extinction optical depth dependence of the average total relative absorbed energy per photon package. The red line shows, as an example, the behavior for a T ≈ 1500 K sphere. The gray area marks the region found for the complete range of temperatures between 2.73 and 3000 K.

Fig. 5. Results for the distributions of the variable X using 10^{6} simulations per temperature and optical depth value. Outcomes for six different temperatures are displayed as histograms together with their log-normal fits (dashed lines) in corresponding colors. For details, see Sect. 2.2.

Fig. 6. Probability differences between distributions of X obtained from simulations with different sphere sizes r_{sph} but equal optical depths, for three different temperatures (2.73 K, 45 K, and 1508 K in the first, second, and third columns, respectively). For every temperature and sphere size, 10^{4} simulations were performed. Text boxes indicate the compared sphere sizes; for example, “100–1” denotes the comparison of the distribution for a sphere size of r_{sph} = 100 au with that for r_{sph} = 1 au. In addition, the relative difference of the averages of two compared distributions is given by δ_{⟨X⟩} in the upper left corner of each plot. Low values of δ_{⟨X⟩} indicate a higher level of agreement.

Fig. 7. Viability and performance test: comparison between the basic Monte Carlo method and our method. Setup: a dense sphere with a high radial effective extinction optical depth is heated by a central source from 2.73 K to 1500 K. Upper left: the heating curves of the sphere show a high level of agreement. Upper right: performance comparison. Bottom: the statistics of emitted photons show a high level of agreement.

Fig. 8. Comparison of the resulting temperature distributions of a viscously heated circumstellar disk. Left: temperature distribution for N_{γ} = 10^{8} and N_{int} → ∞ using our method. Right: relative difference between the basic Monte Carlo method and our method using N_{int} = 10^{9}. 

Fig. 9. Studying the impact of N_{int}. Comparison of temperature distributions of viscously heated circumstellar disks. The reference model uses N_{int} → ∞. All displayed results stem from simulations using our method. Left: comparison with a model using N_{int} = 10^{7}. Right: comparison with a model using N_{int} = 10^{9}. 
