A&A, Volume 547, November 2012, Article A54 (13 pages)
Section: Interstellar and circumstellar matter
DOI: https://doi.org/10.1051/0004-6361/201219501
Published online: 25 October 2012

Online material

Appendix A: SED fitting and photometry

For our two-component models, we modelled each set of data as the sum of a blackbody and a greybody envelope:

F_\nu = B_\nu(T_\mathrm{b})\,e^{-\tau}\,\Omega_\mathrm{b} + G_\nu(T_\mathrm{g},\lambda_0,\beta)\,\Omega_\mathrm{g}, \qquad \mathrm{(A.1)}

where G_\nu(T_\mathrm{g},\lambda_0,\beta) = B_\nu(T_\mathrm{g})\,(1 - e^{-\tau}), with τ, the optical depth, parametrized as a power law, i.e.,

\tau = (\lambda/\lambda_0)^{-\beta}, \qquad \mathrm{(A.2)}

where λ0 is the wavelength at which τ = 1, and Ωg and Ωb are the solid angles of the respective components. The independent, unknown parameters are then Tb, Tg, λ0, and β, while the solid angles are computed by scaling the model fluxes to the data. This model describes our sources in terms of two components: a central source, the blackbody, embedded in a dusty envelope whose emission is modelled with a greybody. Since we do not impose that the envelope is optically thin at all wavelengths, the blackbody radiation is attenuated by a factor that describes the absorption caused by the dust opacity. The absorption term, strictly speaking, implies that the envelope cannot be isothermal; therefore, as explained in Sect. 3, the set of parameters Tg, λ0, β, Ωg should be considered only as describing the average physical conditions of the envelope. The density profile in the cores usually follows a power law, ρ ∝ r^−q, with typically q ≲ 2, so that the mass, and hence the flux, increases with radius. Tg should then be indicative of the temperature in the outer region of the envelope.
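As an illustration of Eqs. (A.1) and (A.2), a minimal numerical sketch in Python follows (cgs units; the function names planck_nu and model_flux are ours, not from the paper):

```python
import numpy as np

# Physical constants (cgs)
H = 6.626e-27    # Planck constant [erg s]
C = 2.998e10     # speed of light [cm s^-1]
KB = 1.381e-16   # Boltzmann constant [erg K^-1]

def planck_nu(nu, T):
    """Blackbody intensity B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def model_flux(nu, Tb, Tg, lam0_um, beta, omega_b, omega_g):
    """Eq. (A.1): attenuated blackbody plus greybody envelope."""
    lam_um = C / nu * 1e4                    # wavelength [um]
    tau = (lam_um / lam0_um) ** (-beta)      # Eq. (A.2): tau(lam0) = 1
    blackbody = planck_nu(nu, Tb) * np.exp(-tau) * omega_b
    greybody = planck_nu(nu, Tg) * (1.0 - np.exp(-tau)) * omega_g
    return blackbody + greybody
```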

The best-fit model was found inside a grid prepared by varying the parameters in the following intervals: 5 ≤ T(K) ≤ 50 in steps of 1 K; 0 ≤ β ≤ 5 in steps of 0.5; and 10 ≤ λ0(μm) ≤ 600 in steps of 10 μm. These intervals, wider than is physically plausible, were chosen to ensure that the best-fit parameters would not fall on the border of the grid. The best fit was then chosen as the one with the smallest χ2. During the search, we applied a few constraints: a) Tb > Tg, i.e., the greybody envelope must be colder than the blackbody component; b) Ωb < Ωg, i.e., the blackbody must be smaller than the greybody; c) the model must predict a 24 μm flux lower than the upper limit; d) the predicted flux at 1.1 mm must be lower than the measured value; and e) the envelope mass must be in the range 0.1–10 M⊙.
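A sketch of such a grid search under constraints (a) and (b) follows, reusing planck_nu and the constants above. The observed fluxes here are placeholders, the solid angles are obtained by a weighted linear least-squares scaling to the data, and constraints (c)–(e), which need ancillary data, are omitted:

```python
from itertools import product

# Placeholder SED: wavelengths [um] -> frequencies [Hz], fluxes [Jy]
lam_obs = np.array([70.0, 100.0, 160.0, 250.0, 350.0, 500.0])
nu_obs = C / (lam_obs * 1e-4)
f_obs = np.array([0.17, 0.69, 2.9, 5.4, 5.0, 3.6])   # illustrative values
f_err = 0.2 * f_obs

best_chi2, best_pars = np.inf, None
for Tb, Tg, beta, lam0 in product(range(5, 51), range(5, 51),
                                  np.arange(0.0, 5.5, 0.5),
                                  range(10, 610, 10)):
    if Tb <= Tg:                                  # constraint (a)
        continue
    tau = (lam_obs / lam0) ** (-beta)
    xb = planck_nu(nu_obs, Tb) * np.exp(-tau) * 1e23         # Jy sr^-1
    xg = planck_nu(nu_obs, Tg) * (1.0 - np.exp(-tau)) * 1e23
    # Solid angles from a weighted linear least-squares fit to the data
    A = np.vstack([xb, xg]).T / f_err[:, None]
    (omega_b, omega_g), *_ = np.linalg.lstsq(A, f_obs / f_err, rcond=None)
    if not 0.0 < omega_b < omega_g:               # constraint (b)
        continue
    chi2 = np.sum(((omega_b * xb + omega_g * xg - f_obs) / f_err) ** 2)
    if chi2 < best_chi2:
        best_chi2 = chi2
        best_pars = (Tb, Tg, beta, lam0, omega_b, omega_g)
```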

Once the minimum χ2 was found, the 1σ uncertainties in the parameters were estimated by considering all models with a χ2 below a threshold slightly above the minimum, chosen to delimit the 1σ confidence region (Andrae 2010). For B1-bS, only eight models were found in this χ2 range. For B1-bN, the lack of a 70 μm flux had the consequence that 242 models, out of a total of 27 646, fell within this χ2 range. Among these 242 models, the best fit is the one reported in Table 3, but with very large uncertainties. To decrease the uncertainties, we added another constraint and selected only the 107 models with λ0 ≤ 200 μm, in analogy with the results of the selection for B1-bS. Finally, assuming that the dust properties are the same in the two sources, we considered only the models with β = 2, discarding 70 models with β = 1.5.
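In code, this selection amounts to simple cuts on the stored grid, along the following lines (chi2_all, pars_all, and the 1σ increment delta_chi2 are hypothetical names for quantities accumulated during the grid search above):

```python
chi2_all = np.asarray(chi2_all)                # one chi^2 per grid model
sel = [p for p, c2 in zip(pars_all, chi2_all)
       if c2 <= chi2_all.min() + delta_chi2]   # 1-sigma set (Andrae 2010)
# Additional cuts used for B1-bN: lam0 <= 200 um, then beta == 2;
# p = (Tb, Tg, beta, lam0, omega_b, omega_g) as in the loop above
sel = [p for p in sel if p[3] <= 200]
sel = [p for p in sel if p[2] == 2.0]
```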

For the Bonnor-Ebert mass, we used the formula reported in Könyves et al. (2010),

M_\mathrm{BE} = 2.4\, R\, a^2 / G, \qquad \mathrm{(A.3)}

where a is the sound speed corresponding to the temperature found from the fit, assuming a mean molecular mass μmH = 2.33 mH = 3.90 × 10⁻²⁴ g. For R we used the radius found from the fit, i.e., R = D(Ωg/π)^{1/2}.
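A worked sketch of Eq. (A.3) follows, reusing KB from above; the coefficient 2.4 is from Könyves et al. (2010), while the distance and solid angle used in the example are purely illustrative, not values from the paper:

```python
G_GRAV = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
MU_MH = 3.90e-24     # mean molecular mass, 2.33 m_H [g]
MSUN = 1.989e33      # solar mass [g]
PC = 3.086e18        # parsec [cm]

def bonnor_ebert_mass(T, R_cm):
    """Eq. (A.3): M_BE = 2.4 R a^2 / G, with a the isothermal
    sound speed at temperature T."""
    a2 = KB * T / MU_MH                   # a^2 [cm^2 s^-2]
    return 2.4 * R_cm * a2 / G_GRAV

# Illustrative numbers only: a 20 K envelope at D = 235 pc whose fitted
# solid angle is 1e-9 sr, with R = D * sqrt(Omega_g / pi)
R = 235 * PC * np.sqrt(1e-9 / np.pi)
print(bonnor_ebert_mass(20.0, R) / MSUN)  # M_BE in solar masses
```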

A.1. Photometry corrections

Before comparing the model fluxes with the observed data, two steps were performed. The first step was a colour correction: the flux calibration of PACS and SPIRE assumes that the SED of a source has a flat νFν spectrum, so for any other kind of SED the derived fluxes must be colour-corrected according to the intrinsic source spectrum. To correct the fluxes properly, however, we would need to know the spectrum a priori, which is precisely what we aim to derive from the data. To overcome this circular problem, we computed the correction the other way round: a set of greybody models was computed over a wide frequency range, and the in-band fluxes were derived as

F_\mathrm{cc} = \int F_\nu \,\mathrm{RSRF}_\nu \, d\nu, \qquad \mathrm{(A.4)}

where RSRFν stands for the relative spectral response function of each of the PACS or SPIRE filters, computed by the instrument control teams. To obtain the corresponding flux density, Fcc was divided by the appropriate filter width, which we computed by imposing that the colour correction at the effective wavelength equals 1 for an SED of constant νFν. The derived Δλ and Δν are reported in Table A.1.

As a consistency check, we compared the colour corrections we obtained for a blackbody with the values found by the instrument teams. For PACS, the agreement is better than 1% in all bands for T ≥ 7 K. At lower temperatures, the agreement is the same at 100 μm and 160 μm, while it starts to diverge at 70 μm: our correction is 1.3% higher at T = 6 K and 37% higher at T = 5 K. At such extremely low temperatures, however, colour corrections are difficult to estimate because the intrinsic spectrum differs markedly from the calibration spectrum. For SPIRE, the comparison is less direct because the colour corrections for a blackbody are not tabulated as a function of temperature, but only in the limiting case of a blackbody in the Rayleigh-Jeans regime. For a blackbody at 100 K, we found that our colour correction is less than 1.3% higher at 250 μm and less than 1% higher in the other two bands.
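A sketch of the band integration of Eq. (A.4) follows; nu_grid, rsrf, nu_eff, and delta_nu stand for hypothetical filter data (RSRF and widths as in Table A.1), and the convention that the correction is the ratio of monochromatic to synthetic flux density is our assumption, not the instrument teams' definition:

```python
def synthetic_flux(nu_grid, f_nu, rsrf, delta_nu):
    """Eq. (A.4): band-integrate a model SED through the filter RSRF,
    then convert to a flux density with the filter width delta_nu."""
    y = f_nu * rsrf
    fcc = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(nu_grid))  # trapezoid
    return fcc / delta_nu

def colour_correction(sed, nu_eff, nu_grid, rsrf, delta_nu):
    """Ratio of the monochromatic model flux density at the effective
    frequency to the synthetic in-band flux density; `sed` is a
    callable returning F_nu."""
    return sed(nu_eff) / synthetic_flux(nu_grid, sed(nu_grid), rsrf, delta_nu)
```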

Table A.1

Effective wavelengths and filter widths used to compute the colour corrections.

The colour corrections for the attenuated blackbody were found by inverting the definition of the greybody, i.e., Bν(T)e^−τ = Bν(T) − Gν(T), from which (Bν(T)e^−τ)cc = Bcc(T) − Gcc(T). In this way, the same grid of models could be used.
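Because the band integration is linear, the synthetic flux of the attenuated blackbody can indeed be obtained by subtraction, e.g. (reusing planck_nu, C, and synthetic_flux from the sketches above):

```python
def attenuated_bb_synthetic(Tb, lam0_um, beta, nu_grid, rsrf, delta_nu):
    """(B_nu e^-tau)_cc = B_cc - G_cc: reuse the blackbody and greybody
    synthetic fluxes instead of integrating the attenuated SED anew."""
    tau = (C / nu_grid * 1e4 / lam0_um) ** (-beta)
    bb = planck_nu(nu_grid, Tb)
    gb = bb * (1.0 - np.exp(-tau))        # greybody at the same Tb
    return (synthetic_flux(nu_grid, bb, rsrf, delta_nu)
            - synthetic_flux(nu_grid, gb, rsrf, delta_nu))
```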

Once the best fit was found, we computed the true colour-correction factors, i.e., the amounts by which the observed fluxes must be multiplied to obtain the colour-corrected fluxes. For the best-fit models reported in Table 3, the colour corrections fcc are given in Table A.2.

Table A.2

Colour-correction factors for the best-fit models reported in Table 3.

The second step, taken before comparing model fluxes with observed fluxes, accounts for the Gaussian fit used to derive the photometric flux. The PACS PSF, for example, is known not to be Gaussian, so the Gaussian fit introduces an error in the photometry. To quantify these errors, we reduced a set of PACS observations of flux calibrators and compared the results of aperture photometry with those of synthetic photometry (Gaussian fitting). This exercise was repeated for a set of isolated compact sources of different integrated fluxes in the Perseus field. For 70 μm, 100 μm, and 160 μm, we found correction factors of 1.6, 1.5, and 1.4, respectively. We are performing additional tests to improve our knowledge of the Gaussian-fit errors. The SPIRE PSF is much closer to a Gaussian profile, so no correction factors were needed for the SPIRE fluxes.

In summary, Table 2 reports the measured fluxes multiplied by the factors that correct for the Gaussian fit, while the Herschel data in Fig. 3 are the fluxes from Table 2 multiplied by the colour-correction factors. For instance, the 70 μm flux of B1-bS resulting from the Gaussian fit is 0.138 Jy, which becomes 0.22 Jy after multiplication by 1.6; this is the value reported in Table 2. After further multiplication by 0.79 (see Table A.2), the flux becomes 0.17 Jy, the value used in Fig. 3.

We do not yet have estimates of the uncertainties in the maps, so it is not possible to assign reliable uncertainties to the individual fluxes. The uncertainties given in Table 2 were derived from a comparison of CuTEx fluxes with those found with the getsources algorithm (Men’shchikov et al. 2012).

A.2. Photometry in SCUBA maps

The effective beam of SCUBA at 450 μm is a Gaussian of FWHM 17.3″ (Di Francesco et al. 2008), but the actual beam consists of two spatial components: an inner one with an FWHM of 11″, and an outer one with an FWHM of 40″. The sizes we derived for B1-bS and B1-bN are smaller than 17.3″. Since CuTEx is more sensitive to compact sources than to extended ones, we conclude that what CuTEx fitted was just the inner component of the sources, while the extended component, larger than the sources’ separation, was seen by CuTEx as background. To estimate the total flux of the individual sources, we used the following approach. If a source has an intrinsic size θ, its observed size through the inner beam is θ1, with θ1² = θ² + (11″)², and similarly θ4² = θ² + (40″)² for the outer beam. From the analysis of Di Francesco et al. (2008), we know that the peak fluxes are P1 = 0.88P and P2 = 0.12P, where P is the peak flux of the 17.3″ FWHM Gaussian, from which P2 = P1 × 0.12/0.88. Since the flux of a Gaussian component is proportional to its peak times the square of its FWHM, expressing θ4 as a function of θ1 we have

F = F_1 \left[ 1 + \frac{0.12}{0.88}\, \frac{\theta_1^2 + (40'')^2 - (11'')^2}{\theta_1^2} \right]. \qquad \mathrm{(A.5)}

At 850 μm, CuTEx finds sizes that are larger than the effective beam of 22.9″, but since this beam is again the combination of two Gaussians (Di Francesco et al. 2008), of 19.5″ and 40″ FWHM, it is likely that in this case too CuTEx fits just the inner Gaussian and sees the second one as background. The formula we used to find the total flux is the same as Eq. (A.5), with 19.5″ instead of 11″. Since our sources are slightly larger than the effective beam, however, the applicability of Eq. (A.5) to the 850 μm flux is less robust.
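Under this reading of the derivation (total flux as the sum of the two Gaussian components, each with flux proportional to P θ²), Eq. (A.5) can be sketched as:

```python
def scuba_total_flux(f1, theta1, th_inner=11.0, th_outer=40.0):
    """Reconstructed Eq. (A.5): add to the fitted inner-component flux
    f1 the outer-component contribution, with peak ratio 0.12/0.88 and
    observed size theta4^2 = theta1^2 - th_inner^2 + th_outer^2."""
    theta4_sq = theta1**2 - th_inner**2 + th_outer**2
    return f1 * (1.0 + (0.12 / 0.88) * theta4_sq / theta1**2)

# At 850 um, the inner component is 19.5'' wide instead of 11'':
# scuba_total_flux(f1_850, theta1_850, th_inner=19.5)
```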

A.3. The mass of a greybody

Without Herschel data to define the SED in the FIR, the emitting mass is usually derived from the flux measured at long wavelengths, where the envelope becomes optically thin. In this limit, and neglecting the contribution of the central source, Eq. (A.1) becomes

F_\nu = B_\nu(T_\mathrm{g})\, \tau\, \Omega_\mathrm{g}. \qquad \mathrm{(A.6)}

If the envelope is optically thin, we see the whole mass distribution M, so that

\tau = \frac{M}{A}\, \kappa_\mathrm{ref}\, (\lambda/\lambda_\mathrm{ref})^{-\beta}, \qquad \mathrm{(A.7)}

where A is the projected area and κref is the opacity at the reference wavelength λref. As in other papers of the Gould Belt consortium, e.g., Könyves et al. (2010), we adopted an opacity of κref = 0.1 cm² g⁻¹ at λref = 300 μm (Beckwith et al. 1990). With this choice, the GBS opacity law is very similar to that of Hildebrand (1983), who adopted κref = 0.1 cm² g⁻¹ at λref = 250 μm and β = 2 for λ ≥ 250 μm. For this work, only κref matters because β was derived directly from the fit; λref was shifted to 300 μm for consistency with older results obtained with ground-based facilities, such as IRAM or JCMT. By comparing the column density map of IC 5146 obtained from Herschel observations with the GBS dust opacity law against that obtained from SCUBA (sub)mm data with a different opacity law, Arzoumanian et al. (2011) found that the GBS dust opacity law works well for the Herschel data. The exact value of κref is in any case uncertain and likely depends on the dust temperature; an uncertainty of a factor ~2 in the derived mass cannot be excluded.

Substituting Eq. (A.7) into Eq. (A.6) and writing the solid angle as Ωg = A/D² = πR²/D², with R the outer radius of the envelope, assumed spherical, and D the distance to the source, we arrive at the well-known formula

M = \frac{F_\nu\, D^2}{\kappa_\mathrm{ref}\, (\lambda/\lambda_\mathrm{ref})^{-\beta}\, B_\nu(T_\mathrm{g})}. \qquad \mathrm{(A.8)}

Since our best-fit models also provide λ0 and β, we derived a different formula to estimate the mass of a greybody envelope.

At wavelengths where the envelope is optically thin, combining Eqs. (A.2) and (A.7) gives (λ/λ0)^−β = (M/A) κref (λ/λref)^−β, from which

M = \frac{\pi R^2}{\kappa_\mathrm{ref}} \left( \frac{\lambda_0}{\lambda_\mathrm{ref}} \right)^{\beta}. \qquad \mathrm{(A.9)}

This new equation, equivalent for a greybody to the unnumbered equation on page 274 of Hildebrand (1983), gives the mass inside a sphere of radius R that, for the given opacity law, has optical depth 1 at λ0. Of course, Eq. (A.9) can be applied only if we know β and λ0, but these are precisely what the Herschel data allow us to find. The masses of the envelopes discussed in Sect. 3 were found using the above formula with the greybody parameters derived from the fit and reported in Table 3.
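The two mass estimates, Eqs. (A.8) and (A.9), can be sketched as follows (reusing C and planck_nu from above, with κref and λref as adopted in the text; masses come out in grams):

```python
KAPPA_REF = 0.1   # opacity at lambda_ref [cm^2 g^-1] (Beckwith et al. 1990)
LAM_REF = 300.0   # reference wavelength [um]

def mass_optically_thin(f_nu_jy, lam_um, T, D_cm, beta):
    """Eq. (A.8): M = F_nu D^2 / (kappa_nu B_nu(T)) in the optically
    thin limit, with kappa_nu = kappa_ref (lambda/lambda_ref)^-beta."""
    nu = C / (lam_um * 1e-4)
    kappa = KAPPA_REF * (lam_um / LAM_REF) ** (-beta)
    f_cgs = f_nu_jy * 1e-23           # Jy -> erg s^-1 cm^-2 Hz^-1
    return f_cgs * D_cm**2 / (kappa * planck_nu(nu, T))

def mass_greybody(R_cm, lam0_um, beta):
    """Eq. (A.9): mass of a sphere of radius R with tau(lambda0) = 1
    for the adopted opacity law."""
    return np.pi * R_cm**2 / KAPPA_REF * (lam0_um / LAM_REF) ** beta

# Divide either result by 1.989e33 to express the mass in solar masses.
```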

Appendix B: Sigma clipping with a threshold dependent on the sample size

Before generating the final maps, the bolometers’ time series have to be corrected for glitches caused by the impact of high-energy particles on the detectors. To achieve this correction, we exploited the spatial redundancy provided by the telescope movement, which ensures that each sky pixel of the final map has been observed several times with different bolometers. Outlier detection was then performed with a sigma-clipping algorithm: given a sample of N values, estimates of the mean and standard deviation are derived, and all values that differ from the mean by more than α standard deviations are considered outliers and removed from the sample.

One problem with the sigma-clipping algorithm is the choice of the number of standard deviations above which a datum is considered an outlier. To avoid an arbitrary choice of α, we derived a formula that makes α dependent on the size of the sample. Below, we give some details of the method.

For a Gaussian distribution with mean m and standard deviation σ, the probability of drawing a value x such that m − ασ < x < m + ασ is given by erf(α/√2), where erf(y) is the error function

\mathrm{erf}(y) = \frac{2}{\sqrt{\pi}} \int_0^y e^{-t^2}\, dt.

For example, if α = 3, the probability is erf(3/√2) ≈ 0.9973. Obviously, 1 − erf(α/√2) gives the probability of drawing a value outside the same interval. This probability tends to zero when α → ∞ (by definition, erf(+∞) = 1).

In a sample of N data, the probability given by the error function, multiplied by N, can be interpreted as the expected number of points inside or outside a certain interval around the mean. For any value of α, the expected number p of points that differ from the mean by more than ασ is then

p(\alpha; N) = N \left[ 1 - \mathrm{erf}\left( \alpha/\sqrt{2} \right) \right].

For an ideal Gaussian, any value of α is allowed. In a real experiment, however, p is an integer number, and data differing from the mean by ασ such that p(α; N) ≪ 1 are suspicious. To implement this condition in the sigma-clipping algorithm, we have to define a precise value of α, say ᾱ, above which observed data are considered outliers and removed from the sample. To this aim, we define ᾱ as the value for which p(ᾱ; N) = 1, or

N \left[ 1 - \mathrm{erf}\left( \bar{\alpha}/\sqrt{2} \right) \right] = 1. \qquad \mathrm{(B.1)}

All the points, if any, that fall outside the interval [m − ᾱσ, m + ᾱσ] are considered outliers and removed from the sample. This is equivalent to assuming that outliers are points with p(α; N) < 1 instead of p(α; N) ≪ 1.
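Equation (B.1) is easy to solve numerically, e.g.:

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

def alpha_bar(n):
    """Solve Eq. (B.1), N [1 - erf(alpha/sqrt(2))] = 1, for alpha."""
    return brentq(lambda a: n * (1.0 - erf(a / np.sqrt(2.0))) - 1.0,
                  1e-6, 10.0)

print(round(alpha_bar(40), 2))  # ~2.24; the parabola (B.2) gives 2.25
print(round(alpha_bar(20), 2))  # 1.96, as in the example below
```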

When N → ∞, then ᾱ → ∞ too; the mathematical property of the Gaussian distribution that, for an infinitely large sample, any value is allowed is thus preserved.

It is not possible to invert Eq. (B.1) analytically for a given N, but an approximate analytical expression can be derived. Indeed, we found that, over the range of ᾱ of interest, Eq. (B.1), written in terms of log(N), can be approximated with a parabola (B.2). In Fig. B.1 we show the ratio between N as given by Eq. (B.1) and N as given by Eq. (B.2); the two differ by less than 3%.

Fig. B.1

The ratio between N as given by Eq. (B.1) and N as approximated with the parabola of Eq. (B.2), as a function of the number of standard deviations ᾱ. The difference between the two curves is always less than 3%. Fitting a cubic instead of a parabola does not improve the approximation significantly and makes the relation harder to invert.


Inverting Eq. (B.2) is trivial; the result, Eq. (B.3), gives ᾱ as a function of log(N). For a given N, the value of ᾱ is found from this equation, and the expected number of points more distant from the mean than ᾱσ is below one. If one or more outliers are found nonetheless, they are removed, new values of m and σ are computed, and the new N′ < N is used in Eq. (B.3). In principle, this process can be repeated until no further outliers are present in the sample. The procedure may not converge, however, for instance if the noise is not Gaussian. For this reason, we did not iterate the formula: for each pixel of the map, the detection of outliers was performed only once. More than one outlier can still be found in this single pass.

In the central region of the map, where the coverage (and thus N) is high, the clipping threshold is higher than in the outskirts of the map, where the coverage is low. For instance, if a sky pixel has been observed with 40 bolometers, the above formula gives ᾱ = 2.25. Accordingly, once we have estimated the mean m and the standard deviation σ, all values xi such that |xi − m| > 2.25σ are flagged as outliers. If instead a pixel has been observed with 20 bolometers, the threshold is lowered to 1.96σ.
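A minimal one-pass implementation, using alpha_bar from the sketch above, could look like this (synthetic data, not actual bolometer time series):

```python
def clip_once(values):
    """Single-pass sigma clipping with the sample-size-dependent
    threshold alpha_bar(N); the test is applied only once, as in the
    text, so more than one outlier may be flagged in the same pass."""
    x = np.asarray(values, dtype=float)
    m, s = x.mean(), x.std()
    thr = alpha_bar(x.size)
    keep = np.abs(x - m) <= thr * s
    return x[keep], x[~keep]

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(0.0, 1.0, 40), [8.0]])  # one glitch
good, flagged = clip_once(sample)   # the 8-sigma glitch is removed
```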


© ESO, 2012
