A&A, Volume 541, May 2012

Article Number: A92
Number of page(s): 18
Section: Planets and planetary systems
DOI: https://doi.org/10.1051/0004-6361/201118541
Published online: 04 May 2012
Online material
Appendix A: Data reduction
In this appendix, we describe the reduction process applied to the PACS data in order to obtain what we call “single” and “combined” maps.
A.1. General PACS photometer data reduction
As explained in the main text (see Sect. 3.1), the data reduction of the maps was performed within HIPE (version 6.0.2044). We modified and adapted the standard HIPE scripts to the needs of our programme: the PACS photometer data are reduced from Level 0 to Level 2 using a modified version of the standard mini scan-map pipeline. A major difference between our reduction chain and the standard pipeline is that frames are selected on scan speed rather than on the building block identifier (also known as BBID, a number that identifies a building block – a consistent part – of an observation). Extensive tests have shown that selecting frames on scan speed increases the number of usable frames and ultimately improves the SNR of the final maps by 10–30% (depending on the setup of the PACS observation requests (AORs) and on the band used), which is especially important for our faint targets.
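The scan-speed selection can be illustrated outside HIPE with a minimal sketch. The function name, the speed arrays, and the 10% tolerance are our own assumptions for illustration; the actual selection operates on the HIPE frames product.

```python
import numpy as np

def select_frames_by_speed(speeds, nominal, tol=0.1):
    """Keep frames whose instantaneous scan speed is within a
    fractional tolerance of the nominal speed (hypothetical
    rendering; tolerance value is an assumption)."""
    speeds = np.asarray(speeds, dtype=float)
    return np.abs(speeds - nominal) <= tol * nominal
```

Selecting on the measured speed rather than on the BBID keeps the turnaround frames that already move at the nominal speed, which is where the extra usable frames come from.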
Since our targets move slowly (a few ″/h) and the total observation time per scanning direction in one visit is between 10 and 25 min, we do not correct for the proper motion. Such a correction would smear the background and would make a correct background subtraction impossible in the later stages (see Sect. A.2).
We use a two-stage high-pass filtering procedure, as in the standard mini scan-map pipeline. In the first stage a “naive” map is created. This map is used in the second stage to mask the neighborhood of the source and the high-flux pixels: the vicinity of the source is masked (within two times the FWHM of the beam of the band in question), as are all pixels with fluxes above 2.5 times the standard deviation of the map flux values. We used the standard high-pass filter width parameters of 15, 20, and 35. The final maps are created by the PhotProject() task, using the default dropsize parameter.
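The masking logic of the second filtering stage can be sketched as follows. This is a hypothetical numpy rendering for illustration only: the function name is ours, and the actual pipeline applies the mask to the detector timelines inside HIPE, not to a final map.

```python
import numpy as np

def highpass_mask(naive_map, src_xy, fwhm_pix, clip_sigma=2.5):
    """Second-stage high-pass filter mask: exclude the source
    vicinity (2 x FWHM) and pixels brighter than clip_sigma times
    the map standard deviation. True = excluded from the filter."""
    ny, nx = naive_map.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # circular exclusion zone of radius 2 x FWHM around the source
    near_source = np.hypot(xx - src_xy[0], yy - src_xy[1]) < 2.0 * fwhm_pix
    # clip all pixels above 2.5 sigma of the naive-map flux values
    bright = naive_map > clip_sigma * np.nanstd(naive_map)
    return near_source | bright
```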
These first steps lead to one “single” map per visit, filter, and scan direction (i.e. in total 8 maps per object in the red and 4 maps in the blue/green, created using the MosaicTask() in HIPE). The sampling of the single maps generated with these scripts is 1.1″/pixel, 1.4″/pixel, and 2.1″/pixel for the blue (70 μm), green (100 μm), and red (160 μm) bands, respectively.
A.2. Building final maps for photometry
We then use these single maps to generate the final “combined” maps on which the photometry is performed. Essentially, to generate the combined maps, we determine the background map, subtract it from each single map, and finally co-add all the background-subtracted maps. A detailed description of this process is as follows:

i)
We have initially a set of 8 (red) or 4 (blue and green) single maps taken on different dates. Considering the red maps, let us call t1 to t4 (t5 to t8) the dates corresponding to the first (second) visit. t1–t4 and t5–t8 are separated by typically 1–2 days, so that the target motion (typically 30–50″, as described in Sect. 2 of the main text) produces a significant change in the observed sky field (see Fig. A.1).
Fig. A.1 Left four panels: Typhon red band 1st visit single maps (t1 to t4). Right four panels: Typhon 2nd visit single maps (t5 to t8). t1, t3, t5, and t8 were observed with a scan angle of 110°, and t2, t4, t6, and t7 with a scan angle of 70° with respect to the detector array. The green circle marks the object position.
ii)
A background map is generated using the single maps. To do this we “mask” the target in each of the 8 (4) single maps and co-add the maps in the sky coordinate system. This step produces a high-SNR background map without the target (see the right panel of Fig. A.3 for the Typhon case at 160 μm).

iii)
The background map is subtracted from the single maps (Fig. A.1), producing 8 (4) single maps with the background removed. We call these images “background-subtracted maps” (Fig. A.2).
Fig. A.2 Same as Fig. A.1 for the background-subtracted maps.
iv)
Finally, the background-subtracted maps are co-added in the target frame (center panel of Fig. A.3), producing the final combined map on which photometry is performed.
An alternative, simpler method to obtain final maps is to directly co-add the original (i.e. not background-corrected) single maps in the target frame. We call this the simple co-addition method (left panel of Fig. A.3). This method is obviously less optimal than the previous one in terms of SNR, but it provides a useful test to demonstrate that the background subtraction does not introduce any spurious effects.
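Steps i)–iv) above can be sketched in a few lines under two simplifying assumptions that are ours, not the pipeline's: all single maps are already reprojected onto a common sky grid, and the target positions are known at integer-pixel accuracy (the real reduction works with full WCS information and sub-pixel resampling).

```python
import numpy as np

def build_combined_map(singles, target_pix, mask_radius=5):
    """Sketch of the combined-map construction.
    singles: (n, ny, nx) stack of single maps in sky coordinates.
    target_pix: (y, x) integer target position in each single map.
    Returns (combined map in the target frame, background map)."""
    n, ny, nx = singles.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # ii) mask the target in each map, co-add in the sky frame
    masked = singles.astype(float).copy()
    for k, (ty, tx) in enumerate(target_pix):
        masked[k][np.hypot(yy - ty, xx - tx) < mask_radius] = np.nan
    background = np.nanmean(masked, axis=0)
    # iii) background-subtracted single maps
    bkg_sub = singles - background
    # iv) shift each map so the target lands at the map centre,
    #     then co-add in the target frame
    cy, cx = ny // 2, nx // 2
    shifted = [np.roll(m, (cy - ty, cx - tx), axis=(0, 1))
               for m, (ty, tx) in zip(bkg_sub, target_pix)]
    return np.mean(shifted, axis=0), background
```

Because the target moves between visits, the pixels it masks in one single map are filled by the others, so the background map has no hole at the target positions.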
A more detailed description of the whole PACS data reduction process will be published in Kiss et al. (in prep.a).
Fig. A.3
Left: Typhon simple co-added map in the red band. Note that in this case the background is the average (centered on the target) of the backgrounds in maps t1–t8 of Fig. A.1, so that the bright background sources appear twice. The green circle marks the object position. Center: Typhon background-subtracted co-added map in the red band. Right: background map (target masked).

Fig. A.4
Aperture-corrected curve-of-growth obtained from the Typhon background-subtracted co-added map in the red band (Fig. A.3, center).

Appendix B: Statistical aspects
B.1. “Rescaled error bar” approach
As indicated in the main text (see Sect. 4.3), error bars on the fitted parameters (diameter, albedo, and beaming factor) are determined through a Monte-Carlo approach which essentially consists of randomly building synthetic datasets using the uncertainties in the measured fluxes. However, when the measurement error bars are too small, or similarly when the model is not completely adequate (both cases being indicated by a “poor model fit”), the described Monte-Carlo approach will underestimate the uncertainties on the solution parameters. In this case, we adopt a “rescaled error bar” approach, described hereafter.
Quantitatively, the fit quality is determined by the value of the reduced χ^{2}:

χ^{2}_{ν} = (1/ν) Σ_{i} [(O_{i} − M_{i})/σ_{i}]^{2}, (B.1)

where O, M, and σ are the observed fluxes, the modelled fluxes, and the flux error bars, and ν is the number of degrees of freedom (ν = N − 1 for fixed η, and ν = N − 2 for floating η, where N is the number of thermal wavelengths available). While χ^{2}_{ν} ≈ 1 indicates a good fit, χ^{2}_{ν} ≫ 1 indicates a poor fit. The idea is therefore to empirically “rescale” (uniformly, in the lack of a better choice) all error bars σ by √(χ^{2}_{ν}) before running the Monte-Carlo approach. This method leads to much more realistic error bars on the solution parameters. However, the range of χ^{2}_{ν} for which this approach is warranted depends on the number of data points, as for few observations χ^{2}_{ν} does not have to be close to 1 to indicate a good fit. Specifically, the variance estimator of N points drawn from a Gaussian distribution with unit dispersion follows a distribution that is approximately Gaussian with mean unity and standard deviation √(2/N). If we had for example N = 1000 observations, 68.2% of the good fits would have a reduced χ^{2} between 0.955 and 1.045, i.e. χ^{2}_{ν} is strongly constrained to 1; for N = 5 (resp. 3) observations, in contrast, this range is much broader, 0.368–1.632 (resp. 0.184–1.816). This means for example that for 3 data points (PACS only), a reduced χ^{2} of e.g. 1.6 is acceptable. We considered these N-dependent limits as the thresholds beyond which the fits are statistically bad and the error bars need to be rescaled. Note finally that the rescaling approach is merely an “operative” method to avoid unrealistically low error bars on the fitted diameter, albedo, and beaming factor, and that the error bars shown in Figs. 1 and 2 of the main text are the original, not rescaled, measurement uncertainties.
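The rescaling rule can be written in a few lines; the function name and the exact form of the threshold test are our own rendering of the rule described above, not code from the paper.

```python
import numpy as np

def rescaled_sigmas(obs, model, sigma, n_free=2):
    """Rescale flux error bars by sqrt(reduced chi^2) when the fit
    is statistically poor. n_free = 1 for fixed-eta fits, 2 for
    floating-eta fits (our rendering of the rule)."""
    obs, model, sigma = map(np.asarray, (obs, model, sigma))
    nu = len(obs) - n_free                       # degrees of freedom
    chi2_red = np.sum(((obs - model) / sigma) ** 2) / nu
    # good fits lie within roughly 1 +/- sqrt(2/N); beyond the
    # upper limit the fit is statistically bad, so rescale
    if chi2_red > 1.0 + np.sqrt(2.0 / len(obs)):
        sigma = sigma * np.sqrt(chi2_red)
    return sigma, chi2_red
```

The rescaled σ values then replace the originals when the synthetic Monte-Carlo datasets are drawn.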
B.2. Spearman-rank correlation with error bars
As described in the main text, the Spearman ρ correlation test is distribution-free and less sensitive to outliers than most other methods, but, like the others, it treats data points as “exact” and does not take their possible error bars into account. However, any variation of the measured data points within their error bars may change the correlation coefficient. Furthermore, each correlation coefficient has its own confidence interval, which depends on the number of data points and on the magnitude of the correlation. To account for these effects we used the following three procedures:

1.
To estimate the most probable correlation coefficient given the error bars, we generate 1000 samples of data points, building each synthetic dataset from its associated probability function. When error bars are symmetric, this probability function is taken to be Gaussian, with the error bar corresponding to one standard deviation. When error bars are asymmetric, we fit a log-normal probability function such that the observed value corresponds to the distribution’s mode, and the interval between the upper and lower limits of the observation corresponds to the shortest interval containing 68.2% of all possible values. The resulting distribution of correlation values is not Gaussian, but its Fisher transform z = arctanh(ρ) is. So we can determine the most probable correlation value ⟨ρ⟩ from our Monte-Carlo simulations, and its upper and lower limits (+σ_{MC}/−σ_{MC}), which are not necessarily symmetric after the reconversion using ρ = tanh(z) (see Peixinho et al. 2004, for more details). It is noticeable (and expected) that observational error bars “degrade” the correlation. The approximate significance (p-value), or confidence level (CL), of ρ can be computed from t = ρ √[(n − 2)/(1 − ρ^{2})], which follows a Student’s t distribution with n − 2 degrees of freedom. However, when n ≲ 15, using t slightly overestimates the significance. We have compared this calculation of the significance with the exact values tabulated by Ramsey (1989) and computed the adjustments required to obtain a more accurate approximation of the ρ and p-values in the case of low n.

2.
The confidence level of a given ρ results from testing the hypothesis “the sample is correlated” against the hypothesis “the sample correlation is zero”. Knowing the confidence interval (CI) within which the correlation ρ of the parent population lies may be more informative. For example, suppose we have ρ = 0.7 with a 3σ confidence level; we would conclude that the sample is correlated. But if the 68% confidence interval of the correlation were, say, [0.3, 0.9], we would be unsure whether the correlation was very strong or rather weak. To estimate the shortest (+σ_{B}/−σ_{B}) interval containing 68.2% of the population’s ρ (i.e. the equivalent of the Gaussian 1σ interval), we used 1000 bootstrap extractions from the data sample, computing this range as in the previous item (e.g. Efron & Tibshirani 1993).

3.
Finally, to be fully correct, one should perform the 1000 bootstraps on each one of our 1000 Monte-Carlo simulations to obtain the true 68% confidence interval of ⟨ρ⟩, a computationally heavy task. Fortunately, the combination of the two effects can be obtained by quadratically adding the standard deviations of the Gaussian distributions of the Fisher transforms of the Monte-Carlo simulations and of the bootstraps, i.e. σ_{z} = √(σ_{MC,z}^{2} + σ_{B,z}^{2}), which, after reconversion to ρ, gives our final best estimate of the 68% confidence interval, noted ⟨ρ⟩ (+σ/−σ).
© ESO, 2012