A&A 377, 1023-1034 (2001)
M. Lombardi1 - J. Alves2
1 - Institut für Astrophysik und Extraterrestrische Forschung, Auf dem Hügel 71, 53121 Bonn, Germany
2 - European Southern Observatory, Karl-Schwarzschild-Straße 2, 85748 Garching bei München, Germany
Received 29 May 2001 / Accepted 30 July 2001
We generalize the technique of Lada et al. (1994) to map dust column density through a molecular cloud (NICE) to an optimized multi-band technique (NICER) that can be applied to any multi-band survey of molecular clouds. We present a first application to a subset of the Two Micron All Sky Survey (2MASS) data and show that, when compared to NICE, the optimized NICER technique (i) achieves the same extinction peak values, (ii) improves the noise variance of the map by a factor of 2, and (iii) is able to reach dust extinction measurements better than or equivalent to classical optical star count techniques, below the threshold column density for the formation of CO, the brightest tracer in radio-spectroscopy techniques. The application of the NICER technique to near-infrared data obtained with an 8 meter-class telescope and a state-of-the-art NIR camera, such as the VLT-ISAAC combination, will be able to achieve dynamic ranges corresponding to AV in the range [0.3, 60] magnitudes and spatial resolutions <10'', making use of a single and straightforward column density tracer: extinction by interstellar dust.
Key words: ISM: clouds - ISM: dust, extinction - ISM: structure - ISM: Orion molecular complex, Mon R2 molecular complex - methods: data analysis
Dark clouds remain among the least understood objects in the Universe. Although they are ubiquitous in galaxies like the Milky Way, we still do not have a basic understanding of how they relate to the more diffuse interstellar medium (ISM), of how some of them form stars and planets, or of how they vanish. We do know today that they are the coldest objects known, which unfortunately implies that they are also among the hardest to observe and study. Before the discovery of emission from molecules in dark clouds (Wilson et al. 1970, after which these clouds became known as molecular clouds), the general technique used to study the density structure of these objects relied on statistical analysis of the counts of dust-extincted stars in the background of a cloud. This approach, introduced by Wolf (1923), is known as the star count method and was later refined and intensively applied by Bok (1937, 1956). In this technique, the main mass component of a molecular cloud, molecular hydrogen, is traced by the dust in the cloud, an assumption later proved to be valid by observations. The main strengths of the star count method are its straightforwardness and its sensitivity to low column density regions, with AV in the range [0.5, 3] magnitudes (Rossano 1978; Dickman 1978; Mattila 1986; Gregorio Hetem et al. 1988; see Arce & Goodman 1999 for a multi-technique approach).
Almost everything we know today about molecular cloud structure has been derived from radio observations of molecular tracers (e.g., CO, CS) of the undetectable molecular hydrogen (Lada 1996; Blitz & Williams 1999; Myers 1999) and, in more recent years, from dust thermal emission detected by radio continuum techniques (Andre et al. 2000; Johnstone et al. 2000). Radio spectroscopy and continuum techniques are able to probe deeper into molecular clouds, well past the column densities accessible to classical star count techniques, an order of magnitude improvement. Moreover, radio spectroscopy offers a unique view of the dynamical structure of molecular clouds (e.g., Rosolowsky et al. 1999), something that absorption and continuum-emission techniques evidently cannot assess. However, the interpretation of results derived from radio observations is not always straightforward (e.g., Alves et al. 1999; Chandler & Richer 2000). Several poorly constrained effects inherent to these techniques (e.g., deviations from local thermodynamic equilibrium, opacity variations, chemical evolution, small-scale structure, depletion of molecules, unknown emissivity properties of the dust, unknown dust temperature) make the construction of an unambiguous picture of the physical structure of these objects a very difficult task. The price of being able to probe deeper into a molecular cloud with radio line and continuum techniques is a complicated data analysis, not free from ambiguities.
It has long been recognized that infrared color excess can be used to obtain a reliable estimate of the extinction through a molecular cloud (see, e.g., Bok 1977; Jones et al. 1980; Frerking et al. 1982; Jones et al. 1984; Casali 1986). The deployment of sensitive, large format infrared array cameras on large telescopes has allowed the direct measurement of line-of-sight dust extinction toward thousands of individual background stars observed through a molecular cloud. Such measurements are free from the complications that plague molecular-line or dust emission data and enable detailed maps of cloud density structure to be constructed. In all fairness, all density tracer methods rely on the assumption of a universal gas-to-dust ratio, a fair assumption on theoretical grounds that still remains to be demonstrated for cold, dense molecular clouds. Lada et al. (1994) pioneered a technique, the Near-Infrared Color Excess (NICE) technique, for measuring and mapping the distribution of dust through a molecular cloud using data obtained in large-scale, multi-wavelength, infrared imaging surveys. This method combines measurements of near-infrared color excess to directly measure extinctions (as opposed to the statistical measurement in the star count method) and map the dust column density distribution through a cloud. Moreover, the measurements can be made at significantly higher angular resolutions and substantially greater optical depths than previously thought possible. The efficacy of this technique was demonstrated in studies of the dark clouds L 977 (Alves et al. 1998), IC 5146 (Lada et al. 1999), and Barnard 68 (Alves et al. 2001), where straightforward analysis of nearly 7000 infrared sources in the background of these clouds produced detailed extinction maps at optical depths and spatial resolutions an order of magnitude better than previously possible (AV in the range [1, 40] magnitudes).
Nearby galactic molecular cloud complexes represent our best chance to understand molecular cloud structure. These complexes have relatively large sizes (from a few to tens of parsecs) and are hence fairly extended, typically stretching several to tens of degrees across the sky. A wide view of entire molecular cloud complexes, with sufficient resolution and dynamic range, is the only way to put these clouds in their Galactic context and address the questions of their origin and fate. Censuses of entire molecular cloud complexes are rare (CO line emission: Dame et al. 2001; Fukui et al. 1999; IRAS opacity maps: Wood & Myers 1995; optical star counts: Cambrésy 1999) and have limited dynamic ranges, suffering from the caveats inherent to the respective techniques (see above). There is a clear need for a new large scale mapping technique for molecular cloud complexes, one that makes use of a more reliable column density tracer and offers an extended dynamic range.
In this paper we generalize Lada's NICE technique to an optimized multi-band technique, the Near-Infrared Color Excess Revisited (NICER) technique. Although inspired by the Two Micron All Sky Survey (2MASS) (Kleinmann et al. 1994), and by the possibility of all-sky dust extinction mapping, this technique can be applied to any multi-band survey of molecular clouds (e.g., DENIS, Epchtein et al. 1997; SDSS, York et al. 2000; or any combination of multi-wavelength catalogs). We present a first application to 2MASS data, and show that NICER improves the noise variance of a map by a factor of 2 when compared to NICE, and is able to reach dust extinction measurements better than or equivalent to optical star count techniques, below the threshold column density for the formation of CO under typical temperatures and densities (van Dishoeck & Black 1988; van Dishoeck 1992). This latter point is highly relevant as CO is the brightest tracer in radio-spectroscopy techniques. The application of NICER to near-infrared (NIR) data obtained with an 8 meter-class telescope outfitted with a state-of-the-art NIR camera, e.g. the VLT-ISAAC combination, will be able to achieve column density dynamic ranges corresponding to AV in the range [0.3, 60] magnitudes, at spatial resolutions <10'', making use of a single and straightforward column density tracer, extinction by dust.
The paper is organized as follows: in Sect. 2 we review the NICE technique and discuss its limitations, in Sect. 3 we give a detailed presentation of NICER, in Sect. 4 we apply both techniques to the same 2MASS data subset, and present the conclusions in Sect. 5.
The difference between the observed and the intrinsic color of a background star provides information on the optical depth of the cloud. For example, for the near-infrared H - K color and assuming a normal reddening law (Rieke & Lebofsky 1985), we have

A_V = 15.87 [ (H - K)_observed - (H - K)_intrinsic ] .     (1)

Equation (1) would not be particularly useful if the intrinsic star colors (which are generally unknown) spanned a large range of values. In reality, infrared colors of stars have a relatively small scatter, so that it is sensible to take the average of Eq. (1), obtaining

<A_V> = 15.87 [ <(H - K)_observed> - <(H - K)_intrinsic> ] ,

where the averages are taken over the stars in a small patch of the sky.
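As an illustration, the NICE estimate can be sketched in a few lines of code; the mean intrinsic color and the input magnitudes below are illustrative assumptions, while the coefficient 15.87 follows from the normal reddening law of Rieke & Lebofsky (1985):

```python
import numpy as np

# NICE: per-star visual extinction from the H-K color excess,
# A_V = 15.87 * [(H-K)_observed - <H-K>_intrinsic].
AV_PER_EHK = 15.87          # normal reddening law (Rieke & Lebofsky 1985)
MEAN_HK_INTRINSIC = 0.13    # illustrative control-field mean color (mag)

def nice_extinction(h_mag, k_mag, mean_hk=MEAN_HK_INTRINSIC):
    """Visual extinction toward each star from its H-K color excess."""
    excess = (np.asarray(h_mag, float) - np.asarray(k_mag, float)) - mean_hk
    return AV_PER_EHK * excess

# A star reddened by 0.30 mag in H-K implies A_V of about 4.8 mag.
av = nice_extinction(h_mag=[12.43], k_mag=[12.00])
```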
It is not difficult to recognize that the two problems discussed above are intimately related. Suppose, for example, that we are investigating a cloud with low column density. Then, because of the intrinsic scatter of star colors and of photometric errors, we could mistake a foreground star for a background object just because its color is observed to be redder than expected. The situation can be even more difficult for embedded stars. Clearly, a perfect discrimination between foreground and background objects is feasible only in the unrealistic case where the column density can be measured without any error. This is not possible, but at least we can try to take care of both problems in a uniform way. This is the point of view adopted by our optimized method for extinction estimation.
|Figure 1: JHK color-color diagram of one hundred stars (sketch). The left plot shows the colors of stars which are not subject to extinction. The ellipse encloses a confidence region; note that the ellipse is almost exactly vertically oriented, i.e. J-H correlates only weakly with H-K. The right plot shows the same stars as observed through a cloud with AV = 5 magnitudes (a normal reddening law is assumed).
In this section we present NICER, the Near-Infrared Color Excess method Revisited, an optimized, multi-band method for extinction estimation based on the near infrared excess of background starlight. The method has been designed by considering separately the two steps needed to make an extinction map:
As discussed in Sect. 2, the cloud column density AV is normally measured by comparing the H - K color of stars observed through the cloud with the same color of stars for which no reddening is expected (see Eq. (3)). Clearly, other possibilities are also viable. For instance, we could choose the J - H color and use the expression

A_V = 9.35 [ (J - H)_observed - (J - H)_intrinsic ] ,

where the coefficient again follows from the normal reddening law.
|Figure 2: For each star, NICER obtains an estimate of the extinction AV using a maximum-likelihood technique. The star colors and their covariance (represented in the left panel by the small ellipse at the top right) are obtained from the observations. Note that a correlation of the colors is expected (see Eq. (10)). The net effect of photometric errors is a widening (solid ellipse) of the intrinsic color scatter (dashed ellipse) (Eq. (11) and following discussion). To estimate the extinction to the star, NICER then finds the ellipse center along the reddening line (right panel) that is closest to the observed colors of the star (Eq. (12)).
With the three bands at our disposal we can make two independent colors, c1 = J - H and c2 = H - K. For each ci (with i = 1, 2), we can write the relationship between the observed color ci^obs and the true one ci^tr (i.e., the color that would be observed if no extinction were present) as

ci^obs = ci^tr + ki A_V ,

where the reddening coefficients ki = E(ci)/A_V are fixed by the adopted reddening law.
The second matrix is related to photometric errors, and thus changes for each star. This matrix can be easily calculated provided that an estimate of the photometric errors for each star is available. In the typical case of the three bands J, H, and K considered here, we can write

Var(c1) = sigma_J^2 + sigma_H^2 ,   Var(c2) = sigma_H^2 + sigma_K^2 ,   Cov(c1, c2) = -sigma_H^2 ,

since the two colors share the H band (which is why they are anticorrelated).
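Because the two colors share the H band, an error in H pushes c1 and c2 in opposite directions. A minimal sketch of assembling this covariance matrix (the per-band error values are illustrative):

```python
import numpy as np

def color_covariance(sig_j, sig_h, sig_k):
    """Photometric covariance matrix of c1 = J-H and c2 = H-K.

    The shared H band makes the two colors anticorrelated:
    Cov(c1, c2) = -sigma_H^2.
    """
    return np.array([
        [sig_j**2 + sig_h**2, -sig_h**2],
        [-sig_h**2,           sig_h**2 + sig_k**2],
    ])

cov = color_covariance(sig_j=0.03, sig_h=0.04, sig_k=0.05)
```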
We can now minimize Eq. (9) with the requirement that Eq. (7) holds using Lagrange multipliers. This way we reduce our problem to the solution (b1, b2) of the linear system

sum_j C_ij b_j = lambda k_i ,

where the multiplier lambda is fixed by the constraint sum_i k_i b_i = 1; equivalently, b = C^-1 k / (k^T C^-1 k).
A nice feature of this technique is that it can be applied without significant modifications in the case where one of the bands is not observed. Suppose, for example, that the J band is not available. Then we can assume an arbitrary value for this band and set the corresponding error to an extremely large value. If we "blindly'' apply Eqs. (12) and (13), we obtain a matrix C-1 with only a single non-vanishing element at the position (2,2). In other words, the use of a large error on J automatically suppresses the use of this band in the evaluation of the extinction. Interestingly, the same technique can be used if the missing band is H, which contributes to both colors c1 = J - H and c2 = H - K: in this case the estimator will be composed only of the combination c1 + c2 = J - K, thus avoiding the use of the missing H band.
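A sketch of this estimator, including the missing-band trick just described: the weight vector is b = C^-1 k / (k^T C^-1 k), where C is the total color covariance and k holds the reddening coefficients per unit AV (the numerical values below are assumptions for illustration only):

```python
import numpy as np

# Reddening coefficients k_i per unit A_V for c1 = J-H and c2 = H-K
# (assumed values, consistent with a normal reddening law).
K_VEC = np.array([0.107, 0.063])

def nicer_extinction(colors_obs, mean_colors, cov_scatter, cov_phot):
    """Minimum-variance, unbiased estimate of A_V from both NIR colors.

    The weight vector b solves C b = lambda k with k . b = 1, i.e.
    b = C^-1 k / (k^T C^-1 k), where C is the total color covariance.
    """
    cov = np.asarray(cov_scatter) + np.asarray(cov_phot)
    b = np.linalg.solve(cov, K_VEC)          # C^-1 k
    b = b / (K_VEC @ b)                      # normalize so that k . b = 1
    av = b @ (np.asarray(colors_obs) - np.asarray(mean_colors))
    var = b @ cov @ b                        # variance of the estimate
    return av, var

# Missing J band: assign J a huge error so that c1 = J-H is ignored;
# the estimator then falls back to the H-K color alone.
huge = 1.0e6
cov_phot_no_j = np.array([[huge**2 + 0.04**2, -0.04**2],
                          [-0.04**2,          0.04**2 + 0.05**2]])
av, var = nicer_extinction([0.5, 0.193], [0.6, 0.13],
                           np.diag([0.01, 0.01]), cov_phot_no_j)
```

With the huge error on J, the weight collapses onto c2 = H - K and the estimate reduces to the single-color NICE result.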
So far we have considered only a single star. In order to obtain a smooth extinction map with high signal-to-noise ratio, NICER applies a spatial smoothing to angularly close stars. The smoothing is performed using three different techniques, described below in individual subsections.
The choice of the smoothing technique is important for two main reasons: (i) The final map has a signal-to-noise ratio that strongly depends on the smoothing used; (ii) The smoothing is responsible for the selection of background stars. As already pointed out, NICER tries to deal with both problems at the same time.
The simplest way to obtain a smooth mass map from the individual estimates of AV is to use a weighted mean. More precisely, we use the estimates obtained for stars close to a direction theta on the sky to estimate the extinction there. Denoting by theta_n the position on the sky of the nth star, by A_V^(n) its estimated extinction, and by sigma_n^2 the corresponding estimated variance, we write

A_V(theta) = sum_n W_n A_V^(n) / sum_n W_n ,   with   W_n = W(theta - theta_n) / sigma_n^2 ,

where W(theta - theta_n) is a spatial weight function (e.g., a Gaussian) centered on the direction theta.
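The inverse-variance, spatially weighted mean and its expected variance can be sketched as follows (NumPy; positions are in arbitrary angular units, and the Gaussian spatial window is an illustrative choice):

```python
import numpy as np

def weighted_mean_map(theta, star_pos, star_av, star_var, fwhm):
    """Gaussian-weighted mean extinction at sky position theta.

    Each star enters with weight W_n = W(theta - theta_n) / sigma_n^2:
    a spatial Gaussian divided by the star's estimated variance.
    """
    sigma_w = fwhm / 2.355                      # Gaussian FWHM -> sigma
    star_av = np.asarray(star_av, float)
    star_var = np.asarray(star_var, float)
    d2 = np.sum((np.asarray(star_pos, float) - theta) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma_w**2) / star_var
    av = np.sum(w * star_av) / np.sum(w)
    # expected variance of the weighted mean estimate
    var = np.sum(w**2 * star_var) / np.sum(w) ** 2
    return av, var

# Two stars exactly at theta with equal variance average to the midpoint.
av, var = weighted_mean_map(np.zeros(2), [[0.0, 0.0], [0.0, 0.0]],
                            [1.0, 3.0], [1.0, 1.0], fwhm=1.0)
```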
So far we have assumed that all stars are background objects with respect to the cloud. In reality, some of them could be in the foreground and could thus contaminate our extinction estimate. If the number of foreground stars is not too large, we can ignore them and proceed as if all stars were in the background. Because of the linearity of the estimator (14), we can correct for the dilution of the signal a posteriori by rescaling the map according to the estimated fraction of background stars. Actually, this simple technique is often not very efficient and can also introduce biases for dense clouds (see below Sect. 3.2.4). For this reason, it might be preferable to use the other smoothing techniques described below.
Foreground stars do not contribute to the signal, but still contribute to the noise. Hence, it is desirable to exclude from the average (14) those stars that are suspected to be in the foreground because of their low extinction. This can easily be achieved by using a sigma-clipped mean or a median estimator (described below in Sect. 3.2.3).
In order to implement sigma-clipping, we make a first estimate of AV using Eq. (14) and calculate the expected variance from Eq. (16). Then we repeat the entire process using only those stars whose individual estimates do not differ from the first estimate of AV, evaluated at the star's position, by more than a fixed multiple of the expected error. This procedure quickly converges to a pair of values, the measured extinction and the related error, which we take as fiducial values.
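An iterative clipped estimator along these lines might look as follows (the 3-sigma threshold and the input values are illustrative; a production version would also weight by the spatial window):

```python
import numpy as np

def sigma_clipped_mean(av, var, n_sigma=3.0, max_iter=20):
    """Inverse-variance weighted mean with iterative sigma-clipping.

    Stars whose estimates differ from the current mean by more than
    n_sigma times their own error are discarded, and the mean is
    recomputed until the set of retained stars stops changing.
    """
    av = np.asarray(av, float)
    var = np.asarray(var, float)
    keep = np.ones(av.size, dtype=bool)
    for _ in range(max_iter):
        w = 1.0 / var[keep]
        mean = np.sum(w * av[keep]) / np.sum(w)
        new_keep = np.abs(av - mean) <= n_sigma * np.sqrt(var)
        if np.array_equal(new_keep, keep) or not np.any(new_keep):
            break                              # converged (or all clipped)
        keep = new_keep
    return mean, keep

# The mild outlier at A_V = 2.0 (a likely foreground star) is clipped.
mean, keep = sigma_clipped_mean([1.0, 1.1, 0.9, 2.0], [0.04] * 4)
```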
In order to verify the goodness of our procedure, we can evaluate the observed variance of the data:

sigma^2(theta) = sum_n W_n [ A_V^(n) - A_V(theta) ]^2 / sum_n W_n ,

which can be compared with the expected variance of Eq. (16).
Alternatively, we can use a weighted median in order to make the extinction map. This quantity is defined as the solution A_V^med of the non-linear equation

sum_n W_n sign( A_V^med - A_V^(n) ) = 0 .
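The weighted median defined by this equation can be computed by sorting the estimates and finding where the cumulative weight reaches half the total; a minimal sketch:

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the value at which the summed weight below
    balances the summed weight above (zero of sum_n W_n sign(m - A_n))."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    w = np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    idx = np.searchsorted(cum, 0.5 * cum[-1])
    return v[idx]

# Robust to the outlier at 100 (e.g., a spuriously reddened source).
m = weighted_median([1.0, 2.0, 3.0, 100.0], [1.0, 1.0, 1.0, 1.0])
```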
The use of the median has several advantages with respect to the simple mean. Probably the most interesting is the fact that the median is a robust estimator, since it is basically unaffected by outliers. This property is particularly useful in our case, since it provides an efficient method to remove foreground objects from our map. On the other hand, the median has the significant disadvantage of noise properties that are difficult to characterize. We also note that the implementation of the median is not as computationally efficient as that of the mean, but this should be considered a minor problem.
In the previous subsections we have described the three simple smoothing techniques used by NICER. An important point to note is that there is not really a single best method to perform the spatial smoothing. In some conditions, in fact, subtle problems may make each technique described above inappropriate, noisy, or even biased.
As already noted by Thoraval et al. (1997), the simple or weighted mean (as described in Sect. 3.2.1) can be severely biased in high-extinction regions. In fact, because of extinction, the density of background stars decreases as the cloud column density increases, while the density of foreground stars is left unchanged. As a result, in the dense regions foreground stars represent a larger fraction of the local number of stars than in less dense regions. In this way, we could underestimate the column density in the central regions of clouds. Such a bias can generally be ignored for very nearby clouds. In other cases, the bias might be reduced using the median or sigma-clipping methods.
On the other hand, substructure on the scale of the weighting function can also be a problem. In the case of the simple weighted mean, the smoothing procedure is straightforward to understand (at least at the simple level kept in the discussion here; see however Lombardi & Schneider 2001): the final map is expected to be the original, true extinction of the cloud convolved with the weight function used. In the case of sigma-clipping or median smoothing, things are much more complicated. Suppose, for example, that the cloud does not present any significant substructure on the scale of the weighting function, except for small "holes'' with very low absorption. In this case, we would identify stars observed through these holes as foreground objects, although they are in reality in the background. As a result, we would completely miss the holes, and thus overestimate the cloud column density. Similarly, for a cloud with substructure consisting of dense globules, we would underestimate the column density (note that both sigma-clipping and the median are "symmetric,'' in the sense that they discard outliers with both large and small AV).
In summary, each smoothing technique has advantages and disadvantages (this is the ultimate reason to include all of them in NICER). A key role is played by the fraction of foreground stars (and thus the distance of the cloud), and by the substructure of the cloud. It is in any case always advisable to use all smoothing techniques and to check the consistency of the maps obtained.
The Two Micron All Sky Survey (2MASS) offers a unique opportunity to test our algorithm. Conceived about a decade ago (Kleinmann et al. 1994), the 2MASS project is an all-sky survey carried out with two 1.3-m telescopes in the J, H, and Ks bands. Although the survey has been completed, only part of the sky is at present publicly available in the Second Incremental Release.
We selected from the 2MASS archive a large area around the Orion and Mon R2 nebulae, defined by a range of galactic coordinates.
|Figure 3: Top: color-color diagram for stars in the control field. The ellipse represents a confidence region. Bottom: color-color diagram for stars in the central region of the cloud. The reddening due to the dust in the cloud significantly stretches the locus of stars in the plot.
For each point source in this coordinate range we retrieved the magnitudes in the three bands (fields x_mag), the relative total errors (x_msigcom), and some additional flags used to assess the quality and reliability of each detection (cc_flg and rd_flg). Possible artifacts (objects flagged in cc_flg) were excluded from the analysis. We then applied our dedicated pipeline, as described in the following sections.
Given the area on the sky and the number of stars, the pipeline "suggested'' performing the analysis on a grid of points with a scale of 2.4' per pixel and with Gaussian smoothing. By default, the pipeline uses a FWHM of 2 pixels and about 15 "effective'' stars per point (i.e., about 15 stars contribute to the signal at each point). We then made a preliminary analysis using standard values for the average colors and color scatters, thus obtaining a first extinction map for the region. The map obtained was used only as a first check of the basic parameters chosen, to identify a control field, and to obtain the photometric properties of stars in our sample. In particular, inside the selected boundaries we found a region in the South-East of the map which does not show any significant extinction and which has thus been selected as the control field (see Fig. 4). We then used the colors of stars in this control field to set the photometric parameters needed by our algorithm, namely the average colors of stars and their scatter (or, more precisely, the covariance matrix, which also describes the correlations between colors). This step was performed in our pipeline by flagging stars inside the control region (which can be defined using the standard SAOimage format for region files) and by performing a statistical analysis of the photometric data of the flagged stars. Note that here only the 2MASS photometric data were used, and not the preliminary map (this map was used only to select the control field). As a result, an inaccurate choice of the initial constants of Eq. (6) used to make the preliminary map had no influence on the statistical parameters obtained.
|Figure 4: The total region mapped using NICER. The cuts used in AV emphasize the faint, diffuse halos around the main cores. The low noise of this map allows us to detect an AV = 0.5 extinction with high confidence. The higher noise observed in the southern part is due to the lower star density.
We observe that the use of a large survey such as 2MASS has the significant advantage, with respect to normal observations, of letting us study in detail the halos of molecular clouds. As a result, the choice of the control field is very robust, in the sense that even regions with extremely low extinction can be avoided. We also stress that this is possible because of the availability of data over large areas. Figure 3 shows the color dispersion of stars in the control field, together with the elliptical confidence region.
|Figure 5: Zoom of the central region of the NICER extinction map. The cuts in this map emphasize the faint, diffuse halos around the main cores. Contours are displayed at regularly spaced AV levels. Note that the maximum value obtained for AV is 17.7 magnitudes.
|Figure 6: The noise estimate (cf. Eq. (16)) for the central region shown in Fig. 5. The average noise ranges from about 0.15 at the top of the figure to about 0.25 at the bottom (where the density of background stars is smaller because of the increase in |b|). In the white regions the error diverges because of the lack of data. Increased noise is observed in high absorption regions (because of the decrease in the density of stars), and around bright stars at the top of the figure (which have been masked by the 2MASS pipeline). The white dot on the right is due to the bright star zeta Ori (V = 1.79 mag). Except for these features, the overall noise level is rather uniform.
|Figure 7: The same region shown in Fig. 5, but with the extinction map obtained using the standard method (Lada et al. 1994). A quantitative analysis of the noise, carried out in the regions with the lowest column density, shows that the variance of this map is about a factor of two larger than that of the map shown in Fig. 5. Note also that, although the contour levels are the same as in Fig. 5, they appear much less smooth. This is another indication of the higher noise level of this map (rather than of intrinsic cloud structure; note that the resolution is the same for all images).
Figures 4 and 5 show the result obtained using Gaussian smoothing with sigma-clipping (the other smoothing schemes give similar results, and the differences cannot generally be seen by eye). Note that the use of our optimized method, combined with the quality of the 2MASS data, allows us to detect at an unprecedented level of detail the faint, extended halos in which the main cores are embedded. At the smoothing scale used, we reach an excellent sensitivity of AV = 0.13 magnitudes.
Since the focus of this paper is mainly on the optimized reconstruction method, we will defer a discussion of the results obtained on this and similar cloud complexes to a future paper. However, we want to point out a couple of peculiar features of the map obtained. First, we detect a single region with significantly bluer color (see Fig. 8, right). This region corresponds to the open cluster NGC 2204, whose blue stars contaminate our map, thus mimicking a "negative'' extinction. Note also the cometary structures observed south of the Orion cloud (Fig. 8, left). In these condensations we observe extinction AV > 6 on scales close to our smoothing length. Clearly, it would be worthwhile to study these regions at higher resolution.
In order to better appreciate the advantages of the optimized technique described in this paper, we also carried out the analysis of the same field using only the weighted average of the H - K color for stars with relatively low photometric errors (we required the errors on H and K to be smaller than 0.15 magnitudes). Figure 7 shows the result obtained with this standard method, and should be compared with Fig. 5.
In this paper we have presented an optimized technique to produce highly accurate extinction maps from multi-band near-infrared photometric data. The method, which is a natural generalization of the near-infrared color excess method of Lada et al. (1994), is able to produce significantly less noisy (and thus more accurate) extinction maps by taking advantage of all the bands available. A first application of this new technique to 2MASS data has shown an improvement over the standard NICE algorithm of a factor of 2 in the noise variance. In this way, we have been able to detect extended diffuse halos down to very low extinction levels.
We thank C. Lada and L. King for helpful discussions and observations on this work. We also thank the referee, Doug Johnstone, for useful suggestions that improved the paper. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation.
|Figure 8: Left: peculiar cometary structures are observed south of the main cloud. Right: the only significant "negative'' extinction corresponds to NGC 2204, an open cluster in the Galactic halo (Hawarden 1976), whose average color is bluer than that of the average galactic field.