A&A, Volume 518 (July-August 2010): Herschel: the first science highlights
Article Number: L103
Number of page(s): 7
Section: Letters
DOI: https://doi.org/10.1051/0004-6361/201014668
Published online: 16 July 2010
Online Material
Figure 2: Composite 3-color images (left) of the sub-fields of Aquila (top) and Polaris (bottom) produced from the high-contrast ``single-scale'' decompositions (red comes from all SPIRE bands, green and blue correspond to the PACS bands at 160 and 70 micron).
Appendix A: Extraction techniques
A.1 Existing source extraction algorithms
Here we summarize (very briefly) the concepts of different techniques, to place getsources (described in Sect. 3) in a wider context. Although the algorithms attempt to solve the same source-extraction problem, they originated from different ideas and were developed for use in different areas of astronomy; their performance for a specific project must therefore be tested carefully before an appropriate method is chosen.
Stutzki & Guesten (1990)'s gaussclumps
(originally created for position-velocity cubes) fits a Gaussian
profile to the brightest peak, subtracts the fit from the image, then
fits a new profile to the brightest peak in the residual image,
iterating until some termination criteria are met. Williams et al. (1994)'s clumpfind
contours an image at a number
of levels, starting from the brightest peak in the image and descending
to a minimum contour level, marking as clumps along the way all
connected areas of pixels above the current contour level. Bertin & Arnouts (1996)'s sextractor
estimates and subtracts background, then uses thresholding to find
objects, deblends them if they overlap, and measures their positions
and sizes
using intensity moments. CUPID's reinhold
identifies pixels within the image which mark the edges of clumps of
emission, producing a set of rings around the clumps. After cleaning
noise effects on the edges, all pixels within each ring are assumed to
belong to a single clump. CUPID's fellwalker ascends image
peaks by following the line of steepest ascent, treating every pixel
in the image as a starting point for a walk to a significant peak and
marking all visited pixels along the way with a clump identifier. Motte et al. (2007)'s mre-gcl combines cloud-filtering techniques based on wavelet multi-resolution algorithms (e.g., Starck & Murtagh 2006) with gaussclumps. Molinari et al. (2010)'s derivatives
analyzes multi-directional second derivatives of the original image and
performs curvature thresholding to isolate compact objects, then fits
variable-size elliptical Gaussians (adding also a planar background) at
their positions. Another method that defines cores in terms of
connected pixels is csar, which was developed for use with BLAST and Herschel (Harry et al. 2010, in preparation).
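The peak-climbing idea behind fellwalker can be illustrated with a minimal steepest-ascent labelling sketch. This is a simplified stand-in, not the CUPID implementation: the real algorithm adds noise thresholds, minimum dips between neighbouring peaks, and clump merging, and the function name here is hypothetical.

```python
import numpy as np

def steepest_ascent_labels(image):
    """Label each pixel with the clump of the local peak it reaches by
    repeatedly stepping to the brightest pixel of its 3x3 neighbourhood
    (a fellwalker-like sketch without noise handling or clump merging)."""
    ny, nx = image.shape
    labels = np.zeros((ny, nx), dtype=int)
    peaks = {}                              # peak pixel -> clump id
    for start in np.ndindex(ny, nx):
        path, (y, x) = [], start
        while labels[y, x] == 0:
            path.append((y, x))
            # brightest pixel in the 3x3 window around (y, x)
            y0, y1 = max(y - 1, 0), min(y + 2, ny)
            x0, x1 = max(x - 1, 0), min(x + 2, nx)
            window = image[y0:y1, x0:x1]
            dy, dx = np.unravel_index(np.argmax(window), window.shape)
            step = (y0 + dy, x0 + dx)
            if step == (y, x):              # local maximum: a new peak
                peaks.setdefault((y, x), len(peaks) + 1)
                labels[y, x] = peaks[(y, x)]
                break
            y, x = step
        for p in path:                      # mark every visited pixel
            labels[p] = labels[y, x]
    return labels
```

Because every walk stops as soon as it touches an already-labelled pixel, each pixel of the image is climbed at most once in full, which keeps the sketch close to linear in the number of pixels.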
Whereas clumpfind, reinhold, fellwalker, and csar merely partition the image between objects, not allowing them to overlap, gaussclumps, sextractor, and mre-gcl can deblend overlapping objects, which is essential for obtaining correct results in crowded regions. None of the methods was designed to handle multi-wavelength data, making it necessary to match the catalogs obtained at different wavelengths using an association radius as a parameter.
A.2 More details on the new method
In getsources the extraction of objects is performed in each
of the combined detection images by going from the smallest to the
largest scales and finding segmentation masks of the objects at each
scale using the tint fill algorithm (Smith 1979).
The masks are the areas of connected pixels in a segmentation image,
and the algorithm fills the pixels' values with the number of a
detected object and allows tracking of all pixels belonging to the
object across all scales. The segmentation masks expand toward larger
scales, and the evolution of each object's mask is followed, as are the
appearance of new objects at any scale and the disappearance of those that
become too faint at the current and larger scales. When two or more
objects touch each other in a single-scale
image, the segmentation masks are not allowed to overlap, but
overlapping does happen between objects of different scales. The
largest extent of any source defines its footprint,
and this is determined at the scale where the object's contrast above
the cut-off level is at maximum. The scale itself provides an initial
estimate for the object's FWHM size.
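The per-scale mask construction can be sketched as a simple flood fill in the spirit of the tint fill algorithm. The function and its arguments are hypothetical illustrations, not the getsources interface; real single-scale segmentation also tracks each mask across scales as described above.

```python
from collections import deque
import numpy as np

def fill_mask(detection, threshold, seed, segmentation, obj_id):
    """Flood-fill ('tint fill') sketch: starting from a seed pixel, mark
    every 8-connected pixel above the threshold with the object's number
    in the segmentation image, so all of the object's pixels at this
    scale can be tracked by that number."""
    ny, nx = detection.shape
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < ny and 0 <= x < nx):
            continue                        # outside the image
        if segmentation[y, x] != 0 or detection[y, x] < threshold:
            continue                        # already filled, or too faint
        segmentation[y, x] = obj_id
        for dy in (-1, 0, 1):               # enqueue the 8 neighbours
            for dx in (-1, 0, 1):
                queue.append((y + dy, x + dx))
    return segmentation
```

An explicit queue (rather than recursion) keeps the fill robust for the large connected areas that appear at the largest spatial scales.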
The positions of sources are computed from the first moments of
intensities in a combined detection image over a range of single scales,
from the scale at which an object first appeared up to the scale twice as large.
The objects' sizes are computed from the first and second intensity
moments in the original background-subtracted
image. The background subtraction is done by linearly interpolating
pixel intensities off the observed image under the footprints, in the
four main directions (two axes and two diagonals), based on the pixel
values just outside the footprints. Our iterative deblending algorithm
employs two-dimensional shapes with peak intensities and sizes of the
extracted objects in order to divide the intensity of a pixel between
surrounding objects according to the fraction of the shapes'
intensities at the pixel. For the shapes we adopted a
two-dimensional analog of the Gaussian-like Moffat (1969) function,
of the form [1 + (r/alpha)^2]^(-beta).
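The proportional division of pixel intensities between overlapping shapes can be sketched as below. The exponent beta = 2 is an illustrative assumption, not necessarily the value used by getsources, and both function names and their parameters are hypothetical.

```python
import numpy as np

def moffat(r2, fwhm, beta=2.0):
    """Circular Moffat (1969) profile as a function of squared radius,
    normalized to 1 at the center; alpha is set so that the profile
    reaches half-maximum at fwhm/2."""
    alpha2 = fwhm**2 / (4.0 * (2.0**(1.0 / beta) - 1.0))
    return (1.0 + r2 / alpha2) ** (-beta)

def deblend_fractions(shape, positions, peaks, fwhms):
    """Split each pixel's intensity between overlapping objects in
    proportion to the values of their shape functions at that pixel
    (the weighting idea described in the text)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    profiles = np.array([
        peak * moffat((yy - y0)**2 + (xx - x0)**2, fwhm)
        for (y0, x0), peak, fwhm in zip(positions, peaks, fwhms)
    ])
    total = profiles.sum(axis=0)
    total[total == 0.0] = 1.0      # guard against division by zero
    return profiles / total        # per-object fraction of each pixel
```

By construction the fractions at every pixel sum to unity, so the deblended images of all objects add up to the original (background-subtracted) image.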
The end result of the processing is an extraction catalog (one
line per object) containing coordinates of all detections (independent
of wavelength)
and estimates of the objects' S/N ratios,
peak and total fluxes (with their uncertainties), and sizes and
orientations for each wavelength. In addition, getsources
produces catalogs of all possible colors, as well as the
azimuthally-averaged intensity profiles (their full,
background-subtracted, and deblended versions) and deblended images for
each object.