A&A, Volume 634, February 2020, Article Number A48 (24 pages)
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201936345
Published online: 05 February 2020

© M. Paillassa et al. 2020

Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Catalogs extracted from astronomical images are at the heart of modern observational astrophysics. Minimizing the number of spurious detections in these catalogs has become increasingly important because the noise added by such contaminants can, in many cases, compromise the scientific objectives of a survey. Properly identifying and flagging spurious detections yields substantial scientific gains, but it is complicated by the numerous types of contaminants that pollute images. Some stem from the detector electronics (e.g., dead or hot pixels, persistence, saturation), from the optics (diffraction along the optical path, scattered and stray light), or from post-processing (e.g., residual fringes), while others are the result of external events (cosmic rays, satellites, tracking errors). The amount of data produced by modern astronomical surveys makes visual inspection impossible in most cases. For this reason, developing fully automated methods to separate contaminants from true astrophysical sources is a critical issue in modern astronomical survey pipelines.

Most current pipelines rely on a fine prior knowledge of their instruments to detect and mask electronic contaminants (e.g., Bosch et al. 2018; Morganson et al. 2018) and, to some extent, optical contaminants (e.g., Kawanomoto et al. 2016a,b). Cosmic ray hits can be identified by rejecting outliers in the timeline, provided that multiple consecutive exposures are available, or by using algorithms sensitive to their peculiar shapes, such as Laplacian edge detection (e.g., LA Cosmic, van Dokkum 2001) or wavelets (e.g., Ordénovic et al. 2008). The Radon transform or the Hough transform have often been used to detect streaks caused by artificial satellites or planes in images (e.g., Vandame 2002; Nir et al. 2018).

In this work, we want to overcome some of the drawbacks of the above-mentioned methods. First, the typical data volume produced by modern surveys requires that the software be largely unsupervised and as efficient as possible. Second, we aim to develop a robust and versatile tool for the community at large and therefore want to avoid the pitfall inherent in software tailored to a single instrument or a handful of instruments, without compromising on performance. Third, we would like a unified tool able to detect many contaminants at once. Finally, we want to assign to each pixel a probability of belonging to a given contaminant class rather than Boolean flags. These constraints led us to choose machine learning techniques, in particular supervised learning and convolutional neural networks (CNNs).

Supervised learning is a field of machine learning dealing with models that can learn regression or classification tasks based on a data set containing the inputs and the expected outputs. During the learning process, model parameters are adjusted iteratively to improve the predictions made from the input data. The learning procedure itself consists of minimizing a loss function that measures the discrepancy between model predictions and the expected values. Minimization is achieved through stochastic gradient descent. We recommend Ruder (2016) for an overview of gradient descent based optimization algorithms.

Convolutional neural networks (LeCun & Bengio 1995) are particularly well suited for identifying patterns in images. Unlike previous approaches that involve hand-crafted feature detectors, such as SIFT descriptors (Lowe 1999), CNN models operate directly on pixel data. This is made possible by the use of trainable convolution kernels to detect features in images. Convolution is shift-equivariant, which allows the same features to be detected at any image location.

CNNs are now widely used in various computer vision tasks, including image classification, that is assigning a label to a whole image (Krizhevsky et al. 2012; Simonyan & Zisserman 2014; Szegedy et al. 2015), and semantic segmentation, that is assigning a label to each pixel (Long et al. 2015; Badrinarayanan et al. 2017; Garcia-Garcia et al. 2017).

In this work, we propose to identify contaminants using both image classification and semantic segmentation.

In the following, we first describe the images that we used and how we built our data sets. Then, we focus on the neural network architectures that we used. Finally, we evaluate the models' performance on test sets and on real data.

2. Data

In this section we describe the data used to train our two neural networks. We distinguish between two types of contaminants. On the one hand, local contaminants affect only a fraction of the image, at specific locations. These include cosmic rays, hot columns and lines, dead columns and lines, dead clustered pixels, hot pixels, dead pixels, persistence, satellite trails, residual fringe patterns, “nebulosity”, saturated pixels, diffraction spikes, and overscan pixels, adding up to 12 classes. On the other hand, global contaminants, such as tracking errors, affect the whole image.

2.1. Local contaminant data

For local contaminants, we choose to build training samples by adding defects to uncontaminated images in order to have a ground truth for each contaminant. In this section we first describe the library of astronomical images used for our analysis, then focus on the selection of uncontaminated images, and finally describe the way each contaminant is added.

2.1.1. Library of real astronomical images

In an effort to have the most realistic dataset, we choose to use real data as much as possible and take advantage of the private archive of wide-field images gathered for the COSMIC-DANCE survey (Bouy et al. 2013). The COSMIC-DANCE library offers several advantages. First, it includes images from many past and present optical and near-infrared wide-field cameras. Images cover a broad range of detector types and ground-based observing sites, ensuring that our dataset is representative of most modern astronomical wide-field instruments. Table 1 gives an overview of the properties of the cameras used to build the image database. Second, most problematic exposures featuring tracking/guiding loss, defocusing or strong fringing were already identified by the COSMIC-DANCE pipeline, providing an invaluable sample of real problematic images.

Table 1.

Instruments used in this study.

In all cases except for Megacam, DECam, UKIRT and HSC exposures, the raw data and associated calibration frames were downloaded and processed using standard procedures with an updated version of Alambic (Vandame 2002), a software suite developed and optimized for the processing of large multi-chip imagers. In the case of Megacam, the exposures processed and calibrated with the Elixir pipeline were retrieved from the CADC archive (Magnier & Cuillandre 2004). In the case of DECam, the exposures processed with the community pipeline were retrieved from the NOAO public archive (Valdes et al. 2014). UKIRT exposures processed by the Cambridge Astronomical Survey Unit were retrieved from the WFCAM Science Archive. Finally, the HSC raw images were processed using the official HSC pipeline (Bosch et al. 2018). In all cases, a bad pixel map is associated with every individual image. In the case of DECam and HSC, a data quality mask is also associated with each individual image and provides integer-value codes for pixels that are not scientifically useful or that are suspect, including in particular bad pixels, saturated pixels, cosmic ray hits, and satellite tracks. All the images in the following consist of individual exposures and not co-added exposures.

2.1.2. Non-contaminated images

None of the exposures in our library are defect-free. The first step to create the non-contaminated dataset to be used as “reference” images consists in identifying the cleanest possible subset of exposures. CFHT-Megacam (u, r, i, z bands), CTIO-DECam (g, r, i, z, Y bands) and Subaru-HSC (g, r, i, z, y bands) exposures are found to have the best cosmetics and are selected to create the non-contaminated dataset. The defects inevitably present in these images are handled as follows.

First, dead pixels and columns are identified from flat-field images and inpainted using Gaussian interpolation (e.g., Williams et al. 1998). Then, the vast majority of cosmic rays are detected using the Astro-SCRAPPY Python implementation (McCully et al. 2018) of LA Cosmic (van Dokkum 2001) and also inpainted using Gaussian interpolation. Finally, given the high performance of the DECam and HSC pipelines, the corresponding images are perfect candidates for our non-contaminated datasets. These two pipelines not only efficiently detect but also interpolate problematic pixels (in particular saturated pixels, hot and bad pixels, cosmic ray hits). Such interpolations being a feature of several modern pipelines (e.g., various NOAO pipelines, but also the LSST pipeline), we choose to treat these pixels as regular pixels so that the networks are able to work with images originating from such pipelines.

Patches of size 400 × 400 pixels are randomly extracted from the cleaned images. 75% of them are used to generate training data and the remaining 25% for test data.

The final non-contaminated dataset includes 50 000 individual images, ensuring that we have a sufficiently diverse and large amount of training data for our experiment.

A non-representative training set can severely impact the performance of a CNN and result in significant biases in the classification task. To prevent this, we measure a number of basic properties describing prototypical aspects of ground-based astronomical images to verify that their distributions in the uncontaminated dataset are wide enough and reasonably well sampled.

The measured properties include, for example, the average full-width at half-maximum (FWHM) of point sources, estimated in each image using PSFEX (Bertin 2013). This allows us to ensure that the training set covers a broad range of ambient (seeing) conditions and point spread function (PSF) samplings. Also, the source density (number of sources in the image divided by the physical size of the image) is measured to make sure that our training set encompasses a broad range of source crowding, from sparse cosmological fields to dense, low-galactic-latitude stellar fields.

Additionally, the background is modeled in all the images following the method used by SEXTRACTOR (Bertin & Arnouts 1996), i.e., using a combination of κσ clipping and mode estimation. The background model provides important parameters such as the standard deviation of the background, which is required in most of the data-processing operations that follow.

2.1.3. Cosmic rays (CR)

“Cosmic ray” hits are produced by particles hitting the detector or by the photons resulting from the decay of radioactive atoms near the detector. They appear as bright and sharp patterns with shapes ranging from dots affecting one or two pixels to long wandering tracks commonly referred to as “worms”, depending on incidence angle and detector thickness.

We create a library of real CRs using dark frames with long exposure times from the CFH12K, HSC, MegaCam, MOSAIC, and OmegaCam cameras. These cameras comprise both “thick”, red-sensitive, deep-depletion charge-coupled devices (CCDs), more prone to long worms, and thinner, blue-sensitive devices, more prone to unresolved hits. Dark frames are exposures taken with the shutter closed, so that the only contributors to the content of undamaged pixels are the offset, dark current, and CR hits (plus Poisson and readout noise). A mask M of the pixels affected by CR hits in a given dark frame D can therefore easily be generated by applying a simple detection threshold. We conservatively set this threshold to 3σD above the median value mD of D:

$$ \forall p,\ M_{p} = \begin{cases} 1 & \text{if } D_{p} > m_{D} + 3\sigma_{D} \\ 0 & \text{otherwise}. \end{cases} $$(1)

Among all the dark images used, a bit more than 900 million cosmic ray pixels are detected after thresholding. Considering that the average footprint area of a cosmic ray hit is 15 pixels, this represents a richly diversified population of about 60 million cosmic ray “objects”.

Next we dilate M with a 3 × 3 pixel kernel to create the final M(D) mask. This mask is used both as ground truth for the classifier, and also to generate the final “contaminated” image C by adding CR pixels with rescaled values to the uncontaminated image U:

$$ \boldsymbol{C} = \boldsymbol{U} + k_{C}\,\frac{\sigma_{U}}{\sigma_{D}}\, \boldsymbol{D} \odot \boldsymbol{M}^{(D)}, $$(2)

where σU is the estimated standard deviation of the uncontaminated image background, ⊙ denotes the element-wise product and kC is a scaling factor empirically set to 1/8. D has been background-subtracted before this operation, using a SEXTRACTOR-like background estimation.
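For illustration, the following minimal Python sketch reproduces Eqs. (1) and (2) under simplifying assumptions: `clean` and `dark` are placeholder 2D NumPy arrays, `sigma_u` is the clean-image background noise, and a simple median/standard deviation stands in for the SEXTRACTOR-like background estimation.

```python
import numpy as np
from scipy import ndimage

def add_cosmic_rays(clean, dark, sigma_u, k_c=1.0 / 8.0):
    """Sketch of Eqs. (1)-(2): flag CR pixels in a dark frame and inject
    them, rescaled, into an uncontaminated image."""
    m_d = np.median(dark)
    sigma_d = np.std(dark)  # crude stand-in for the dark background noise
    # Eq. (1): pixels brighter than the median + 3 sigma are flagged as CR hits
    mask = dark > m_d + 3.0 * sigma_d
    # Dilation with a 3x3 kernel gives the final mask M^(D)
    mask = ndimage.binary_dilation(mask, structure=np.ones((3, 3), bool))
    # Eq. (2): add the background-subtracted, rescaled CR pixels
    contaminated = clean + k_c * (sigma_u / sigma_d) * (dark - m_d) * mask
    return contaminated, mask
```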

A typical CR hit added to an image and its ground truth mask are shown in Fig. 1.

thumbnail Fig. 1.

Examples of contaminants and their ground truth. Top row: cosmic ray hits, hot columns, bad columns. Bottom row: bad lines, persistence, satellite trails.

2.1.4. Hot columns and lines, dead columns, lines, and clustered pixels, hot pixels, and dead pixels (HCL, DCL, HP, DP)

These contaminants mainly come from electronic defects and the way the detectors are read. They correspond to pixels having a response very different from that of neighbors, either much lower (bad pixels, traps) or much noisier (hot pixels). These blemishes can be found as single pixels, in small clusters, or affecting a large fraction of a column or row. We treat single pixels and clumps, columns, and lines separately, although they may often share a common origin.

All these hot or dead pixels added to the uncontaminated images are simulated. The number of these pixels is set as follows.

For columns and lines, a random number of columns and lines is drawn uniformly in [1, 4]. Each column or line has a length drawn uniformly between 30 pixels and the full image height or width, and a thickness drawn uniformly in [1, 3]. For isolated pixels, a random fraction of pixels between 0.0002 and 0.0005 is drawn uniformly; these pixels are uniformly distributed over the image. Clustered pixels are given a rectangular or a random convex polygonal shape. The random convex shapes are constrained to have 5 or 6 edges and to fit in 20 × 20 pixel bounding boxes.

The values of these pixels are computed as follows. For hot values, a uniformly distributed random base value v is chosen in the interval [15σU, 100σU]. Hot values are then generated according to the normal law 𝒩(v, (0.02v)2), so that they are randomly scattered over [0.9v, 1.1v]. For dead values, one of the following three equiprobable recipes is chosen at random: either all values are exactly 0; or values are generated according to the normal law 𝒩(0, (0.02σU)2), so that they are close to 0 but not exactly 0; or a random base value v is chosen with a uniform distribution in the interval [0.1mU, 0.7mU], where mU is the median of the uncontaminated image sky background, and the dead pixel values are generated using the normal law 𝒩(v, (0.02v)2), so that they fall in the interval [0.9v, 1.1v].
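The recipes above can be summarized by the following illustrative sketch (function and variable names are ours; `rng` is a NumPy random generator):

```python
import numpy as np

rng = np.random.default_rng()

def hot_values(n, sigma_u):
    """Hot-pixel recipe: base value v drawn uniformly in [15, 100] * sigma_u,
    then n values drawn from N(v, (0.02 v)^2)."""
    v = rng.uniform(15 * sigma_u, 100 * sigma_u)
    return rng.normal(v, 0.02 * v, size=n)

def dead_values(n, sigma_u, m_u):
    """One of the three equiprobable dead-pixel recipes."""
    recipe = rng.integers(3)
    if recipe == 0:                        # exactly zero
        return np.zeros(n)
    if recipe == 1:                        # close to, but not exactly, zero
        return rng.normal(0.0, 0.02 * sigma_u, size=n)
    v = rng.uniform(0.1 * m_u, 0.7 * m_u)  # fraction of the sky level
    return rng.normal(v, 0.02 * v, size=n)
```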

Examples of such column and line defects are shown in Fig. 1.

2.1.5. Persistence (P)

Persistence occurs when overly bright pixels in a previous exposure leave a remnant image in the following exposures.

To simulate this effect in an uncontaminated image, we apply the so-called “Fermi model” described in Long et al. (2015). Persistence, in units of e− s−1, is modeled as a function of the initial pixel level xp and time t:

$$ f(x_{p}, t) = A_{p}\left(\frac{1}{\exp\left(-\frac{x_{p}-x_0}{\delta x}\right)+1}\right) \left(\frac{x_{p}}{x_0}\right)^{\alpha} \left(\frac{t}{1000}\right)^{-\gamma}. $$(3)

The goal of Long et al. (2015) was to fit the model parameters x0, δx, α, and γ on observations, in order to later predict persistence for their detector. In our simulations, parameter values are randomized to represent various types and amounts of persistence (see Table 2). To compute the pixel value of the persistence effect, we derive the number of electrons emitted by the persistence effect during the exposure. In the following, we denote by T the duration of the exposure in which the persistence effect occurs, and by Δt the delay between that exposure and the previous one. We obtain the number of ADUs collected at pixel p during the interval [Δt, Δt + T] by integrating Eq. (3) and dividing by the gain G:

$$ P_{p} = \frac{1}{G}\int_{\Delta t}^{\Delta t + T} f(x_{p}, t)\,\mathrm{d}t $$(4)

$$ P_{p} = \frac{A_{p}}{G}\left(\frac{1000^{\gamma}}{\exp\left(-\frac{x_{p}-x_0}{\delta x}\right)+1}\right) \left(\frac{x_{p}}{x_0}\right)^{\alpha} \left(\frac{(\Delta t + T)^{1-\gamma} - \Delta t^{1-\gamma}}{1-\gamma}\right). $$(5)

Table 2.

Parameters used for the generation of persistence.

These pixel values are then added to the uncontaminated image:

$$ \boldsymbol{C} = \boldsymbol{U} + k_{P}\,\sigma_{U}\, \frac{\boldsymbol{P} - P_{\min}}{P_{\max} - P_{\min}}, $$(6)

where P are the persistence values computed in Eq. (5), Pmin and Pmax are the minimum and maximum of these values, and kP is a scaling factor empirically set to 5.
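As an illustration, the following sketch evaluates Eq. (5) for a single pixel and applies the scaling of Eq. (6); parameter names follow Table 2 and are passed explicitly here for clarity.

```python
import numpy as np

def persistence_adu(x_p, dt, T, gain, A_p, x0, dx, alpha, gamma):
    """Sketch of Eq. (5): ADUs accumulated at a pixel of initial level x_p
    during [dt, dt + T], from the integrated Fermi model."""
    fermi = 1000.0**gamma / (np.exp(-(x_p - x0) / dx) + 1.0)
    time_term = ((dt + T)**(1.0 - gamma) - dt**(1.0 - gamma)) / (1.0 - gamma)
    return (A_p / gain) * fermi * (x_p / x0)**alpha * time_term

def add_persistence(clean, P, sigma_u, k_p=5.0):
    """Sketch of Eq. (6): min-max normalize the persistence map P and add it,
    scaled by k_p * sigma_u, to the uncontaminated image."""
    P_norm = (P - P.min()) / (P.max() - P.min())
    return clean + k_p * sigma_u * P_norm
```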

Images of saturated stars are simulated using SKYMAKER (Bertin 2009) and binarized to generate masks of saturated pixels. The masks define the footprints of persistence artifacts, within which the xp’s are computed (Table 2). An example is shown in Fig. 1.

2.1.6. Trails (TRL)

Satellites or meteors, and even planes, crossing the field of view generate long, quasi-rectilinear trails across the frame. We simulate these motion-blurred artifacts by generating closely spaced star images with identical magnitudes along a linear path, once again using SKYMAKER. We also generate a second population of trails with magnitude changes to account for satellite “flares”. A random, Gaussian-distributed component with a ≈1 pixel standard deviation is added to every stellar coordinate to simulate jittering from atmospheric turbulence, so that the stars are not aligned along a perfectly straight line. For meteors, defocusing must be taken into account (Bektešević et al. 2018). The amount of defocusing θ, expressed as the apparent width of the pupil pattern in arcseconds, is:

$$ \theta = \frac{180}{\pi}\times 3600 \times \frac{D}{d}, $$(7)

where D is the diameter of the primary mirror, and d the meteor distance, both in meters. D and d are randomly drawn from flat distributions in the intervals [2, 8] and [80 000, 120 000], respectively.

The ground truth mask is obtained by binarizing the satellite image at a small and arbitrary threshold above the simulated background. This mask is then dilated using a 7 × 7 pixel structuring element.

To avoid any visible truncation, we add the whole simulated satellite image, multiplied by a dilated version M(T) of the ground truth mask, to the uncontaminated image:

$$ \boldsymbol{C} = \boldsymbol{U} + k_{T}\,\frac{\sigma_{U}}{\sigma_{T}}\, \boldsymbol{T} \odot \boldsymbol{M}^{(T)}, $$(8)

where σT is the standard deviation of the simulated trail image background, σU the standard deviation of the uncontaminated image background, and kT is a scaling factor empirically set to 6. An example of a satellite trail is shown in Fig. 1.

2.1.7. Fringes (FR)

Fringes are thin-film interference patterns occurring in the detectors. The irregular shape of fringes is caused by thickness variations within the thin layers. To add fringing to images, we use real fringe maps produced at the pre-processing level by ALAMBIC for all the optical CCD cameras of Table 1. These reconstructed fringe maps are often affected by white noise, which we mitigate by smoothing with a top-hat kernel 7 pixels in diameter. The fringe pattern F can affect large areas of an image, but not necessarily the whole image. To reproduce this effect, a random 2D polynomial envelope E of degree 3 covering the whole image is generated. The final fringe envelope E(F) is computed by normalizing E over the interval [−5, 5] and flattening the result using the sigmoid function:

$$ E^{(F)}_{p} = \left(1+\exp\left(-5\,\frac{2E_{p} - E_{\max}-E_{\min}}{E_{\max}-E_{\min}}\right)\right)^{-1}, $$(9)

where Emin and Emax are the minimum and maximum values of Ep, respectively.

The fringe pattern, modulated by its envelope, is then added to the uncontaminated image:

$$ \boldsymbol{C} = \boldsymbol{U} + k_{F}\,\frac{\sigma_{U}}{\sigma_{F}}\, \boldsymbol{F} \odot \boldsymbol{E}^{(F)}, $$(10)

where σF is the standard deviation of the fringe pattern and kF is an empirical scaling factor set to 0.6. The ground truth mask is computed by thresholding the 2D polynomial envelope at −0.20.
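A minimal sketch of this recipe is given below; `fringe` and `envelope` are placeholder 2D arrays, and the −0.20 ground-truth threshold is assumed here to apply to the envelope rescaled to [−5, 5].

```python
import numpy as np

def add_fringes(clean, fringe, envelope, sigma_u, k_f=0.6, thresh=-0.20):
    """Sketch of Eqs. (9)-(10): modulate the fringe pattern with a
    sigmoid-flattened polynomial envelope and add it to the clean image."""
    e_min, e_max = envelope.min(), envelope.max()
    # rescale the envelope to [-5, 5]
    env_scaled = 5.0 * (2.0 * envelope - e_max - e_min) / (e_max - e_min)
    # Eq. (9): flatten with the sigmoid function
    env_f = 1.0 / (1.0 + np.exp(-env_scaled))
    # Eq. (10): add the modulated fringe pattern
    contaminated = clean + k_f * (sigma_u / np.std(fringe)) * fringe * env_f
    # ground-truth mask from the thresholded envelope
    mask = env_scaled > thresh
    return contaminated, mask
```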

An example of a simulated contamination by a fringe pattern can be found in Fig. 2.

thumbnail Fig. 2.

Examples of added fringes and nebulosities. Top: fringes; uncontaminated input exposure, smoothed fringe pattern, contaminated image, ground truth mask, polynomial envelope. Bottom: nebulosities; uncontaminated input exposure, Herschel 250 μm molecular cloud image, contaminated image, ground truth mask.

2.1.8. Nebulosity (NEB)

Extended emission originating from dust clouds illuminated by starlight or from photo-dissociation regions can be present in astronomical images. These “nebulosities” are not artifacts, but they make the detection and measurement of overlapping stars or galaxies more difficult; they may also trigger the fringe detector. Hence, it is useful to have them identified and properly flagged. Because the spatial distribution of thermal dust emission closely matches that of reflection nebulae at shorter wavelengths (e.g., Ienaka et al. 2013), we use far-infrared images of molecular clouds around star-forming regions as a source of nebulous contaminants. We choose pipeline-processed 250 μm images obtained with the SPIRE instrument (Griffin et al. 2010) on board the Herschel Space Observatory (Pilbratt et al. 2010), which we retrieve from the Herschel Science Archive. The 250 μm channel offers the best compromise between signal-to-noise ratio and spatial resolution. Moreover, at wavelengths of 250 μm and above, low galactic latitude fields contain mostly extended emission from the cold gas and almost no point sources (apart from a few proto-stars and proto-stellar cores). Therefore, they are perfectly suited to being added to our optical and near-infrared wide-field exposures. We do not resize or reconvolve the SPIRE images, taking advantage of the scale-invariance of dust emission observed down to the arcsecond level in molecular clouds (Miville-Deschênes et al. 2016).

We add the nebulous contaminant data to our uncontaminated images in the same way we do for fringes, except that there is no 2D polynomial envelope. The whole nebulosity image is background-subtracted (using a SEXTRACTOR-like background estimation) to form the final nebulosity pattern N which is then added to the uncontaminated image:

$$ \boldsymbol{C} = \boldsymbol{U} + k_{N}\,\frac{\sigma_{U}}{\sigma_{N}}\, \boldsymbol{N}, $$(11)

where kN is an empirical scaling factor set to 1.3. The ground truth mask is computed by thresholding N at one sigma above 0. This mask is then eroded with a disk-shaped structuring element 6 pixels in diameter to remove spurious individual pixels, and dilated with a disk-shaped structuring element 22 pixels in diameter. An example of added nebulosity is shown in Fig. 2. The light from line-emission nebulae may not necessarily exhibit the same statistical properties as the reflection nebulae targeted for training. However, line-emission nebulae are generally brighter, and in practice the classifier has no problem detecting them.
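The mask construction can be sketched as follows (the threshold is assumed here to be one standard deviation of the nebulosity image background, noted `sigma_n`):

```python
import numpy as np
from scipy import ndimage

def disk(diameter):
    """Binary disk-shaped structuring element of the given diameter (pixels)."""
    r = diameter / 2.0
    y, x = np.mgrid[:diameter, :diameter]
    return (x - r + 0.5)**2 + (y - r + 0.5)**2 <= r**2

def nebulosity_mask(neb, sigma_n):
    """Sketch of the ground-truth mask: threshold at one sigma above 0,
    erode (6-pixel disk), then dilate (22-pixel disk)."""
    mask = neb > sigma_n
    mask = ndimage.binary_erosion(mask, structure=disk(6))
    mask = ndimage.binary_dilation(mask, structure=disk(22))
    return mask
```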

2.1.9. Saturation and bleeding (SAT)

Each detector pixel can accumulate only a limited number of electrons. Once the full-well limit is reached, the pixel becomes saturated. In CCDs, charges may even overflow, leaving saturation trails (also known as bleeding trails) along the transfer direction. Such pixels are easily identified in clean images, knowing the saturation level of each instrument.

2.1.10. Diffraction spikes

Diffraction spikes are patterns appearing around bright stars, caused by light diffracting around the spider supporting the secondary mirror. Given the typical cross shape of spiders, the pattern is usually relatively easy to identify. In some cases, the pattern can deviate significantly from a simple cross because it is affected by various effects, such as distortions, telescope attitude, the truss structure of the spider arms, rough edges or cables around the secondary mirror support, or reflections on other telescope structures. A specific strategy was put in place to build a spike library to be used to train the CNN.

On the one hand, MegaCam and DECam are mounted on equatorial telescopes and the orientation of spikes is usually (under standard northeast orientation) a “+” for Megacam and an “x” for DECam1. On the other hand, HSC is mounted on the alt-az Subaru telescope, and spikes do not display any preferred orientation, making their automated identification more complicated. For this reason, we adopt a two-step strategy: first, samples of “+”- and “x”-shaped spikes are extracted from DECam and Megacam images and randomly rotated to generate a library of diffraction spikes with various orientations; this library is then used to train a new CNN for identifying spikes in HSC images.

MegaCam and DECam analysis. We first identify the brightest stars using SEXTRACTOR and extract 300 × 300 pixel image cutouts around them. The cutouts are thresholded at three sigma above the background and binarized. Element-wise products are computed between these binary images and large “+”-shaped (Megacam) or “x”-shaped (DECam) synthetic masks to isolate the central stars. Each element-wise product is then matched-filtered with a thinner version of the same pattern and binarized using an arbitrary threshold set to 15 ADUs. The empirical size of the spike components is estimated in these masks by measuring the maximum extent of the resulting footprint along either of the two relevant spike directions (horizontal and vertical, or diagonals). Finally, the larger of the two sizes is kept and empirically rescaled to obtain the final spike length and width. If the resulting size is too small, we consider that there is no spike, in order to avoid false positives (e.g., a star bright enough to be detected by SEXTRACTOR but without obvious spikes). Figure 5 gives an overview of the whole process.

HSC analysis. We train a new neural network to identify spikes in all directions. For that purpose, we build a new training set using the spikes identified in MegaCam and DECam images as described above and apply random rotations between 0° and 360° to ensure rotational invariance. The neural network has a simple SegNet-like convolutional-deconvolutional architecture (Badrinarayanan et al. 2015), but it is not based on VGG hyper-parameters (Simonyan & Zisserman 2014). It uses 21 × 21, 11 × 11, 7 × 7 and 5 × 5 convolutional kernels in 8, 16, 32 and 32 feature maps, respectively. The model architecture is shown in Fig. 3. Activation functions are all ELU, except in the last layer where a softmax is used. It is trained to minimize the softmax cross-entropy loss with the Adam optimizer (Kingma & Ba 2014). Each pixel cost is weighted to balance the disproportion between spike and background pixels: if ps is the spike pixel proportion in the training set, then spike pixels are weighted with 1 − ps, while background pixels are weighted with ps (this is the two-class equivalent of the basic weighting scheme described in Sect. 3.1). Once trained, we run inference on all the brightest stars detected with SEXTRACTOR in the HSC images. Output probabilities are binarized based on the MCC (see Eq. (22)) and the resulting mask is empirically eroded and dilated to obtain a clean mask. An example is given in Fig. 4.

thumbnail Fig. 3.

Neural network used specifically for spike detection.

thumbnail Fig. 4.

Example of a spike mask obtained by inference of the separate neural network.

thumbnail Fig. 5.

Empirical spike flagging process. From left to right: source image centered on a bright star candidate, the same image thresholded, the two pointwise products, the matched-filtered pointwise products, and the final mask drawn from the empirical size computed with the two previous masks.

2.1.11. Overscan (OV)

Overscan regions are common in CCD exposures, showing up as strips of pixels with very low values at the borders of the frame. To avoid triggering false predictions on real data, overscans must be included in our training set. Although these are not truly contaminants, we therefore find it useful to include an “overscan” class in the list of identified features. Overscan regions are simulated by including random strips on the sides of images. Pixel values in the strips are generated in the same way as bad pixel values.

2.1.12. Bright background (BBG) and background (BG)

The objects of interest in this study are the contaminants. Hence, following standard computer vision terminology, all the other types of pixels, including both astronomical objects and empty sky areas, belong to the “background”.

We find that defining a distinct class for each of these types of background pixels helps with the training procedure. We thus define the “bright background” (BBG) pixels as pixels belonging to astronomical objects2 (except nebulosity) present in the uncontaminated images, and background pixels (BG) as pixels covering an empty sky area.

Ground truth masks for bright background pixels are obtained by binarizing the image, before the contaminants are added, at 10σU. The remaining pixels are sky background pixels, which are not affected by any labeled feature.

2.2. Global contaminants

We now describe the data used to identify global contaminants.

Tracking errors happen when the telescope moves during an exposure due to, for instance, telescope guiding or tracking failures, wind gusts, or earthquakes. As illustrated in Fig. 6, this causes all the sources to be blurred along a path on the celestial sphere generated by the motion of the telescope. Because tracking errors affect the entire focal plane, the analysis is performed globally on the whole image. The library of real images affected by tracking errors is a compilation of exposures identified in the COSMIC-DANCE survey for the cameras of Table 1, and images that were gathered over the years at the UKIRT telescope, kindly provided to us by Mike Read.

thumbnail Fig. 6.

Examples of images affected by tracking errors.

2.3. Generating training samples

Both types of contaminants – global and local contaminants – must be handled separately: they require different neural network architectures, and different training data sets as well.

Figure 7 gives a synthetic view of the sample production pipeline and the various data sources.

thumbnail Fig. 7.

Schematic view of the sample production pipeline. All COSMIC-DANCe archive images have their background map computed. Clean images are built from the COSMIC-DANCe archives. Contaminants from diverse sources (COSMIC-DANCe archives, Herschel archives, or simulations) are added to clean images; this step uses the background maps. The resulting local contaminant images are dynamically compressed (see Sect. 2.3.3) and ready to be fed into the neural network. Global contaminant samples are directly obtained from the COSMIC DANCe archives and dynamically compressed.

The breakdown per imaging instrument of the COSMIC DANCe dataset is listed in Table 3.

Table 3.

COSMIC-DANCE archive usage per imaging instrument.

The following subsections describe some specific aspects of the sample generation.

2.3.1. Local contaminants

The order in which local contaminants are added is important. Bad columns, lines, and pixels are added last because they are static defects that set the final value of a pixel, no matter how many photons hit it.

In our neural network architecture, contaminant classes do not need to be mutually exclusive: each pixel can be assigned several classes, as several defects can affect a given pixel (e.g., fringes and a cosmic ray hit). On the other hand, the background class, which defines pixels not affected by any defect, excludes all other classes. A list of all the contaminants included in this study is presented in Table 4.

Table 4.

All the contaminants and their abbreviated names.

Figure 8 shows examples of local contaminant sample input images, each with its color-coded ground truth.

thumbnail Fig. 8.

Examples of input (left) and their ground truth (right). Each class is assigned a color so that the ground truth can be represented as a single image (red: CR, dark green: HCL, dark blue: BCL, green: HP, blue: BP, yellow: P, orange: TRL, gray: FR, light gray: NEB, purple: SAT, light purple: SP, brown: OV, pink: BBG, dark gray: BG). Pixels that belong to several classes are represented in black. In the interest of visualization, hot and dead pixel masks have been morphologically dilated so that they appear as 3 × 3 pixel areas in this representation.

2.3.2. Global contaminants

The global contaminant dataset contains images that have been hand labeled as affected by tracking errors or not. The images, taken from the COSMIC DANCe archives, are not cleaned, hence they are potentially affected by preexisting local contaminants. This is because the global contaminant detector is intended to be operated before the local one.

2.3.3. Dynamic compression

All images are dynamically compressed before being fed to the neural networks using the following procedure:

$$ \tilde{\boldsymbol{C}} = \operatorname{arsinh}\left(\frac{\boldsymbol{C} - \boldsymbol{B} + \mathcal{N}(0, \sigma_{U}^2)}{\sigma_{U}}\right). $$

The aim of dynamic compression is to reduce the dynamic range of pixel values, which is found to help neural network convergence. The image is first background-subtracted. Then, a small random offset is added to increase robustness against background-subtraction residuals. The resulting image is normalized by the standard deviation of the background noise and finally compressed through the arsinh function, which behaves linearly around zero and logarithmically for large (positive or negative) values.
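This amounts to the following one-liner, assuming a single random offset per image (`image`, `background`, and `sigma_u` are placeholder names):

```python
import numpy as np

def dynamic_compression(image, background, sigma_u, rng=None):
    """Sketch of the compression: background subtraction, small random
    offset, noise normalization, and arsinh compression."""
    if rng is None:
        rng = np.random.default_rng()
    offset = rng.normal(0.0, sigma_u)   # offset drawn from N(0, sigma_u^2)
    return np.arcsinh((image - background + offset) / sigma_u)
```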

2.3.4. Data augmentation

We deploy data augmentation techniques to exploit the full information content of our data. The two following procedures are applied to the set of local contaminant training samples, as sketched below. First, random rotations by multiples of 90° are applied to cosmic ray, fringe, and nebulosity patterns. Second, some images are rebinned: when picking a clean image, we check whether it can be 2 × 2 rebinned under the constraint that the FWHM remains greater than 2 pixels; the FWHM of the image was previously estimated using SEXTRACTOR (Bertin & Arnouts 1996). This value is chosen on the basis of the plate sampling offered by current ground-based imagers. If the image can be 2 × 2 rebinned while meeting the condition above, it has a 50% probability of being rebinned.
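A minimal sketch of these two augmentations (with `rng` a NumPy random generator and `fwhm` the SEXTRACTOR estimate in pixels):

```python
import numpy as np

def rotate_pattern(pattern, rng):
    """Random rotation by a multiple of 90 degrees, applied to cosmic ray,
    fringe, and nebulosity patterns before they are added."""
    return np.rot90(pattern, k=int(rng.integers(4)))

def maybe_rebin(image, fwhm, rng):
    """2x2 rebinning with 50% probability, allowed only if the rebinned FWHM
    stays above 2 pixels (i.e., the original FWHM exceeds 4 pixels)."""
    if fwhm / 2.0 > 2.0 and rng.random() < 0.5:
        h, w = (image.shape[0] // 2) * 2, (image.shape[1] // 2) * 2
        image = image[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return image
```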

3. Convolutional neural networks

In this section, we describe the convolutional neural networks used for our analysis. The first one, MAXIMASK, classifies pixels (“local contaminants”) while the second one, MAXITRACK, classifies images (“global contaminants”).

3.1. Local contaminant neural network

3.1.1. Architecture

The model used for the semantic segmentation of the local contaminants, MAXIMASK, is based on Badrinarayanan et al. (2015) and Yang et al. (2018), which both rely on a VGG-like architecture (Simonyan & Zisserman 2014). It consists of three parts.

The first part contains single and double convolutional layers followed by max-pooling downsampling. This enables the network to compute relevant feature maps at different scales. During this step, the max-pooling pixel indices are stored for later reuse.

The second part also incorporates convolutional layers and recovers spatial resolution by upsampling feature maps using the max-pooling indices. An example of unpooling is given in Fig. 9. At each resolution level, the feature maps of the first part are summed with the corresponding upsampled feature maps, so that as much information as possible is preserved.

thumbnail Fig. 9.

Example of an unpooling process. Max-pooling indices are stored and reused to upsample the feature maps.

The third part is made of extra unpool-convolution paths (UCPs) that recover the highest image resolution from each feature map resolution, so that the network can exploit the information available at every resolution level. This results in five pre-predictions, one per resolution level.

The five pre-predictions are finally concatenated and a last convolutional layer builds the final predictions. The sigmoid activation functions in this last layer are not softmax-normalized, to allow non-mutually exclusive classes to be assigned jointly to pixels. All convolutional layers use 3 × 3 kernels and apply ReLU activations. The architecture is represented in Fig. 10 and the hyperparameters are described more precisely in Table 5. The neural network is implemented using the TensorFlow library (Abadi et al. 2016) on a TITAN X Nvidia GPU.

thumbnail Fig. 10.

Schematic representation of the local contaminant neural network architecture.

Table 5.

Description of the local contaminants neural network architecture, including map dimensions.

3.1.2. Training and loss function

Training is done for 30 epochs on 50 000 images, with mini-batches shuffled at every epoch. The batch size is kept small (10) to maintain a reasonable memory footprint. The model is trained end-to-end using the Adam optimizer (Kingma & Ba 2014). The loss function L is the sigmoid cross-entropy (Rubinstein 1999) summed over all classes and pixels, and averaged across batch images:

$$ L = - \frac{1}{\mathrm{card}(\mathcal{B})} \sum_{b\in \mathcal{B}} \sum_{p\in \mathcal{P}} w^{\prime}_{p,b} \sum_{\omega_c\in \mathcal{C}} \Bigl( y_{b,p,c} \log \hat{y}_{b,p,c} + (1-y_{b,p,c}) \log (1-\hat{y}_{b,p,c})\Bigr), $$(12)

where ℬ is the set of batch images, 𝒫 is the set of all image pixels, 𝒞 is the set of all contaminant classes, $w^{\prime}_{p,b}$ is a weight applied to pixel p of image b in the batch (see below), $\hat{y}_{b,p,c}$ is the sigmoid prediction for class ωc of pixel p of image b in the batch, and $y_{b,p,c}$ is the ground truth label for class ωc of pixel p of image b, defined as:

$$ y_{b,p,c} = \begin{cases} 1 & \text{if } \omega_c \in \mathcal{C}_{p,b}, \\ 0 & \text{otherwise}, \end{cases} $$(13)

where 𝒞p, b ⊂ 𝒞 is the set of contaminant classes labeling pixel p of image b in the batch. In order to improve the back-propagation of error gradients down to the deepest layers, several losses are combined: in addition to the main sigmoid cross-entropy loss L computed on the final predictions, a sigmoid cross-entropy can be computed for each of the five pre-predictions. There are several ways to combine these losses. Like Yang et al. (2018), we find that the best results are obtained by adding to the main loss either the three lowest-resolution pre-prediction losses weighted by 33% each, or the two lowest-resolution ones weighted by 50% each. The two guiding rules are that the additional loss weights should sum to 1 and that pre-predictions become less informative as their resolution approaches the full one.
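For illustration, the main loss of Eq. (12) could be written as follows with TensorFlow primitives; the tensor shapes are assumptions (`logits` and `labels` of shape (batch, H, W, n_classes), `pixel_weights` of shape (batch, H, W)), and the pre-prediction losses would be computed in the same way at lower resolutions.

```python
import tensorflow as tf

def weighted_sigmoid_cross_entropy(logits, labels, pixel_weights):
    """Sketch of Eq. (12): per-pixel, per-class sigmoid cross-entropy,
    weighted by the smoothed pixel weights w'_{p,b} and averaged over
    the batch images."""
    # cross-entropy for every pixel and class (computed from logits)
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
    ce = tf.reduce_sum(ce, axis=-1)                      # sum over classes
    ce = tf.reduce_sum(pixel_weights * ce, axis=[1, 2])  # weighted sum over pixels
    return tf.reduce_mean(ce)                            # average over the batch
```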

Basic training procedures are vulnerable to strong class imbalance, which makes it more likely for the neural network to converge to a state where rare contaminants are not properly detected. Contaminant classes are so statistically insignificant (down to one part in 10⁶ with real data, typically) that the classifier may be tricked into assigning all pixels to the background class. To prevent this, we start by applying a basic weighting scheme to each pixel according to its class representation in the training set, that is, each pixel p of batch image b belonging to classes in 𝒞p, b is weighted by wp, b defined as

$$ w_{p,b} = \sum_{\omega_c \in \mathcal{C}_{p,b}} w_c, $$(14)

with

$$ w_c = \left(P(\omega_c|T) \sum_{i} \frac{1}{P(\omega_i|T)}\right)^{-1}, $$(15)

where P(ωc|T) is the fraction of pixels labeled with class ωc in the training dataset T. The P(ωc|T)’s do not sum to one as many pixels belong to several classes and are thus counted several times. We find that the weighting scheme brings slightly better results and less variability in the training if weights are computed at once from the class proportions of the whole set, instead of being recomputed for each image. From Eq. (15) we have:

$$ \forall i \in \mathcal{C},\ \forall j \in \mathcal{C},\quad \frac{w_i}{w_j} = \frac{P(\omega_j|T)}{P(\omega_i|T)} \quad\text{and}\quad \sum_{\omega_c \in \mathcal{C}} w_c = 1. $$(16)

However, with this simple weighting scheme, background class pixels that are close to rare features are given very low weights, although they are decisive for classification. To circumvent this, weight maps are smoothed with a 3 × 3 Gaussian kernel with unit standard deviation, so that highly weighted regions spread over larger areas. Other kernel sizes and standard deviations were tested, but we find 3 and 1 to give the best results. The weights resulting from this smoothing are the $w^{\prime}_{p,b}$ appearing in the loss function of Eq. (12).
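The weighting scheme of Eqs. (14)-(15) and the smoothing step can be sketched as follows (with `class_fractions` the P(ωc|T) values and `label_masks` a (n_classes, H, W) array of 0/1 ground-truth masks; both names are placeholders):

```python
import numpy as np
from scipy import ndimage

def class_weights(class_fractions):
    """Sketch of Eq. (15): weights inversely proportional to the class
    fractions P(omega_c | T), normalized so that they sum to 1."""
    p = np.asarray(class_fractions, dtype=float)
    return 1.0 / (p * np.sum(1.0 / p))

def pixel_weight_map(label_masks, w):
    """Sketch of Eq. (14) plus smoothing: sum the weights of all classes
    labeling each pixel, then smooth with a 3x3 Gaussian kernel of unit
    standard deviation (truncate=1.0 yields a 3x3 kernel)."""
    w_map = np.tensordot(w, label_masks, axes=1)   # Eq. (14)
    return ndimage.gaussian_filter(w_map, sigma=1.0, truncate=1.0)
```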

Finally, the solution is regularized by the l2 norm of all the N network weights, by adding the following term to the total loss:

$$ L2_{\mathrm{reg}} = \lambda \sum_{i}^{N} \Vert \boldsymbol{k}_i\Vert_2, $$(17)

where the ki’s are the convolution kernel vectors. λ sets the regularization strength. We find λ = 1 to provide the best results.

3.2. Global contaminant neural network architecture

The convolutional neural network that detects global contaminants (tracking errors), MAXITRACK, is a simple network made of convolutional layers followed by max-pooling and fully connected layers. The architecture of the network is schematized in Fig. 11 and detailed in Table 6. Because the two classes are mutually exclusive (affected by tracking errors or not), we adopt for the output layer a softmax activation function and a softmax cross-entropy loss function (Rubinstein 1999). Training is done for 48 epochs on 50 000 images with a mini-batch size of 64 samples, using the Adam optimizer.

thumbnail Fig. 11.

Schematic representation of the global contaminant neural network architecture.

Table 6.

Description of the global contaminant neural network architecture, including map dimensions.

4. Results with test data and quality assessment

4.1. Local contaminants neural network

We evaluate the quality of the results in several ways. First, we estimate the performance of the network on test data, both quantitatively through various metrics, and qualitatively. We verify that there is no over-fitting by checking that performance on the test set is comparable to that on the training set. Next, we show that performance is immune to the presence or absence of other contaminants in a given image. We finally compare the performance of the cosmic ray detector to that of a classical algorithm.

4.1.1. Performance metrics

We first estimate classification performance on a benchmark test set comprising 5000 images. Because the network is a binary classifier for every class, we can compute a Receiver Operating Characteristic (ROC) curve for each of them. ROC curves represent the True Positive Rate (TPR) vs. the False Positive Rate (FPR):

$$ \mathrm{TPR} = \frac{\mathrm{TP}}{P} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}, $$(18)

$$ \mathrm{FPR} = \frac{\mathrm{FP}}{N} = \frac{\mathrm{FP}}{\mathrm{TN}+\mathrm{FP}}, $$(19)

where P is the number of contaminated pixels, TP is the number of true positives (contaminated pixels successfully recovered as contaminated), FN is the number of false negatives (contaminated pixels wrongly classified as non-contaminated), N is the number of non-contaminated pixels, FP is the number of false positives (non-contaminated pixels wrongly classified as contaminated), and TN is the number of true negatives (non-contaminated pixels successfully recovered as non-contaminated).

The accuracy (ACC) is subsequently defined as

$$ \mathrm{ACC} = \frac{\mathrm{TP}+\mathrm{TN}}{P+N}. $$(20)

The more the ROC curve bends toward the upper left part of the graph, the better the classifier. However with strongly imbalanced datasets, such as our pixel data, one must be very cautious with the TPR, FPR and ACC values for assessing the quality of the results. For example, if one assumes that there are 1000 pixels of the contaminant class (P) and 159 000 pixels of the background class (N) in a 400 × 400 pixel sub-image, a TPR of 99% and a FPR of 1%, corresponding to an accuracy of 99%, would actually represent a poor performance, as it would imply 990 true positives, 10 false negatives, 157 410 true negatives, and 1590 false positives. In the end, there would be more false positives FP (pixels wrongly classified as contaminated) than true positives TP.

For this reason the ROC curves in Fig. A.1 are displayed with a logarithmic scale on the FPR axis. We require the FPR to be very low (e.g., smaller than 10⁻³) to consider that the network performs properly.

On the other hand, recovering the exact footprint of large, fuzzy defects is almost impossible at the level of individual pixels, which makes the classification performance for persistence, satellite trails, fringes, nebulosities, spikes and background classes look worse in Fig. A.1 than it really is in practice.

Also, two ROC curves are drawn for cosmic rays and trails. The second one (in green) is computed using only the instances of the class that are above a specific level of the sky background; these instances are defined as those having more than half of their pixels above 3σ. These second curves show that the network performs better on the more obvious cases.

In addition to the FPR, TPR, ACC and AUC, we use two other metrics helpful for assessing the network performance: the purity (or precision), representing the fraction of correct predictions among the positively classified samples, and the Matthews correlation coefficient (MCC, Matthews 1975), which is an accuracy measure that takes into account the strong imbalance between classes.

$$ \mathrm{PUR} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}} = \text{Purity (or Precision)}, $$(21)

$$ \mathrm{MCC} = \frac{\mathrm{TP}\times \mathrm{TN} - \mathrm{FP}\times \mathrm{FN}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}}. $$(22)

In the above example, the purity would reach only 38% and the MCC only 61%, highlighting the classifier's poor positive-class discrimination.
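These numbers follow directly from Eqs. (18)-(22); the short sketch below reproduces them for the worked example (P = 1000, N = 159 000 pixels in a 400 × 400 sub-image):

```python
import numpy as np

def metrics(tp, fn, tn, fp):
    """Sketch of Eqs. (18)-(22) for a binary confusion matrix."""
    tpr = tp / (tp + fn)
    fpr = fp / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    pur = tp / (tp + fp)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return tpr, fpr, acc, pur, mcc

# TPR = 0.99, FPR = 0.01, ACC = 0.99, but PUR ~ 0.38 and MCC ~ 0.61
print(metrics(tp=990, fn=10, tn=157410, fp=1590))
```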

Figure A.3 shows the true positive rate against the purity. The purple curve again represents how a random classifier would perform. In these curves, the best classifier would sit in the top right corner (TPR = 1 and PUR = 1). The darkest points correspond to the lowest thresholds, while the lightest correspond to the highest ones.

Some qualitative results are presented in Fig. 12. A given pixel is assigned a given class if its probability of belonging to that class is higher than the threshold maximizing the MCC.

thumbnail Fig. 12.

Examples of qualitative results on test data. Left: input; middle: ground truth; right: predictions. Each class is assigned a color so that the ground truth can be represented in one single image. Class predictions are made according to the threshold giving the highest MCC. The color coding is identical to that of Fig. 8.

Finally, MCCs are represented in Fig. A.2, as a function of the output threshold. In each curve, the threshold giving the best MCC is annotated around the best MCC point. It is important to note that the best threshold depends on the modification of the prior that has been applied to the raw output probabilities. This update of the prior is explained in Sect. 5.

4.1.2. Robustness regarding the context

The MAXIMASK neural network is trained using mostly images that include all contaminant classes. Hence, we must check that the network performs equally well independently of the context, that is, that it delivers equally good results for images containing, for example, a single class of contaminant.

To this aim, for every contaminant class, we generate a dataset of 1000 images affected only by this type of contaminant (apart from saturated and background pixels), and another dataset of 1000 images containing only saturated and background pixels. We then compare the performance of MAXIMASK for each class on the full test set with that obtained on the corresponding single-contaminant dataset. We find that performance (AUC) is similar or even slightly higher for the majority of the classes. This shows that the network is not conditioned to work only in the exact context of the training. The results are presented in Table 7.

Table 7.

AUC of each class depending on the test set context.

As can be seen, for all classes but fringes and nebulosity, performance improves when a single type of contaminant is present. The slight improvement may come from the fact that ambiguous cases (when pixels are affected by more than one contaminant class, e.g., a cosmic ray or a hot pixel over a satellite trail) are not present in the single-contaminant test set.

4.1.3. Cosmic rays: effect of PSF undersampling and comparison with LA Cosmic

Undersampling makes cosmic ray hits harder to distinguish from point-sources. To solve this issue, van Dokkum (2001) has developed LA Cosmic, a method based on a variation of Laplacian edge detection. It is largely insensitive to cosmic ray morphology and PSF sampling. LA Cosmic thus offers an excellent opportunity to test the performance of MaxiMask on undersampled exposures.

To do so, we generate two datasets containing only the cosmic ray contaminant class (plus objects and background): a well-sampled set of images with FWHMs larger than 2.5 pixels, and an undersampled set with FWHMs smaller than 2.5 pixels. We run MAXIMASK and the Astro-SCRAPPY Python implementation of LA Cosmic. To make a fair comparison, LA Cosmic masks are dilated in the same way as the ground truth cosmic ray masks of MAXIMASK. However, while MAXIMASK generates probability maps that can be thresholded at different levels, LA Cosmic only outputs a binary mask. To compare the results we therefore build ROC curves for the neural network and over-plot a single point representing the result obtained with LA Cosmic.

Figure 13 shows that the neural network performs better than LA Cosmic in both regimes with our data.

thumbnail Fig. 13.

CR detection performance comparison with LA Cosmic.

4.2. Global contaminants neural network

The ROC curve for the global contaminant neural network is shown in Fig. 14. It is computed from a test set of 5000 images.

thumbnail Fig. 14.

Global contaminant neural network ROC curve; the steps are a consequence of limited statistics.

5. Modifying priors

If one knows what class proportions are expected in the observation data, output probabilities can be updated to better match these priors (e.g., Saerens et al. 2002; Bailer-Jones et al. 2008).

The outputs of a perfectly trained neural network classifier with a cross-entropy loss function can be interpreted as Bayesian posterior probabilities (e.g., Richard & Lippmann 1991; Hampshire & Pearlmutter 1991; Rojas 1996). Under this assumption and using Bayes' rule, the output for class ωc of the trained neural network model defined by a training set T can be written as:

$$ P(\omega_c|\boldsymbol{x}, T) = \frac{p(\boldsymbol{x}|\omega_c, T)\, P(\omega_c|T)}{\sum\limits_{\omega \in \{\omega_c,\bar{\omega}_c\}} p(\boldsymbol{x}|\omega, T)\, P(\omega|T)}, $$(23)

where x is the input image data around the pixel of interest, p(x|ωc, T) is the distribution of x conditional to class ωc in the training set T, and P(ωc|T) is the prior probability of a pixel to belong to the class ωc in the trained model.

As each output acts as a binary classifier, the sum runs over the class ωc (contaminant) and its complement $\bar{\omega}_c$ (“not the contaminant”).

With the observation data set O we may similarly write:

$$ P(\omega_c|\boldsymbol{x}, O) = \frac{p(\boldsymbol{x}|\omega_c, O)\, P(\omega_c|O)}{\sum\limits_{\omega \in \{\omega_c,\bar{\omega}_c\}} p(\boldsymbol{x}|\omega, O)\, P(\omega|O)}, $$(24)

where P(ωc|O) is the expected fraction of pixels with class ωc in O.

Now, if the appearance of defects in O matches that in the training set T, we have p(x|ωc, T) = p(x|ωc, O), and we can rewrite (24) as:

$$ P(\omega_c|\boldsymbol{x},O) = \frac{P(\omega_c|\boldsymbol{x},T)\, \frac{P(\omega_c|O)}{P(\omega_c|T)}}{\sum\limits_{\omega \in \{\omega_c,\bar{\omega}_c\}} P(\omega|\boldsymbol{x},T)\, \frac{P(\omega|O)}{P(\omega|T)}} $$(25)

$$ P(\omega_c|\boldsymbol{x},O) = \frac{1}{1 + \left( \frac{1}{P(\omega_c|\boldsymbol{x},T)}-1\right)\frac{P(\omega_c|T)}{P(\omega_c|O)}\,\frac{1 - P(\omega_c|O)}{1 - P(\omega_c|T)}}. $$(26)

If pixels were all weighted equally, the training priors P(ωc|T) would simply be the class proportions in the training set. However, this is not the case here, and pixel weights have to be taken into account. To do so, we follow Bailer-Jones et al. (2008)’s approach, by using as an estimator of P(ωc|T) the posterior mean on the test set T′ (which by construction is distributed identically to the training set):

$$ \hat{P}(\omega_c|T) = \frac{1}{\mathrm{card}(T^{\prime})} \sum_{\boldsymbol{x} \in T^{\prime}} P(\omega_c|\boldsymbol{x},T^{\prime}). $$(27)

These corrected probabilities are used to compute the MC coefficient curves in Fig. A.2 (whereas the prior correction does not affect the ROC and purity curves).

MAXIMASK comes with the P(ωc|T) values already set; therefore, one only needs to specify the expected class proportions in the data, that is, the P(ωc|O).
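As a minimal illustration of Eqs. (26) and (27), the following Python sketch shows how corrected probabilities could be computed from a probability map. It is not the MAXIMASK implementation, and the function and variable names are purely illustrative.

import numpy as np

def estimate_training_priors(posteriors):
    """Estimate P(omega_c | T) as the mean posterior over a test set T'
    distributed like the training set (Eq. 27)."""
    return np.mean(posteriors)

def correct_posteriors(p_train, prior_train, prior_obs):
    """Rescale network posteriors P(omega_c | x, T) to new class priors (Eq. 26).

    p_train     : probability map produced with the training priors
    prior_train : P(omega_c | T), class prior implied by the training set
    prior_obs   : P(omega_c | O), expected class fraction in the observations
    """
    p = np.clip(p_train, 1e-7, 1.0 - 1e-7)   # guard against division by zero
    ratio = (prior_train / prior_obs) * (1.0 - prior_obs) / (1.0 - prior_train)
    return 1.0 / (1.0 + (1.0 / p - 1.0) * ratio)

# Example: a pixel scored 0.9 for cosmic rays by a model trained with a 20%
# cosmic-ray prior; if only 1% of the observed pixels are expected to be
# cosmic rays, the corrected posterior drops to about 0.27.
print(correct_posteriors(np.array([0.9]), prior_train=0.2, prior_obs=0.01))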

6. Application to other data

As a sanity check, we apply MAXIMASK to data obtained with instruments that were not part of the training set. Examples of the resulting contaminant maps are shown in the appendix.

Our first external check is with ZTF (Bellm et al. 2019) data. The MAXIMASK output for a science image featuring a prominent trail with variable amplitude is shown in Fig. A.4. Note the ability of MAXIMASK to properly flag both the trail and the sources it overlaps.

Our second external check is with the ACS instrument onboard the Hubble Space Telescope (Figs. A.5 and A.6). This test illustrates MAXIMASK’s ability to distinguish cosmic rays from poorly sampled, diffraction-limited point-source images.

Given the seemingly good performance of MAXIMASK on images from instruments not part of the training set, one may wonder whether MAXIMASK can readily be used in production on such instruments, without any retraining or transfer learning. Our limited experience with MAXIMASK indicates that this is indeed the case, although retraining may be beneficial for specific instrumental features. As shown here, excellent performance can be reached by training with 50 000 images of 400 × 400 pixels taken from three different instruments. We estimate that a minimum of 10 000 such 400 × 400 images would be a good starting point to train on a single instrument. Assuming CCDs of approximately 2000 × 2000 pixels, each containing 25 images of 400 × 400 pixels, this would require only 400 CCDs, equivalent to 10 fields for a 40-CCD camera.

Our last series of tests is conducted on digital images of natural scenes (landscape, cat, human face), to check for possible inconsistencies on data that are totally unlike those from the training set. Reassuringly, the maps produced by MAXIMASK are consistent with the expected patterns. For instance, the cat’s whiskers are identified as cosmic ray impacts, and pixels with the lowest values as bad pixels.

7. Using MAXIMASK and MAXITRACK

MAXIMASK and MAXITRACK are available online³. MAXIMASK is a Python module that infers probability maps from FITS images. It can process a whole mosaic, a specific FITS image extension, or all the FITS files from a directory or a file list. For every FITS file being processed, a new FITS image is generated with the same HDU (Header Data Unit) structure as the input. Every input image HDU has a matching contaminant map HDU in the output, with one image plane per requested contaminant. The header contains metadata related to the contaminant, including the prior and threshold used. An option can be set to generate a single image plane for all contaminants, using a binary code for each contaminant. Such composite contaminant maps can easily be used as flag maps, for example in SEXTRACTOR. Command-line arguments and configuration parameters allow one to select specific classes, update the priors, and apply thresholds to the probability maps. The code relies on the TensorFlow library and can run on both CPUs and GPUs, although the CPU version is expected to be much slower: MAXIMASK processes about 1.2 megapixels per second with an NVidia Titan X GPU, and about 60 times less on a 2.7 GHz Intel i7 dual-core CPU. Yet, there is probably room for improvement in processing efficiency for both the CPU and GPU versions.
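As an illustration of how such outputs could be exploited downstream (this sketch is not part of the MAXIMASK package; the file name, HDU layout, and plane ordering are assumptions to be checked against the actual output and documentation), a probability plane can be thresholded into a composite flag map with Astropy:

import numpy as np
from astropy.io import fits

# Hypothetical output file name and layout, for illustration only.
with fits.open("image.masks.fits") as hdul:
    proba = hdul[1].data          # assumed shape: (n_classes, ny, nx)

# Threshold the cosmic-ray plane (assumed to be plane 0) at a level chosen
# from the desired TPR/FPR trade-off, e.g. the MC-optimal threshold.
cr_mask = proba[0] > 0.5

# Combine Boolean masks into a composite flag map (one bit per class),
# suitable for use as a flag image in SEXTRACTOR.
flag_map = np.zeros(cr_mask.shape, dtype=np.int32)
flag_map |= cr_mask.astype(np.int32) << 0     # bit 0: cosmic rays
fits.writeto("image.flags.fits", flag_map, overwrite=True)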

MAXITRACK is used in the same way as MAXIMASK, except that the output is a text file indicating the probability for the input image(s) to be affected by tracking errors (one probability per extension if the image contains several HDUs). It can also apply an update to the prior. It runs at 60 megapixels per second with an NVidia Titan X GPU and is 9 times slower on a 2.7 GHz Intel i7 dual-core CPU.

8. Summary and perspectives

We have built a data set and trained convolutional neural network classifiers named MAXIMASK and MAXITRACK to identify contaminants in astronomical images. We have shown that they achieve good performance on test data, both real and simulated. By delivering posterior probabilities, MAXIMASK and MAXITRACK give the user the flexibility to set appropriate threshold levels and achieve the desired TPR/FPR trade-offs depending on the scientific objectives and requirements. Both classifiers require no input parameters or knowledge of the camera properties.

Even though the mix of contaminants in the training set is unrealistic, being dictated by training requirements, we have checked that this does not impact performance. Output probabilities can be corrected to adapt the behavior of MAXIMASK to any mix of contaminants in the data.

We are aware that several types of contaminants and images are missing from the current version and may be added in the future.

Among the missing local contaminants, two classes are particularly prominent: optical and electronic ghosts. Unwanted reflections within the optics result in stray light in exposures. These reflections can produce spurious images of bright sources, commonly referred to as “optical ghosts”. Sometimes, reflections from very bright stars outside the field of view may also be seen. Detectors read through multiple ports also suffer from a form of electronic ghost known as cross-talk. Electronic cross-talk causes bright sources in one of the CCD quadrants to generate a ghost pattern in other quadrants. The ghosts may be negative or positive and are typically at the level of 1:10⁴. Both effects are a significant source of nuisance in wide-field exposures, especially in crowded fields and deep images, where they generate false, transient sources, and can affect high-precision astrometric and photometric measurements.

Another category of common issues is defocused or excessively aberrated exposures, as well as trails caused by charge transfer inefficiency, all of which could easily be implemented in MAXITRACK.

Also, the training set used in the current version of MAXIMASK and MAXITRACK does not include images from space-borne telescopes nor, more generally, diffraction-limited imagers. Therefore, they are unlikely to perform optimally with such data, although limited testing indicates that they may remain usable for most features; an example of a prediction on HST data is shown in Figs. A.5 and A.6.

Finally, MAXIMASK could be extended to not only detect contaminants, but also to generate an inpainted (i.e., “corrected”) version of the damaged image areas wherever possible.


1. DECam images sometimes also exhibit a horizontal spike of unknown origin (Melchior et al. 2016).

2. Including astrophysical sources in the “background” class can seem somewhat counter-intuitive in a purely astronomical context, but for consistency we choose to follow the computer vision terminology and meaning.

Acknowledgments

M. P. acknowledges financial support from the Centre National d’Etudes Spatiales (CNES) fellowship program. We are grateful to Mike Read, of the Royal Observatory, Edinburgh, for providing us with data from the UKIRT telescope, and to Vincent Lepetit for providing comments and suggestions that helped improve the paper. This research has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 682903, P.I. H. Bouy), and from the French State in the framework of the “Investments for the future” Program, IdEx Bordeaux, reference ANR-10-IDEX-03-02. We gratefully acknowledge the support of NVIDIA Corporation with the donation of one of the Titan Xp GPUs used for this research. This research draws upon data distributed by the NOAO Science Archive. NOAO is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. Based on observations made with the Isaac Newton Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. This paper makes use of data obtained from the Isaac Newton Group Archive which is maintained as part of the CASU Astronomical Data Centre at the Institute of Astronomy, Cambridge. Based on data obtained from the ESO Science Archive Facility. This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency. Based in part on data collected at Subaru Telescope which is operated by the National Astronomical Observatory of Japan and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at http://dm.lsst.org. 
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen’s University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE) and the Los Alamos National Laboratory. Based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan. This paper includes data gathered with the Swope telescope located at Las Campanas Observatory, Chile. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l’Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This research has made use of NASA’s Astrophysics Data System Bibliographic Services. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013, http://dx.doi.org/10.1051/0004-6361/201322068). The Herschel spacecraft was designed, built, tested, and launched under a contract to ESA managed by the Herschel/Planck Project team by an industrial consortium under the overall responsibility of the prime contractor Thales Alenia Space (Cannes), and including Astrium (Friedrichshafen) responsible for the payload module and for system testing at spacecraft level, Thales Alenia Space (Turin) responsible for the service module, and Astrium (Toulouse) responsible for the telescope, with in excess of a hundred subcontractors

References

  1. Abadi, M., Barham, P., Chen, J., et al. 2016, in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16), 16, 265
  2. Autry, R. G., Probst, R. G., Starr, B. M., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, eds. M. Iye, & A. F. M. Moorwood, 4841, 525
  3. Badrinarayanan, V., Kendall, A., & Cipolla, R. 2015, ArXiv e-prints [arXiv:1511.00561]
  4. Badrinarayanan, V., Kendall, A., & Cipolla, R. 2017, IEEE Trans. Pattern Anal. Mach. Intell., 39, 2481
  5. Bailer-Jones, C. A., Smith, K., Tiede, C., Sordo, R., & Vallenari, A. 2008, MNRAS, 391, 1838
  6. Bektešević, D., Vinković, D., Rasmussen, A., & Ivezić, Ž. 2018, MNRAS, 474, 4837
  7. Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, PASP, 131, 018002
  8. Bertin, E. 2009, Mem. Soc. Astron. It., 80, 422
  9. Bertin, E. 2013, Astrophysics Source Code Library [record ascl:1301.001]
  10. Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393
  11. Bosch, J., Armstrong, R., Bickerton, S., et al. 2018, PASJ, 70, S5
  12. Boulade, O., Charlot, X., Abbon, P., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, eds. M. Iye, & A. F. M. Moorwood, 4841, 72
  13. Bouy, H., Bertin, E., Moraux, E., et al. 2013, A&A, 554, A101
  14. Casali, M., Adamson, A., Alves de Oliveira, C., et al. 2007, A&A, 467, 777
  15. Cuillandre, J. C., Luppino, G. A., Starr, B. M., & Isani, S. 2000, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, eds. M. Iye, & A. F. Moorwood, 4008, 1010
  16. Dalton, G. B., Caldwell, M., Ward, A. K., et al. 2006, Proc. SPIE, 6269, 62690X
  17. Flaugher, B. L., Abbott, T. M. C., Annis, J., et al. 2010, in Ground-based and Airborne Instrumentation for Astronomy III, Proc. SPIE, 7735, 77350D
  18. Garcia-Garcia, A., Orts-Escolano, S., Oprea, S., Villena-Martinez, V., & Garcia-Rodriguez, J. 2017, ArXiv e-prints [arXiv:1704.06857]
  19. Griffin, M. J., Abergel, A., Abreu, A., et al. 2010, A&A, 518, L3
  20. Hampshire, II, J. B., & Pearlmutter, B. 1991, Connectionist Models (Elsevier), 159
  21. Ienaka, N., Kawara, K., Matsuoka, Y., et al. 2013, ApJ, 767, 80
  22. Ives, D. 1998, IEEE Spectrum, 16, 20
  23. Kawanomoto, S., Komiyama, Y., & Yagi, M. 2016a, in Subaru Users’ Meeting FY2016
  24. Kawanomoto, Y., Yagi, M., & Kawanomoto, S. 2016b, in Subaru Users’ Meeting FY2016
  25. Kingma, D. P., & Ba, J. 2014, ArXiv e-prints [arXiv:1412.6980]
  26. Krizhevsky, A., Sutskever, I., & Hinton, G. E. 2012, in Advances in Neural Information Processing Systems, 1097
  27. Kuijken, K., Bender, R., Cappellaro, E., et al. 2002, The Messenger, 110, 15
  28. LeCun, Y., & Bengio, Y. 1995, The Handbook of Brain Theory and Neural Networks (Cambridge: MIT Press), 3361
  29. Long, J., Shelhamer, E., & Darrell, T. 2015, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431
  30. Long, K. S., Baggett, S. M., & MacKenty, J. W. 2015, Persistence in the WFC3 IR Detector: an Improved Model Incorporating the Effects of Exposure Time, Tech. rep.
  31. Lowe, D. G. 1999, ICCV’99: Proceedings of the International Conference on Computer Vision, 1150
  32. Magnier, E. A., & Cuillandre, J.-C. 2004, PASP, 116, 449
  33. Matthews, B. W. 1975, Biochimica et Biophysica Acta (BBA)-Protein Structure, 405, 442
  34. McCully, C., Crawford, S., Kovacs, G., et al. 2018, https://doi.org/10.5281/zenodo.1482019
  35. Melchior, P., Sheldon, E., Drlica-Wagner, A., et al. 2016, Astron. Comput., 16, 99
  36. Metzger, M. R., Luppino, G. A., & Miyazaki, S. 1995, Bull. Am. Astron. Soc., 27, 1389
  37. Miville-Deschênes, M.-A., Duc, P.-A., Marleau, F., et al. 2016, A&A, 593, A4
  38. Miyazaki, S., Komiyama, Y., Kawanomoto, S., et al. 2018, PASJ, 70, S1
  39. Morganson, E., Gruendl, R. A., Menanteau, F., et al. 2018, PASP, 130, 074501
  40. Nir, G., Zackay, B., & Ofek, E. O. 2018, AJ, 156, 229
  41. Ordénovic, C., Surace, C., Torrésani, B., & Llébaria, A. 2008, Stat. Methodol., 5, 373
  42. Pilbratt, G. L., Riedinger, J. R., Passvogel, T., et al. 2010, A&A, 518, L1
  43. Rheault, J. P., Mondrik, N. P., DePoy, D. L., Marshall, J. L., & Suntzeff, N. B. 2014, Spectrophotometric Calibration of the Swope and duPont Telescopes for the Carnegie Supernova Project 2
  44. Richard, M. D., & Lippmann, R. P. 1991, Neural Comput., 3, 461
  45. Rojas, R. 1996, Neural Comput., 8, 41
  46. Rubinstein, R. 1999, Methodol. Comput. Appl. Probab., 1, 127
  47. Ruder, S. 2016, ArXiv e-prints [arXiv:1609.04747]
  48. Saerens, M., Latinne, P., & Decaestecker, C. 2002, Neural Comput., 14, 21
  49. Simonyan, K., & Zisserman, A. 2014, ArXiv e-prints [arXiv:1409.1556]
  50. Szegedy, C., Liu, W., Jia, Y., et al. 2015, ArXiv e-prints [arXiv:1409.4842]
  51. Valdes, F., Gruendl, R., & DES Project 2014, in Astronomical Data Analysis Software and Systems XXIII, eds. N. Manset, & P. Forshay, ASP Conf. Ser., 485, 379
  52. van Dokkum, P. G. 2001, PASP, 113, 1420
  53. Vandame, B. 2002, in Astronomical Data Analysis II, eds. J. L. Starck, & F. D. Murtagh, SPIE Conf. Ser., 4847, 123
  54. Williams, C. K. I. 1998, in Prediction with Gaussian Processes: From Linear Regression to Linear Prediction and Beyond, ed. M. I. Jordan (Dordrecht: Springer), 599
  55. Wolfe, T., Armandroff, T., Blouke, M. M., et al. 2000, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, eds. M. M. Blouke, N. Sampat, G. M. Williams, & T. Yeh, 3965, 80
  56. Yang, T., Wu, Y., Zhao, J., & Guan, L. 2018, Cognit. Syst. Res., 53, 20

Appendix A: Performance metric curves and qualitative tests

thumbnail Fig. A.1.

ROC curves: TPR vs. FPR. The FPR axis is in logarithmic scale so that very low FPR values are easier to visualize. The ROC curve and the AUC are provided for each class.

thumbnail Fig. A.1.

continued.

thumbnail Fig. A.2.

MC coefficient curves: MC coefficient vs. detection threshold. On each curve, the threshold giving the highest MC coefficient is annotated. These curves were computed using probabilities corrected with the empirical training priors.

thumbnail Fig. A.2.

continued.

thumbnail Fig. A.3.

Purity curves: TPR vs. PUR.

thumbnail Fig. A.3.

continued.

thumbnail Fig. A.4.

Prediction example for an instrument not used in training: ZTF (Bellm et al. 2019). Left: a science image exposure. Top right: mask from the ZTF pipeline. Bottom right: flagging by MAXIMASK; the trail is correctly recovered. Also, the MAXIMASK CNN correctly flags pixels where the trail overlaps sources, whereas the ZTF pipeline flags all pixels (those belonging only to the trail, only to sources, or to both) as both trail and source.

thumbnail Fig. A.5.

Example of a prediction for a space instrument (HST) not used in training (ACS exposure). Left: a calibrated (flat-fielded, CTE-corrected) individual exposure of a stellar field in the Pleiades. Top right: fully calibrated, geometrically-corrected, dither-combined image where cosmic rays and artifacts have been removed. Bottom right: MAXIMASK contaminant identification. Each class is assigned a color so that the ground truth can be represented as a single image (red: CR, dark green: HCL, dark blue: BCL, green: HP, blue: BP, yellow: P, orange: TRL, gray: FR, light gray: NEB, purple: SAT, light purple: SP, brown: OV, pink: BBG, dark gray: BG). Pixels that belong to several classes are represented in black. For the sake of visualization, hot and dead pixel masks have been morphologically dilated so that they appear as 3 × 3 pixel areas in this representation.

thumbnail Fig. A.6.

Same as Fig. A.5 at a different location in the field to illustrate the ability of MAXIMASK to differentiate poorly sampled stellar images from cosmic rays.

