Issue | A&A, Volume 644, December 2020
---|---
Article Number | A163
Number of page(s) | 27
Section | Cosmology (including clusters of galaxies)
DOI | https://doi.org/10.1051/0004-6361/202038219
Published online | 15 December 2020
HOLISMOKES
II. Identifying galaxy-scale strong gravitational lenses in Pan-STARRS using convolutional neural networks⋆
1 Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany
e-mail: rcanameras@mpa-garching.mpg.de
2 Physik Department, Technische Universität München, James-Franck-Str. 1, 85741 Garching, Germany
3 Institute of Astronomy and Astrophysics, Academia Sinica, 11F of ASMAB, No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan
4 Technical University of Munich, Department of Informatics, Boltzmann-Str. 3, 85748 Garching, Germany
5 Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland
Received: 21 April 2020
Accepted: 4 June 2020
We present a systematic search for wide-separation (Einstein radius θE ≳ 1.5″), galaxy-scale strong lenses in the 30 000 deg² of the Pan-STARRS 3π survey of the northern sky. With long time delays of a few days to weeks, these systems are particularly well suited for catching strongly lensed supernovae with spatially resolved multiple images, and they offer new insights into early-phase supernova spectroscopy and cosmography. We produced a set of realistic simulations by painting lensed COSMOS sources on Pan-STARRS image cutouts of lens luminous red galaxies (LRGs) with redshifts and velocity dispersions known from the Sloan Digital Sky Survey (SDSS). First, we computed the photometry of the mock lenses in the gri bands and applied a simple catalog-level neural network to identify a sample of 1 050 207 galaxies with colors and magnitudes similar to those of the mocks. Second, we trained a convolutional neural network (CNN) on Pan-STARRS gri image cutouts to classify this sample, obtaining sets of 105 760 and 12 382 lens candidates with scores of pCNN > 0.5 and > 0.9, respectively. Extensive tests showed that the CNN performance relies heavily on the design of the lens simulations and on the choice of negative examples used for training, but little on the network architecture. The CNN correctly classified 14 out of 16 test lenses, namely previously confirmed lens systems above the detection limit of Pan-STARRS. Finally, we visually inspected all galaxies with pCNN > 0.9 to assemble a final set of 330 high-quality, newly discovered lens candidates, while recovering 23 published systems. For a subset, SDSS spectroscopy of the lens central regions proves that our method correctly identifies lens LRGs at z ∼ 0.1–0.7. Five spectra also show robust signatures of high-redshift background sources, and Pan-STARRS imaging confirms one of them as a quadruply imaged red source at zs = 1.185, which is likely a recently quenched galaxy strongly lensed by a foreground LRG at zd = 0.3155. In the future, high-resolution imaging and spectroscopic follow-up will be required to validate the Pan-STARRS lens candidates and derive strong lensing models. We also expect that the efficient and automated two-step classification method presented in this paper will be applicable, with minor adjustments, to the ∼4 mag deeper gri stacks from the Rubin Observatory Legacy Survey of Space and Time (LSST).
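For orientation only, the sketch below mirrors the two-step classification outlined in the abstract: a catalog-level network that pre-selects galaxies from their gri photometry, followed by a CNN that scores gri image cutouts and yields pCNN. The class names, layer sizes, feature choices, and placeholder inputs are assumptions made for this illustration and do not reproduce the actual architectures used in the paper; only the pCNN > 0.5 and > 0.9 thresholds are taken from the abstract.

```python
# Illustrative two-step lens classification sketch (assumed architectures,
# not the networks described in the paper).
import torch
import torch.nn as nn

class CatalogPreselector(nn.Module):
    """Step 1 (hypothetical): classify objects from gri magnitudes and colors."""
    def __init__(self, n_features: int = 5):  # e.g. g, r, i, g-r, r-i
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),  # logit: lens-like photometry vs. not
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

class CutoutCNN(nn.Module):
    """Step 2 (hypothetical): score 3-band gri image cutouts, pCNN in [0, 1]."""
    def __init__(self, cutout_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 64 * (cutout_size // 8) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 128), nn.ReLU(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x)))

# Usage sketch on placeholder data: pre-select on photometry, then keep
# cutouts with pCNN > 0.9 for visual inspection.
photometry = torch.randn(8, 5)         # toy catalog features
cutouts = torch.randn(8, 3, 64, 64)    # toy gri cutouts
preselected = CatalogPreselector()(photometry).squeeze(1) > 0.5
scores = CutoutCNN()(cutouts).squeeze(1)
candidates = (scores > 0.9) & preselected
```

In this sketch the catalog-level step only reduces the number of objects that need image cutouts, so the CNN runs on a much smaller sample; this mimics the efficiency argument made for the two-step method, but the real networks, features, and training sets are those described in the paper itself.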
Key words: gravitational lensing: strong / methods: data analysis / galaxies: distances and redshifts / surveys
Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/644/A163
© R. Cañameras et al. 2020
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Open Access funding provided by Max Planck Society.