Issue: A&A, Volume 611, March 2018
Article Number: A2
Number of page(s): 9
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201731201
Published online: 13 March 2018
Deep convolutional neural networks as strong gravitational lens detectors
1 Institute of Physics, Laboratory of Astrophysics, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1250 Versoix, Switzerland
e-mail: christophernstrerne.schaefer@epfl.ch
2 Aix-Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille), UMR 7326, 13388 Marseille, France
Received: 19 May 2017 / Accepted: 22 October 2017
Context. Future large-scale surveys with high-resolution imaging will provide us with approximately 10^5 new strong galaxy-scale lenses. These strong-lensing systems will, however, be buried in data volumes far beyond the capacity of human experts to classify visually in an unbiased way. Aims. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the strong-lensing challenge organized by the Bologna Lens Factory, where it achieved first and third place on the space-based and ground-based data sets, respectively. The goal was to find a fully automated lens finder for ground-based and space-based surveys that minimizes human inspection.
Methods. We compared the results of our CNN architecture and three new variations (“invariant”, “views”, and “residual”) on the simulated data of the challenge. Each method was trained separately five times on 17 000 simulated images, cross-validated using 3000 images, and then applied to a test set of 100 000 images. We used two different metrics for evaluation: the area under the receiver operating characteristic curve (AUC) score, and the recall at zero false positives (Recall0FP).
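The two evaluation metrics can be computed directly from a classifier's scores. The sketch below is an illustration, not the challenge's official scoring code: AUC is the probability that a random positive outranks a random negative (rank-based formula), and Recall0FP is the fraction of positives scored strictly above the highest-scoring negative, i.e. the recall at the last threshold that admits no false positive. The toy `y`/`s` arrays are invented for demonstration.

```python
import numpy as np

def auc_score(y_true, scores):
    """Rank-based AUC: P(random positive outranks random negative)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Count positive/negative pairs where the positive wins (ties count half).
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def recall_0fp(y_true, scores):
    """Recall at zero false positives: positives above the best-scored negative."""
    threshold = scores[y_true == 0].max()
    pos = scores[y_true == 1]
    return (pos > threshold).mean()

# Toy example: 4 lenses (label 1) and 4 non-lenses (label 0).
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.1, 0.05])
print(auc_score(y, s))   # 0.9375 (15 of 16 pairs correctly ranked)
print(recall_0fp(y, s))  # 0.75 (three positives score above 0.6)
```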
Results. For ground-based data, our best method achieved an AUC score of 0.977 and a Recall0FP of 0.50. For space-based data, our best method achieved an AUC score of 0.940 and a Recall0FP of 0.32. Adding dihedral invariance to the CNN architecture diminished the overall score on space-based data, but achieved a higher no-contamination recall. We found that using committees of five CNNs produced the best recall at zero contamination and consistently scored better AUC than a single CNN.
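A committee of CNNs of the kind described above can be formed by averaging the per-image scores of the independently trained networks, which smooths out each member's idiosyncratic errors. This is a minimal sketch with randomly generated stand-in scores; the array shapes and numbers are assumptions for illustration, not the paper's actual outputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the outputs of five independently trained CNNs:
# each row holds one network's lens-probability scores for 10 test images.
member_scores = rng.uniform(size=(5, 10))

# The committee score for each image is the mean over the five networks.
committee = member_scores.mean(axis=0)

print(committee.shape)  # (10,)
# Averaged probabilities stay in [0, 1].
print(bool((committee >= 0).all() and (committee <= 1).all()))  # True
```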
Conclusions. We found that every variation of our CNN lens finder achieved an AUC score within 6% of 1. A deeper network did not outperform the simpler CNN models either, which indicates that more complex networks are not needed to model the simulated lenses. To verify this, more realistic lens simulations containing more lens-like structures (spiral galaxies or ring galaxies) are needed to compare the performance of deeper and shallower networks.
Key words: gravitational lensing: strong / methods: numerical / methods: data analysis / techniques: image processing / cosmology: observations / dark matter
© ESO 2018