A&A, Volume 683, March 2024

Article Number: A26
Number of page(s): 17
Section: Extragalactic astronomy
DOI: https://doi.org/10.1051/0004-6361/202347395
Published online: 29 February 2024
Multimodality for improved CNN photometric redshifts
1. Aix Marseille Univ., CNRS, CNES, LAM, 13388 Marseille, France
   e-mail: reda.ait-ouahmed@lam.fr
2. AMIS – Université Paul-Valéry – Montpellier 3, 34000 Montpellier, France
3. UMR TETIS – INRAE, AgroParisTech, Cirad, CNRS, 34000 Montpellier, France
4. Sorbonne Université, CNRS, UMR 7095, Institut d’Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France
Received: 7 July 2023 / Accepted: 12 September 2023
Photometric redshift estimation plays a crucial role in modern cosmological surveys for studying the universe’s large-scale structures and the evolution of galaxies. Deep learning has emerged as a powerful method to produce accurate photometric redshift estimates from multiband images of galaxies. Here, we introduce a multimodal approach consisting of the parallel processing of several subsets of prior image bands, the outputs of which are then merged for further processing through a convolutional neural network (CNN). We evaluate the performance of our method using three surveys: the Sloan Digital Sky Survey (SDSS), the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS), and the Hyper Suprime-Cam (HSC). By improving the model’s ability to capture information embedded in the correlation between different bands, our technique surpasses state-of-the-art photometric redshift precision. We find that the positive gain does not depend on the specific architecture of the CNN and that it increases with the number of photometric filters available.
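The abstract describes a multimodal architecture in which several subsets of the input image bands are processed by parallel convolutional branches whose outputs are merged before further processing. The following is a minimal NumPy sketch of that idea only; the band subsets, filter counts, image size, and the simple linear head are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2D convolution. x: (C, H, W), w: (F, C, k, k) -> (F, H-k+1, W-k+1)."""
    F, C, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((F, H, W))
    for f in range(F):
        for i in range(H):
            for j in range(W):
                out[f, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[f])
    return out

def branch(x, w):
    """One modality branch: convolution + ReLU + global average pooling."""
    feat = np.maximum(conv2d(x, w), 0.0)
    return feat.mean(axis=(1, 2))  # feature vector of length F

# Toy 5-band galaxy image (e.g. the ugriz bands of SDSS), 16x16 pixels.
image = rng.normal(size=(5, 16, 16))

# Hypothetical overlapping band subsets, each fed to its own branch so the
# network can focus on correlations within each subset.
subsets = [(0, 1), (1, 2, 3), (3, 4)]
filters = [rng.normal(size=(4, len(s), 3, 3)) * 0.1 for s in subsets]

# Parallel branches, then merge by concatenating the branch features.
features = np.concatenate([branch(image[list(s)], w)
                           for s, w in zip(subsets, filters)])  # shape (12,)

# Illustrative linear head mapping the merged features to a redshift estimate;
# the actual model continues with further convolutional processing.
head_w = rng.normal(size=features.shape) * 0.1
z_hat = float(features @ head_w)
print(features.shape, z_hat)
```

In a real implementation the branches and the post-merge layers would be trainable CNN stages in a deep-learning framework; the sketch only makes the data flow concrete: split the bands, extract features per subset in parallel, concatenate, then predict.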
Key words: methods: data analysis / techniques: image processing / surveys / galaxies: distances and redshifts / galaxies: high-redshift / galaxies: photometry
© The Authors 2024
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model. Subscribe to A&A to support open access publication.