| Issue | A&A, Volume 690, October 2024 |
|---|---|
| Article Number | A310 |
| Number of page(s) | 15 |
| Section | Numerical methods and codes |
| DOI | https://doi.org/10.1051/0004-6361/202449964 |
| Published online | 17 October 2024 |
Self-supervised learning on MeerKAT wide-field continuum images
1 Department of Computer Science, University of Geneva, 7 route de Drize, 1227 Carouge, Switzerland
2 Department of Astronomy, University of Geneva, 51 Chemin Pegasi, 1290 Versoix, Switzerland
★ Corresponding author; erica.lastufka@unige.ch
Received: 13 March 2024
Accepted: 9 August 2024
Context. Self-supervised learning (SSL) applied to natural images has demonstrated a remarkable ability to learn meaningful, low-dimensional representations without labels, resulting in models that are adaptable to many different tasks. Until now, applications of SSL to astronomical images have been limited to Galaxy Zoo datasets, which require a significant amount of preprocessing to prepare sparse images centered on a single galaxy. With wide-field survey instruments at the forefront of the Square Kilometre Array (SKA) era, this approach to gathering training data is impractical.
Aims. We demonstrate that continuum images from surveys such as the MeerKAT Galactic Cluster Legacy Survey (MGCLS) can be successfully used with SSL, without extracting single-galaxy cutouts.
Methods. Using the SSL framework DINO, we experimented with various preprocessing steps, augmentations, and architectures to determine the optimal approach for this data. We trained both ResNet50 and Vision Transformer (ViT) backbones.
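As a rough illustration of the DINO-style student-teacher setup named above, the sketch below builds a ResNet50 backbone with a projection head, a momentum (EMA) teacher, and a sharpened cross-entropy loss. It assumes PyTorch and torchvision; all hyperparameters are placeholders, and the full DINO recipe (multi-crop augmentation, output centering, schedules) is omitted, so this is not the paper's exact configuration.

```python
# Minimal sketch of a DINO-style student/teacher setup (assumed PyTorch/torchvision).
# Hyperparameters (temperatures, momentum, head sizes) are illustrative only.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class DinoHead(nn.Module):
    """Projection head mapping backbone features to prototype logits."""
    def __init__(self, in_dim: int, out_dim: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 2048), nn.GELU(),
            nn.Linear(2048, out_dim),
        )

    def forward(self, x):
        return self.mlp(x)

def build_model():
    backbone = resnet50(weights=None)
    feat_dim = backbone.fc.in_features
    backbone.fc = nn.Identity()          # expose pooled features
    return nn.Sequential(backbone, DinoHead(feat_dim))

student = build_model()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad = False              # teacher is updated only by EMA

def dino_loss(student_out, teacher_out, t_s=0.1, t_t=0.04):
    """Cross-entropy between sharpened teacher and student distributions
    (the full DINO loss also centers the teacher outputs, omitted here)."""
    teacher_probs = F.softmax(teacher_out / t_t, dim=-1).detach()
    return -(teacher_probs * F.log_softmax(student_out / t_s, dim=-1)).sum(-1).mean()

@torch.no_grad()
def ema_update(student, teacher, m=0.996):
    """Momentum (EMA) update of the teacher weights from the student."""
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)
```

A ViT backbone could be swapped in for the ResNet50 by replacing `build_model` with any encoder that returns a pooled feature vector.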
Results. Our models match state-of-the-art results (trained on Radio Galaxy Zoo) for FRI/FRII morphology classification. Furthermore, they predict the number of compact sources via linear regression with much higher accuracy. Open-source foundation models trained on natural images, such as DINOv2, also excel at simple FRI/FRII classification; the advantage of domain-specific backbones is that they are much smaller models trained on far less data. Smaller models are more efficient to fine-tune, and doing so yields similar performance among our models, the state-of-the-art models, and the open-source models on multi-class morphology classification.
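The linear-evaluation protocol implied here can be sketched as follows: frozen backbone embeddings are probed with a linear classifier for FRI/FRII morphology and a linear regression for compact-source counts. The arrays below are random stand-ins for the extracted features and their annotations, not the paper's data; scikit-learn is assumed.

```python
# Hedged sketch of linear probing on frozen backbone features.
# `embeddings`, `fr_labels`, and `source_counts` are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 2048))       # stand-in for frozen features
fr_labels = rng.integers(0, 2, size=1000)        # stand-in FRI/FRII labels
source_counts = rng.poisson(5, size=1000)        # stand-in compact-source counts

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    embeddings, fr_labels, source_counts, test_size=0.2, random_state=0)

# FRI/FRII morphology: linear classifier on the frozen features.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("FRI/FRII accuracy:", clf.score(X_te, y_te))

# Number of compact sources: linear regression on the same features.
reg = LinearRegression().fit(X_tr, c_tr)
print("source-count R^2:", reg.score(X_te, c_te))
```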
Conclusions. Using source-rich crops from wide-field images to train multi-purpose models is an easily scalable approach that significantly reduces data preparation time. For the tasks evaluated in this work, twenty thousand crops are sufficient training data for models that produce results comparable to the state of the art. In the future, complex tasks such as source detection and characterization, together with domain-specific tasks, should demonstrate the true advantages of models trained on radio astronomy data over natural-image foundation models.
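A minimal sketch of producing source-rich crops from a wide-field continuum image is given below, assuming the image is already loaded as a 2D NumPy array (for example from a FITS file via astropy). The crop size and peak-brightness threshold are hypothetical stand-ins, not the paper's actual selection criteria.

```python
# Illustrative tiling of a wide-field image into crops; parameters are hypothetical.
import numpy as np

def extract_crops(image: np.ndarray, size: int = 256, min_peak: float = 0.0):
    """Tile a 2D image into non-overlapping size x size crops, keeping those
    whose peak value exceeds min_peak (a crude proxy for containing sources)."""
    crops = []
    ny, nx = image.shape
    for y in range(0, ny - size + 1, size):
        for x in range(0, nx - size + 1, size):
            crop = image[y:y + size, x:x + size]
            if np.nanmax(crop) > min_peak:
                crops.append(crop)
    return np.stack(crops) if crops else np.empty((0, size, size))
```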
Key words: methods: data analysis / techniques: image processing / radio continuum: general
© The Authors 2024
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.