Table 2

Purity, completeness, and F1-score.

| Model | Normalization | Input | Aggregation | Purity | Completeness | F1 |
|---|---|---|---|---|---|---|
| Diffusion | γ = 1 | dirty image | single run | 100.00 | 44.97 | 62.04 |
| Diffusion | γ = 1 | dirty image | detect-aggregate | 100.00 | 53.52 | 69.72 |
| Diffusion | γ = 1 | dirty image | mean | 99.60 | 93.11 | 96.24 |
| Diffusion | γ = 1 | dirty image | median | 99.83 | 90.84 | 95.12 |
| Diffusion | γ = 2 | dirty image | single run | 97.45 | 96.71 | 97.08 |
| Diffusion | γ = 2 | dirty image | detect-aggregate | 98.33 | 97.88 | 98.10 |
| Diffusion | γ = 2 | dirty image | mean | 97.88 | 97.80 | 97.84 |
| Diffusion | γ = 2 | dirty image | median | 99.30 | 97.05 | 98.16 |
| Diffusion | γ = 10 | dirty image | single run | 98.66 | 94.66 | 96.62 |
| Diffusion | γ = 10 | dirty image | detect-aggregate | 98.90 | 95.69 | 97.27 |
| Diffusion | γ = 10 | dirty image | mean | 98.51 | 95.23 | 96.84 |
| Diffusion | γ = 10 | dirty image | median | 99.29 | 94.78 | 96.98 |
| Diffusion | γ = 20 | dirty image | single run | 98.26 | 94.43 | 95.81 |
| Diffusion | γ = 20 | dirty image | detect-aggregate | 99.52 | 94.70 | 97.05 |
| Diffusion | γ = 20 | dirty image | mean | 99.52 | 93.64 | 96.49 |
| Diffusion | γ = 20 | dirty image | median | 99.52 | 93.79 | 96.57 |
| Diffusion | γ = 30 | dirty image | single run | 98.58 | 93.46 | 94.93 |
| Diffusion | γ = 30 | dirty image | detect-aggregate | 99.20 | 94.10 | 96.58 |
| Diffusion | γ = 30 | dirty image | mean | 98.80 | 93.41 | 96.03 |
| Diffusion | γ = 30 | dirty image | median | 98.81 | 94.25 | 96.47 |
| PyBDSF | – | clean image | – | 72.18 | 20.82 | 32.31 |
| Taran et al. (2023) | – | reduced uv-samples | – | 91.02 | 74.14 | 81.72 |
| Photutils localization | – | sky model | – | 99.70 | 99.10 | 99.40 |

Notes. The localization is performed on predictions that have been renormalized to the same range as the original sky model. The model configurations differ in the normalization power γ used during training and in the aggregation method employed. For normalizations with high root powers γ, the optimal aggregation method is detect-aggregate: it offers a good trade-off between purity and completeness, yielding the best F1-score. For lower γ, the best approach is aggregate-detect. We compare the results with Mohan & Rafferty (2015; PyBDSF) and Taran et al. (2023). We also run the Photutils algorithm directly on the sky models to quantify the error introduced by this step, denoted as Photutils localization. Italic font indicates the optimal metric value across all conducted experiments.
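The renormalization mentioned above undoes the root-power compression with power γ applied to the images during training. A minimal sketch of one plausible form of this transform and its inverse is given below; the exact transform, scaling constants, and function names are assumptions rather than the paper's implementation.

```python
import numpy as np

def root_power_normalize(x, gamma):
    """Compress the dynamic range with a signed root power: x -> sign(x) |x|**(1/gamma).

    Illustrative reading of the training-time normalization power gamma;
    the paper's exact transform and scaling may differ.
    """
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.abs(x) ** (1.0 / gamma)

def root_power_denormalize(y, gamma):
    """Invert the root-power mapping so predictions return to the sky-model range."""
    y = np.asarray(y, dtype=float)
    return np.sign(y) * np.abs(y) ** float(gamma)
```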

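The aggregation methods combine several diffusion realizations of the same field. A plausible reading of the mean and median entries is image-level aggregation followed by a single detection pass; the sketch below illustrates this variant, with the detector choice (Photutils DAOStarFinder) and its parameters as illustrative assumptions rather than the paper's exact localization configuration.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

def aggregate_then_detect(samples, method="median", fwhm=3.0, nsigma=5.0):
    """Aggregate several diffusion realizations of the same field into one
    image (mean or median), then run a Photutils peak finder on the result.

    `samples` is a stack of shape (n_realizations, ny, nx). The detector
    settings (DAOStarFinder, fwhm, nsigma) are illustrative only.
    """
    stack = np.asarray(samples, dtype=float)
    image = np.median(stack, axis=0) if method == "median" else np.mean(stack, axis=0)

    # Estimate the background level and noise, then detect significant peaks.
    bkg_mean, bkg_median, bkg_std = sigma_clipped_stats(image, sigma=3.0)
    finder = DAOStarFinder(fwhm=fwhm, threshold=nsigma * bkg_std)
    sources = finder(image - bkg_median)
    if sources is None:
        return np.empty((0, 2))
    return np.transpose([sources["xcentroid"], sources["ycentroid"]])
```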
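Finally, purity, completeness, and the F1-score can be computed from the predicted catalogue matched against the sky model, assuming purity and completeness correspond to the precision and recall of a one-to-one positional match; the function name and matching radius below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def purity_completeness_f1(pred_xy, true_xy, match_radius=2.0):
    """Match predicted to true source positions within `match_radius` pixels
    and compute purity (precision), completeness (recall), and F1.

    Greedy one-to-one matching: each true source can absorb at most one
    prediction. `match_radius` is an illustrative tolerance.
    """
    pred_xy = np.asarray(pred_xy, dtype=float)
    true_xy = np.asarray(true_xy, dtype=float)
    if len(pred_xy) == 0 or len(true_xy) == 0:
        return 0.0, 0.0, 0.0

    tree = cKDTree(true_xy)
    dist, idx = tree.query(pred_xy, k=1)

    matched_true = set()
    tp = 0
    # Process predictions from closest to farthest so ties favour the best match.
    for d, j in sorted(zip(dist, idx)):
        if d <= match_radius and j not in matched_true:
            matched_true.add(j)
            tp += 1

    purity = tp / len(pred_xy)        # TP / (TP + FP)
    completeness = tp / len(true_xy)  # TP / (TP + FN)
    f1 = (2 * purity * completeness / (purity + completeness)
          if (purity + completeness) > 0 else 0.0)
    return purity, completeness, f1
```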