Table 3
Mean combined scores of all source-finding pipelines.
| Pipeline | Real | Mock | Total | Mean |
|---|---|---|---|---|
| SoFiA + random forest | 0.689 | 0.684 | 0.730 | 0.686 |
| SoFiA | 0.651 | 0.703 | 0.747 | 0.677 |
| V-Net + random forest | 0.472 | 0.763 | 0.773 | 0.610 |
| V-Net | 0.449 | 0.752 | 0.764 | 0.600 |
| MTO + random forest | 0.317 | 0.553 | 0.563 | 0.435 |
| MTO | 0.201 | 0.421 | 0.420 | 0.311 |
Notes. The ‘Total’ score measures a pipeline’s ability to find all the real sources and mock galaxies together, while the ‘Mean’ score is the mean of the ‘Mock’ and ‘Real’ scores, which evaluate the detections of mock galaxies and real sources separately. Pipelines marked ‘+ random forest’ used a random forest classifier to post-process the detections.
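As a consistency check (assuming the ‘Mean’ column is the unweighted average of the ‘Real’ and ‘Mock’ scores described in the notes), the SoFiA row is reproduced by

$$
\text{Mean} = \frac{\text{Real} + \text{Mock}}{2} = \frac{0.651 + 0.703}{2} = 0.677 .
$$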