Table 1. Summary of the neural compression schemes used for weak-lensing applications.

Reference                   | Loss function            | Inference strategy
Gupta et al. (2018)         | MAE                      | Likelihood-based analysis
Fluri et al. (2018)         | GNLL                     | Likelihood-based analysis
Fluri et al. (2019)         | GNLL                     | Likelihood-based analysis
Ribli et al. (2019)         | MAE                      | Likelihood-based analysis
Matilla et al. (2020)       | MAE                      | Likelihood-based analysis
Jeffrey et al. (2021)       | MSE, VMIM                | Likelihood-free inference (Py-Delfi)
Fluri et al. (2021)         | IMNN                     | Likelihood-free inference (GPABC)
Fluri et al. (2022)         | IMNN                     | Likelihood-free inference (GPABC)
Lu et al. (2022)            | MSE                      | Likelihood-based analysis
Kacprzak & Fluri (2022)     | GNLL                     | Likelihood-based analysis
Lu et al. (2023)            | MSE                      | Likelihood-based analysis
Akhmetzhanova et al. (2024) | VICReg                   | Likelihood-free inference (SNPE)
Sharma et al. (2024)        | MSE, MSEPCA, MSENP, VMIM | Likelihood-based analysis
Jeffrey et al. (2024)       | MSE                      | Likelihood-free inference (Py-Delfi)

Notes. Gray boxes correspond to analyses performed on real data. Abbreviations used in the table: MSE: mean squared error; MSENP: mean squared error in S8 space; MSEPCA: mean squared error in PCA space; MAE: mean absolute error; GNLL: Gaussian negative log-likelihood; VMIM: variational mutual information maximization; VICReg: variance-invariance-covariance regularization; IMNN: information maximizing neural network; GPABC: Gaussian process approximate Bayesian computation.
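For orientation, the regression-style and VMIM losses listed above can be written in their generic forms for a training set $\{(x_i,\theta_i)\}_{i=1}^{N}$ of simulated maps $x_i$ with parameters $\theta_i$. The expressions below are a sketch of these standard definitions, not necessarily the exact variants adopted in each reference: $\hat{\theta}_\phi$ denotes the compression network, $\hat{\sigma}_\phi$ a predicted uncertainty (GNLL is written here in its one-dimensional form), $\hat{t}_\phi$ the learned summary, and $q_\varphi$ a conditional density estimator trained jointly with the compressor.

\begin{align}
  \mathcal{L}_{\rm MSE}  &= \frac{1}{N}\sum_{i=1}^{N}\bigl\|\theta_i-\hat{\theta}_\phi(x_i)\bigr\|_2^{2},\\
  \mathcal{L}_{\rm MAE}  &= \frac{1}{N}\sum_{i=1}^{N}\bigl\|\theta_i-\hat{\theta}_\phi(x_i)\bigr\|_1,\\
  \mathcal{L}_{\rm GNLL} &= \frac{1}{N}\sum_{i=1}^{N}\left[\frac{\bigl(\theta_i-\hat{\theta}_\phi(x_i)\bigr)^{2}}{2\,\hat{\sigma}^{2}_\phi(x_i)}+\frac{1}{2}\ln\hat{\sigma}^{2}_\phi(x_i)\right],\\
  \mathcal{L}_{\rm VMIM} &= -\frac{1}{N}\sum_{i=1}^{N}\ln q_\varphi\!\bigl(\theta_i \mid \hat{t}_\phi(x_i)\bigr).
\end{align}

Minimizing $\mathcal{L}_{\rm VMIM}$ maximizes a variational lower bound on the mutual information between the summary $\hat{t}_\phi(x)$ and the parameters $\theta$. The MSEPCA and MSENP variants apply the MSE in PCA-transformed and S8 space, respectively, while IMNN and VICReg optimize Fisher-information and self-supervised objectives that do not reduce to a single regression formula and are therefore not written out here.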
