Table E.2. Terminology
term | explanation |
---|---|
spectral parameters, labels | These are parametric descriptions of a spectrum, providing quantifiable metrics that characterize its properties. In the context of our study, spectral parameters include elements such as radial velocity, temperature, metallicity, gravity, airmass, and water vapor (Gray & Kaur 2019; Gullikson et al. 2014). |
ML parameters | ML parameters refer to the elements of an ML model that we fit to the training data. In the context of neural networks, this typically includes the internal weights and biases that are adjusted during training to minimize the loss function. |
ML hyperparameters | ML hyperparameters are the remaining parameters that complete the ML model description and are not tuned during training. They include choices such as the architecture, learning rate, batch size, and any other aspects not covered by the model parameters. We search for the optimal ML hyperparameters by training models with different hyperparameter sets and comparing their loss on the validation dataset. |
factor | A factor refers to an individual, independent source of variation within the dataset. Our models aim to identify and isolate these factors for a more interpretable and comprehensible data representation. |
latent representation | A compressed representation of the input data. |
bottleneck | The narrow layer of an AE that provides the latent representation. |
latent space | A space consisting of latent representations. |
label-aware | Models operating under supervised or semi-supervised paradigms. |
reconstruction | The goal of the reconstruction task is to reconstruct an input s from its latent representation b. |
disentanglement | A disentangled representation separates the underlying factors of variation. |
factor injection | Directly supervising the bottleneck with known factors. |
out-of-distribution data | Data whose distribution differs from that of the training dataset. |
transfer learning | Learning from one task and applying it to another task. For example, training a model on the ETC HARPS dataset and applying it to a real HARPS dataset. |
one-shot learning | Transfer learning from a single example per class. This is similar to how humans learn. |
zero-shot learning | Transfer learning without any new training data. We train only on real HARPS parameters and apply the model to the ETC HARPS dataset, or vice versa. |
downstream learning | Using the learned representation b for another task (such as predicting the source parameters). Downstream learning can provide a metric to evaluate the effectiveness of the data compression. |
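Several of the terms above (bottleneck, latent representation b, reconstruction of an input s, factor injection) can be illustrated with a minimal NumPy sketch. This is an illustrative toy, not the architecture used in the study: the linear encoder/decoder, the synthetic "spectra", the loss weights, and the learning rate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectra": 200 samples of 20 flux bins generated by 2 hidden factors.
n, d, k = 200, 20, 2
factors = rng.normal(size=(n, k))                  # ground-truth factors
spectra = factors @ rng.normal(size=(k, d)) + 0.01 * rng.normal(size=(n, d))

# Linear AE: encoder W_e maps the input s to the bottleneck b, decoder W_d maps back.
W_e = rng.normal(scale=0.1, size=(d, k))
W_d = rng.normal(scale=0.1, size=(k, d))
lr = 0.01

def recon_loss():
    return float(np.mean((spectra @ W_e @ W_d - spectra) ** 2))

loss_start = recon_loss()
for _ in range(2000):
    b = spectra @ W_e                              # latent representation b
    err = b @ W_d - spectra                        # reconstruction residual
    grad_Wd = b.T @ err * (2 / (n * d))            # d(MSE)/dW_d
    grad_b = err @ W_d.T * (2 / (n * d))           # gradient arriving at b
    # Factor injection: directly supervise the first bottleneck unit with the
    # known first factor, making that latent dimension interpretable.
    grad_b[:, 0] += (b[:, 0] - factors[:, 0]) * (2 / n)
    W_e -= lr * (spectra.T @ grad_b)               # chain rule through the encoder
    W_d -= lr * grad_Wd
loss_end = recon_loss()

# The supervised latent dimension should track the injected factor.
corr = float(np.corrcoef((spectra @ W_e)[:, 0], factors[:, 0])[0, 1])
print(f"recon loss: {loss_start:.3f} -> {loss_end:.3f}, corr(b0, f0) = {corr:.2f}")
```

The bottleneck here is simply the two-unit layer b; taking b as input for a further regression (e.g., predicting the remaining factor) would be an instance of downstream learning.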