A&A, Volume 589, May 2016
Article Number: A95
Number of pages: 7
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201425181
Published online: 19 April 2016

© ESO, 2016

1. Introduction

Machine-learning (ML) algorithms are used to automatically extract patterns from datasets and to make predictions based on the acquired knowledge. They are often used when it is impractical or otherwise not advisable to proceed manually, either because of the size of the data or because of its complexity (e.g. extremely high dimensionality), which prevents the use of simple models and direct visualization. ML algorithms are usually divided into two classes: supervised (in which a set of training examples is used by the algorithm to build a predictive model to be used in a subsequent phase) and unsupervised (in which the algorithm identifies patterns directly from the data).

ML techniques are by now ubiquitous in Astronomy, where they have been successfully applied to photometric redshift estimation in large surveys such as the Sloan Digital Sky Survey (Tagliaferri et al. 2003; Li et al. 2007; Ball et al. 2007; Gerdes et al. 2010; Singal et al. 2011; Geach 2012; Carrasco Kind & Brunner 2013; Cavuoti et al. 2014; Hoyle et al. 2015a,b), automatic identification of quasi stellar objects (Yèche et al. 2010), galaxy morphology classification (Banerji et al. 2010; Shamir et al. 2013; Kuminski et al. 2014), detection of HI bubbles in the interstellar medium (Thilker et al. 1998; Daigle et al. 2003), classification of diffuse interstellar bands in the Milky Way (Baron et al. 2015), prediction of solar flares (Colak & Qahwaji 2009; Yu et al. 2009), automated classification of astronomical transients and detection of variability (Mahabal et al. 2008; Djorgovski et al. 2012; Brink et al. 2013; du Buisson et al. 2015; Wright et al. 2015), cataloguing of impact craters on Mars (Stepinski et al. 2009), prediction of galaxy halo occupancy in cosmological simulations (Xu et al. 2013), dynamical mass measurement of galaxy clusters (Ntampaka et al. 2015), and supernova identification in supernova searches (Bailey et al. 2007). Software tools developed specifically for astronomy are also becoming available to the community, still mainly with large observational datasets in mind (VanderPlas et al. 2012; Vander Plas et al. 2014; VanderPlas et al. 2014; Ball & Gray 2014).

The goal of this note is to test the feasibility of using ML methods for the automatic interpretation of gravitational simulations of star clusters. In particular, we want to check whether classification algorithms, when applied to a set of mock observations obtained from gravitational N-body simulations, are able to extract useful information on the underlying physics of the simulations. To avoid unnecessary abstraction, we chose to address an actual question (with a yes/no answer) regarding the dynamical history of globular clusters (GCs): are some GCs the product of a merger of progenitors, or did all GCs evolve in isolation from a monolithic protocluster?

According to the current consensus, galaxy mergers are quite frequent on the cosmological timescale and probably the main engine of galaxy evolution (see, e.g. Toomre & Toomre 1972; Toomre 1977; Mihos & Hernquist 1994). In principle, galaxy merging could also be studied by applying machine-learning to simulations. However, the case of GCs is more familiar to the authors and mock observations are easier to build because most GCs are resolved into stars, so for our discussion we do not need to convert the positions and masses of the simulated stars into a luminosity density profile. From the point of view of running simulations, the fact that GCs lack gas (though they may have contained considerable quantities of it in the past) partly justifies our choice of modelling only dissipationless dynamics with pure particle N-body simulations. On the other hand, to simulate a wet galaxy merger the dissipative evolution of the gas component needs to be modelled with hydrodynamical codes, and merger-induced star formation must be taken into account. It has already been suggested that some GCs may have a history of merging, which possibly resulted in massive objects with a strong metallicity spread, such as Omega Centauri (Sugimoto & Makino 1989; van den Bergh 1996; Catelan 1997; Thurl & Johnston 2002; Amaro-Seoane et al. 2013; Lee et al. 2013), or in nuclear star clusters (Capuzzo-Dolcetta et al. 2005; Miocchi et al. 2006; Capuzzo-Dolcetta & Miocchi 2008; Capuzzo-Dolcetta 2013). Observationally, mergers and monolithic clusters ought to be different, either in their sky-projected density distribution, or in their stellar kinematics, or both. Dynamical relaxation is bound to erase the initial conditions, and the memory of a merger with them, but not necessarily quickly and completely, since it works on relaxation timescales of some Gyr for most GCs (Harris 1996; McLaughlin & van der Marel 2005). Besides an elongated shape, owing to residual pressure-tensor anisotropy left over by the merger or, in the case of off-axis mergers, to lingering rotation, the clues are probably subtle and difficult to parameterize. Moreover, deviations from spherical symmetry in clusters are actually observed (White & Shawl 1987; Chen & Chen 2010), but there is currently limited consensus as to their explanation, because rotation and tidal effects may result in elongated profiles without the need for a merger (Davoust 1986; Bertin & Varri 2008; Varri & Bertin 2009, 2012; Bianchini et al. 2013; Vesperini et al. 2014).

In Sect. 2 we discuss why the merging problem and similar problems related to the interpretation of numerical simulations of star-cluster dynamics are amenable to machine-learning, and how we use machine-learning to interpret simulations. In Sect. 3 we describe our sets of simulations and provide further details on how the simulations are turned into mock observations and later classified by the chosen algorithms. In Sects. 4 and 5 we present our results and conclusions.

Fig. 1. Applying supervised classification to the interpretation of simulations, with reference to the GC-merging question. See discussion in the text.

2. Machine-learning: the why and the how

The approach we are testing in this note is motivated by the usual procedure employed for running numerical experiments, namely:

  1. identify a specific astronomical question that can, in principle, be answered by observations (H);

  2. run simulations for two alternative scenarios, in which the question is answered in the positive or in the negative, respectively (A);

  3. identify a parameter that significantly differs between the two scenarios by examining mock observations obtained from the simulations (H/A?);

  4. (if possible) measure the parameter on actual observations and draw conclusions as to which scenario is more likely (H/A).

In parentheses we marked with an A the steps that are fully automated, and with an H the steps that require human intervention. The point of this note is to show that the third step in the procedure above can be automated by machine-learning instead of being carried out by manually finding a parameter that discriminates between the two scenarios. The advantage is that subjective, error-prone, and time-consuming human understanding is replaced by an automated procedure. This is an important advantage in the case of complex systems such as star clusters, where an intuitive grasp of the underlying physics is difficult at best. It may be useful to compare this problem to face recognition, another field where machine-learning algorithms are quite successful, but humans are successful too: no explicit subject-matter knowledge (i.e. detailed knowledge of human anatomy) is needed for the task of face recognition, neither by humans nor by machine-learning algorithms. On the other hand, humans cannot answer questions regarding the dynamics of star clusters at a glance, without resorting to detailed dynamical models that need to be developed on a case-by-case basis. This is a strong argument for applying machine-learning to this sort of dynamical problem: to automate the interpretation of simulations, which until now has been exclusively manual, despite the simulations themselves being fully automated.

2.1. Reducing our scientific question to a supervised classification problem

To pick up signs of past merging in simulations without ad hoc modelling of the underlying physical process, we restated the issue as a supervised classification problem. This is applicable in general to any scientific question that requires us to discriminate between two (or more) scenarios that can be simulated, producing mock observations. Figure 1 presents the procedure we applied in visual form. The procedure starts by running simulations of two alternative scenarios, in this case merged GCs or a GC evolved in isolation (top center box). Later, mock observations are generated from snapshots and quantitative parameters are extracted, generating the so-called feature space (middle box, following the arrows). A datapoint in feature space corresponds to a mock observation. The feature space is randomly partitioned into two subsets (train set and test set) containing datapoints that correspond to both scenarios. A model is trained (i.e. has its parameters optimised) on the train set and used to classify the test set; the resulting predictions are later compared with the known ground truth. This is possible because the datapoints in the test set correspond to known simulated scenarios. The process enclosed in the dashed box is repeated several times with different random partitions of the feature space. This results in measures of specificity and sensitivity (true-negative and true-positive rates respectively; bottom right box). The trained model can also be applied to classify actual observations (left column of boxes), and the expected accuracy is known from the validation phase. These ingredients, together with some degree of human interpretation (if possible and/or necessary), may be used to answer the underlying science question (bottom box), provided that the simulations really capture the relevant processes that lead to the observables.

The merged GCs vs. isolated GC scenarios can both be easily simulated, as discussed later. The resulting snapshots are turned into mock observations (essentially the sky-projected positions of stars), losing the velocity and line-of-sight position information. This is similar to actual observations of GCs, except for incompleteness and limited field-of-view effects. From mock observations we extract features, i.e. quantitative parameters, as described in Sect. 3.1. The resulting N-dimensional space, where each mock observation generated from a given snapshot corresponds to a datapoint, is called feature space. The classification algorithms attempt to optimise the parameters of a model (the so-called training phase), essentially drawing a boundary in this multidimensional space. The training step is based on the true classification labels we provide, and the trained model can later be used to classify new datapoints using the learned boundary. There is a wealth of different algorithms to this end, each with different strengths and weaknesses. The algorithms we use are briefly discussed in Sect. 3.3, but see Hastie et al. (2001) for a more in-depth discussion. Any algorithm is likely to commit misclassification errors, at or above the theoretical bound set by the Bayes error rate. Hence, a validation step is required to measure accuracy. To this end, we chose to randomly partition the feature space into two subsets (called train set and test set), both containing datapoints corresponding to the two scenarios. A model is trained (i.e. has its parameters optimised) on the train set and used to make predictions on the test set, which are later compared to the known true labels of the test-set datapoints. The process (called validation) is repeated several times with different random partitions of the feature space. This allows us to measure the misclassification rate of each model on this particular dataset and to estimate the classification accuracy to be expected when the trained model is applied to actual observations.
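As a simple illustration of how the validation output is computed, the following minimal R sketch reads sensitivity and specificity off the confusion matrix of a classified test set. The variable names are hypothetical (they are not part of our pipeline), and "merger" is taken as the positive class.

# pred and truth: hypothetical factors with levels "merger" and "monolithic",
# holding the predicted and true classes of the test-set datapoints.
cm <- table(predicted = pred, truth = truth)
sensitivity <- cm["merger", "merger"] / sum(cm[, "merger"])              # true-positive rate
specificity <- cm["monolithic", "monolithic"] / sum(cm[, "monolithic"])  # true-negative rate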

3. Simulations, mock observations, dimensionality reduction, and learning

As stated above, the point of this note is to provide a proof-of-concept application of machine-learning methods to simulated stellar systems, so we kept the complexity of the simulations low in order to focus on the learning aspects of the problem. We therefore chose to simulate only the dry merging of two equal-mass progenitors1.

We run 13 direct N-body simulations with the state-of-the-art direct N-body code NBODY6 (Aarseth 1999). All contain equal-mass single stars and no primordial binaries; ten simulations contain 64 000 stars initially, and three contain only 32 000 stars. Five simulations correspond to the merging of two equal-mass King (1966) models of 32 000 stars each, and eight to the monolithic evolution of an isolated King model. All the simulations are evolved for at least 2000 N-body units of time (Heggie & Mathieu 1986), which corresponds to about two half-mass relaxation times for the 64 000-star monolithic models. In physical units, if we take the relaxation time of a multi-metallic GC such as ω-Cen to be ≈ 10 Gyr, this corresponds to about 20 Gyr, comfortably covering a Hubble time. We include neither tidal interactions with the parent galaxy nor stellar evolution. The central dimensionless potential of the King models differs across the simulations, to explore the effects of different concentrations on the dynamics. Initial conditions are summarized in Table 1.

Table 1. Summary of the initial conditions of our simulations.

In the merging simulations, the phase space of each of the two clusters is populated using a different random seed, so that the clusters are not exactly identical. The clusters are initially set apart by about 30 times their radius and allowed to fall onto each other with zero initial relative velocity. The reasoning is that leftover rotation from an off-axis merger would make it easier to distinguish a merger from a non-merger, so we want to present our classifiers with a worst-case scenario, while also limiting the parameter space that we need to probe2. Clearly, in this setup the system’s angular momentum is initially zero, but some angular momentum can still be acquired later on, because the system may expel mass slightly asymmetrically. Head-on merging is, however, a simplifying assumption that we plan to relax in a future paper. Essentially, the mergers take place instantaneously (actually on a multiple of the crossing time), while the internal evolution takes place on the relaxation timescale, which is orders of magnitude longer. Therefore the pre-merger internal evolution of the progenitors does not affect the outcome of the merger simulations. We extract 100 subsequent snapshots separated by one N-body unit of time for each simulation, starting at 1100 N-body units of time for the 64 000-star models, corresponding to the relaxation time of the 64kW07 model, and at 600 N-body units of time for the three monolithic, 32 000-star models, corresponding to the relaxation time of the 32kW06.5 model.

3.1. Mock observations and feature extraction

The snapshots extracted from each simulation are randomly rotated before 2D projection, because we expect that, in actual observations, the plane of the sky should not in any way be privileged with respect to the merger axis. Two-dimensional sky-projected positions of stars in each snapshot are obtained, ignoring radial velocities, proper motions, and the third spatial dimension. A finite field of view was imposed by discarding stars that fell outside a square of side four times the snapshot’s 2D half-mass radius, and observational incompleteness was simulated by randomly extracting only 75% of the stars from each snapshot. In this context, it would be tempting to use directly as features the x and y sky-projected positions of each star, i.e. to represent each snapshot as 2N numbers, where N is the number of stars. However, this representation depends on the order in which the stars are listed: a snapshot would be represented differently if its stars underwent a permutation. Since our stars are indistinguishable (having the same mass), this is clearly not desirable. Consequently, we decided instead to use suitable quantiles of the x and y variables over the sample of stars of each snapshot. In particular, we chose to represent each snapshot with the deciles (10% to 90% quantiles) of x and y. We centred the deciles by subtracting the median and standardised them by dividing by the interquartile range, i.e. a measure of dispersion corresponding to the difference between the top and the bottom quartiles. As a result of this procedure, the median becomes identically 0, so we do not include it in the final feature space. The centring and standardisation remove any reference to the absolute position and size of the snapshot from the feature space. The standardisation is carried out independently in the x and y directions, i.e. both the x and the y interquartile ranges are set to 1, so some additional information about the shape of the cluster is lost. We do this to simulate a realistic comparison between simulations and observations, where the absolute size of a cluster (e.g. in parsecs) depends on initial conditions and cannot be predicted by simulations, which are scale-free. This procedure leaves us with a 16-dimensional feature space (eight standardised deciles for each of the two coordinates, which we denote with x10% ... x90% and y10% ... y90%). Building the feature space in this way also has the advantage that, once the quantiles are calculated, the total number of stars in the snapshot no longer enters the calculations, so it is no more computationally expensive to analyse snapshots of simulations with a large number of stars. The visual appearance of the mock observations is displayed in Figs. 2 and 3 at different evolutionary stages for the 64kW08 and the 32kmerge7+7 models. These models have similar effective concentrations (with reference to the first principal component of the feature space; see discussion in Sect. 3.2) and were thus chosen to illustrate the difficulty of telling, by eye alone, whether a cluster is merged or monolithic.
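The feature-extraction step can be summarised by the following minimal R sketch for a single snapshot. It is only meant to illustrate the procedure: the variable names, the random-rotation recipe (orthogonalising a Gaussian matrix), and the exact ordering of the field-of-view and incompleteness cuts are assumptions made for this sketch, not necessarily the choices implemented in our actual pipeline.

# pos: hypothetical N x 3 matrix of star positions for one snapshot
# (equal-mass stars, so the half-mass radius is the median radius).
extract_features <- function(pos, completeness = 0.75, fov_factor = 4) {
  # Random orientation (simple recipe): orthogonalise a 3 x 3 Gaussian matrix,
  # then keep the first two rotated coordinates as the plane of the sky.
  R  <- qr.Q(qr(matrix(rnorm(9), 3, 3)))
  xy <- (pos %*% R)[, 1:2]

  # Finite field of view: a square of side fov_factor times the projected
  # half-mass radius.
  rhm <- median(sqrt(rowSums(xy^2)))
  xy  <- xy[abs(xy[, 1]) < fov_factor * rhm / 2 &
            abs(xy[, 2]) < fov_factor * rhm / 2, , drop = FALSE]

  # Observational incompleteness: keep a random 75% of the surviving stars.
  xy <- xy[sample(nrow(xy), size = round(completeness * nrow(xy))), , drop = FALSE]

  # Deciles of x and y, centred on the median and scaled by the interquartile
  # range; the 50% decile becomes identically zero and is dropped,
  # leaving 8 + 8 = 16 features.
  std_deciles <- function(v) {
    q <- (quantile(v, probs = seq(0.1, 0.9, by = 0.1)) - median(v)) / IQR(v)
    q[-5]
  }
  c(x = std_deciles(xy[, 1]), y = std_deciles(xy[, 2]))
}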

Fig. 2. Comparison of snapshots taken from a 64 000-star single cluster (64kW08, left column) and a merger of two 32 000-star clusters (32kmerge7+7, right column). The rows correspond to different times in units of the single-star cluster half-mass relaxation time. In the plots each point represents a star in the plane of the sky. The x- and y-axes are in units of the projected half-mass radius. It is hard to visually tell the difference between the two snapshots. The bottom panel corresponds to the age range of the snapshots used for the machine-learning study.

Fig. 3. Same as Fig. 2, but for later times.

3.2. Preliminary dimensionality reduction

We did not apply the machine-learning algorithms directly to the train subset of the feature space. We first used principal component analysis to obtain the (centred and scaled) principal components of the train set, onto which we also projected the test set. This is useful for visualization and interpretation purposes, and it may result in improved accuracy with respect to the original, untransformed coordinates if some of them are irrelevant to the classification problem. We limited the number of components used in the following analysis by omitting components whose standard deviation was less than or equal to 0.05 times the standard deviation of the first component, using the R prcomp command with a tol setting of 0.05. This results in only the first eight components being retained. In Fig. 4 we plot the first two principal components of the whole feature space, and in Fig. 5 the third and fourth. The principal components obtained from the whole feature space differ slightly from the principal components calculated only on the train set but, in the validation phase, we used the latter to avoid compromising the independence of the test and train data. Even so, the principal components of the whole feature space can be used for the current qualitative discussion and visualization.
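In terms of the R commands quoted above, this step amounts to the following minimal sketch, where train and test are hypothetical matrices (or data frames) holding the 16 features of the train-set and test-set snapshots.

# PCA on the train set only; tol = 0.05 drops components whose standard
# deviation is at most 0.05 times that of the first component.
pca      <- prcomp(train, center = TRUE, scale. = TRUE, tol = 0.05)
train_pc <- pca$x                         # train set in the retained components
test_pc  <- predict(pca, newdata = test)  # test set projected on the same components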

Fig. 4. Feature space projected on the first two principal components (PC1 on the x- and PC2 on the y-axis). Each symbol represents a mock observation, obtained from a snapshot. Groups of symbols correspond to simulation runs and are labelled according to the name of each run. Empty red symbols correspond to merger simulations, filled black symbols to monolithic simulations. It appears that PC1 correlates with the initial dimensionless potential W0 of each run.

Fig. 5. Feature space projected on the third and fourth principal components (PC3 on the x- and PC4 on the y-axis). As in Fig. 4, each symbol represents a mock observation, obtained from a snapshot. Different simulation runs are harder to distinguish in this projection, so the labels used in Fig. 4 have been dropped. Empty red symbols correspond to merger simulations, filled black symbols to monolithic simulations. Mergers and monolithic observations appear systematically different in PC4.

We see from Fig. 4 that, for monolithic simulations, the first principal component PC1 is essentially determined by the cluster’s initial W0, i.e. its concentration. Monolithic simulations with a given initial value of W0 produce snapshots that clump together in PC1, and snapshots from merger simulations also clump around values of PC1 that correspond to their effective values of W0. To avoid this clumping effect, a larger number of simulations should be included in future studies, filling in the gaps in W0. Qualitatively, the fourth principal component of Fig. 5 seems to contribute the most to distinguishing merged from non-merged clusters.

3.3. Supervised learning

We used three different classification algorithms on the feature space to compare their performance and accuracy, all in the R language (R Core Team 2014) implementation:

  • the C5.0 classification-tree algorithm (Quinlan 1986, 1993, R package C50),

  • the k-nearest neighbor algorithm (KNN; see e.g. Altman 1992, R package class), and

  • the support-vector machines algorithm (SVM; Cortes & Vapnik 1995, R package e1071).

While it is outside the scope of this note to explain the above classification algorithms in detail, we briefly recall how they work in the following. The C5.0 algorithm is a particular case of decision-tree learning. Its purpose is to infer a decision tree from the training data. The decision tree, whose interior nodes correspond to decisions based on the value of one of the features, is then used to classify test data. The algorithm works by recursively partitioning the feature space along lines that are parallel to the coordinate axes of the feature space. This results in subsets of the feature space that are associated with an end node (leaf) of the tree, which yields a definite prediction for the dependent variable, in this case the classification label. The main differences among tree-learning algorithms lie in how they pick the features on which to split and in when they stop growing the tree. The k-nearest neighbour algorithm classifies a point in feature space based on the classification of its k nearest neighbours. The Euclidean distance in the transformed feature space (i.e. in the feature space projected on the first two principal components, calculated using only the train set) is used to find the k points of the train set that are nearest to the point for which a prediction is sought. A majority vote among these k points then decides how the target is classified. In this procedure, the only adjustable parameter is k, and we chose k = 5. In a more complex setting than ours, where a larger number of simulations is available and additional variability in the datapoints is introduced by mimicking observational uncertainties more realistically when making mock observations, it may be worthwhile to optimize the value of k through cross-validation to maximise the accuracy of the classification; at this stage, however, this would be premature. The support-vector machines algorithm is based, in its linear version, on finding the hyperplane that best separates the two groups of points in feature space belonging to different classes. Such a hyperplane is, for the purposes of the algorithm, the one that maximises the separation between the two groups along its perpendicular. Since linear separation is not always possible, the points in feature space are generally mapped to a higher-dimensional space, where it is easier to find a separating hyperplane, which corresponds to a curved hypersurface when brought back to the original feature space.
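For reference, the following minimal R sketch shows how the three classifiers can be trained and applied on one random split; it reuses the hypothetical objects of the previous sketches (train_pc, test_pc, and a factor label_train holding the true classes of the train set), and relies on default settings apart from k = 5, as in the text.

library(C50)    # C5.0 decision trees
library(class)  # k-nearest neighbours
library(e1071)  # support-vector machines

train_df <- data.frame(train_pc, label = label_train)
test_df  <- data.frame(test_pc)

pred_c50 <- predict(C5.0(label ~ ., data = train_df), newdata = test_df)
pred_knn <- knn(train = train_pc, test = test_pc, cl = label_train, k = 5)
pred_svm <- predict(svm(label ~ ., data = train_df), newdata = test_df)
# Each pred_* vector is then compared to the known true labels of the test set.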

4. Results

The performance of the algorithms described in Sect. 3.3 was evaluated by measuring the misclassification rate, defined as

ε = (FP + FN) / N, (1)

where FP is the number of false positives, FN the number of false negatives, and N the total number of snapshots. The calculation was repeated over 100 random splits of the data into equal-sized train and test sets, and the results are presented in Table 2.
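Schematically, this evaluation can be written as the following R sketch (using KNN only, for brevity); features and label are hypothetical objects holding the 16 features and the true class of every snapshot, and the split, PCA, and classification mirror the sketches of the previous sections.

set.seed(1)  # assumed seed, only to make the sketch reproducible
eps <- replicate(100, {
  idx  <- sample(nrow(features), size = nrow(features) %/% 2)  # random 50/50 split
  pca  <- prcomp(features[idx, ], center = TRUE, scale. = TRUE, tol = 0.05)
  pred <- class::knn(train = pca$x,
                     test  = predict(pca, newdata = features[-idx, ]),
                     cl    = label[idx], k = 5)
  mean(pred != label[-idx])  # (FP + FN) / N on the test half
})
c(mean = mean(eps), sd = sd(eps))  # average misclassification rate and its scatter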

Table 2. Summary of the misclassification rates for the three different algorithms we used (Col. 1).

C5.0 (with default settings) and KNN (with k = 5) perform similarly, with slightly more than 10% of the snapshots being misclassified. SVM (also with default settings) performs somewhat better, misclassifying slightly less than 10% of the snapshots.

However, we have seen in Sect. 3.2 that the first principal component PC1 of the feature space essentially represents the initial W0. The initial values of W0 for the monolithic simulations included in this study were arbitrarily chosen to span the range W0 = 2−8, which results in clumps along PC1 that reflect this arbitrary choice. In light of this issue, it is interesting to see how the algorithms perform when applied to the (PC2,...,PC8) space, excluding PC1. The results are listed in Table 3 and show a slight decline in performance.
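In terms of the earlier sketches, this experiment simply amounts to dropping the first column of the principal-component scores before training and testing, for example:

train_pc_no1 <- train_pc[, -1]  # exclude PC1 from the train set
test_pc_no1  <- test_pc[, -1]   # and, consistently, from the test set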

Table 3. Summary of the misclassification rates for the three different algorithms (Col. 1) after excluding the first principal component from the transformed feature space.

5. Conclusions

A great variety of machine-learning techniques can be applied to dynamical questions regarding the evolution of GCs, once these questions have been reduced to a classification problem. Our ability to obtain scientific results on real data using this method is constrained by our ability to:

  • translate complex scientific questions into classification problems;

  • run simulations of different scenarios that capture all the physical mechanisms that significantly affect the observed data, so that simulated and observed data have the same statistical properties;

  • build mock observations that strictly follow the real limitations of actual observational procedures;

  • solve the actual machine-learning problem with proper application of the algorithms and quantification of the classifier performance.

In this note, we demonstrated how the use of simple machine-learning algorithms can result in the automatic reconstruction of the dynamical history of simulated systems, at least regarding the specific question we decided to investigate, i.e. whether it is possible to tell clusters that underwent a merger event apart from monolithic ones. In our setup, three common machine-learning algorithms achieved a misclassification rate of about 10% without any particular effort in parameter tuning. This is promising for the future application of this method to a larger set of more realistic simulations and, eventually, to observational data. To this end, we need to relax our limiting assumptions in the simulation set-up (head-on free-fall merger only, no tidal effects, no primordial binaries, no mass spectrum) and improve the process that generates mock observations (including incompleteness that depends on distance from the cluster centre, a realistic field of view, etc.).


1

This is a relatively strong assumption: there is no reason to believe that GC progenitors have to be as gas-free as the GCs we see today, so substantial dissipation and star formation may have taken place in a hypothetical merger event. A dry merger is, however, chosen for simplicity.

2

Similarly, increasing the initial relative velocity increases the centre-of-mass energy of the collision, enhancing mass loss (more stars become unbound and leave the cluster right after the merger) and consequently is expected to make the effects of the merger more evident.

Acknowledgments

This note is the result of the collaborative project between the Korea Astronomy and Space Science Institute and Yonsei University through the DRC program of the Korea Research Council of Fundamental Science and Technology (DRC-12-2-KASI) and from the NRF of Korea to CGER. M.P. acknowledges support from the Mid-career Researcher Program (No. 2015-008049) through the National Research Foundation (NRF) of Korea. C.C. acknowledges support from the Research Fellow Program (NRF-2013R1A1A2006053) of the National Research Foundation of Korea.

References
