A&A, Volume 595, November 2016
Article Number: A75
Number of pages: 18
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201628393
Published online: 01 November 2016
SOMBI: Bayesian identification of parameter relations in unstructured cosmological data
^{1} Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany
email: philipp@diefranks.de
^{2} Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 München, Germany
^{3} Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, 85748 Garching, Germany
Received: 26 February 2016
Accepted: 4 August 2016
This work describes the implementation and application of a correlation determination method based on self-organizing maps and Bayesian inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the self-organizing map neural network algorithm. Parameter relations are then revealed by means of Bayesian inference within the identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structure in the data, we include a method for model selection, the Bayesian information criterion, in the analysis. The performance of the SOMBI algorithm is tested with mock data. As an illustration we also provide applications of our method to cosmological data. In particular, we present the results of a correlation analysis between galaxy and active galactic nucleus (AGN) properties provided by the SDSS catalog and the cosmic large-scale structure (LSS). The results indicate that the combined galaxy and LSS dataset is indeed clustered into several subsamples of data with different average properties (for example, different stellar masses or web-type classifications). The majority of data clusters appear to have a similar correlation structure between galaxy properties and the LSS. In particular, we reveal a positive, linear dependence of the stellar mass, the absolute magnitude, and the color of a galaxy on the corresponding cosmic density field. A remaining subset of the data shows inverted correlations, which might be an artifact of nonlinear redshift distortions.
Key words: methods: statistical / methods: numerical / large-scale structure of Universe / methods: data analysis
© ESO, 2016
1. Introduction
Coming decades will witness an avalanche of cosmological data generated by new astronomical facilities such as the LSST (see LSST Science Collaboration 2009), SKA (see Carilli & Rawlings 2004), or the space-borne Euclid mission (see Laureijs et al. 2011). This new generation of telescopes will produce enormous amounts of unstructured data. Unlike laboratory experiments on Earth, which perform targeted searches for specific physical processes, cosmological surveys always record a combination of several different, but interacting, phenomena as well as systematics. Challenges for coming data analyses therefore arise from the requirement to handle such unstructured datasets in order to either test current physical theories or identify new phenomena.
Generally, cosmological or astronomical observations are generated by a combination of different physical effects. As an example, galaxies and stars form in deep gravitational potential wells of the underlying dark matter distribution. The depth and shape of such potentials can affect the formation of galaxies and stars and therefore the emission spectra of the photons that we observe. Consequently, correlations between the properties of galaxies and their large-scale environment provide insights into the mechanisms of galaxy formation (see e.g., Oemler 1974; Dressler 1980; Postman & Geller 1984; Balogh et al. 2001; Gómez et al. 2003; Hermit et al. 1996; Lewis et al. 2002; Blanton et al. 2005a; Kauffmann et al. 2004; Hogg & SDSS Collaboration 2003; Lee & Li 2008; Cortese et al. 2011; Yoon & Rosenberg 2015; Rodríguez et al. 2016). When propagating through the Universe, photons are affected by cosmic expansion, dust extinction, and absorption in the intergalactic medium, yielding altered spectra when detected at the telescope. This simple example shows that cosmological observations generally detect a combination of several different physical phenomena, which we need to separate in order to identify and study individual aspects of our physical theories.
Separating the observations into different subgroups, which permit the study of such individual aspects, is a particularly challenging task in regimes of huge datasets or in high dimensions where manual clustering of data is not feasible. As a response to this problem we present in this work a Bayesian inference approach to automatically search for data clusters and relations between observed parameters without human intervention. In addition the Bayesian approach allows us to provide corresponding uncertainty quantification for all inferred parameters.
Specifically, we developed a Bayesian procedure to identify deterministic relations between pairs of observable parameters. In doing so we assumed the relation between parameters to be described by an arbitrarily nonlinear function that can be Taylor expanded to any given polynomial order. By restricting our analysis to a Taylor expansion of arbitrary polynomial order, we show that the task of identifying nonlinear relations between parameters becomes a linear inference problem. To automatically and self-consistently identify the optimal polynomial order supported by the available data, we used the Bayesian information criterion (BIC; see e.g., Liddle 2007, and references therein). To further handle unstructured data, in which many different processes combine to generate the observations, we used a specific type of artificial neural network to separate individual data clusters. In the past, various implementations of neural networks have been used to structure cosmological data (see e.g., Naim et al. 1997; Fustes et al. 2013; Geach 2012; Liu et al. 2016; Maehoenen & Hakala 1995; Polsterer et al. 2015).
In particular, we used so-called self-organizing maps (SOMs; first presented by Kohonen 1982), which map the topology of a high-dimensional data space onto a space of lower dimensionality. As a consequence, SOMs are an ideal tool to separate individual data clusters and to study the relations between parameters within such clusters.
In this work we present theory and implementation details of our method based on self organizing maps and Bayesian inference (SOMBI) as well as several detailed tests of it.
As illustrative examples we also applied the SOMBI algorithm to a galaxy and an active galactic nucleus (AGN) catalog to study relations between the properties of observed galaxies and the large-scale cosmic environment. Specifically, we combined galaxy properties provided by the Sloan Digital Sky Survey (SDSS) with large-scale structure properties derived from the reconstructed density fields provided by Jasche et al. (2015). This results in a combined dataset permitting a detailed correlation analysis between galaxies, AGNs, and the large-scale environment hosting them.
The results obtained from our tests and data applications demonstrate that SOMBI is a powerful tool to study unstructured datasets in an astrophysical or cosmological setting.
In Sect. 2 we describe the methodology we used to identify correlations between observed parameters and demonstrate the ability of SOMs to separate clusters in unstructured datasets. In Sect. 3 we present the datasets used for the correlation analysis as well as the density field reconstructions provided by Jasche et al. (2015). In order to reveal the correlation structure of galaxies and the cosmic large-scale structure (LSS), in Sect. 4 we apply the SOMBI algorithm to the data and discuss the results. To verify our methodology, we compare it to results obtained by Lee & Li (2008) as well as to state-of-the-art methods for the subdivision of cosmological data. Finally, in Sect. 5, we conclude the paper by discussing our results.
2. Methodology
This work presents a method to study relations between different parameters in unstructured cosmological or astrophysical observations. As a showcase in Sect. 4 we will study correlations between SDSS galaxy properties and the LSS.
Detection as well as interpretation of physical quantities from raw data is a very complex task and generally requires broad expert knowledge. A particular challenge arises from the requirement to capture all systematics and involved assumptions in an accurate data model to infer physical quantities. The goal of this work is to present a generic method able to reveal correlations in complex datasets, which does not require human intervention.
In general, datasets consist of a combination of data drawn from multiple generation processes. This results in subsamples of data holding various different correlation structures. In order to infer correlations from the data correctly, we have to disentangle the samples corresponding to independent data generation processes.
To do so, we assumed that data spaces used for correlation determination consist of subsamples of data drawn from multiple simple data generation processes instead of one complex process. It is important to point out that we assumed each subsample to be drawn from one generation process and the correlations corresponding to one process to be describable by a unique correlation function. Therefore a major goal of this work is to present methods able to reveal subsamples consistent with these assumptions. Within such a subsample we then have to identify the correlation structure and model it parametrically.
2.1. Parametric model
The goal of correlation analysis is to find the correlation structure between two quantities x and y. In order to model the correlation function we assumed that the relation between x and y can be written as

y = f(x) + n, (1)

where f is an arbitrary unknown function and n is assumed to be uncorrelated, normally distributed noise. The underlying assumption of this relation is that x has a causal impact on y. If it were the other way around, x and y would have to be interchanged.
If f is assumed to be continuously differentiable, it can be expanded in a Taylor series up to Mth order, and Eq. (1) yields

y = Σ_{i = 0}^{M} f_{i} x^{i} + n. (2)

Determination of correlations therefore requires determining optimal coefficients f_{i} for a given set of U data points d_{i} = (x_{i},y_{i}), i ∈ [1,...,U]. Equation (2) should hold for every data point in the sample and therefore results in U relations, which can be recombined into a linear system of equations by defining the vectors y := (y_{1},y_{2},...,y_{U})^{T} and f := (f_{0},f_{1},...,f_{M})^{T}. Specifically,

y = Rf + n, (3)

with the response matrix R_{ij} := x_{i}^{j}, i ∈ [1,...,U], j ∈ [0,...,M]. Without further knowledge about the noise we assumed n to obey Gaussian statistics with zero mean and diagonal covariance. This assumes the noise of individual data points to be uncorrelated. We further added the restriction that each n_{i} has the same variance p, which is reasonable if there are no locally varying uncertainties in the data space. Therefore the probability distribution for n is defined as

P(n | N) = G(n,N) := |2πN|^{−1/2} exp(−n^{T}N^{−1}n/2), (4)

where N_{ij} = pδ_{ij} and |N| = p^{U} denotes the determinant of N. Since it is required that p ≥ 0, it can be parametrized as p := e^{η}, where the unknown constant η ∈ R needs to be inferred from the data.
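As a minimal sketch (not the paper's code), the response matrix R of Eq. (3) can be built with NumPy's Vandermonde helper; all names below are illustrative:

```python
import numpy as np

def design_matrix(x, M):
    """R_ij = x_i**j for j = 0..M, so that (R f)_i = sum_j f_j x_i**j."""
    return np.vander(x, M + 1, increasing=True)

x = np.array([0.0, 1.0, 2.0])
R = design_matrix(x, 2)    # rows: (1, x_i, x_i**2)
```

Applying R to a coefficient vector f then evaluates the polynomial at every data point, which is exactly the linear system of Eq. (3).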
The goal of this method is to identify parameter relations in unstructured datasets. In a generic case, the probability distribution of the noise P(n) might differ from Eq. (4). In particular the noise can be correlated. If the correlation structure is known, this can easily be encoded in N. In this case the formal procedure of inference presented in the following is still valid, but the solutions (also the possibility of finding exact solutions) strongly depend on the form of P(n).
In contrast, in a more generic case the correlation structure of n can be unknown. In this case, such correlations are indistinguishable from the correlation structure of the signal Rf, since Rf and n contribute to the data y in the same way (as indicated in Eq. (3)). The choice of P(n | N) as given in Eq. (4) is then still acceptable, but the interpretation of the revealed correlation function f(x) changes. In particular, f(x) represents the correlation between x and y as well as the correlation structure of n. A final interpretation of such correlations may require additional information on the noise properties to disentangle noise from signal. However, this is a fundamental requirement of any method aiming at inferring signals from observations. A more general, non-Gaussian noise case would require a more substantial extension of SOMBI.
The prior distribution for η is assumed to be flat because, a priori, the noise could have any magnitude, and therefore no value for η should be preferred. The joint probability of f, d, and η can be obtained by marginalization over n and use of the data model given in Eq. (3). We further assumed the prior on f to be flat to permit f to model an arbitrary polynomial of order M. This yields

P(f,d,η) ∝ |2πN|^{−1/2} exp(−(y − Rf)^{T}N^{−1}(y − Rf)/2). (5)

Completing the square in the exponent, Eq. (5) yields

P(f,d,η) ∝ |2πN|^{−1/2} exp(−(y^{T}N^{−1}y − j^{T}Dj)/2) exp(−(f − Dj)^{T}D^{−1}(f − Dj)/2), (6)

with D = (R^{T}N^{−1}R)^{−1} and j = R^{T}N^{−1}y. The second exponential function is a Gaussian distribution in f with mean Dj and covariance D.
The posterior probability distribution for f given the data d and the noise parameter η can be expressed in terms of the joint probability of all quantities using Bayes' theorem. Specifically,

P(f | d,η) = P(f,d,η)/P(d,η). (7)

If η is known, the proper probability distribution of f given d and η is obtained from Eq. (6) by normalization. Specifically,

P(f | d,η) = G(f − Dj, D), (8)

where we ignored factors independent of f since the distribution is now properly normalized. Mean and covariance of this distribution are given by

⟨f⟩ = Dj = (R^{T}R)^{−1}R^{T}y (9)

and

⟨(f − ⟨f⟩)(f − ⟨f⟩)^{T}⟩ = D = e^{η}(R^{T}R)^{−1}. (10)

These equations resemble the solution of a Wiener filtering equation. The mean of f does not depend on η, since the noise is assumed to be zero centered and the prior for f is flat. This method determines the full posterior probability distribution for the coefficients f.
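A hedged numerical sketch of Eqs. (9) and (10), assuming the diagonal noise covariance N = e^{η}1 from Eq. (4); the function and variable names are our own, not SOMBI's:

```python
import numpy as np

def posterior_coefficients(x, y, M, eta):
    """Posterior mean (Eq. (9)) and covariance (Eq. (10)) of the coefficients f."""
    R = np.vander(x, M + 1, increasing=True)      # response matrix, Eq. (3)
    RtR = R.T @ R
    mean = np.linalg.solve(RtR, R.T @ y)          # Dj; independent of eta
    cov = np.exp(eta) * np.linalg.inv(RtR)        # D = e^eta (R^T R)^{-1}
    return mean, cov

# recover a known quadratic from noiseless mock data
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x ** 2
mean, cov = posterior_coefficients(x, y, 2, eta=np.log(1e-4))
```

As the text notes, only the covariance scales with e^{η}; the mean is the ordinary least-squares solution.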
For visualization, the posterior distribution P(f | d,η) (Eq. (8)) can be transformed into data space, resulting in a PDF P(f(x) | d,η) for the realizations of correlation functions f(x), x ∈ R. Specifically,

P(f(x) | d,η) = G(f(x) − R(x)Dj, σ_{y}^{2}(x)), (11)

with

R(x) := (1, x, x^{2}, ..., x^{M}) (12)

being the response for a continuous field x and

σ_{y}^{2}(x) := R(x) D R(x)^{T}. (13)

P(f(x) | d,η) describes how likely a realization f(x) is, given the data and η. This permits us to visualize the reconstructed correlation function, including corresponding uncertainties, in specific areas of the data space. Details about the derivation of P(f(x) | d,η) are described in Appendix A.
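The data-space uncertainty of Eq. (13) can be evaluated on a grid of x values as follows; this is an illustrative sketch assuming the coefficient covariance D has already been computed:

```python
import numpy as np

def sigma_y(x_grid, D):
    """sigma_y(x)^2 = R(x) D R(x)^T (Eq. (13)), evaluated per grid point."""
    M = D.shape[0] - 1
    Rx = np.vander(x_grid, M + 1, increasing=True)   # one response row per x
    return np.sqrt(np.einsum('ij,jk,ik->i', Rx, D, Rx))

# with D = identity and M = 1: variance 1 at x = 0 and 2 at x = 1
band = sigma_y(np.array([0.0, 1.0]), np.eye(2))
```

The returned band is what the gray uncertainty regions in the figures below represent.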
In order to find the true value of η, we followed the spirit of the empirical Bayes approach. In particular, we obtained η via maximum a posteriori (MAP) estimation, given the marginal probability distribution P(η | d). We assumed the MAP solution η_{MAP} to be the true value of η, regardless of possible uncertainties in the estimate. Given η_{MAP}, we ultimately determined the posterior distribution P(f | d,η_{MAP}) (Eq. (8)).
The marginal distribution P(η | d) is obtained from Eq. (6) by marginalization with respect to f,

P(η | d) ∝ ∫ df P(f,d,η) ∝ |2πN|^{−1/2} |2πD|^{1/2} exp(−(y^{T}N^{−1}y − j^{T}Dj)/2), (14)

and the negative logarithm of this distribution is

ℋ(η | d) := −ln P(η | d) = H_{0} + (U − M − 1)η/2 + e^{−η}(y^{T}y − y^{T}R(R^{T}R)^{−1}R^{T}y)/2, (15)

where in the following we call ℋ(η | d) the information Hamiltonian. Here we used the definitions of D and j and the fact that N is diagonal. We note that M + 1 is the dimensionality of the signal space, the space of polynomials up to order M describing the y-x correlation function f(x), and U is the dimensionality of the data space. H_{0} collects all terms independent of η. The MAP solution for η is given by setting the first derivative of ℋ(η | d) with respect to η to zero,

∂ℋ(η | d)/∂η = (U − M − 1)/2 − e^{−η}(y^{T}y − y^{T}R(R^{T}R)^{−1}R^{T}y)/2 = 0, (16)

and therefore

e^{η_{MAP}} = (y^{T}y − y^{T}R(R^{T}R)^{−1}R^{T}y)/(U − M − 1). (17)

The Bayesian implementation of this method is able to model the posterior PDF for correlation structures in noisy data, given a specific data model (in particular the polynomial order of the Taylor series). For optimal reconstructions, the order M of the polynomial describing the signal correlation needs to be known. However, for real data applications the underlying model is often not known. Especially in fields where the physical processes causing correlations are not yet completely understood, it is important to have a method that does not need to know the data model from the beginning. A possible way to infer generation processes from the data is therefore described in the next section.
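In closed form, Eq. (17) is the residual sum of squares of the least-squares fit divided by U − M − 1. A short sketch under these assumptions (helper names are ours):

```python
import numpy as np

def eta_map(x, y, M):
    """MAP noise parameter of Eq. (17): p_MAP = e^eta = rss / (U - M - 1)."""
    U = len(y)
    R = np.vander(x, M + 1, increasing=True)
    f_mean = np.linalg.solve(R.T @ R, R.T @ y)    # posterior mean Dj, Eq. (9)
    rss = float(np.sum((y - R @ f_mean) ** 2))    # y^T y - y^T R (R^T R)^{-1} R^T y
    return np.log(rss / (U - M - 1))

# constant fit (M = 0): residual sum of squares 4 over U - 1 = 3
p_map = np.exp(eta_map(np.array([0.0, 1.0, 2.0, 3.0]),
                       np.array([1.0, -1.0, 1.0, -1.0]), 0))
```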
2.1.1. Bayesian information criterion
The polynomial order M up to which correlations are modeled should be determined by the data themselves. This decision can be regarded as a model selection, where all polynomials up to order M constitute a model, and the polynomial coefficients f_{i} and the noise parameter η are the corresponding model parameters. In order to decide which polynomial order M is sufficient to describe the correlation structure of the data, we applied the BIC (see e.g., Liddle 2007).
The BIC approach compares the maximum of the likelihood P(d | f_{WF},η_{MAP}) of each model, penalized by the number of degrees of freedom m. Specifically,

BIC := −2 ln P(d | f_{WF},η_{MAP}) + m ln U. (18)

We note that if the order of the polynomial is M, then m = M + 2, since there are M + 1 polynomial coefficients f_{i} plus the noise parameter η.
The application of the BIC aims to find the optimal polynomial order M that explains the observations. If the data do not support higher orders due to a high impact of noise, the method will always prefer lower-order polynomials, even though the actual correlation might be of higher order. To demonstrate this effect we show in Fig. 1 how the selected polynomial order M decreases with increasing noise. In the depicted case we generated mock data according to Eq. (2) as a 15th-order polynomial and constructed samples by adding Gaussian distributed noise with varying variance σ_{n}. In order to illustrate the impact of noise on the BIC, we depict the selected order as a function of the inverse signal-to-noise ratio k, where U = 1000 denotes the sample size.
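The BIC scan over polynomial orders can be sketched as follows, assuming the Gaussian likelihood of Sect. 2.1 with the maximum-likelihood noise variance plugged in; constants shared by all models are dropped, and the helper names are ours:

```python
import math
import numpy as np

def bic(x, y, M):
    """BIC of Eq. (18) with model-independent constants dropped."""
    U = len(y)
    R = np.vander(x, M + 1, increasing=True)
    f = np.linalg.lstsq(R, y, rcond=None)[0]      # maximum-likelihood coefficients
    rss = float(np.sum((y - R @ f) ** 2))
    # -2 ln L at its maximum with p = rss / U, up to shared constants
    return U * math.log(rss / U) + (M + 2) * math.log(U)

def select_order(x, y, M_max):
    """Return the polynomial order with the smallest BIC."""
    return min(range(M_max + 1), key=lambda M: bic(x, y, M))

# formula check on a tiny sample: constant fit, rss = 4, U = 4
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, -1.0, 1.0, -1.0])
bic0 = bic(x, y, 0)

# order scan on a perturbed quadratic (deterministic "noise")
x2 = np.linspace(-1.0, 1.0, 100)
y2 = 1.0 + 2.0 * x2 ** 2 + 0.01 * np.sin(50.0 * x2)
best = select_order(x2, y2, 5)
```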
Fig. 1. Histogram of the recovered polynomial order for different inverse signal-to-noise ratios k; U = 1000 denotes the sample size. The noise variance σ_{n} ranges from ≈0 to 3. The signal was generated according to Eq. (2) as a 15th-order polynomial. The most adequate order selected by the BIC decreases with increasing k. Since the selected model depends on the specific data realization, we averaged over reconstructed orders with similar k.

Combining the BIC with the parametric estimation for correlations results in a reliable method to reconstruct correlations in data and quantify corresponding uncertainties. So far we assumed data to be generated from a single process. In the following, we describe our approach to handle complex data generated by an arbitrary combination of processes.
2.2. Self organizing maps
The previous section describes a parametric approach to reconstruct the correlation function for noisy data drawn from a single generation process. Real data, however, are often the result of multiple generation processes with different underlying data models. This yields data samples consisting of multiple subsamples with varying correlation structures. A successful application of the method described in Sect. 2.1 can only be guaranteed if the data presented to the algorithm are drawn from a single generation process. Therefore we present a method to extract subsamples, each corresponding to a single data generation process, from the full dataset. In particular, we use a self-organizing map.
A SOM^{1} is an artificial neural network specifically designed to identify clusters in high-dimensional data and to divide the data into corresponding subsamples. To accomplish this division, the SOM has to adopt the properties of the spatial distribution of the data. To do so, we assumed that the distribution of data points in a high-dimensional data space can be approximated by a low-dimensional manifold mapped onto the data space. In this work, the manifold is approximated by a finite square-lattice pattern called the neuron space, consisting of a discrete set of points called neurons. Each neuron holds a position in the neuron space. This position is later used to specify links between neighboring neurons, in order to preserve the topological structure of the approximated manifold (in this case a square lattice).
In addition to its position in the neuron space, each neuron holds a position in data space, called its weight W. Therefore, instead of mapping a manifold onto the data space, we update the weights according to the data such that the chosen neuron pattern represents the data distribution. The nonlinear mapping is inferred from the data via a specific learning process, as described in the following (for a detailed description of the algorithm see Appendix B).
The learning process consists of a recursive update rule in which the weights are updated according to the data. Data vectors are successively presented to the network, and for each iteration the best matching unit (BMU), the neuron whose weight is closest to the presented data vector V_{t}, is determined. The distance D between data vectors and weights is measured via a Euclidean metric in the normalized data space. Specifically,

D(V_{t},W)^{2} = Σ_{i} (V_{t,i} − W_{i})^{2}/σ_{i}^{2}, (19)

with σ_{i} being the scale factor for each component i, defined as

σ_{i} = V_{i max} − V_{i min}, (20)

where V_{i max} and V_{i min} are the maximum and minimum values of the ith component of all data vectors.
In addition, the weights of all neurons are updated according to an update function dependent on V_{t} and the BMU. The update function,

ΔW_{t} = L_{0} e^{−t/λ} exp(−d_{BMU}^{2}/(2σ(t)^{2})) (V_{t} − W_{t}), (21)

describes the change of weights W after presenting the tth data vector V_{t} to the network. d_{BMU} is the distance in the neuron space between the position of the updated neuron and the neuron identified as BMU at iteration step t. L_{0} and λ are constant, tunable parameters, and σ(t) defines the size of the neighbourhood of the BMU in the neuron space.
Since the change of weights ΔW = W_{t + 1} − W_{t} decreases with increasing t, the order of data vectors presented to the network influences the final result of the learning process. In order to avoid a bias towards data presented to the network in the beginning, we repeated the learning process multiple times for random permutations of the full dataset and averaged the results (see Kohonen 2001).
Fig. 2. Distribution of the mock data (left panel), generated as described in Sect. 2.3. The x and z coordinates of each data point are drawn from four different Gaussian distributions with different means and covariances; the covariance is assumed to be diagonal. The y coordinates are generated to be correlated with the x coordinates via a correlation function consistent with Eq. (2) (see Table 1 for the correlation coefficients). The right panel shows the data space including the 3 × 3 square-lattice neuron pattern after a successful training of the SOM. Neighboring neurons are interlinked in the picture. In addition, each subsample of data corresponding to one neuron is drawn in the color of the specific neuron.

The full training process can be expressed as a recursive algorithm:

Repeat multiple times:
1. Initialize the network pattern as a square lattice.
2. Initialize the weights in data space for all neurons randomly.
3. Repeat for all data vectors V_{t}, t ∈ (1,..,N):
   (a) Calculate the BMU for V_{t}, which is the neuron closest to V_{t}. The distance is measured via a Euclidean metric in the normalized data space.
   (b) Update the weights W of all neurons according to V_{t} and the BMU, as described by the update function Eq. (21).
For large datasets this training process is numerically expensive. Once completed, however, the trained SOM is a numerically fast and powerful tool to approximately represent the structure of datasets. A new vector V′ presented to the trained SOM is classified by the properties of the corresponding BMU. More precisely, the neuron holding the weight closest to V′ (in terms of the Euclidean distance) represents the region of the data space in which V′ lies.
Given these properties, a SOM can be used to find data clusters in high-dimensional data spaces. Specifically, after the SOM has been trained, each training vector is again presented to the trained SOM, and all vectors sharing the same BMU are stored in a subsample of data. Each subsample holds a set of data vectors with properties similar to the weight of the BMU. The average properties of this region are represented by the data space position of the BMU.
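The training and clustering steps described above can be sketched as follows. This is a deliberately minimal SOM with a one-dimensional neuron chain; the learning-rate and neighborhood-decay choices are illustrative, not the values used by SOMBI:

```python
import numpy as np

def train_som(data, weights, n_epochs=5, L0=0.5, sigma0=0.2):
    """Train a 1D chain of neurons on `data` (shape U x dim)."""
    weights = weights.astype(float).copy()
    grid = np.arange(len(weights), dtype=float)   # neuron-space positions
    lam = n_epochs * len(data)                    # decay constant for L and sigma
    rng = np.random.default_rng(0)
    t = 0
    for _ in range(n_epochs):
        # random permutation per pass to avoid ordering bias (cf. Sect. 2.2)
        for v in data[rng.permutation(len(data))]:
            # best matching unit: neuron with the closest weight (cf. Eq. (19))
            bmu = int(np.argmin(np.sum((weights - v) ** 2, axis=1)))
            d = grid - grid[bmu]                  # neuron-space distance to the BMU
            sigma_t = sigma0 * np.exp(-t / lam)   # shrinking neighborhood size
            h = np.exp(-d ** 2 / (2.0 * sigma_t ** 2))
            L = L0 * np.exp(-t / lam)             # decaying learning rate
            weights += (L * h)[:, None] * (v - weights)   # cf. Eq. (21)
            t += 1
    return weights

# two well-separated mock clusters; neurons initialized near each cluster
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(100, 2)),
                  rng.normal([10.0, 10.0], 0.3, size=(100, 2))])
w = train_som(data, weights=np.array([[1.0, 1.0], [9.0, 9.0]]))
```

After training, each neuron's weight sits near the center of one cluster, so assigning every data vector to its BMU reproduces the clustering described above.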
Combining the SOM approach with the parametric correlation determination results in a generic method, which we call SOMBI, able to identify subsamples of data drawn from one data generation process in highly structured datasets. In addition, SOMBI provides the corresponding correlations for each subsample, including a correct treatment of uncertainties. In order to illustrate the performance of our method, we apply it in the following to mock data consistent with our data model.
2.3. Method validation with mock data
In this section we demonstrate the performance of the SOMBI algorithm.
Without loss of generality, in the following we restrict the test case to a low-dimensional (three-dimensional) mock dataset. Data vectors V are described by their data space positions (x,y,z). The data were generated according to the data model described in Sect. 2. Specifically, we generated a data sample consisting of four spatially separated subsamples (see Fig. 2, left panel). Each subsample is Gaussian distributed in two dimensions (x- and z-axis) of the data space, with varying and independent means, covariances, and sample sizes for each subsample. In the third dimension (y-axis) of the data space we include, for each subsample, a noisy functional dependence on x consistent with Eq. (2). The dependencies differ for each subsample (see Table 1 for the exact correlation coefficients).
For an optimal reconstruction, each subsample should correspond to a single neuron of the SOM after the training process. Since the number of subsamples in real data is not known in the beginning, we chose the number of neurons to be nine (a 3 × 3 square-lattice pattern).
During training, the SOM adopts the spatial properties of the data distribution. Once training is completed, all data points closest to a neuron are grouped and stored in reconstructed subsamples of data, as shown in Fig. 2. As seen in the figure, each subsample is now represented by one neuron located in the center of the sample. The remaining neurons get mapped between subsamples, due to the fact that the SOM aims to preserve the chosen topology of the network pattern. This results in a number of neurons holding only a tiny fraction of the data, without recoverable correlations. Those neurons should be excluded from further analysis. Specifically, all neurons satisfying the criterion of Eq. (22) should be excluded, where N_{S} denotes the number of data points corresponding to the specific neuron sample, N_{D} the total number of data points, and N_{n} the number of neurons. The remaining neurons result in subsamples of data (denoted as neuron samples in the following) which represent the reconstructed data distribution.
Due to the fact that parts of the data are removed from revealed subsamples, less signal is provided for correlation reconstruction. In particular, as the spatial separation of subsamples corresponding to different correlation structures decreases, the amount of usable data decreases. Therefore spatial separation of subsamples drawn from different data generation processes plays an important role for the quality of reconstructed correlation functions.
In order to reveal the correlation structure of each neuron sample, we applied our correlation determination method to each of them. As indicated in Fig. 3, the application results in four different reconstructed correlation functions between x and y. Each reconstructed polynomial represents the initial correlation function correctly within uncertainties. As indicated in the figure, each neuron sample holds data corresponding to a single data generation process, allowing a successful application of the correlation determination method.
Fig. 3 Mock data distribution projected to the xyplane as well as the initial correlation functions of each subsample of data. In addition, the reconstructed polynomials for each neuronsample as selected after training are depicted as red dashed lines in the picture. The gray areas denote the 1, 2 and 3σ_{y} uncertainties of the reconstruction, respectively. σ_{y} is the projection of the parameter covariance D to the data space as described by Eq. (13). 

The test indicates that the method behaves as expected for consistent mock data. The SOM reveals spatially separated data clusters and the correlation determination method is valid within uncertainties. However, in this case we restrict data to consist of multiple, spatially separated, Gaussian distributed subsamples. This restriction does not have to hold for real data. Therefore, further testing and comparison to a frequently used subsampling method is described in Sect. 4.1. In addition, various performance tests of the SOM have been presented in literature (see e.g., Kohonen 2001; Way et al. 2012).
2.3.1. Inconsistent mock data
In addition to the previous example, we applied the SOMBI algorithm to a mock dataset that is inconsistent with our assumed data model. In particular, we generated a two-dimensional dataset (x,y) with a non-polynomial correlation structure between x and y, where we obtained y by drawing a sample of a one-dimensional Gaussian random field with a covariance matrix given as a diagonal matrix in Fourier space,

C_{kk′} = P(k) δ_{kk′}, (23)

where P(k) is referred to as the power spectrum. For the purpose of this test, the exact form of the power spectrum is not crucial and was chosen for visual clarity. In order to obtain a finite dataset, we discretized the function into 512 pairs of data points (x,y) (with a flat distribution in x). In addition, we added Gaussian noise to the y values of the sample, consistent with Eq. (4).
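Drawing such a mock sample amounts to scaling white noise by √P(k) in Fourier space and transforming back. A sketch under an illustrative power-law spectrum (not necessarily the one used here):

```python
import numpy as np

def sample_grf(n, seed=0):
    """Draw a 1D Gaussian random field with C_kk' = P(k) delta_kk' (Eq. (23))."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    P = np.zeros_like(k)
    P[1:] = k[1:] ** -2.0                      # illustrative spectrum; zero mean mode
    # white Gaussian modes scaled by sqrt(P(k))
    modes = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(P / 2.0)
    return np.fft.irfft(modes, n=n)            # real-space field

y = sample_grf(512)
```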
Since this dataset does not have a clustered structure, using the SOM in this context does not seem necessary. However, we assume that the structure of the dataset is a priori unknown. Therefore we apply the full SOMBI algorithm, including the SOM, to the dataset, where we generate the SOM as a linear chain consisting of three neurons, since the data space is only two dimensional. The application follows the same procedure as described above and results in three data-constrained posterior distributions for the reconstructed polynomials. The reconstructed correlations, together with the dataset and the original correlation function, are depicted in Fig. 4.
Fig. 4. Non-polynomial correlation structure (green line) together with the mock data generated from it. The red, yellow, and black dashed lines indicate the three reconstructed polynomials. Gray areas denote the uncertainties of the reconstructions as described in the caption of Fig. 3.

We see that the clustering of the SOM disentangles three subsamples of data with structurally different reconstructed polynomials. The reconstructions support the structure of the correlation functions within uncertainties on scales where the signal dominates the noise. However, small structures in the correlations cannot be reconstructed by the algorithm since the noise dominates in those regions. In addition, the approximation of the correlation function by finite order polynomials will always result in a mismatch for nonanalytic structures. However, the results support our claim that the SOM helps to disentangle complex structures into multiple but simpler structures.
3. Data
As a demonstration of our method, in the following we show example applications to galaxy data. The data used for correlation determination are described in detail below.
Fig. 5 Slices of the ensemble mean density field on a logarithmic scale log (2 + ⟨δ⟩) (upper panels) and the same slices with the SDSS galaxies mapped onto the grid as yellow dots (lower panels). In order to exclude areas of high uncertainty from the analysis we applied a distance threshold in the comoving frame at d_{lim} ≈ 450 Mpc h^{−1}. Galaxies above this limit are excluded from the analysis and not depicted.

3.1. Galaxy data
The dataset used in this work is constructed from the sample DR7.2 of the New York University Value Added Catalog (NYU-VAGC) provided by Blanton et al. (2005b). This catalog is based on DR7 (see Abazajian et al. 2009), the seventh data release of the SDSS (see York et al. 2000). The sample consists of 527 372 galaxies in total, in a redshift range of 0.001 < z < 0.4. Table 2 shows the ranges of the catalog in the r-band Petrosian apparent magnitude r, the logarithm of the stellar mass log (M_{∗}) in units of M_{⊙} h^{−2}, and the absolute magnitude M_{0.1r}. M_{0.1r} is corrected to its z = 0.1 value according to the K-correction code of Blanton & Roweis (2007) and the luminosity evolution model described by Blanton et al. (2003). For simplicity, we restrict the application of SOMBI to these properties to look for correlations with the LSS. LSS information is provided by a set of data-constrained density field reconstruction maps. Details about the reconstruction maps are described in Sect. 3.3.
Property ranges of galaxy data.
3.2. AGN data
In addition to the galaxy dataset described above, we present correlations between the LSS and an AGN dataset. The catalog is based on a previous SDSS data release (DR4; see Adelman-McCarthy et al. 2006) and consists of 88 178 galaxies classified as AGNs according to Kauffmann et al. (2003).
The data include various properties such as luminosities of specific emission lines ([O III] 5007, [N II]) as well as stellar masses, intrinsic velocity dispersions of galaxies, and parameters associated with the recent star formation history, such as stellar surface mass densities and concentration indices. For structural information, the 4000 Å break strength is included in the dataset. In Sect. 4 we present the revealed correlations between each included galaxy property and the surrounding LSS. The correlation analysis is based on the method described in Sect. 2.
3.3. Large-scale structure reconstructions
In addition to the galaxy and AGN properties directly obtained from the SDSS sample, we include properties of the cosmic LSS in our analysis and apply the SOMBI algorithm to the resulting dataset in the next section. A modern approach to LSS reconstruction is based on the reconstruction of initial conditions under the constraint of a cosmological model (see e.g., Jasche et al. 2015; Jasche & Wandelt 2013). The main idea of this approach rests on the fact that the initial density field follows almost homogeneous and isotropic statistics, which makes successful modeling much more likely compared to the nonlinear present density field. In addition, the initial density field consists of small, very nearly Gaussian, and nearly scale-invariant correlated density perturbations. Within standard cosmology, the gravitational evolution and growth of those initial perturbations, which processed the initial conditions into the present density field, is well understood in principle. As a consequence, a successful reconstruction of the initial density field ultimately results in a detailed description of the present LSS.
In this work we rely on results previously obtained by the BORG algorithm (see Jasche & Wandelt 2013). BORG performs reconstructions of the initial density field based on second-order Lagrangian perturbation theory (see e.g., Bernardeau et al. 2002). The resulting reconstructions are based on a nonlinear, non-Gaussian, fully Bayesian LSS analysis of the SDSS DR7 main galaxy sample, the same dataset as used for our correlation study. The method of Jasche & Wandelt (2013) is a Markov chain Monte Carlo sampling algorithm and results in a set of data-constrained density contrast field samples δ_{i}, i ∈ [1, ..., S]. The density contrast δ is the normalized difference between the density ρ and its cosmic mean ρ̄. Specifically,

δ = (ρ − ρ̄)/ρ̄. (24)

The density contrast samples can be recombined into an approximate estimate of the PDF of the density contrast. Specifically,

P(δ | d) ≈ (1/S) Σ_{i=1}^{S} δ^{D}(δ − δ_{i}), (25)

where δ^{D} denotes the Dirac delta distribution. Applying the SOMBI methods to the density contrast samples results in a PDF describing the correlation for each sample, P(f | δ_{i}, d). The dependency on δ_{i} has to be marginalized in order to yield the final PDF P(f | d). Marginalization over δ is described in detail in Appendix D.
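If the per-sample posteriors P(f | δ_i, d) are summarized by Gaussian moments for the polynomial coefficients, marginalizing over the density samples amounts to forming an equal-weight mixture; the sketch below combines its mean and covariance via the law of total variance. This illustrates only the averaging step under that Gaussian assumption; the exact treatment is given in Appendix D of the paper.

```python
import numpy as np

def marginalize_over_samples(means, covs):
    """Combine per-density-sample Gaussian posteriors N(m_i, C_i) of the
    polynomial coefficients into the moments of the equal-weight mixture
    P(f|d) = (1/S) sum_i P(f|delta_i, d).  A sketch under a Gaussian
    summary assumption, not the paper's Appendix D derivation."""
    means = np.asarray(means)        # shape (S, p): one mean per sample
    covs = np.asarray(covs)          # shape (S, p, p)
    m = means.mean(axis=0)
    dm = means - m
    # Law of total variance: mean of covariances + covariance of means.
    C = covs.mean(axis=0) + np.einsum('si,sj->ij', dm, dm) / len(means)
    return m, C
```

The scatter of the per-sample means thus propagates the density-field uncertainty into the final coefficient covariance.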
Fig. 6 Web type classification in slices of the 3D LSS reconstruction. We cut the volume at the same positions as used in Fig. 5. Since an average web type is not well defined, we present only one sample of the reconstructions instead of the mean LSS. We distinguish the LSS according to the webtype classification described in Table 4. We note that sheets seem to fill the largest volume. In the chosen classification scheme, incident regions to sheets are also accounted as sheets. 

The density field inference was applied to the northern galactic cap as covered by the SDSS survey. More precisely, inference is performed on a cube of 750 Mpc h^{−1} side length with a grid resolution of ≈3 Mpc h^{−1} in the comoving frame. This results in a cubic grid with 256^{3} voxels. Table 3 gives the boundaries of this box.
Boundaries of the cubic grid in the comoving frame.
In order to compare galaxy properties to the properties of the LSS, we map galaxies onto the cubic grid (as depicted in Fig. 5) and extract the information about the LSS provided by BORG for each position. The explicit mapping procedure is described in Appendix C.
The density field allows the derivation of many important quantities of the LSS, for example the gravitational potential, the tidal-shear tensor, and the web type classification.
The rescaled gravitational potential Φ is given by

∇²Φ = δ, (26)

and the tidal-shear tensor T_{ij} is given by the Hessian of Φ:

T_{ij} = ∂²Φ/(∂x_{i} ∂x_{j}). (27)

The eigenvalues λ_{i} (with i ∈ {1,2,3}) of the tidal-shear tensor permit the classification of different structure types within the cosmic matter distribution (see e.g., Lemson & Kauffmann 1999; Colberg et al. 2005; Novikov et al. 2006). For an application of web type classification to density fields inferred with the BORG algorithm see Leclercq et al. (2015a,b,c).
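Equations (26) and (27) can be evaluated efficiently in Fourier space, where the Poisson equation and the Hessian become algebraic operations on the modes of δ. The periodic-box FFT sketch below is our own illustration (it is not the BORG pipeline); the sign follows from Φ_k = −δ_k/k², so that T_{ij,k} = (k_i k_j / k²) δ_k.

```python
import numpy as np

def tidal_shear_eigenvalues(delta, box_size):
    """Eigenvalues of the tidal-shear tensor T_ij = d^2 Phi / dx_i dx_j,
    with the rescaled potential obeying laplacian(Phi) = delta (Eq. 26).
    A Fourier-space sketch for a periodic box; illustrative only."""
    n = delta.shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    dk = np.fft.fftn(delta)
    dk[0, 0, 0] = 0.0                      # remove the mean mode
    kvec = (kx, ky, kz)
    T = np.empty(delta.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # T_ij in Fourier space: (k_i k_j / k^2) delta_k
            T[..., i, j] = np.fft.ifftn(kvec[i] * kvec[j] / k2 * dk).real
    return np.linalg.eigvalsh(T)           # ascending eigenvalues per voxel
```

A useful consistency check is that the trace of T (the sum of the eigenvalues) reproduces δ itself, by Eq. (26).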
In this work, we rely on the eigenvalues of the tidal-shear tensor in order to include non-local information about the LSS in the analysis. These eigenvalues also provide a coarse web type classification of the LSS in terms of voids, sheets, filaments, and clusters.
3.3.1. Web type classification of the LSS
The web type is a classification of different structure types of the LSS. Various classification methods have been presented in the literature (see e.g., Aragón-Calvo et al. 2007; Hahn et al. 2007; Forero-Romero et al. 2009; Hoffman et al. 2012; Lavaux & Wandelt 2010; Shandarin et al. 2012; Cautun et al. 2013). In this work we split the LSS into four different types (voids, sheets, filaments, and clusters) according to the eigenvalues of the tidal-shear tensor, following the classification procedure described by Hahn et al. (2007). Table 4 shows the explicit classification rules and Fig. 6 shows the classification of a reconstructed sample according to these rules.
Web type classification according to the ordered eigenvalues λ_{1} > λ_{2} > λ_{3} of the tidal-shear tensor. In this work we used λ_{th} = 0.
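The rule of Hahn et al. (2007) amounts to counting how many eigenvalues exceed the threshold λ_th. A minimal sketch, assuming the standard mapping of that count to the four structure types:

```python
import numpy as np

def web_type(eigenvalues, lam_th=0.0):
    """Classify points by how many tidal-shear eigenvalues exceed lam_th
    (Hahn et al. 2007): 0 -> void, 1 -> sheet, 2 -> filament, 3 -> cluster.
    This work uses lam_th = 0."""
    labels = np.array(['void', 'sheet', 'filament', 'cluster'])
    counts = (np.asarray(eigenvalues) > lam_th).sum(axis=-1)
    return labels[counts]
```

Applied voxel-by-voxel to the eigenvalue field, this yields maps like the one shown in Fig. 6.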
The structural classification as well as the density field reconstruction itself contain information about the LSS at the location of a galaxy. These quantities are used in the following to compare galaxy properties with the LSS.
Fig. 7 Distribution of the SDSS data projected onto the M_{0.1r}-z plane. The top-left panel shows the volume limitation for different absolute magnitude thresholds (−18.5, −20.0). The bottom-left panel shows the distribution of the neurons after a successful application of the SOM to the SDSS data. The top-mid and bottom-mid panels show two different neuronsamples (NSamples) corresponding to the depicted neurons from the trained SOM. In addition, the volume limitations used for the correlation comparison are depicted in both panels. The selected neurons hold subsamples of data with a similar projected distribution compared to the volume-limited samples, in order to compare the selection methods. The top-right and bottom-right panels show reconstructed correlation functions for the volume-limited samples with magnitude limits at −18.5 (top) and −20.1 (bottom) and for the corresponding neuronsamples. The range of each subsample in log (1 + δ) is indicated by the length of each reconstructed polynomial.

4. Data application and discussion
In this section, we apply SOMBI to the galaxy and the AGN datasets derived from the SDSS survey as described in the previous section.
In order to apply the SOM to the extended data sample, we include various galaxy and LSS properties to define the data space for training. To find as many separated regions in the data as possible, we include properties holding unique information about the data. For the SDSS catalog a reasonable setup is therefore to include redshifts z, r-band absolute magnitudes M_{0.1r}, and colors of galaxies. To include properties of the LSS, we extended the training space with the logarithm of the density field log (1 + δ) and the three eigenvalues of the tidal shear tensor at the location of each galaxy. This setup seems reasonable, since many properties of the LSS (for example the web type classification) depend on these quantities. The logarithm of the stellar mass log (M_{∗}), another common property of galaxies, was excluded from the training process, since it is expected to be proportional to the absolute magnitude (see e.g., Salaris & Cassisi 2005; Harwit 2006; Kroupa & Tout 1997). For the correlation analysis, however, we use the stellar mass again. We use the logarithm of the density field rather than the density field itself because the included galaxy properties are on logarithmic scales; estimating dependencies on the same scale ensures an adequate data space distance measure.
4.1. Subdividing the galaxy sample
In this work we rely on the SOM to subdivide data. However, various manual selection methods have been presented in the literature (see e.g., Mo et al. 2010). In order to illustrate the performance of our method, we compare a frequently used selection method, volume limitation, to the SOM application.
Volume limitation is an approach to account for flux limitations of telescopes. Flux limitation means that at larger distances only the brightest galaxies are detected, which introduces a distance-dependent selection effect in a sample of observed galaxies. A frequently used approach to remove this effect is to limit the volume of the catalog in redshift space such that in this subsample all existing galaxies are included. A possible way to accomplish volume limitation is to include in the sample only galaxies brighter than a certain absolute magnitude limit M_{lim} and below a certain redshift limit z_{lim}. Here z_{lim} is the redshift at which a galaxy with absolute magnitude M_{lim} has an apparent magnitude equal to the survey limit m_{lim}. More precisely,

m_{lim} = M_{lim} + 5 log_{10}(r_{lim}/r_{0}), (28)

with r_{lim} the luminosity distance corresponding to z_{lim} and r_{0} = 10 pc by convention (see e.g., Mo et al. 2010).
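Equation (28) translates into a simple selection rule. A sketch, where the function name is ours and the luminosity distances of the galaxies are assumed to be given in Mpc:

```python
import numpy as np

def volume_limited_sample(M_abs, lum_dist_mpc, M_lim, m_lim):
    """Boolean mask for a volume-limited sample per Eq. (28):
    keep galaxies brighter than M_lim and closer than the distance at
    which a galaxy of absolute magnitude M_lim reaches the survey
    flux limit m_lim.  Illustrative sketch only."""
    # Invert Eq. (28): r_lim = r0 * 10^((m_lim - M_lim)/5), r0 = 10 pc.
    # Expressed in Mpc (1 Mpc = 1e6 pc):
    r_lim_mpc = 10 ** ((m_lim - M_lim + 5) / 5) / 1e6
    return (M_abs <= M_lim) & (lum_dist_mpc <= r_lim_mpc)
```

For the SDSS limit m_lim ≈ 17.77 and M_lim = −20, for example, this gives r_lim ≈ 360 Mpc, so only sufficiently bright and sufficiently nearby galaxies survive the cut.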
Figure 7 shows different volume limitations of the SDSS data sample and the corresponding reconstructed correlation functions between the absolute magnitude M_{0.1r} and the logarithm of the density field log (1 + δ).
Fig. 8 Reconstructed correlation functions for different neuronsamples selected from the SDSS data sample by the SOM. In particular we depict the correlations for the logarithm of the stellar mass log (M_{∗}), the r-band absolute magnitude M_{0.1r}, and the g−r color. In addition, each figure shows the data space position of the BMU corresponding to the subsample of data used for reconstruction. We see that the correlations of the stellar mass log (M_{∗}) and the absolute magnitude M_{0.1r} with the density field log (1 + δ) appear to be similar in different regions of the LSS. The upper two panels show reconstructions for subsamples of heavy, red galaxies in high-density regions classified as clusters (halos) according to the corresponding eigenvalues. The bottom-left panel belongs to heavy, red galaxies in low-density regions classified as filaments, and the bottom-right panel belongs to light, blue galaxies in regions classified as sheets (or filaments, since λ_{2} ≈ 0). We adjusted the range of the y-axis in the last panel in order to improve the visibility of the correlation structure. Colors are defined according to the color classification code of Li et al. (2006).

In order to compare the SOM to volume limitation, we train the SOM with the extended SDSS dataset (the SDSS properties including LSS properties) and select neuronsamples which appear to hold data close to the volumelimited samples in the M_{0.1r}zplane. The SOM is set up to consist of 49 neurons (7 × 7 square lattice pattern) and the neuronsamples are generated as described in Sect. 2.2. For the training process, all available quantities from the SDSS and the LSS are included. Therefore sampling is based not only on the fluxlimitation bias encoded in the data distribution in the M_{0.1r}zplane, but also takes into account additional systematics hidden in extra dimensions of the data sample. Figure 7 shows the positions of all (left) and the selected (middle) trained neurons projected to the M_{0.1r}zplane. In addition, we depict in the middle panels also the data samples corresponding to these selected neurons.
Furthermore, the reconstructed correlation functions for the selected neuronsamples and the corresponding volume-limited samples are shown for comparison on the right of Fig. 7. For the lower magnitude (−18.5) sample, the reconstruction appears to have a similar correlation strength compared to the neuronsample. However, because the neuronsample includes galaxies above the magnitude limit introduced for volume limitation, and not all galaxies below this threshold, the correlation functions show an offset of ≈0.3 orders of magnitude. For the higher absolute magnitude (−20.1) sample, the reconstructed correlations differ more dramatically. As we will see in the next section, the different correlation functions between magnitudes and the cosmic density are caused by additional systematics hidden in extra dimensions of the data. Those systematics are removed from subsamples constructed by the SOM.
4.2. SDSS application
As described in the previous section, the application of the SOM to SDSS data results in various subsamples of data holding different properties of the data space. Subsamples can be used in order to reconstruct correlation functions between galaxy properties and the LSS. In addition, the data space weight of the corresponding neurons indicate the average properties of the galaxy sample. The reconstructed correlation functions for each sample as well as its average properties illuminate the relation of galaxies and their surrounding LSS. In order to illustrate this connection, we present reconstructions for various subsamples in the following.
In particular, for correlation determination we include the r-band absolute magnitude M_{0.1r}, the logarithm of the stellar mass log (M_{∗}) (in units of M_{⊙} h^{−2}), and the g−r color in the analysis and compare them to the cosmic large-scale density on a logarithmic scale, log (1 + δ). Figure 8 shows the reconstructed correlation functions between galaxy properties and the density field.
We see that the logarithm of the stellar mass appears to show a linear, positive correlation with the logarithm of the density field for multiple subsamples. In particular, the upper two panels of Fig. 8 show reconstructions for subsamples of galaxies with the described densitymass relation. Both samples hold massive galaxies in a high density cosmic environment classified as cluster (halo) according to the eigenvalues of the tidal shear tensor. According to the color classification code described by Li et al. (2006), both samples hold galaxies classified as red. Therefore we denote the samples as red samples in the following.
In addition, the SOM revealed a subsample of data (denoted as the blue sample) holding blue galaxies of low mass in a low-density cosmic environment classified as sheet (or filament, since λ_{2} ≈ 0.0), depicted in the bottom-left panel of Fig. 8. The masses of these galaxies show a correlation with the density field similar to that of the masses in the red samples.
The visible positive correlation of stellar masses with the environmental matter density verifies the intuitive expectation that heavy galaxies are located in denser regions than light galaxies. It is of particular interest to point out that this trend holds for light galaxies in low-density regions (blue sample) as well as for heavy galaxies in high-density regions (red samples).
In addition, the reconstructed correlation functions for the absolute magnitude show a linear dependency on the logarithm of the density. The results appear to be consistent with the correlations for stellar masses, since the brightness of galaxies in terms of the absolute magnitude is expected to be proportional to the logarithm of the stellar mass.
Correlations for colors indicate density dependence for blue and red galaxy samples. In particular, we see that higher color values correspond to higher density regions on average, irrespective of the color classification of the subsamples.
Our reconstructed correlations are consistent with the trends obtained by Lee & Li (2008) in their studies of the correlations between physical properties of galaxies and the large-scale environment. However, our recovered correlation amplitudes differ from their results, because the amplitudes in the cosmic density field reconstructions used by Lee & Li (2008) are lower; this is a consequence of their larger voxel size (~6 Mpc h^{−1}) compared to the results of Jasche & Wandelt (2013) (~3 Mpc h^{−1}). In addition, the BORG algorithm includes a more detailed treatment of uncertainties in the reconstruction.
Fig. 9 Reconstructed correlation functions for one neuronsample selected from the AGN data sample by the SOM. The data space position of the BMU is depicted in the right side of the figure. 

Fig. 10 Reconstructed correlation functions for one neuronsample selected by the SOM from AGN data. 

Fig. 11 Reconstructed correlation functions for one neuronsample selected by the SOM from AGN data. 

Fig. 12 Reconstructed correlation functions for one neuronsample selected by the SOM from AGN data. 

In addition, SOMBI reveals existing systematics and unexpected correlation trends in the data. In particular, in the bottom-right panel of Fig. 8 we see inverted correlations in a subsample of data, compared to the correlations of the other (red and blue) samples. In total we identified three out of 49 subsamples (≈3% of all data points) with similar data space weights as well as similar correlation structures. The representative sample holds heavy, red galaxies in low-density regions located in filaments (or sheets, since λ_{2} ≈ 0.09). The reconstructed correlation seems to indicate that in this sample heavy galaxies are located in lower-density regions than light galaxies. In addition, the color dependence on the density field disappears. A possible interpretation of the inverted correlation could be that in low-density regions such as voids, structures formed a long time ago, and therefore galaxies located in such regions are more likely to be old, red, and heavy. In contrast, in high-density regions the increased presence of matter indicates increased activity in galaxy and star formation, so that more young and light galaxies are located in such regions.
At this stage, our results are not capable of validating the described interpretation. The limiting factors are systematics caused by redshift distortions in the data sample. These distortions arise from peculiar velocities δv of galaxies, which introduce a Doppler shift to the redshift measurement (see e.g., Kaiser 1987). This effect causes galaxy clusters to appear stretched along the line of sight, an effect frequently referred to as “Fingers of God”. The velocities of galaxies in high-density regions reach up to δv ~ 1000 km s^{−1}. The induced redshift uncertainty δz ≈ δv/c leads to uncertainties in the comoving frame of up to δd_{com} ≈ 14 Mpc. Since the resolution of the BORG reconstruction maps is ~3 Mpc, a galaxy can be mapped four voxels away from its actual position in an extreme case. In addition, the BORG reconstructions are only corrected for redshift distortions up to linear order, and effects of nonlinear redshift distortions may still be present in high-density regions.
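The quoted distance error can be checked with a short back-of-the-envelope calculation at low redshift, where d ≈ cz/H0; the value H0 = 70 km s⁻¹ Mpc⁻¹ is our assumption, since the text does not state the Hubble constant used.

```python
# Order-of-magnitude check of the quoted redshift-distortion error.
# Assumes H0 = 70 km/s/Mpc and the low-redshift relation d = c z / H0.
dv = 1000.0          # peculiar velocity, km/s
c = 299792.458       # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (assumed)

dz = dv / c          # induced redshift error
dd = c * dz / H0     # distance error in Mpc; simplifies to dv / H0
print(round(dd, 1))  # prints 14.3, consistent with the quoted ~14 Mpc
```

Since dd = dv/H0, the result scales inversely with the assumed Hubble constant.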
Therefore, at this point of the analysis we cannot verify whether the discovered subsample with inverted correlations indeed consists of heavy galaxies in low-density regions. Alternatively, these could be galaxies actually located in high-density regions but displaced by redshift distortions.
A more physical interpretation of galaxy physics is beyond the scope of this work and will be left for future publications.
4.3. AGN application
Now we apply SOMBI to galaxies classified as AGNs according to Kauffmann et al. (2003). The application follows the same strategy as described above resulting in 49 subsamples of data.
Galaxies hosting AGNs appear to have a higher stellar mass on average and are more likely to be located in higher-density regions such as clusters (halos) or filaments. As a consequence, all recovered subsamples of data are located in filaments or clusters.
As a preliminary consistency check we see that the reconstructed dependency of the stellar mass on the LSS density for all recovered subsamples appears to be similar to the correlation structure of the full SDSS sample described above. Since the AGN data is a subset of data drawn from the SDSS sample, the correlation functions should be comparable.
In addition, in Figs. 9–12 we present correlations for all available galaxy parameters of AGNs with the logarithm of the density field. In particular, we reconstructed correlations between parameters associated with the recent star formation history and the LSS density field. We see that the R90/R50 concentration index as well as the logarithm of the stellar surface mass density log (μ_{∗}) appear to be positively and linearly correlated with the logarithm of the density field. This result indicates that the star formation activity increases with increasing density, on average.
Structural parameters such as the 4000 Å break strength D_{n}(4000) as well as the intrinsic velocity dispersion v_{disp} of AGN galaxies appear to show a positive correlation with the logarithm of the density field. However, the revealed model for the correlation (in particular the order of the polynomial, as described in Sect. 2.1.1) differs between subsamples. In particular, the resulting correlation of the velocity dispersion with the density appears to be linear for two subsamples (Figs. 9 and 11) and follows a curved shape for the remaining subsamples (Figs. 10 and 12).
The recovered correlation functions for structural parameters (D_{n}(4000), v_{disp}) as well as parameters associated with the recent star formation history (R90/R50, log (μ_{∗})) show correlations with the density field consistent with the results obtained by Lee & Li (2008). As for the galaxy sample, the correlation strengths differ compared to Lee & Li (2008).
Furthermore, we present correlations of the [O III] 5007 and [N II] emission line luminosities. The reconstructed correlation functions between the luminosity of the [O III] 5007 emission line (log ([O III] 5007), in solar units) and the density field appear to differ between the depicted subsamples. In contrast, the correlations of the [O III] 5007 emission line relative to H_{β} with the density field, as well as those of the [N II] emission line relative to H_{α}, appear to be more stable throughout the depicted subsamples. The results indicate that both the [O III] 5007 luminosity relative to H_{β} and the [N II] luminosity relative to H_{α} increase with increasing density. A physical interpretation of these results is beyond the scope of this work.
We believe that the automatic classification of subsamples of galaxies as well as the presented correlation analysis with the LSS is capable of revealing additional information about the connection between the LSS and galaxy properties. However, the goal of this work is to present the SOMBI method and to outline possible fields of application.
5. Summary and conclusion
This work describes the implementation and application of the SOMBI algorithm, a Bayesian inference approach to search for correlations between different observed quantities in cosmological data. As an example we infer relations between various properties of galaxies and the cosmic large-scale structure (LSS). This is of particular scientific interest, since galaxy formation and evolution are assumed to be directly linked to the LSS of our Universe. Studying the correlation between galaxies and their LSS environment will hence give further insight into the processes governing galaxy formation.
Cosmological data generally consist of multiple subsamples drawn from different generation processes, and each subsample is expected to hold a unique correlation structure. For the SOMBI algorithm we therefore seek a way to distinguish subsamples of data belonging to different processes and to determine the correlation structure of each sample.
The correlation determination used by SOMBI assumes the correlation structure to be a polynomial of unknown order. The method infers the posterior PDF of the coefficients describing the correlation via a Wiener filter approach. To automatically choose the polynomial order supported by the data, we employed a model selection method based on the BIC, which compares the likelihood of different models matching the data. Apart from our initial restrictions and the restriction that the data are drawn from a single generation process, this allowed us to compare galaxy properties to properties of the LSS without prior information about correlation structures.
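The BIC-based order selection can be illustrated with a simple least-squares stand-in for the full Wiener filter posterior: BIC = k ln n − 2 ln L̂, and the order minimizing it is kept. The function below is our hypothetical sketch under a Gaussian-noise assumption, not the paper's implementation.

```python
import numpy as np

def bic_polynomial_order(x, y, max_order=5):
    """Select the polynomial order by the Bayesian information criterion,
    BIC = k ln(n) - 2 ln(L-hat), using least-squares fits with Gaussian
    noise.  A sketch of the model-selection idea only."""
    n = len(x)
    best = None
    for k in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, k)
        resid = y - np.polyval(coeffs, x)
        sigma2 = max(resid.var(), 1e-12)     # floor to avoid log(0)
        loglike = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        bic = (k + 1) * np.log(n) - 2 * loglike   # k+1 free coefficients
        if best is None or bic < best[0]:
            best = (bic, k, coeffs)
    return best[1], best[2]
```

Because the penalty term grows with the number of coefficients, higher orders are only selected when they improve the fit enough to outweigh ln(n) per extra parameter.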
To ensure a successful application of the correlation determination method, we automatically distinguish subsamples of data belonging to different data generation processes. This is done by a specific kind of artificial neural network called a self organizing map (SOM). A SOM seeks to classify and distinguish subsamples of data in noisy and highly structured observations. To do so, the SOM approximates the distribution of data by mapping a low-dimensional manifold (in this work two dimensional) onto the data space. The SOM provides subsamples of similar data. We assumed that each subsample consists of data drawn from one single generation process. To those samples the correlation analysis was applied successfully.
We tested the performance of the SOMBI algorithm with mock data and cosmological data. For the latter we compared our results to simple volumelimitation subsampling.
As an illustrative example, we applied the SOMBI algorithm to two datasets, a galaxy and an AGN catalog based on the SDSS, in order to study the connection between galaxy and LSS properties. LSS information, as used in this work, is provided by the BORG algorithm (see Jasche & Wandelt 2013), a fully Bayesian inference framework to analyze 3D density fields in observations.
The application of the SOMBI algorithm to the described datasets shows that galaxy properties are clearly connected with the LSS. In particular, for the galaxy sample, stellar masses and absolute magnitudes appear to be linearly and positively correlated with the cosmic density field on a logarithmic scale. In addition, we looked at the revealed correlations between the color of galaxies and the LSS density. The reconstructed correlation functions imply that redder galaxies tend to be closer to dense regions.
Furthermore, we presented correlations for additional galaxy properties such as structural parameters, parameters associated with the recent star formation history, velocity dispersions and luminosities of specific emission lines. Parameters are drawn from a subset of SDSS galaxies hosting AGNs. The results indicate that all described properties are correlated with the cosmic density field. However, correlation strengths appear to differ for recovered subsamples, as classified by the SOM.
We conclude that the combined results, ranging from the classification of galaxies according to data space properties to the correlation structures revealed by the SOMBI algorithm, provide preliminary insights into galaxy formation and evolution in specific cosmic environments. A more detailed application of the SOMBI algorithm to cosmological data is left for future work.
The generic framework of our method allows a simple analysis of many different kinds of datasets, including highly structured and noisy data. In addition, SOMBI is applicable for structure identification and correlation determination in different but related fields.
The SOM implementation used in this work is provided by the Python package PYMVPA (www.pymvpa.org/generated/mvpa2.mappers.som.SimpleSOMMapper.html). PYMVPA is a frequently used package in computational science (see e.g., Hanke et al. 2009).
Acknowledgments
We thank Maksim Greiner and Fabian Schmidt for comments on the manuscript. This research was supported by the DFG cluster of excellence “Origin and Structure of the Universe” (www.universecluster.de). Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the US Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/.
References
Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2009, ApJS, 182, 543
Adelman-McCarthy, J. K., Agüeros, M. A., Allam, S. S., et al. 2006, ApJS, 162, 38
Aragón-Calvo, M. A., Jones, B. J. T., van de Weygaert, R., & van der Hulst, J. M. 2007, A&A, 474, 315
Balogh, M. L., Christlein, D., Zabludoff, A. I., & Zaritsky, D. 2001, ApJ, 557, 117
Bernardeau, F., Colombi, S., Gaztañaga, E., & Scoccimarro, R. 2002, Phys. Rep., 367, 1
Blanton, M. R., & Roweis, S. 2007, AJ, 133, 734
Blanton, M. R., Hogg, D. W., Bahcall, N. A., et al. 2003, ApJ, 592, 819
Blanton, M. R., Eisenstein, D., Hogg, D. W., Schlegel, D. J., & Brinkmann, J. 2005a, ApJ, 629, 143
Blanton, M. R., Schlegel, D. J., Strauss, M. A., et al. 2005b, AJ, 129, 2562
Carilli, C. L., & Rawlings, S. 2004, New Astron. Rev., 48, 979
Cautun, M., van de Weygaert, R., & Jones, B. J. T. 2013, MNRAS, 429, 1286
Colberg, J. M., Sheth, R. K., Diaferio, A., Gao, L., & Yoshida, N. 2005, MNRAS, 360, 216
Cortese, L., Catinella, B., Boissier, S., Boselli, A., & Heinis, S. 2011, MNRAS, 415, 1797
Dressler, A. 1980, ApJ, 236, 351
Forero-Romero, J. E., Hoffman, Y., Gottlöber, S., Klypin, A., & Yepes, G. 2009, MNRAS, 396, 1815
Fustes, D., Manteiga, M., Dafonte, C., et al. 2013, A&A, 559, A7
Geach, J. E. 2012, MNRAS, 419, 2633
Gómez, P. L., Nichol, R. C., Miller, C. J., et al. 2003, ApJ, 584, 210
Hahn, O., Porciani, C., Carollo, C. M., & Dekel, A. 2007, MNRAS, 375, 489
Hanke, M., Halchenko, Y. O., Sederberg, P., et al. 2009, Neuroinformatics, 7, 37
Harwit, M. 2006, Astrophysical Concepts (John Wiley & Sons Ltd.)
Hermit, S., Santiago, B. X., Lahav, O., et al. 1996, MNRAS, 283, 709
Hoffman, Y., Metuki, O., Yepes, G., et al. 2012, MNRAS, 425, 2049
Hogg, D. W., & SDSS Collaboration 2003, BAAS, 35, 770
Jasche, J., Leclercq, F., & Wandelt, B. D. 2015, J. Cosmol. Astropart. Phys., 1, 36
Jasche, J., & Wandelt, B. D. 2013, MNRAS, 432, 894
Kaiser, N. 1987, MNRAS, 227, 1
Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003, MNRAS, 346, 1055
Kauffmann, G., White, S. D. M., Heckman, T. M., et al. 2004, MNRAS, 353, 713
Kohonen, T. 1982, Biological Cybernetics, 43, 59
Kohonen, T. 2001, Self-Organizing Maps (Springer)
Kroupa, P., & Tout, C. A. 1997, MNRAS, 287, 402
Laureijs, R., Amiaux, J., Arduini, S., et al. 2011, ArXiv e-prints [arXiv:1110.3193]
Lavaux, G., & Wandelt, B. D. 2010, MNRAS, 403, 1392
Leclercq, F., Jasche, J., Lavaux, G., & Wandelt, B. 2015a, ArXiv e-prints [arXiv:1512.02242]
Leclercq, F., Jasche, J., & Wandelt, B. 2015b, J. Cosmol. Astropart. Phys., 6, 015
Leclercq, F., Jasche, J., & Wandelt, B. 2015c, A&A, 576, L17
Lee, J., & Li, C. 2008, ArXiv e-prints [arXiv:0803.1759]
Lemson, G., & Kauffmann, G. 1999, MNRAS, 302, 111
Lewis, I., Balogh, M., De Propris, R., et al. 2002, MNRAS, 334, 673
Li, C., Kauffmann, G., Jing, Y. P., et al. 2006, MNRAS, 368, 21
Liddle, A. R. 2007, MNRAS, 377, L74
Liu, Z., Song, L., & Zhao, W. 2016, MNRAS, 455, 4289
LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009, ArXiv e-prints [arXiv:0912.0201]
Maehoenen, P. H., & Hakala, P. J. 1995, ApJ, 452, L77
Mo, H., van den Bosch, F. C., & White, S. 2010, Galaxy Formation and Evolution (Cambridge University Press)
Naim, A., Ratnatunga, K. U., & Griffiths, R. E. 1997, ArXiv e-prints [arXiv:astro-ph/9704012]
Novikov, D., Colombi, S., & Doré, O. 2006, MNRAS, 366, 1201
Oemler, Jr., A. 1974, ApJ, 194, 1
Polsterer, K. L., Gieseke, F., Gianniotis, N., & Kügler, D. 2015, IAU General Assembly, 22, 2258115
Postman, M., & Geller, M. J. 1984, ApJ, 281, 95
Rodríguez, S., Padilla, N. D., & García Lambas, D. 2016, MNRAS, 456, 571
Salaris, M., & Cassisi, S. 2005, Evolution of Stars and Stellar Populations (Wiley-VCH)
Shandarin, S., Habib, S., & Heitmann, K. 2012, Phys. Rev. D, 85, 083005
Spergel, D. N., Bean, R., Doré, O., et al. 2007, ApJS, 170, 377
Way, M. J., Scargle, J. D., Ali, K. M., & Srivastava, A. N. 2012, Advances in Machine Learning and Data Mining for Astronomy (Boca Raton: CRC Press)
Yoon, I., & Rosenberg, J. L. 2015, ApJ, 812, 4
York, D. G., Adelman, J., Anderson, Jr., J. E., et al. 2000, AJ, 120, 1579
Appendix A: PDF for realizations of reconstructed correlation functions
For visualization, the posterior of f (Eq. (8)) can be transformed into data space, resulting in a PDF for the realizations of the correlation function f(x). In Eq. (2) we assumed that f(x) can be Taylor expanded up to order M. The mean ⟨f(x)⟩ is therefore derived as

⟨f(x)⟩ = R̂ m,  (A.1)

with x ∈ ℝ. The operator R̂ has the same structure as R (Eq. (3)), but the finite-dimensional part of the operator, corresponding to the data points x_{i}, has been replaced by its continuous counterpart for x ∈ ℝ.

Analogously, we obtain the covariance Y as

Y = R̂ D R̂^{†}.  (A.2)

Combining these results yields a PDF for the possible realizations of the fitted curve,

P(f(x) | d) = G(f(x) − R̂ m, Y),  (A.3)

which describes how likely a realization is, given the data. This permits visualizing the fitted curve, including the corresponding uncertainties, in specific regions of the data space.
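The projection above can be sketched numerically. For the polynomial parametrization of Eq. (2), the continuous response R̂ evaluated at a set of points x is a Vandermonde matrix, so the curve posterior follows directly from the coefficient posterior (mean m, covariance D). The function and variable names below are illustrative, not taken from the SOMBI code.

```python
import numpy as np

def curve_posterior(x, m, D):
    """Project a Gaussian posterior over polynomial coefficients
    (mean m, covariance D) into data space: returns the mean curve
    <f(x)> = R_hat m (Eq. (A.1)) and the pointwise 1-sigma uncertainty
    from Y = R_hat D R_hat^T (Eq. (A.2)), evaluated at the points x."""
    x = np.asarray(x, dtype=float)
    R_hat = np.vander(x, N=len(m), increasing=True)  # rows: (1, x, x^2, ...)
    mean = R_hat @ m
    Y = R_hat @ D @ R_hat.T
    return mean, np.sqrt(np.diag(Y))
```

For example, m = [1, 2] with a vanishing covariance D reproduces the noise-free line f(x) = 1 + 2x with zero uncertainty.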
Appendix B: Self organizing map algorithm
Various implementations of SOMs have been presented in the literature (see e.g., Kohonen 2001). Most follow the same generic idea but differ in implementation details, since SOMs are used to tackle many different questions regarding the structural form of data. We therefore present the detailed implementation of our SOM algorithm in the following.
A SOM is an artificial neural network specifically designed to determine the structure of datasets in high-dimensional spaces. The network has a specific topological structure. In this work we rely on a network of neurons interlinked in a square-lattice pattern, with a neighbourhood function representing the strength of those links. The network is trained by a training algorithm that is repeated for every data point multiple times, resulting in a learning process. The generic form of the network as well as the learning process is described in Sect. 2.2.
Before the process can start, the network has to be linked to data space. To this end, each neuron holds a vector W = (W_{1}, W_{2}, ..., W_{N})^{T} in the N-dimensional data space, called its weight. It is important to point out that the neurons live in two different spaces: the data space, in which their positions are represented by their weights, and the network pattern, in which the neurons are linked to each other by a neighbourhood function.
At the beginning of the learning process, no information about the data space has been provided to the network; the weights are therefore initialized randomly in data space. After initialization the actual learning process starts. Each iteration of the learning process follows the same generic form.
First, the best matching unit (BMU) is calculated for a randomly chosen data vector V = (V_{1}, V_{2}, ..., V_{N})^{T}. The BMU is defined to be the closest neuron to V in terms of similarity, as expressed by a data-space distance measure. For this we use the Euclidean distance D in rescaled data dimensions. Specifically,

D = [ Σ_{i=1}^{N} ((V_{i} − W_{i}) / σ_{i})² ]^{1/2},  (B.1)

where σ_{i} is the scale factor for each component i. This automatically solves the problem of comparing quantities with disparate units. We define σ_{i} as

σ_{i} = V_{i max} − V_{i min},  (B.2)

where V_{i max} and V_{i min} are the maximum and minimum values of the ith component over all data vectors.
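As a minimal sketch of the BMU search with the rescaling of Eqs. (B.1) and (B.2) — with illustrative names, not the PyMVPA implementation used in this work:

```python
import numpy as np

def bmu_index(weights, v, sigma):
    """Return the index of the best matching unit: the neuron whose
    weight vector minimizes the rescaled Euclidean distance (Eq. (B.1))
    to the data vector v."""
    d2 = np.sum(((weights - v) / sigma) ** 2, axis=1)
    return int(np.argmin(d2))

# Scale factors of Eq. (B.2), computed once from the full dataset.
data = np.array([[0.0, 10.0], [1.0, 20.0], [2.0, 40.0]])
sigma = data.max(axis=0) - data.min(axis=0)

weights = np.array([[0.1, 12.0], [1.9, 38.0]])  # two neurons in data space
```

The per-component rescaling ensures that a dimension measured in large units (here the second column) does not dominate the distance.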
The weight of the neuron for which D is minimal is modified according to the value of V. The new weight of the BMU at iteration step t + 1 is therefore

W_{t+1} = W_{t} + L_{t} (V − W_{t}),  (B.3)

where W_{t} is the previous weight and L_{t} is the “learning rate”. The learning rate is a decreasing function of t and hence quantifies how strongly an input vector influences the weights at a specific iteration step. It has to decrease, since the tth vector presented to the network should not change the weight of a neuron as much as the previous ones, to ensure converging information updates. There are two convenient shapes for learning rates: a linear and an exponential decay. In this work we chose the exponential decay, with L_{t} given as

L_{t} = L_{0} exp(−t/λ),  (B.4)

where L_{0} is the initial learning rate and λ is a tunable parameter controlling how fast the learning rate decays with each iteration.
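The BMU update with an exponentially decaying learning rate, Eqs. (B.3) and (B.4), can be sketched as follows (function names and default parameter values are hypothetical choices for illustration):

```python
import numpy as np

def learning_rate(t, L0=0.5, lam=100.0):
    """Exponentially decaying learning rate L_t = L_0 exp(-t/lambda), Eq. (B.4)."""
    return L0 * np.exp(-t / lam)

def update_bmu(w, v, t, L0=0.5, lam=100.0):
    """Pull the BMU weight w towards the data vector v, Eq. (B.3)."""
    return w + learning_rate(t, L0, lam) * (v - w)
```

At t = 0 the BMU moves halfway towards the presented vector (for L0 = 0.5); at later iterations the step shrinks exponentially.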
Since neurons are linked to each other, the adaptation of an individual neuron also affects the weights of all other neurons. The strength of this modification should decrease with distance to the BMU in the specified topology of the network. The size of the neighbourhood of a single neuron at iteration step t is therefore

σ_{t} = σ_{0} exp(−t/λ),  (B.5)

where σ_{0} is the initial neighbourhood size. The size decreases with t in order to ensure that the modification of the vicinity of the BMU becomes less important with increasing t. The neighbourhood size σ_{t} defines the influence rate Θ_{t} of one iteration:

Θ_{t} = exp(−d_{BMU}² / (2 σ_{t}²)),  (B.6)

where d_{BMU} is the distance, in the square-lattice pattern, between the updated neuron and the BMU of the tth iteration step. It is important to distinguish d_{BMU} from D: d_{BMU} is the distance between two neurons in the network pattern, whereas D is the Euclidean distance in data space. Θ_{t} assumes a value of one for the BMU itself, so the modification functions can be combined, yielding

W_{t+1} = W_{t} + Θ_{t} L_{t} (V − W_{t}).  (B.7)

These steps are repeated for every single vector in the dataset.
To avoid biasing the weights towards the first subset of data, the whole learning process has to be repeated multiple times. The final result is given by averaging the weights over all learning processes.
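Putting Eqs. (B.1)–(B.7) together, a single learning process can be sketched as below. The grid size, decay parameters, and all names are illustrative choices, not the settings used in the paper; averaging over several such runs (as described above) would then combine the returned weights.

```python
import numpy as np

def train_som(data, grid=(3, 3), n_epochs=10, L0=0.5, sigma0=1.5, lam=50.0,
              rng=None):
    """Minimal SOM training sketch following Eqs. (B.1)-(B.7).
    Neurons sit on a square lattice; every presented data vector pulls all
    weights towards it, damped by the neighbourhood function Theta."""
    rng = np.random.default_rng(rng)
    n_neurons = grid[0] * grid[1]
    # lattice positions of the neurons (for d_BMU in Eq. (B.6))
    pos = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])],
                   dtype=float)
    # random initialization of the weights in data space
    lo, hi = data.min(axis=0), data.max(axis=0)
    weights = rng.uniform(lo, hi, size=(n_neurons, len(lo)))
    sigma_scale = np.where(hi > lo, hi - lo, 1.0)        # Eq. (B.2)
    t = 0
    for _ in range(n_epochs):
        for v in rng.permutation(data):
            L = L0 * np.exp(-t / lam)                    # Eq. (B.4)
            s = sigma0 * np.exp(-t / lam)                # Eq. (B.5)
            d2 = np.sum(((weights - v) / sigma_scale) ** 2, axis=1)
            bmu = int(np.argmin(d2))                     # Eq. (B.1)
            d_bmu2 = np.sum((pos - pos[bmu]) ** 2, axis=1)
            theta = np.exp(-d_bmu2 / (2.0 * s ** 2))     # Eq. (B.6)
            weights += (theta * L)[:, None] * (v - weights)  # Eq. (B.7)
            t += 1
    return weights
```

Because each update is a convex combination of the old weight and a data vector, the trained weights stay inside the bounding box of the data.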
Appendix C: Mapping the SDSS data onto reconstructed density fields
In order to extract the properties of the density field reconstructions provided by the BORG algorithm, we map the SDSS data onto the reconstructed volume. More precisely, we look up the position of each galaxy in the cubic volume and store the properties of the LSS in the voxel hosting the galaxy. All galaxies within one voxel are assigned the same LSS information. This results in an extended data catalog containing the intrinsic properties of the galaxies as well as the properties of the LSS in the surrounding area of each galaxy. This procedure is applicable to all kinds of cosmological data as long as information about the 3D positions of the objects is available.
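The voxel lookup can be sketched as follows; the grid-geometry arguments (box_min, voxel_size) and all names are hypothetical, chosen for illustration only.

```python
import numpy as np

def voxel_lss_properties(xyz, density, box_min, voxel_size):
    """Assign to each galaxy the LSS value of the voxel hosting it.
    xyz: (N, 3) comoving Cartesian positions; density: 3D grid of LSS values.
    All galaxies falling in the same voxel receive identical LSS information."""
    idx = np.floor((xyz - box_min) / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(density.shape) - 1)  # guard the box edges
    return density[idx[:, 0], idx[:, 1], idx[:, 2]]
```

The returned per-galaxy values would then be appended to the galaxy catalog as extra columns.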
Since the SDSS data provide position information in redshift space, we need to transform the coordinates to the comoving frame. Redshifts z_{i} are transformed to comoving distances d_{com} according to

d_{com} = c ∫_{0}^{z_{i}} dz / H(z),  (C.1)

where c is the speed of light and H(z) denotes the Hubble parameter, given as

H(z) = H_{0} [Ω_{m} (1+z)³ + Ω_{c} (1+z)² + Ω_{Λ}]^{1/2},  (C.2)

under the assumption of a concordance ΛCDM model with the cosmological parameters Ω_{m} = 0.24, Ω_{c} = 0.00, Ω_{Λ} = 0.76, h = 0.73 and H_{0} = h 100 km s^{−1} Mpc^{−1} (see Spergel et al. 2007). We used this set of parameters instead of more recent ones in order to match the cosmology used for the LSS reconstructions.
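A numerical sketch of Eqs. (C.1) and (C.2), using a simple trapezoidal quadrature; the function names are illustrative:

```python
import numpy as np

# Cosmology of Spergel et al. (2007), matching the BORG reconstructions.
OMEGA_M, OMEGA_C, OMEGA_L = 0.24, 0.00, 0.76
H0 = 73.0             # km s^-1 Mpc^-1 (h = 0.73)
C_LIGHT = 299792.458  # km s^-1

def hubble(z):
    """Hubble parameter H(z) of Eq. (C.2)."""
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3
                        + OMEGA_C * (1 + z) ** 2
                        + OMEGA_L)

def comoving_distance(z, n=10001):
    """Comoving distance in Mpc, Eq. (C.1), via trapezoidal integration."""
    zp = np.linspace(0.0, z, n)
    f = C_LIGHT / hubble(zp)
    dz = zp[1] - zp[0]
    return dz * (f.sum() - 0.5 * (f[0] + f[-1]))
```

At low redshift this reduces to the familiar d ≈ cz/H_0, as the integrand is nearly constant.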
As a final step we calculate the Cartesian coordinates of each galaxy:

x = d_{com} cos(δ) cos(α),
y = d_{com} cos(δ) sin(α),
z = d_{com} sin(δ),

where α and δ are the right ascension and declination in the equatorial frame, respectively.
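This is the standard spherical-to-Cartesian transformation; a sketch (names illustrative):

```python
import numpy as np

def equatorial_to_cartesian(d_com, ra_deg, dec_deg):
    """Comoving Cartesian coordinates from a comoving distance (Mpc),
    right ascension, and declination (both in degrees)."""
    a, d = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([d_com * np.cos(d) * np.cos(a),
                     d_com * np.cos(d) * np.sin(a),
                     d_com * np.sin(d)])
```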
Since the BORG reconstruction maps provide an approximate PDF for the density field, uncertainties in the reconstruction increase with increasing distance. Therefore, in order to exclude areas of high uncertainty from the correlation analysis, we excluded all galaxies of the SDSS sample beyond the distance d_{lim} = 450 Mpc h^{−1}. This results in a subsample including only galaxies with redshifts 0.001 < z < 0.156. Because the BORG reconstruction maps are based on the SDSS, uncertainties in the reconstruction increase in regions with less signal, specifically regions with a low number of galaxies. The majority of the data therefore remains included in the limited sample.
Appendix D: Probability distribution for correlation functions with the LSS
As described in Sect. 3.3, the BORG algorithm provides an ensemble of density contrast field realizations that capture observational uncertainties. In order to treat the uncertainties in the density contrast correctly during correlation determination, the reconstruction algorithm described in Sect. 2.1 has to be applied to each realization independently. This yields a PDF P(f | δ_{i}, d) for each δ_{i}. The dependency on the realizations has to be marginalized out of these PDFs in order to obtain the final PDF for the correlation function, P(f | d). This results in a Gaussian mixture for the posterior PDF. Specifically,

P(f | d) = (1/S) Σ_{i=1}^{S} G(f − m_{i}, D_{i}),  (D.1)

where δ_{i} denotes one of the S realizations of the density contrast, and m_{i} and D_{i} denote the corresponding mean and covariance of each fit.
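The first two moments of the equal-weight mixture in Eq. (D.1) follow from the law of total covariance; a sketch with illustrative names:

```python
import numpy as np

def mixture_mean_cov(means, covs):
    """Mean and covariance of the equal-weight Gaussian mixture of Eq. (D.1),
    P(f|d) = (1/S) sum_i G(f - m_i, D_i), from the per-realization fits."""
    means = np.asarray(means)
    covs = np.asarray(covs)
    m = means.mean(axis=0)
    # law of total covariance: mean within-component covariance
    # plus the between-component scatter of the means
    between = np.einsum('ij,ik->jk', means - m, means - m) / len(means)
    return m, covs.mean(axis=0) + between
```

The between-component term is what propagates the density-field reconstruction uncertainty into the final correlation-function uncertainty.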
All Tables
Web-type classification according to the ordered eigenvalues λ_{1} > λ_{2} > λ_{3} of the tidal-shear tensor. In this work we used λ_{th} = 0.
All Figures
Fig. 1 Histogram of the recovered polynomial order for different inverse signal-to-noise ratios k. The sample size is U = 1000. The noise variance σ_{n} ranges between ≈0 and 3. The signal was generated according to Eq. (2) as a 15th-order polynomial. We see that the most adequate order selected by the BIC decreases with increasing k. The selected model depends on the specific data realization; we therefore averaged over reconstructed orders with similar k.
Fig. 2 Distribution of the mock data (left panel), generated as described in Sect. 2.3. The x and z coordinates of each data point are drawn from four different Gaussian distributions with different means and covariances. The covariance is assumed to be diagonal. The y coordinates are generated to be correlated with the x coordinates through a correlation function consistent with Eq. (2) (see Table 1 for the correlation coefficients). The right panel shows the data space including the 3 × 3 square-lattice neuron pattern after a successful training of the SOM. Neighboring neurons are interlinked in the picture. In addition, each subsample of data corresponding to one neuron is drawn in the color of the specific neuron.
Fig. 3 Mock data distribution projected to the xyplane as well as the initial correlation functions of each subsample of data. In addition, the reconstructed polynomials for each neuronsample as selected after training are depicted as red dashed lines in the picture. The gray areas denote the 1, 2 and 3σ_{y} uncertainties of the reconstruction, respectively. σ_{y} is the projection of the parameter covariance D to the data space as described by Eq. (13). 
Fig. 4 Non-polynomial correlation structure (green line) together with the mock data generated from it. In addition, the red, yellow, and black dashed lines indicate the three reconstructed polynomials, respectively. Gray areas denote the uncertainties of the reconstructions as described in the caption of Fig. 3.
Fig. 5 Slices of the ensemble mean density field on a logarithmic scale, log(2 + ⟨δ⟩) (upper panels), and the same slices with the SDSS galaxies mapped onto the grid as yellow dots (lower panels). In order to exclude areas of high uncertainty from the analysis, we took a distance threshold in the comoving frame at d_{lim} ≈ 450 Mpc h^{−1}. Galaxies above this limit are therefore excluded from the analysis and not depicted.
Fig. 6 Web-type classification in slices of the 3D LSS reconstruction. We cut the volume at the same positions as in Fig. 5. Since an average web type is not well defined, we present only one sample of the reconstructions instead of the mean LSS. We distinguish the LSS according to the web-type classification described in Table 4. We note that sheets seem to fill the largest volume. In the chosen classification scheme, regions adjacent to sheets are also counted as sheets.
Fig. 7 Distribution of the SDSS data projected onto the M_{0.1r}-z-plane. The top-left panel shows the volume limitation for different absolute magnitude thresholds (−18.5, −20.0). The bottom-left panel shows the distribution of the neurons after a successful application of the SOM to the SDSS data. The top-mid and bottom-mid panels show two different neuron-samples (N-Samples) corresponding to the depicted neurons from the trained SOM. In addition, the volume limitations used for the correlation comparison are depicted in both panels. The selected neurons hold subsamples of data with a projected distribution similar to the volume-limited samples, in order to compare the selection methods. The top- and bottom-right panels show reconstructed correlation functions for the volume-limited samples with magnitude limits at −18.5 (top) and −20.1 (bottom) and for the corresponding neuron-samples. The range of each subsample in log(1 + δ) is indicated by the length of each reconstructed polynomial.
Fig. 8 Reconstructed correlation functions for different neuron-samples selected from the SDSS data sample by the SOM. In particular, we depict the correlations for the logarithm of the stellar mass log(M_{∗}), the r-band absolute magnitude M_{0.1r}, and the g−r color. In addition, each figure shows the data space position of the BMU corresponding to the subsample of data used for reconstruction. We see that the correlations of the stellar mass log(M_{∗}) and the absolute magnitude M_{0.1r} with the density field log(1 + δ) appear to be similar in different regions of the LSS. The upper two panels show reconstructions for subsamples of heavy, red galaxies in high-density regions classified as clusters (halos) according to the corresponding eigenvalues. The bottom-left panel belongs to heavy, red galaxies in low-density regions classified as filaments, and the bottom-right panel belongs to light, blue galaxies in regions classified as sheets (or filaments, since λ_{2} ≈ 0). We adjusted the range of the y-axis in the last panel in order to improve the visibility of the correlation structure. Colors are defined according to the color classification code of Li et al. (2006).
Fig. 9 Reconstructed correlation functions for one neuron-sample selected from the AGN data sample by the SOM. The data space position of the BMU is depicted on the right side of the figure.
Fig. 10 Reconstructed correlation functions for one neuron-sample selected by the SOM from AGN data.
Fig. 11 Reconstructed correlation functions for one neuron-sample selected by the SOM from AGN data.
Fig. 12 Reconstructed correlation functions for one neuron-sample selected by the SOM from AGN data.