A&A 406, 393-401 (2003)
DOI: 10.1051/0004-6361:20030719

Back to the primordial Universe by a Monge-Ampère-Kantorovich optimization scheme

R. Mohayaee1 - U. Frisch1 - S. Matarrese2 - A. Sobolevskiĭ1,3


1 - Département Cassini, Observatoire de la Côte d'Azur, BP 4229, 06304 Nice Cedex 4, France
2 - Dipartimento di Fisica "Galileo Galilei" and INFN, Sezione di Padova, via Marzolo 8, 35131 Padova, Italy
3 - Department of Physics, M. V. Lomonossov Moscow State University, 119992 Moscow, Russia

Received 12 March 2003 / Accepted 12 May 2003

Abstract
A new method for the reconstruction of the primordial density fluctuation field is presented. Various previous approaches to this problem yielded non-unique solutions. Here, it is demonstrated that the initial positions of dark matter fluid elements, under the hypothesis that their displacement is the gradient of a convex potential, can be reconstructed uniquely. In our approach, the cosmological reconstruction problem is reformulated as an assignment problem in optimization theory. When tested against numerical simulations, our scheme yields excellent reconstruction on scales larger than a few megaparsecs.

Key words: reconstruction - Lagrangian methods - convexity - optimization

1 Introduction

The present distribution of galaxies, brought to us by redshift surveys, indicates that the Universe on large scales exhibits a high degree of clumpiness, with coherent structures such as voids, great walls, filaments and clusters. The cosmic microwave background (CMB) explorers, however, indicate that the Universe was highly homogeneous billions of years ago. Among the questions of concern in cosmology when studying these data are the initial conditions of the Universe and the dynamics under which it grew into its present state. CMB explorers provide us with valuable insight into the initial conditions of the Universe, but the present distribution of the galaxies opens a second, complementary window on the early Universe.

Unravelling the properties of the early Universe from the present data is an instance of the general class of inverse problems in physics. In cosmology this problem is frequently tackled in an empirical way by taking a forward approach. In this approach, a cosmological model is proposed for the initial power spectrum of dark matter. Next, a particle representation of the initial density field is made, which provides the initial data for an N-body simulation that is run using Newtonian dynamics and stopped at the present time. Subsequently, a statistical comparison between the outcome of the simulation and the observational data can be made, assuming that a suitable bias relation exists between the distribution of galaxies and that of dark matter. If the statistical test is satisfactory, then the implication is that the initial condition was viable; otherwise one changes the cosmological parameters and goes through the whole process again. This is repeated until one obtains a satisfactory statistical test, affirming a good choice for the initial condition.

However, this inverse problem does not necessarily have to be dealt with in a forward manner. Can one fit the present distribution of the galaxies exactly, rather than statistically, and run it back in time to reconstruct the primordial density fluctuation field? Newtonian gravity is time-reversible; therefore, had the present velocities of the galaxies been known in addition to their positions, one would have been able to integrate the equations of motion back in time and solve the reconstruction problem trivially. As a matter of fact, however, the peculiar velocities of only a few thousand galaxies are known, out of the hundreds of thousands whose redshifts have been measured. Indeed, one goal of reconstruction is to evaluate the peculiar velocities of the galaxies and in this manner put direct constraints on cosmological parameters.

Without a second boundary condition, reconstruction would thus be an infeasible task: so far we have only mentioned the present positions of the galaxies. The second condition is the homogeneity of the initial density field: as we go back in time, the peculiar velocities of the galaxies vanish. This corresponds to discarding the decaying mode, which is the assumption used throughout this paper (see also the discussion in Sect. 2.1 of Brenier et al. 2003). Thus, contrary to the forward approach where one solves an initial-value problem, in the reconstruction approach one is dealing with a two-point boundary value problem. In the former, one starts with the initial positions and velocities of the particles and solves Newton's equations, arriving at a unique final position and velocity for a given particle (with certain mathematical reservations discussed, e.g., by Buchert & Ehlers 1997; Ehlers & Buchert 1997). In the latter, one does not always have uniqueness. This has been one of the shortcomings of reconstruction, which was consequently taken to be an ill-posed problem.

In this work, we report on a new method of reconstruction which guarantees uniqueness. In Sect. 2, we review previous work on reconstruction. We describe our mathematical formulation of the reconstruction problem in Sect. 3 and show that the cosmological reconstruction problem, under our hypotheses, is an instance of an assignment problem in optimization theory. In Sect. 4, we describe numerical algorithms to solve the assignment problem. In Sect. 5, we test our reconstruction method against a numerical N-body simulation, in both real and redshift space.

2 A brief review of variational and Eulerian approaches to reconstruction

The history of reconstruction goes back to the work of Peebles (Peebles 1989) on tracing the orbits of the members of the local group. In his approach, reconstruction was solved as a variational problem. Instead of solving the equations of motion, one searches for the stationary points of the corresponding Euler-Lagrange action. The action in the comoving coordinates is (Peebles 1980)

 \begin{displaymath}S=\int_{0}^{t_0} {\rm d}t~\left[ {m_i a^2 \dot{\vec{x}}_i^2\over 2}
+{G m_i m_j\over 2 a \vert\vec{x}_i-\vec{x}_j\vert} +{2\over 3}\pi G\rho_{\rm b}
a^2 m_i \vec{x}_i^2\right],
\end{displaymath} (1)

where summation over repeated indices and $j\not=i$ is implied, t0 denotes the present time, the path of the ith particle with mass mi is $\vec{x}_i(t)$, $\rho_{\rm b}$ is the mean mass density, and the present value of the expansion parameter a(t) is a0=a(t0)=1. Denoting the integrand in (1) by ${\cal L}$, we obtain the equation of motion by requiring

\begin{displaymath}\delta S=
\int_0^{t_0}
{\rm d}t
\left[
{\partial {\cal L}\over \partial\vec{x}_i}\cdot{\delta\vec{x}_i}
+{\partial {\cal L}\over \partial\dot{\vec{x}}_i}\cdot{\delta\dot{\vec{x}}_i}
\right]
=0,
\end{displaymath} (2)

which leads to

 \begin{displaymath}\int_0^{t_0} {\rm d}t~ \delta\vec{x}_i\cdot
\left[
\sum_{j\not=i}{G m_i m_j \left(\vec{x}_j-\vec{x}_i\right)\over a\vert\vec{x}_i-\vec{x}_j\vert^3}
+{4\over 3}\pi G\rho_{\rm b} a^2 m_i\vec{x}_i
-{{\rm d}\over{\rm d}t}\left(m_i a^2\dot{\vec{x}}_i\right)
\right]
-m_i\left[a^2\dot{\vec{x}}_i\cdot\delta\vec{x}_i\right]^{t_0}_0
=0.
\end{displaymath} (3)

The boundary conditions

\begin{displaymath}\delta\vec{x}_i = 0 \qquad {\rm at} \qquad t=t_0,\end{displaymath}


 \begin{displaymath}a^2\dot{\vec{x}}_i=0 \qquad {\rm at} \qquad t\rightarrow 0
\end{displaymath} (4)

would then eliminate the boundary terms in (3).

The components $\alpha=1,2,3$ of the orbit of the ith particle are modelled as

 \begin{displaymath}x_i^\alpha=x_i^\alpha(t_0)+\sum_n C_{i,n}^\alpha f_n(t).
\end{displaymath} (5)

The functions fn are normally taken to be convenient functions of the scale factor a and should satisfy the boundary conditions (4). Initially, trial functions such as $f_n=a^n(1-a)$ or $f_n=\sin\left({n\pi a/ 2}\right)$ with $f_0=\cos\left({\pi a/ 2}\right)$ were used. The coefficients Ci,n are then found by substituting expression (5) in the action (1) and finding the stationary points. That is, for physical trajectories

\begin{displaymath}{\partial S\over \partial C^\alpha_{i,n}}=0,
\end{displaymath} (6)

leading to

 \begin{displaymath}m_i\int_0^{t_0}
{\rm d}t~ f_n(t)\left[ -{{\rm d}\over {\rm d}t}\left(a^2\dot{x}_i^\alpha\right)
+{4\over 3}\pi G\rho_{\rm b} a^2 x_i^\alpha
+{G\over a}\sum_{j\not=i} m_j {x_j^\alpha-x_i^\alpha\over
\vert \vec{x}_i-\vec{x}_j\vert^3}\right]=0.
\end{displaymath} (7)
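As an illustration of the procedure of Eqs. (5)-(7), the following Python sketch (ours, not the code used in the works cited here) discretizes the action (1) for a hypothetical two-particle toy problem in an Einstein-de Sitter background, with units chosen so that $G=m_i=t_0=1$, parametrizes the orbits with the trial functions $f_n=a^n(1-a)$, and looks for a stationary point by numerically minimizing over the coefficients $C_{i,n}$; note that such a straightforward minimization finds only a minimum of the action, not its saddle points (cf. the discussion below).

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy setup (illustrative values): 2 particles, Einstein-de Sitter background,
# units G = m_i = t0 = 1, so that a(t) = (t/t0)^(2/3) and rho_b = 1/(6 pi t^2).
t0, nt, nbasis = 1.0, 200, 3
t = np.linspace(1e-3, t0, nt)                 # avoid t = 0, where a -> 0
a = (t / t0) ** (2.0 / 3.0)                   # scale factor, a(t0) = 1
rho_b = 1.0 / (6.0 * np.pi * t ** 2)          # background density
m = np.ones(2)

# Present-day (Eulerian) comoving positions of the two particles.
x0 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0]])

# Trial functions f_n(a) = a^n (1 - a): they vanish at a = 1, so x_i(t0) stays
# fixed, and a^2 df_n/dt -> 0 as t -> 0, satisfying the boundary conditions (4).
f = np.stack([a ** n * (1.0 - a) for n in range(1, nbasis + 1)])   # (nbasis, nt)
df = np.stack([np.gradient(fn, t) for fn in f])                    # d f_n / d t

def action(C_flat):
    """Discretized action (1) for orbits x_i(t) = x_i(t0) + sum_n C_{i,n} f_n(t)."""
    C = C_flat.reshape(2, 3, nbasis)                     # (particle, component, basis)
    x = x0[:, :, None] + np.einsum('ikn,nt->ikt', C, f)  # orbits, shape (2, 3, nt)
    v = np.einsum('ikn,nt->ikt', C, df)                  # comoving velocities
    kinetic = 0.5 * np.sum(m[:, None] * a ** 2 * np.sum(v ** 2, axis=1), axis=0)
    r = np.linalg.norm(x[0] - x[1], axis=0)              # particle separation
    pair = m[0] * m[1] / (a * r)                         # sum over j != i, counted once
    cosmo = (2.0 / 3.0) * np.pi * rho_b * a ** 2 \
            * np.sum(m[:, None] * np.sum(x ** 2, axis=1), axis=0)
    return np.trapz(kinetic + pair + cosmo, t)

res = minimize(action, np.zeros(2 * 3 * nbasis), method='BFGS')
print('stationary coefficients C_{i,n}:', res.x.reshape(2, 3, nbasis))
\end{verbatim}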

In his first work, Peebles (1989) considered only the minimum of the action when reconstructing the trajectories of the galaxies back in time. In a low-density Universe, assuming a linear bias, the predicted velocities agreed with the observed ones for most galaxies of the local group but failed, with a large discrepancy, for the remaining members. Later on, it was found that when the trajectories corresponding to the saddle-point of the action were taken instead of the minimum, much better agreement between predicted and observed velocities was obtained for almost all the galaxies in the local group (see Fig. 1). Thus, by adjusting the orbits until the predicted and observed velocities agreed, reasonable bounds on cosmological parameters were found (Peebles 1989, 1990).
Figure 1: A schematic demonstration of Peebles' reconstruction of the trajectories of the members of the local neighbourhood using a variational approach based on the minimization of the Euler-Lagrange action. In most cases there is more than one allowed trajectory due to orbit crossing (closely related to the multistreaming of the underlying dark matter fluid). The pink (darker) orbits correspond to taking the minimum of the action whereas the yellow (brighter) orbits were obtained by taking the saddle-point solution. Of particular interest is the orbit of N6822, which in the former solution is on its first approach towards us and in the latter solution is on a passing orbit. Better agreement between the evaluated and observed velocities was shown to correspond to the saddle-point solution.

Although rather successful when applied to catalogues such as the NBGC (Tully 1988), reconstruction with such an aim, namely establishing bounds on cosmological parameters using measured peculiar velocities, could not be applied to larger galaxy redshift surveys, which contain hundreds of thousands of galaxies for the majority of which the peculiar velocities are unknown. For large datasets, it is not possible to use the velocities to choose the right orbit from the many physically possible ones. In order to resolve the problem of multiple solutions (the existence of many mathematically possible orbits) one normally had to do significant smoothing and then attempt the computationally costly reconstruction using Peebles' variational approach (Shaya et al. 1995; Branchini et al. 2001). However, one was still not guaranteed to have chosen the right orbit (see Hjortland 1999 for a review of the action variational method).

Multiple solutions can be caused by various factors. For example, the discretization in the numerical integrations (7) can produce spurious valleys in the landscape of the Euler-Lagrange action. However, even after overcoming all these numerical artifacts, one is still not guaranteed uniqueness. There is a genuine physical reason for the lack of uniqueness, which is often referred to in cosmology as multistreaming.

Cold dark matter (CDM) is a collisionless fluid with no velocity dispersion. An important feature that arises in the course of the evolution of a self-gravitating CDM Universe is the formation of multistream regions, in which the velocity field is non-unique. These regions are bounded by caustics, at which the density is divergent. At a multistream point, where the velocity is multiple-valued, a particle can have many different mathematically viable trajectories, each of which corresponds to a different stationary point of the Euler-Lagrange action, which is no longer convex.

In addition to the variational approach to reconstruction, there is the well-known POTENT reconstruction of the three-dimensional velocity field from its radial component (Dekel et al. 1990). The POTENT method is the non-iterative Eulerian version of the earlier iterative Lagrangian method (Bertschinger & Dekel 1989) and assumes potential flow in Eulerian space. POTENT finds the velocity potential field by integrating the radial components of the velocities along the line of sight. Being an Eulerian method, its validity does not extend much beyond the linear Eulerian regime.

In subsequent works, using the POTENT-reconstructed velocity field or the density field obtained from the analysis of redshift surveys, the gravitational potential field was also reconstructed (Nusser & Dekel 1992). The gravitational potential was then traced back in time using the so-called Zel'dovich-Bernoulli equation, which combines the Zel'dovich approximation with the Euler momentum conservation equation. Later on, it was further shown (Gramann 1993) that the initial gravitational potential is more accurately recovered using the Zel'dovich-continuity equation, which combines the Zel'dovich approximation with the mass continuity equation (Nusser et al. 1991). For a summarizing discussion of various inversion techniques see, e.g., Susperregi & Buchert (1997).

However, these procedures are valid only as long as the density fluctuations are in the Eulerian linear or quasi-linear regimes (defined by $\vert(\rho-\rho_{\rm b})/\rho_{\rm b}\vert\le 1$). They do not robustly recover the initial density in regions of high density where the present-day structures are highly nonlinear ($\vert(\rho-\rho_{\rm b})/\rho_{\rm b}\vert\gg 1$). Therefore, they require that the final density field be smoothed quite heavily to remove any severe nonlinearities, before the dynamical evolution equations are integrated backward in time.

Here, we describe a new method of reconstruction (Frisch et al. 2002) which not only overcomes the problem of non-uniqueness encountered in the least-action-based methods but also remains valid well beyond the linear Eulerian approximations employed in many other popular reconstruction methods, a few of which we have discussed above.

3 Monge-Ampère-Kantorovich reconstruction

Thus, reconstruction is a well-posed problem as long as we avoid multistream regions. The mathematical formulation of this problem is as follows (Frisch et al. 2002). Unlike most of the previous works on reconstruction, where one studies the Euler-Lagrange action, we start from a constraint equation, namely mass conservation,

\begin{displaymath}\rho(\vec{x}){\rm d}\vec{x}=\rho_0(\vec{q}) {\rm d}\vec{q},
\end{displaymath} (8)

where $\rho_0(\vec{q})$ is the density at the initial position, $\vec{q}$, and  $\rho(\vec{x})$ is the density at the present position, $\vec{x}$, of the fluid element. The above mass conservation equation can be rearranged in the following form

 \begin{displaymath}{\rm det}\left[{\partial q_i\over \partial x_j}\right]=
{\rho(\vec{x})\over \rho_0(\vec{q})},
\end{displaymath} (9)

where ${\rm det}$ stands for determinant and $\rho_0(\vec{q})$ is constant since the decaying mode is assumed to vanish. The right-hand side of the above expression is basically given by our boundary conditions: the final positions of the particles are known and the initial distribution is homogeneous, $\rho_0(\vec{q}) = \rho_0 ={\rm const}$. To solve the equation, we make the following two hypotheses: the Lagrangian map ($\vec{q}\rightarrow\vec{x}$) is the gradient of a potential $\Phi$, and this potential is convex. That is

\begin{displaymath}\vec{x}(\vec{q},t)=\nabla_q\Phi(\vec{q},t).
\end{displaymath} (10)

The convexity guarantees that the map is one-to-one, i.e. that a single Eulerian position corresponds to a single Lagrangian position, so that there has been no multistreaming. These assumptions imply that the inverse map $\vec{x}\rightarrow \vec{q}$ also has a potential representation

\begin{displaymath}\vec{q}={\bf\nabla}_{\vec{x}}\Theta (\vec{x},t),
\end{displaymath} (11)

where the potential $\Theta(\vec{x})$ is also a convex function and is related to $\Phi(\vec{q})$ by the Legendre-Fenchel transform (Arnold 1978)
\begin{displaymath}\Theta(\vec{x})=\max_{\vec{q}}\left[
\vec{q}\cdot\vec{x}-\Phi(\vec{q})\right];
\qquad
\Phi(\vec{q})=
\max_{\vec{x}}\left[\vec{x}\cdot\vec{q}-\Theta (\vec{x})\right].
\end{displaymath} (12)
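As a simple illustration of (12) (our example, not part of the original argument): for a uniform medium that has not been displaced, the Lagrangian map is the identity and $\Phi(\vec{q})=\vert\vec{q}\vert^2/2$, whose Legendre-Fenchel transform is

\begin{displaymath}\Theta(\vec{x})=\max_{\vec{q}}\left[\vec{q}\cdot\vec{x}-{\vert\vec{q}\vert^2\over 2}\right]
={\vert\vec{x}\vert^2\over 2},
\end{displaymath}

so that $\vec{q}=\nabla_{\vec{x}}\Theta(\vec{x})=\vec{x}$, as expected. More generally, the maximum in the first relation of (12) is attained at the Lagrangian position $\vec{q}$ that is mapped to $\vec{x}$, which is how the transform inverts the map.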

The inverse map is now substituted in (9) yielding

 \begin{displaymath}{\rm det}\left[{\partial^2 \Theta(\vec{x},t)
\over \partial x_i\partial x_j}\right]=
{\rho(\vec{x})\over \rho_0},
\end{displaymath} (13)

which is the well-known Monge-Ampère equation. The solution to this 222-year-old problem was found recently (Brenier 1987; Benamou & Brenier 2000), when it was realised that the map generated by the solution to the Monge-Ampère equation is the unique solution to an optimization problem. This is the Monge-Kantorovich mass transportation problem, in which one seeks the map $\vec{x}\rightarrow \vec{q}$ which minimises the quadratic cost function

 \begin{displaymath}I=\int_{\vec{q}} \rho_0(\vec{q})\vert\vec{x}-\vec{q}\vert^2 {\rm d}^3q
=\int_{\vec{x}} \rho(\vec{x})\vert\vec{x}-\vec{q}\vert^2 {\rm d}^3x.
\end{displaymath} (14)

A sketch of the proof is as follows. A small variation in the cost function yields

\begin{displaymath}\delta I=\int_{\vec{x}} \left[2\rho(\vec{x})(\vec{x}-\vec{q})
\cdot \delta \vec{x}\right] {\rm d}^3x,
\end{displaymath} (15)

which must be supplemented by the condition

\begin{displaymath}{\bf\nabla}_{\vec{x}}\cdot\left(\rho(\vec{x})\delta\vec{x}\right)=0,
\end{displaymath} (16)

which expresses the constraint that the Eulerian density remains unchanged. The vanishing of $\delta I$ for all such variations requires that $\vec{x}-\vec{q}$ be orthogonal (in $L^2$) to all vector fields of zero divergence, and the fields orthogonal to divergence-free fields are precisely gradients. Hence $\vec{x}-\vec{q}(\vec{x})$, and thus $\vec{q}(\vec{x})$, is the gradient of a function of $\vec{x}$.

Discretising the cost (14) into equal mass units yields

 \begin{displaymath}\tilde{I}=\min_{j(\cdot)}\left(\sum_{i=1}^N\left(\vec{q}_{j(i)}-
{\bf x}_i\right)^2\right).
\end{displaymath} (17)

The formulation presented in (17) is known as the assignment problem: given N initial and N final positions, one has to find the permutation which minimises the quadratic cost function.
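For illustration, here is a minimal Python sketch of (17) for a small hypothetical set of points, using SciPy's Hungarian-type solver rather than the algorithms discussed below:

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical example: N Lagrangian grid points q and N "evolved" Eulerian positions x.
rng = np.random.default_rng(0)
N = 9
q = np.stack(np.meshgrid(np.arange(3), np.arange(3), indexing='ij'),
             axis=-1).reshape(N, 2) / 3.0           # regular 3x3 Lagrangian grid
x = q + 0.1 * rng.standard_normal((N, 2))           # stand-in for present positions

# Quadratic cost matrix C_ij = |x_i - q_j|^2, as in Eq. (17) and Fig. 2.
cost = ((x[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)

# The solver returns the cost-minimising permutation j(i) in polynomial time,
# instead of scanning all N! = 362 880 permutations.
row, col = linear_sum_assignment(cost)
print('optimal assignment j(i):', col)
print('minimal total cost:', cost[row, col].sum())
\end{verbatim}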
Figure 2: Solving the reconstruction problem as an assignment problem. An example of a system of N=9 particles is sketched, together with its cost matrix. For example, the entry C32 is the cost of getting from the Lagrangian position $\vec{q}_2$ to the Eulerian position $\vec{x}_3$: $C_{32}=(\vec{q}_2-\vec{x}_3)^2$. The total cost $C_{\rm T}$ of a given permutation is the sum of all such costs chosen for that permutation, i.e. $C_{\rm T}=\sum _i C_{ij(i)}$. Generally, there are N! possible permutations (and values of $C_{\rm T}$). Only one of these N! permutations is the optimal assignment: ${C_{\rm optimal}}=\min_{j(\cdot)}(C_{\rm T})$. For this system there are 9!=362 880 different costs possible, each obtained by a different permutation. Algorithms with factorial complexity are clearly impractical even for small systems. However, assignment algorithms have complexities of polynomial degrees.

  
4 Solving the assignment problem

If one were to solve the assignment problem for N particles directly, one would need to search among N! possible permutations for the one which has the minimum cost. However, advanced assignment algorithms exist which reduce the complexity of the problem from factorial to polynomial (so far at best to approximately $N^{2.5}$ for our problem, as verified by numerical tests; see, e.g., Burkard & Derigs 1980 and Bertsekas 1998). Before discussing these methods, let us briefly comment on a class of stochastic algorithms, which do not give uniqueness and hence should be avoided.

In the PIZA method of solving the assignment problem (Croft & Gaztañaga 1997), one starts by making a random pairing between the N Lagrangian and N Eulerian coordinates. From this initial random one-to-one assignment, a pair (corresponding to two Eulerian and two Lagrangian positions) is then chosen at random. For example, let us consider the randomly-selected pair ${\vec{x}_1}$ and ${\vec{x}_2}$, which have been assigned in the initial random assignment to $\vec{q}_1$ and $\vec{q}_2$ respectively. One then swaps their Lagrangian coordinates, assigning ${\vec{x}_1}$ to $\vec{q}_2$ and ${\vec{x}_2}$ to $\vec{q}_1$. If

$\displaystyle \left[(\vec{x}_1-\vec{q}_1)^2+(\vec{x}_2-\vec{q}_2)^2\right] ~~>~~
\left[(\vec{x}_1-\vec{q}_2)^2+(\vec{x}_2-\vec{q}_1)^2 \right],$     (18)

then one keeps the Lagrangian positions swapped, otherwise the original assignment is restored. This process is repeated until one is convinced that a lower cost cannot be achieved. However, in this manner there is no guarantee that the optimal assignment has been achieved and the true minimum cost has been found. Moreover, there is a possibility of deadlock when the cost can be decreased only by a simultaneous interchange of three or more particles, while the PIZA algorithm reports a spurious minimum of the cost with respect to two-particle interchanges. Results obtained in this way depend strongly on the choice of initial random assignment and on the random selection of the pairs and suffer severely from the lack of uniqueness (see Fig. 3).
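The following Python fragment (an illustrative sketch, not the original PIZA code) implements such a pair-interchange search; running it twice with different seeds on the same data generally returns different permutations, which is precisely the non-uniqueness illustrated in Fig. 3.

\begin{verbatim}
import numpy as np

def piza_like(q, x, n_swaps=1_000_000, seed=0):
    """Stochastic pair-interchange sketch: start from a random assignment and
    accept a swap of two Lagrangian labels whenever it lowers the quadratic
    cost, cf. Eq. (18). q and x are (N, 3) arrays of positions."""
    rng = np.random.default_rng(seed)
    N = len(x)
    perm = rng.permutation(N)                  # perm[i] = Lagrangian label assigned to x_i
    for _ in range(n_swaps):
        i1, i2 = rng.integers(0, N, size=2)
        old = np.sum((x[i1] - q[perm[i1]]) ** 2) + np.sum((x[i2] - q[perm[i2]]) ** 2)
        new = np.sum((x[i1] - q[perm[i2]]) ** 2) + np.sum((x[i2] - q[perm[i1]]) ** 2)
        if new < old:                          # keep the swap only if the cost decreases
            perm[[i1, i2]] = perm[[i2, i1]]
    return perm
\end{verbatim}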

In spite of these problems associated with PIZA-type algorithms in solving the assignment problem, it has been shown (Narayanan & Croft 1999), in a point-by-point comparison of the reconstructed with the true initial density field from an N-body simulation, that PIZA performs better than other methods such as those based on the Eulerian formulation of the Zel'dovich approximation (discussed at the end of Sect. 2). There are various deterministic algorithms which guarantee that the optimal assignment is found. An example is an algorithm written by Hénon (Hénon 1995), demonstrated in Fig. 4. In this approach, a simple mechanical device is built which solves the assignment problem. The device acts as an analog computer: the numbers entering the problem are represented by physical quantities and the equations are replaced by physical laws.

One starts with N columns Bj (which represent the Lagrangian positions of the particles) and N rows Ai (which represent the Eulerian positions of the particles). Between each row and each column there is a stud whose length is directly related to the distance between that row and that column. The rows are given weights of 1, while the columns act as floats and are given negative weights of -1 (cf. Fig. 4).

The potential energy of the system shown in Fig. 4, within an additive constant, is

 \begin{displaymath}U=\sum_i \alpha_i-\sum_j \beta_j,
\end{displaymath} (19)

where $\alpha _i$ is the height of the row Ai and $\beta _j$ is the height of the column Bj; all rows carry the same weight (+1) and all columns the same weight (-1).

Initially, all rods are maintained at a fixed position by two additional rods P and Q, with the row Ai above the column Bj, so that there is no contact between the rows and the studs. Next, the rods are released by removing the holds P and Q and the system starts evolving: rows go down, columns go up, and contacts are made with the studs. Aggregates of rows and columns are progressively formed. As new contacts are made, these aggregates are modified, and thus a complicated evolution follows, which is graphically demonstrated with a simple example in Fig. 5. One can then show that an equilibrium, where the potential energy (19) of the system is minimum, is reached after a finite time. It can also be shown that, if the system is in equilibrium and the column Bj is in contact with the row Ai, the forces fij constitute the optimal solution of the assignment problem. The potential energy of the corresponding equilibrium is equivalent to the total cost of the optimal solution.
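In linear-programming language (our paraphrase of the device, not Hénon's own notation), the studs enforce $\alpha_i-\beta_j\ge c_{ij}$ and the equilibrium heights solve

\begin{displaymath}\min_{\alpha,\beta}\left(\sum_i\alpha_i-\sum_j\beta_j\right)
\qquad {\rm subject~to} \qquad \alpha_i-\beta_j\ge c_{ij} \quad {\rm for~all}~i,j,
\end{displaymath}

which is the dual of the assignment problem: its optimal value equals the maximal total cost $\max_{j(\cdot)}\sum_i c_{ij(i)}$, and the contacts $\alpha_i-\beta_j=c_{ij}$ established at equilibrium mark the assigned pairs (for a minimum-cost problem, each $c_{ij}$ is first replaced by a large constant minus $c_{ij}$, as noted in the caption of Fig. 5).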

Figure 3: The lack of uniqueness in the results of two runs using a stochastic algorithm to solve the assignment problem. In the stochastic algorithm, based on pair interchange, one searches for what is frequently referred to in pure mathematics as a monotone map, instead of a true cyclically monotone map. The red and blue points (stars and boxes respectively) are the perfectly-reconstructed Lagrangian positions obtained with the same stochastic code and two different random seeds. The outputs of the two runs do not coincide, meaning that the reconstructed Lagrangian positions (and hence peculiar velocities) depend on the initial random assignment and on the random pair interchange. The lack of uniqueness in this case is superficial and a shortcoming of the chosen numerical method. Such difficulties do not arise when deterministic algorithms are used to solve the assignment problem.

5 Test of Monge-Ampère-Kantorovich (MAK) reconstruction with N-body simulations

Thus, in our reconstruction method, the initial positions of the particles are uniquely found by solving the assignment problem. This result is based on our reconstruction hypothesis, whose validity we test by direct comparison with a numerical N-body simulation later in this section. First, however, it is worth commenting briefly on the theoretical, observational and numerical justifications for our hypothesis. It is well known that the Zel'dovich approximation (Zel'dovich 1970) works well in describing the large-scale structure of the Universe. In the Zel'dovich approximation, particles move with their initial velocities on inertial trajectories in appropriately redefined coordinates. It is also known that the Zel'dovich approximation may be written as the first order of a systematic Lagrangian perturbation theory (Buchert 1992) and that, in the absence of the decaying mode, the irrotationality of the particle displacements (expressed in Lagrangian coordinates) remains valid even at second order in Lagrangian perturbation theory (Moutarde et al. 1991; Buchert 1994; Catelan 1995). This provides the theoretical motivation for our first hypothesis.

The lack of transversally extended multistream regions is supported by the geometrical properties of the cosmological structures. In the presence of significant multistreaming, such as that which occurs in the Zel'dovich approximation, one could not expect the formation of the long-lived structures such as filaments and great walls which are observed in numerical N-body simulations: these structures would form, smear out and disappear rapidly, which is not what simulations show. The latter show the formation of concentrated shock-like structures described well by the Burgers model (a model of shock formation and evolution in a compressible fluid), which has been extensively used to describe the large-scale structure of the Universe (Gurbatov & Saichev 1984; Shandarin & Zel'dovich 1989). The success of the Burgers model indicates that a viscosity-generating mechanism operates at small scales in collisionless systems, resulting in the formation of shock-like structures rather than caustic-like structures (a related discussion may be found in Buchert & Dominguez 1998).

In spite of this evidence in support of our reconstruction hypothesis, one needs to test it before applying it to real galaxy catalogues. We have tested our reconstruction against a numerical N-body simulation. We ran a $\Lambda $CDM simulation of $128^3$ dark matter particles, using the adaptive P3M code HYDRA (Couchman et al. 1995). Our cosmological parameters are $\Omega_m=0.3$, $\Omega_\lambda=0.7$, h=0.65, $\sigma_8=0.9$ and a box size of 200 Mpc/h. We took two sparse samples of 17 178 and 100 000 particles at z=0, corresponding to grids of 6 Mpc/h and 3 Mpc/h respectively, and placed them initially on a uniform grid. For the 17 178-particle reconstruction, the sparse sampling was done by first selecting a subset of $32^3$ particles on a regular Eulerian grid and then selecting all the particles which remained inside a spherical cut of radius half the size of the simulation box. The sparse sample of 100 000 points was made randomly. An assignment algorithm, similar to that of Hénon described in Figs. 4 and 5, is used to find the correspondences between the Eulerian and the Lagrangian positions. The results of our test are shown in Fig. 6.

Figure 4: Hénon's mechanical device which solves the assignment problem. The device acts as an analog computer. The rows Ai remain parallel to the y axis, are constrained to move vertically and have positive weights. The columns Bj remain parallel to the x axis, can only move vertically and have negative weights. Vertical studs are placed on the columns, in such a way that each stud enforces a minimal distance between row Ai and column Bj. Initially all rods are kept at fixed positions by stops P and Q. Then the rods are released by removing the stops P and Q and the system starts evolving. Rows go down, columns go up, and aggregates of rows and columns are formed. Thus a complex evolution takes place. The rules for the formation of aggregates are demonstrated by a simple example in the figure that follows.


Figure 5: A step-by-step ((a) to (l)) progress of Hénon's algorithm on a simple example with three initial and final positions (columns and rows). The table on the top right shows the values of the costs (distances between rows and columns). When executing the algorithm by hand, it is convenient to keep track of the distances between the rows and the studs, i.e. the quantities $\gamma _{ij}=\alpha _i-\beta _j-c_{ij}$, where $\alpha _i$ is the variable height of the row Ai and $\beta _j$ is the variable height of the column Bj. A graph of the initial state is made, with the first contact occurring between the second row and second column, which have the largest cost (note that the code originally written by Hénon, in fact, finds the maximum cost; for our purpose of finding the minimum cost, the matrix elements cij should be subtracted from a large number). At this point the entries in the $\gamma $ matrix change, since the second row and column are now in contact and hence c22=0. Obviously the distances between all other rows and columns should also be modified. The second matrix shows the new distances, and automatically a contact is made between the third row and third column, whose separation is now also zero. Since the second and third rows cannot move now, the next contact is made between the first column and the second row, and the break occurs where the total force on the branch is weakest. Since this is where the second row meets the support Q, part of the Q tree is captured by the P tree, as demonstrated. The next contact can now only be made between the first row and the first column, and the break occurs at the weakest branch. The equilibrium position is now reached, where each row is supported by one column. In the final optimal assignment, Col. 1 is attached to row 1, Col. 2 goes to row 2 and Col. 3 is associated with row 3. For this simple exercise one can easily see that this procedure achieves the maximum cost (which in this example is 21).


Figure 6: Test of MAK reconstruction of the Lagrangian positions, using a $\Lambda $CDM simulation of $128^3$ particles in a box of size $(200~{\rm Mpc}/h)^3$. The dots near the diagonal form a scatter plot of reconstructed initial points versus simulation initial points for a grid of size 6 Mpc/h with about 20 000 points. The scatter diagram uses a quasi-periodic projection coordinate $\tilde{\vec{q}}\equiv (q_x+\sqrt 2 q_y+{\sqrt 3} q_z)/(1+\sqrt 2+\sqrt 3)$, which guarantees a one-to-one correspondence between $\tilde{\vec{q}}$ values and points on the regular Lagrangian grid. The upper left inset is a histogram (by percentage) of distances, in reconstruction mesh units, between such points; the first bin corresponds to perfect reconstruction. The lower inset is a similar histogram for reconstruction on a finer grid of about 3 Mpc/h using 100 000 points. With the 6 Mpc/h grid 62% of the points, and with the 3 Mpc/h grid more than 50% of the points, are assigned perfectly.


Figure 7: A comparison of a PIZA-type, stochastic method versus our MAK, deterministic method of solving the assignment problem. The outputs of the simulation used for Fig. 6 have also been used here. In the stochastic method, almost one billion pair interchanges are made, at which point the cost cannot be reduced any further. However, even with such a large number of interchanges, the number of exactly reconstructed points is well below that achieved by MAK. The upper sets are the scatter plots of reconstructed versus simulation Lagrangian positions for PIZA (top left) and for MAK (top right). The lower sets are histograms of the distances between the reconstructed Lagrangian positions and those given by the simulation. The bin sizes are taken to be slightly less than one mesh; hence, all the points in the first (darker) bin are those which have been exactly reconstructed. Using PIZA, we achieve a maximum of about 5000 exactly-reconstructed positions out of the 17 178 points, whereas with MAK this number reaches almost 8000 out of 17 178. Note that, for the sole purpose of comparing MAK with PIZA, it is not necessary to account for periodicity corrections, which would improve both performances equally. Accounting for the periodicity improves the number of exactly-reconstructed MAK positions to almost 11 000 out of the 17 178 points used for reconstruction, as shown in the upper inset of Fig. 6. Starting with an initial random cost of about 200 million (Mpc/h)$^2$ (5000 in our box unit, which runs from zero to one), after one billion pair interchanges a minimum cost of about 1 960 000 (Mpc/h)$^2$ (49 in our box unit) is achieved. Continuing to run the code on a 2 GHz DEC alpha workstation, consuming almost a week of CPU time, does not reduce the cost any further. With the MAK algorithm, the minimum cost is achieved, on the same machine, in a few minutes. The cost for MAK is 1 836 000 (Mpc/h)$^2$ (45.9 in box units), which is significantly lower than the minimum cost of PIZA. Considering that the average particle displacement in one Hubble time is about 10 Mpc/h (about 1/20 of the box size), this discrepancy between MAK and PIZA costs is rather significant.


Figure 8: Reconstruction test in redshift space with the same data as that used for the real-space reconstruction tested in the upper left histogram of Fig. 6. The velocities are smoothed over a sphere of radius 2 Mpc/h. The observer is taken to be at the centre of the simulation box. The reconstructed Lagrangian points are plotted against the simulation Lagrangian positions using the quasi-periodic projection coordinates used for the scatter plot of Fig. 6. The histogram corresponds to the distances between the reconstructed and the simulation Lagrangian positions for each Eulerian position. The bins of the histogram are smaller than one mesh and the first one corresponds to exactly-reconstructed Lagrangian positions. We obtain $43\%$ perfect reconstruction.


Figure 9: Comparison of redshift-space versus real-space reconstruction, which allows a test of the robustness of MAK reconstruction against systematic errors. The same data as those used for Fig. 8 have been used to obtain the above result. In both real and redshift space, a large number (almost $50\%$) of the exactly reconstructed points, i.e. those which fall in the first bin of the histogram, correspond to the same particles.


Figure 10: A thin slice (26 Mpc/h) of the box (of side 200 Mpc/h) of the $128^3$ $\Lambda $CDM simulation, the same as that used for Fig. 6. Out of the 100 000 points which have been used for reconstruction, more than half are exactly reconstructed. The points which fail exact reconstruction are highlighted (yellow, brighter points). The plot verifies that these points lie mainly in the overdense regions, where we do not expect our reconstruction hypothesis to hold well.

It is instructive to compare our results presented in Fig. 6 with those obtained by a stochastic PIZA-type algorithm. Figure 7 demonstrates such a comparison. We see that not only do the stochastic algorithms not guarantee uniqueness, as demonstrated in Fig. 3, but the number of points exactly reconstructed by these algorithms is also much below that obtained by our deterministic algorithm. When reconstructing from observational data, in redshift space, the galaxies' positions are displaced radially by an amount proportional to the component of the peculiar velocity along the line of sight. Thus, for real catalogues, the cost minimization needs to be performed using redshift-space coordinates (Valentine et al. 2000). The redshift position of a galaxy is given by

 \begin{displaymath}\vec{s}=\vec{x}+\hat{\vec{s}}\left({\dot{\vec{x}}\over H}\cdot
\hat{\vec{s}}\right),
\end{displaymath} (20)

where under our assumption of irrotationality and convexity of the Lagrangian map and to first order in Lagrangian perturbation theory,

\begin{displaymath}\dot{\vec{x}}=\beta H (\vec{x}-\vec{q})
\end{displaymath} (21)

and $\beta={f(\Omega)/ b}$ with $f(\Omega)={\rm d}\ln\delta/{\rm d}\ln a=\Omega_{\rm m}^{0.6}$, the bias factor $b$ being defined by $\delta_{\rm galaxies}=b\delta_{\rm mass}$. The quadratic cost function can then easily be written in terms of redshift-space coordinates as follows

 \begin{displaymath}(\vec{x}-\vec{q})^2=
(\vec{s}-\vec{q})^2-
{\beta(2+\beta)\over (1+\beta)^2}
\left((\vec{s}-\vec{q})\cdot
\hat{\vec{s}}\right)^2.
\end{displaymath} (22)

Using MAK, based now on the cost (17) modified according to (22), we can then find the corresponding $\vec{q}$ for each $\vec{s}$ and hence find the peculiar velocities. Indeed, using (20),

\begin{displaymath}\vec{s}-\vec{q}=
{\dot{\vec{x}}\over H}\left(1+{b\over \Omega_{\rm m}^{0.6}}\right)\cdot
\end{displaymath} (23)

Note that in this approach the motion of the local group itself is ignored; it should, however, also be accounted for (Taylor & Valentine 1999). Furthermore, the effects of the catalogue selection function and of bias can be handled by giving each galaxy a mass inversely proportional to the catalogue selection function (Nusser & Branchini 2000).
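A minimal Python sketch of the redshift-space cost (22), under the simplifying assumptions that the observer sits at the origin and that each object's line of sight is $\hat{\vec{s}}=\vec{s}/\vert\vec{s}\vert$ (the function and variable names are ours):

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def redshift_space_cost(s, q, beta):
    """Cost matrix of Eq. (22): the radial part of (s - q) is down-weighted
    by beta(2+beta)/(1+beta)^2. s and q are (N, 3) arrays; the observer is
    assumed to be at the origin, so s_hat = s/|s| row by row."""
    d = s[:, None, :] - q[None, :, :]                      # (N, N, 3): s_i - q_j
    s_hat = s / np.linalg.norm(s, axis=1, keepdims=True)   # unit line-of-sight vectors
    radial = np.einsum('ijk,ik->ij', d, s_hat)             # (s_i - q_j) . s_hat_i
    return (d ** 2).sum(axis=-1) - beta * (2.0 + beta) / (1.0 + beta) ** 2 * radial ** 2

# Hypothetical usage, with beta = Omega_m^0.6 / b:
# cost = redshift_space_cost(s, q, beta=0.3 ** 0.6 / 1.0)
# row, col = linear_sum_assignment(cost)   # MAK assignment in redshift space
\end{verbatim}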

We have also performed MAK reconstruction with the redshift-modified cost function, which led to somewhat degraded results but at the same time provided an approximate determination of the peculiar velocities (see Fig. 8). A more accurate determination of the peculiar velocities can be made using second-order Lagrangian perturbation theory (Buchert & Ehlers 1993; Munshi et al. 1994). The question one asks at this point is: where does reconstruction perform poorly? Figure 10 addresses this by highlighting the points for which reconstruction fails by more than 6 Mpc/h. A plot of the distance between the reconstructed and simulated Lagrangian positions versus the density in a sphere of about 5 Mpc/h confirms that, as expected, reconstruction becomes less accurate in the denser regions; in particular, exact reconstruction is mostly not achieved in the dense regions. Achieving reconstruction at small scales remains a subject of on-going research. As long as multistreaming effects are unimportant, that is above a few megaparsecs, uniqueness of the reconstruction is guaranteed. Approximate algorithms capturing such highly nonlinear effects are now being developed.

Acknowledgements
We thank E. Branchini, T. Buchert, M. Hénon, J. A. de Freitas Pacheco and P. Thomas for discussions and comments. U.F. thanks the Indo-French Center for the Promotion of Advanced Research (IFCPAR 2404-2). A.S. is supported by a CNRS Henri Poincaré fellowship and RFBR 02-01-01062. R.M. is supported by the European Union Marie Curie Fellowship HPMF-CT-2002-01532.

References



Copyright ESO 2003