Velocity correction for Hubble constant measurements from standard sirens

Gravitational wave (GW) sources are an excellent probe of the luminosity distance and offer a novel measure of the Hubble constant, H_0. This estimation of H_0 from standard sirens requires an accurate estimate of the cosmological redshift of the host galaxy of the GW source after correcting for its peculiar velocity. The absence of an accurate peculiar velocity correction affects both the precision and the accuracy of the measurement of H_0, particularly for nearby sources. Here, we propose a framework to incorporate such a peculiar velocity correction for GW sources. A first implementation of our method to the event GW170817, combined with observations taken with Very Long Baseline Interferometry (VLBI), leads to a revised value of H_0 = 68.3^{+4.6}_{-4.5} km s^{-1} Mpc^{-1}. While this revision is minor, it demonstrates that our method makes it possible to obtain unbiased and accurate measurements of H_0 at the precision required for standard siren cosmology.


Introduction
Efforts to obtain an accurate measurement of the expansion rate of the Universe at the present epoch, known as the Hubble constant (H_0), have been ongoing since the discovery of the expanding Universe by Lemaître (1927, 1931) and Hubble (1929) (the International Astronomical Union has recently renamed the Hubble law the Hubble-Lemaître law, in recognition of the pioneering contribution of Lemaître; https://www.iau.org/news/pressreleases/detail/iau1812/). Several complementary approaches have measured its value with a high level of precision (Hinshaw et al. 2013; Bennett et al. 2013; Planck Collaboration XVI 2014; Anderson et al. 2014; Planck Collaboration XIII 2016; Cuesta et al. 2016; Planck Collaboration VI 2020; Riess et al. 2019). However, current measurements of H_0 obtained using standard rulers anchored in the early Universe, such as the cosmic microwave background (CMB), baryon acoustic oscillations (BAO; Planck Collaboration XVI 2014; Anderson et al. 2014; Aubourg et al. 2015; Planck Collaboration XIII 2016; Macaulay et al. 2019), and Big Bang nucleosynthesis (BBN; Addison et al. 2018; Abbott et al. 2018a), differ from late-Universe probes using standard candles, such as type-Ia supernovae (SN Ia; Reid et al. 2009; Riess et al. 2019), and strong lensing from the H0LiCOW project (Wong et al. 2019), along with the use of the angular diameter distance between the lensed images as a calibrator (Jee et al. 2019). A recent measurement of H_0 from the Carnegie-Chicago Hubble program, based on the use of the tip of the red giant branch (TRGB) as a calibrator for SN Ia, has reduced the tension (Freedman et al. 2019). However, a more recent analysis by Yuan et al. (2019) has claimed inaccuracies in the calibration of Freedman et al. (2019), which once again aggravates the tension.
Taken at face value, this tension, statistically significant at more than 4σ, would necessitate a revision of the flat Lambda cold dark matter (ΛCDM) model of cosmology (Verde et al. 2019; Bernal et al. 2016; Di Valentino et al. 2017; Kreisch et al. 2020; Poulin et al. 2019; Lin et al. 2019; Agrawal et al. 2019; Knox & Millea 2020). Whether this discrepancy is associated with systematic or calibration errors in either of the data sets or whether it indicates new physics is currently a subject of intense debate. In this context, the spotlight has turned to standard sirens (Schutz 1986; Abbott et al. 2017a), with binary neutron star mergers as an independent probe with the potential to reach the percent-level precision needed to validate the low-redshift (z) determination of H_0 (Dalal et al. 2006; Nissanke et al. 2010, 2013a; Chen et al. 2018; Feeney et al. 2019; Seto & Kyutoku 2018; Mortlock et al. 2019). This potential crucially depends on whether the contamination from the peculiar velocity can be corrected at the required level of accuracy. The estimation of the cosmic velocity field was established in several earlier studies (Kaiser et al. 1991; Shaya et al. 1992; Hudson 1994; Davis et al. 1996, 2011; Branchini et al. 1999, 2001; Saunders et al. 2000; Nusser et al. 2001; Hudson et al. 2004; Radburn-Smith et al. 2004; Pike & Hudson 2005; Erdoǧdu et al. 2006; Lavaux & Hudson 2011; Ma et al. 2012; Turnbull et al. 2012).
In this paper, we propose a new framework to obtain an unbiased and accurate measurement of H_0 from gravitational wave (GW) observations. Our method relies on the borg framework, named for Bayesian Origin Reconstruction from Galaxies (Jasche & Wandelt 2013; Jasche et al. 2015; Lavaux & Jasche 2016), to reconstruct the cosmic velocity field using the galaxy field observed in redshift space. This framework is quite different from the method used by Abbott et al. (2017a), which depends on the linear velocity estimates of Carrick et al. (2015). Along with providing the complete Bayesian posterior distribution of the velocity field, the borg framework also captures the non-linear velocity component (as discussed in Sect. 4), which goes beyond the framework of Carrick et al. (2015). Our method is an alternative way to incorporate peculiar velocity corrections for future GW sources in the sky patch of the 2M++ and SDSS-III samples, as discussed in Sect. 4. To make a reliable estimation of H_0 from GW sources, it is essential to correct for peculiar velocity using multiple independent approaches to minimize the chance of any systematic bias. The proposal made in this work can be used alongside other methods, such as those of Carrick et al. (2015) and Springob et al. (2014), for future GW sources.
The paper is organized as follows. In Sect. 2, we discuss the low-redshift probes of H_0 using standard candles and standard sirens. In Sects. 3 and 4, we discuss the effect of peculiar velocity on the luminosity distance and redshift, and we present the borg framework in the context of estimating this velocity from cosmological observations. In Sect. 5, we discuss the algorithm for incorporating a peculiar velocity correction for the host of a standard siren. Finally, in Sects. 6 and 7, respectively, we obtain the revised value of H_0 from the event GW170817 (Abbott et al. 2019a) and discuss the applicability of our method to future GW events.

Low-redshift probes of the Hubble constant from standard sirens
All direct low-z measurements of H_0 depend on measuring the luminosity distance to the source, which in a homogeneous Friedmann-Lemaître-Robertson-Walker (FLRW) metric, assuming a stationary source and observer, is given by

d_L(z) = (1 + z) (c/H_0) ∫_0^z dz'/E(z'). (1)

Here, c is the speed of light and E(z) ≡ H(z)/H_0 = [Ω_m (1 + z)^3 + (1 − Ω_m)]^{1/2} within the framework of flat ΛCDM (Hinshaw et al. 2013; Bennett et al. 2013; Planck Collaboration XVI 2014; Anderson et al. 2014; Planck Collaboration XIII 2016; Cuesta et al. 2016; Planck Collaboration VI 2020), with Ω_m = ρ_m/ρ_c (matter density ρ_m today divided by the critical density ρ_c = 3H_0^2/8πG). At very low redshift z, the Hubble parameter H(z) is nearly constant, and this relation simplifies greatly while becoming independent of the background cosmological model:

d_L ≈ cz/H_0. (2)

GW sources provide a new avenue whereby we can measure the luminosity distance, as they are remarkable standard sirens that assume only that the general theory of relativity is a valid description; that is, they do not need to be "standardized" using other astrophysical sources. Indeed, as pointed out by Schutz (1986), the distance to a GW source can be measured without degeneracy using its redshifted chirp mass M_z, which is related to the physical chirp mass in the source frame by M_z = (1 + z)M. The source-frame chirp mass is related to the masses of the two compact objects, m_1 and m_2, via M = (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}.
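As a numerical illustration of the distance-redshift relations above, the following Python sketch (with illustrative parameter values) compares the full flat-ΛCDM luminosity distance with the low-z approximation d_L ≈ cz/H_0:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, omega_m=0.315):
    """Dimensionless expansion rate E(z) = H(z)/H0 for flat LambdaCDM."""
    return np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m))

def luminosity_distance(z, h0=70.0, omega_m=0.315, n=2001):
    """d_L = (1+z) (c/H0) int_0^z dz'/E(z')  [Mpc], trapezoidal rule."""
    zp = np.linspace(0.0, z, n)
    y = 1.0 / E(zp, omega_m)
    integral = (zp[1] - zp[0]) * (y.sum() - 0.5 * (y[0] + y[-1]))
    return (1.0 + z) * (C_KM_S / h0) * integral

# At z = 0.01 the model-independent approximation d_L ~ c z / H0
# agrees with the full integral to better than one percent.
z = 0.01
d_exact = luminosity_distance(z)
d_approx = C_KM_S * z / 70.0
```

At such low redshift the choice of Ω_m is irrelevant in practice, which is precisely why nearby standard sirens probe H_0 without assuming a detailed background cosmology.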
The GW strain in the frequency domain can be expressed for the two polarization states, h_+ and h_×, via the relations (Hawking & Israel 1987; Cutler & Flanagan 1994; Poisson & Will 1995; Maggiore 2008)

h_+(f_z) = (5/96)^{1/2} π^{−2/3} (G M_z)^{5/6} / (c^{3/2} d_L) f_z^{−7/6} e^{iφ_z(f_z)} (1 + cos^2 i)/2,
h_×(f_z) = (5/96)^{1/2} π^{−2/3} (G M_z)^{5/6} / (c^{3/2} d_L) f_z^{−7/6} e^{i(φ_z(f_z)+π/2)} cos i,

where G is the gravitational constant, c is the speed of light, f_z = f/(1 + z) is the redshifted GW frequency, φ_z is the phase of the GW signal, and i ≡ arccos(L̂·n̂) is the inclination angle of the GW source, defined as the angle between the unit vector of the angular momentum, L̂, and the line of sight to the source position, n̂. The above expressions indicate the presence of a degeneracy between d_L and cos(i) for individual polarization states. However, the two polarization states can be distinguished because they have different dependencies on cos(i) (Holz & Hughes 2005; Dalal et al. 2006; Nissanke et al. 2010, 2013a). The measurement of both polarization states of the GW signal requires multiple detectors with different relative orientations between their arms. Along with the luminosity distance measurement to the standard siren, a measurement of the Hubble constant also requires an independent measurement of the redshift of the source. Binary neutron star, black hole-neutron star, and supermassive binary black hole mergers are all expected to have electromagnetic counterparts. These can lead to the identification of the host galaxy of the GW source, and the redshift of the GW source can then be estimated from the electromagnetic spectrum of the host using spectroscopic (or photometric) measurements via the relation 1 + z = λ_o/λ_e, where λ_o and λ_e are the observed and emitted wavelengths. As a result, by combining d_L from the GW signal and z from the electromagnetic spectrum, GW sources provide an excellent avenue for measuring the Hubble constant using Eq. (2).
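The mass and inclination quantities just defined can be sketched as follows (a minimal illustration; the 1.4+1.4 M_⊙ binary and the 60° inclination are arbitrary example values):

```python
import numpy as np

def chirp_mass(m1, m2):
    """Source-frame chirp mass M = (m1 m2)^(3/5) / (m1 + m2)^(1/5)."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def polarization_amplitudes(inclination):
    """Inclination dependence of the two polarizations of an inspiral:
    h+ scales as (1 + cos^2 i)/2 while hx scales as cos i.
    Measuring both breaks the d_L -- cos(i) degeneracy."""
    ci = np.cos(inclination)
    return (1.0 + ci**2) / 2.0, ci

m_chirp = chirp_mass(1.4, 1.4)        # ~1.22 Msun for two 1.4 Msun stars
m_chirp_z = (1.0 + 0.01) * m_chirp    # redshifted chirp mass M_z = (1+z) M
amp_plus, amp_cross = polarization_amplitudes(np.deg2rad(60.0))
```

For a face-on source (i = 0) both amplitudes are equal, while an edge-on source (i = π/2) emits no h_× at leading order, which is why a multi-detector network with different arm orientations constrains cos(i) and hence d_L.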

Luminosity distance and redshift in the presence of large-scale structure
The luminosity distance to a source and its observed redshift in a homogeneous FLRW Universe differ from their counterparts in the presence of cosmic perturbations. The presence of perturbations in the matter density leads to temporal and spatial fluctuations in the metric perturbations (through terms involving Φ̇, Φ, ∇Φ, and ∇^2 Φ), which are related to effects including (but not limited to) the peculiar velocity of the source and of the observer, the gravitational redshift, the integrated Sachs-Wolfe effect, and gravitational lensing (Sasaki 1987; Kolb et al. 2005; Barausse et al. 2005; Bonvin et al. 2006).
The observed redshift of the source also differs from the cosmological redshift due to contributions from the difference in the peculiar velocities of the observer, u_o, and the source, u_s, and also due to the gravitational redshift. At low redshift, the most important contribution arises from the difference between the velocities of the observer and the source. The observed redshift z_obs can be written in terms of the line-of-sight peculiar velocity, v_p = (u_s − u_o)·û, where the line-of-sight vector û is directed from the observer to the source (so that v_p > 0 for a source moving away from the observer), via the relation

(1 + z_obs) = (1 + z)(1 + v_p/c).

The corresponding modification of Eq. (2) is

d_L ≈ (c z_obs − v_p)/H_0.

This implies that the contribution from the peculiar velocity leads to a bias in the inferred value of H_0 if it is not accounted for. For multiple sources with uncorrelated velocities, the effect of ignoring the peculiar velocity component in the average is an excess variance in the measurement of H_0. If we assume the distribution of the peculiar velocity field to be Gaussian with a variance σ_v^2, then the corresponding excess variance in the H_0 measurement for a source at distance d_L becomes σ_v^2/d_L^2. The current frameworks for obtaining H_0 from standard sirens use a Gaussian prior on the peculiar velocity (Abbott et al. 2017a; Chen et al. 2018; Feeney et al. 2019). We propose an algorithm to correct for the peculiar velocity contribution to standard sirens.
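The bias described above can be made concrete with a small sketch (the numbers are illustrative, loosely of the order of GW170817, not a fit to data):

```python
C_KM_S = 299792.458  # speed of light [km/s]

def h0_estimate(cz_obs, d_l, v_p=0.0):
    """Low-z H0 estimator: H0 = (c z_obs - v_p) / d_L.
    With v_p = 0 the peculiar velocity is ignored and the
    estimate is biased by exactly v_p / d_L."""
    return (cz_obs - v_p) / d_l

cz_obs = 3327.0  # observed recession velocity [km/s] (illustrative)
d_l = 44.0       # luminosity distance [Mpc] (illustrative)
v_p = 370.0      # line-of-sight peculiar velocity [km/s] (illustrative)

h0_biased = h0_estimate(cz_obs, d_l)        # biased high by v_p / d_L
h0_correct = h0_estimate(cz_obs, d_l, v_p)  # after peculiar velocity removal
```

A few hundred km s^{-1} of uncorrected peculiar velocity at a ~40 Mpc distance shifts H_0 by several km s^{-1} Mpc^{-1}, which is why the correction matters most for nearby sources.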
The peculiar velocity of a host galaxy, v_p = v_h + v_vir, can arise from two components, namely, (i) the motion of the halo due to the spatial gradient of the gravitational potential, v_h, and (ii) the virial velocity component, v_vir, of the host galaxy inside the halo. The non-rotational part of v_h can be obtained from linear perturbation theory as

v_h = −(2 g)/(3 a H(a) Ω_m(a)) ∇Φ,

where Φ is the gravitational potential, g is the growth rate related to the linear growth factor D by g = d ln D/d ln a, and a = 1/(1 + z) is the scale factor. At small scales, non-linear effects can be quite important and this relation becomes inaccurate. The virial velocity component, v_vir, can be related to the mass of the halo, M_h, and the distance to the halo center, r, via v_vir^2 ∝ M_h/r, which holds for a virialized system. For a system with a constant mass density, the halo mass and r are related by M_h ∝ r^3. As a result, the virial velocity is related to the halo mass alone via v_vir^2 ∝ M_h^{2/3}. This relation indicates that a galaxy in a heavier halo has a larger velocity dispersion than one residing in a smaller halo. We use the following form, fitted to simulations, to estimate the velocity dispersion of the non-linear velocity component (Sheth & Diaferio 2001):

σ_vir = 476 g_v [Δ_nl(z) E^2(z)]^{1/6} [M_h/(10^{15} h^{−1} M_⊙)]^{1/3} km s^{−1}, (6)

where g_v = 0.9, Δ_nl(z) = 18π^2 + 60x − 32x^2, and x = Ω_m (1 + z)^3/E^2(z) − 1.
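The fitting form of Eq. (6) can be sketched as follows, assuming the 476 km s^{-1} normalization and the 10^{15} h^{-1} M_⊙ mass pivot of the Sheth & Diaferio (2001) fit (the Ω_m value is an illustrative choice):

```python
import numpy as np

def E2(z, omega_m=0.315):
    """E(z)^2 = Omega_m (1+z)^3 + (1 - Omega_m) for flat LambdaCDM."""
    return omega_m * (1.0 + z)**3 + (1.0 - omega_m)

def sigma_vir(m_halo, z=0.0, omega_m=0.315, g_v=0.9):
    """Virial velocity dispersion [km/s] following the Sheth & Diaferio
    (2001) fit: sigma = 476 g_v (Delta_nl E^2)^(1/6) (M_h / 1e15)^(1/3),
    with M_h in Msun/h, Delta_nl = 18 pi^2 + 60 x - 32 x^2, and
    x = Omega_m(z) - 1."""
    x = omega_m * (1.0 + z)**3 / E2(z, omega_m) - 1.0
    delta_nl = 18.0 * np.pi**2 + 60.0 * x - 32.0 * x**2
    return 476.0 * g_v * (delta_nl * E2(z, omega_m))**(1.0 / 6.0) \
        * (m_halo / 1e15)**(1.0 / 3.0)

# A ~1e12 Msun/h halo, comparable to that of NGC 4993, gives ~100 km/s:
sigma_example = sigma_vir(1e12)
```

The M_h^{1/3} scaling means an order-of-magnitude uncertainty in the halo mass changes σ_vir by only a factor of ~2, so a rough mass estimate of the host halo is sufficient for this term.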

Estimation of the velocity field using borg
The velocity field that we use in this work is produced by the Bayesian Origin Reconstruction from Galaxies (borg) probabilistic model of large-scale structure, as currently applied to the 2M++ galaxy compilation. To that effect, the method solves a large-scale Bayesian inverse problem by fitting a dynamical structure formation model to data and thereby inferring the primordial initial conditions from which presently observed structures formed. The borg forward modeling approach marginalizes automatically over unknown galaxy bias and accounts for selection and evolutionary effects.
The variant of borg that we use for the 2M++ catalog models the galaxy-to-dark-matter bias using a three-parameter function motivated by the analysis of N-body simulations (Neyrinck et al. 2014). We note that this bias relation is a local non-linear relation for which the conventional understanding of bias on large scales does not apply; it is more akin to a halo occupation distribution for high-luminosity galaxies. The data specifically select that behavior, as can be seen in Sect. 5.1 of Jasche & Lavaux (2019). Any naive numerical comparison of borg bias parameters to linear bias values found in the literature should therefore be approached with caution. Instead of the linear perturbation result of the previous section, borg uses a full particle-mesh N-body solver to evolve the initial conditions to the dark matter density distribution at z ≈ 0 (Jasche & Lavaux 2019) through direct integration of the Hamiltonian dynamics equations.
The borg model provides a set of points in parameter space (of dimensionality 256^3 for the initial conditions, plus a few additional bias parameters) that provides a numerical approximation of the posterior distribution of those parameters given the 2M++ observed galaxy distribution. Once the initial and final positions of the dark matter particles are known, the velocity field is estimated using the Simplex-in-Cell estimator (SIC; Hahn et al. 2015; Leclercq et al. 2017). The SIC estimator relies on a phase-space interpolation technique, which provides an accurate estimate of velocity fields without the need for commonly used but arbitrary kernel smoothing procedures, such as cloud-in-cell (CIC). It provides a physical picture of velocity fields even in low-density regions, which are poorly sampled by dark matter particles, and takes into account the multi-valued nature of the velocity field in shell-crossed regions by proper stream averaging.

Validation of the borg reconstruction algorithm
In this section, we discuss several validation tests of the borg velocity field inference. In Sect. 4.1.1, we assess the validity on the sole basis of the statistics of the reconstructed samples. In Sect. 4.1.2, we run a borg analysis on a set of mock tracers to check the unbiasedness and the typical reconstruction errors.

Comparison to the performance of linear perturbation theory
The model of Carrick et al. (2015) is commonly used for correcting the estimated cosmological redshift of observed objects in the nearby Universe. This model relies on computing the inverse Laplacian operator on a luminosity-weighted distribution of the 2M++ galaxies. It has been fitted to observed distances of galaxies derived from the SFI++ (Springob et al. 2007) and the First Amendment A1 SNIa data (Turnbull et al. 2012). The borg model goes beyond this approach. It is based on deriving a physically meaningful velocity field, beyond linear perturbation theory, using several pieces of information, including the same 2M++ data-set and cosmology.

Fig. 1. Correlation coefficient, as a function of k (h Mpc^{-1}), between the non-linear density and the negative divergence of the non-linear velocity field, θ ≡ −∇·u. The red line is the ensemble average over the posterior distribution of the cross-correlation, and the shaded dark red area is the one-standard-deviation region about the mean of that cross-correlation, derived from the same distribution obtained by borg. The green line and shaded green region give the same quantities computed on 80 random realizations. Apart from very minor differences, the two curves are on top of each other, which can be expected as the physical model is the same in both cases. The vertical grey band on the right side indicates the resolution limit of the borg reconstruction. The correlation asymptotically reaches one on the largest scales (small k values); the red line reaches 99% for the largest probed mode. As expected, this cross-correlation goes to zero at small scales (high k values).
The performance of linear perturbation theory, as well as the need to go beyond it, are two aspects that have been extensively discussed in the past. The differences between the density and velocity power spectra have already been computed analytically on large scales (Scoccimarro 2004) and observed in simulations (Chodorowski & Ciecielag 2002; Scoccimarro 2004; Jennings et al. 2011; Jennings 2012), but they are challenging to derive from observational data. This additionally motivates the borg approach. Leclercq et al. (2015, Fig. 1) showed that borg's constrained simulations of the density field are statistically consistent with independent random realizations. We further emphasize this point here by analyzing the capability of going beyond the linear level for the velocity field. In that respect, we show the cross-correlation between the density and θ = −∇·u in Fig. 1 for constrained realizations and for random simulations generated using the same particle-mesh implementation and cosmological setup. There are no visible differences between the cross-correlations for the inferred 2M++ fields and the ones for random realizations. This provides evidence that the density de-correlates from the velocity divergence in the way that is expected from the non-linear gravitational clustering physics of dark matter.

Fig. 2. Ensemble-average power spectrum of the scaled non-linear velocity field, alongside a one-standard-deviation contour in the shaded area. The line and shaded region in red are computed from the ensemble posterior distribution built from the 2M++ data. The line and shaded region in green are measured from 80 random realizations of a ΛCDM universe of the same size. Those random realizations were computed using the same N-body solver, but starting from a random sample of a Gaussian distribution with a power spectrum provided by the ΛCDM model. The thick blue line shows the power spectrum assuming linear theory. Large scales are unaffected by non-linearities. Intermediate scales (k ≳ 0.1 h Mpc^{-1}, corresponding to ≲60 h^{-1} Mpc) get non-negligible contributions from non-linear dynamics, as highlighted by the area under the curve. The vertical grey band on the right side indicates the resolution limit of the borg reconstruction (∼2.6 h Mpc^{-1}).

We also show in Fig. 2 the contribution of the different scales to the total variance of the velocity field. We note that pure Eulerian linear perturbation theory (solid blue line) is systematically above the curve obtained from constrained 2M++ realizations (shades of red) and the random velocity field power spectrum (shades of green) for the range of scales of interest. That behavior certainly changes on smaller scales, for which random motions become significant, as is hinted by the crossing of the two curves at k ≈ 2 h Mpc^{-1}. For scales of k < 0.8 h Mpc^{-1}, the 2M++ constrained realization matches the pure Eulerian linear perturbation theory well, within the expected variance (denoted by the red shaded region). The drop of power at that scale for the mock and constrained realizations is related to the onset of non-linearities in the power spectrum, as pointed out in Jennings et al. (2011).
The two above points show the limits of using the density field as a predictor for the velocity field on intermediate scales.
The linear model has likely reached its maximum predictive capacity. The borg-derived non-linear model of the velocity field is physically more realistic, but the simpler model of Carrick et al. (2015) has the virtue of having been previously compared to distance observations. We intend to do the same for borg in a forthcoming publication. In addition, borg self-consistently models redshift-space distortions, which brings additional information with which to disentangle galaxy bias from dark matter clustering.

Tests with an N-body simulation
Although a full mock catalog analysis is beyond the scope of this article, we ran some tests on N-body simulations to showcase the correctness of the borg reconstruction algorithm. Using Gadget-2 (Springel 2005), we generated an N-body simulation covering a cubic volume with a side length of 200 h^{-1} Mpc and sampled with 256^3 particles. We used the same cosmology as that used for the borg 2M++ reconstruction (Jasche & Lavaux 2019). To simplify the test case, we generated a mock catalog of 8457 objects directly from the particles of the simulation. The corresponding number density is ∼8−10 times smaller than in the observational data. We then ran a BORG-PM reconstruction with 20 time-steps from z = 50 to z = 0, linearly distributed according to the scale factor, with a grid of 32^3 elements to represent the initial conditions. This corresponds to a spatial resolution of 6.25 h^{-1} Mpc, which is two to three times lower than the one used for the run on observations. We show one of the inferred density samples in Fig. 3 and note the qualitative agreement between that sample and the density contrasts of the full simulation. In Fig. 4, we give a quantitative comparison derived from the entire posterior distribution. The assessment is done by computing the cross-correlation between the borg density field and the simulation density field, as well as the variance between those two fields. They show that the borg density field is unbiased and tightly correlated with the truth on large scales. Figure 5 illustrates the distribution of the differences between the velocity field of the simulation and the one provided by the borg inference. The comparison is done for a single sample obtained by borg and, for linear theory, directly from the density contrast obtained from the mock catalog. The velocity fields are computed using the Simplex-in-Cell algorithm at high resolution and downgraded to 32^3 using local averages.
In both cases, the reconstructions are unbiased, but linear theory exhibits heavier tails in the residuals, leading to a larger typical width of the error distribution. We conclude from this test that the peculiar velocities reconstructed with the borg algorithm are unbiased. They also have typically smaller errors compared to velocities derived using a linear perturbation theory approach.

Implementation of BORG on 2M++
In Fig. 7 (left panel), we show the velocity field in supergalactic coordinates for 2M++, along with the spatial position of NGC 4993 (marked by a star). The middle panel of Fig. 7 indicates the estimate of the velocity field from 2M++ in the supergalactic coordinate system. Additionally, we show in the right panel the velocity field as inferred from the SDSS-III/BOSS survey. The latter uses a simpler dynamical model based on first-order Lagrangian perturbation theory. The contribution of the non-linear velocity component captured by borg is shown in Fig. 8. This non-linear velocity component is the one captured by the particle-mesh solver of borg, although it is not yet sufficient to resolve virial motions in large-scale structures. We denote this regime by "NL" for non-linear. It corresponds to the intermediate "gray" regime in which linear modeling of the velocity field is not sufficient, yet a full non-perturbative description of virialized dynamics is not required. To assess its importance, we considered two residuals. The left panel shows the relative contribution of the pure NL velocity from borg with respect to the total velocity contribution, obtained by evaluating the expression |(v_r,BORG − v_r,linear)/v_r,BORG|.

Fig. 5. Tests of the BORG-PM algorithm on an N-body simulation and the result for the velocity field. We compute the velocity field using the Simplex-in-Cell algorithm using the entire set of particles of the N-body simulation and a resimulation from a single borg sample. We also compute the linear theory prediction for the velocity field based on the mock catalog. We show the residual between each reconstructed velocity field and the one of the simulation; the difference is computed for each mesh element. The residual distribution is narrower and has less heavy tails for the borg reconstruction than for the linear theory one. In both cases, the velocity field is unbiased.
It is derived from the difference between the total contribution and the velocity field derived from the gradient of the gravitational potential of the matter density field. We note that the difference between the two fields may be due to at least two causes: either the linear theory approximation underestimates the amplitude of the flow in infalling regions, or it unphysically overestimates the velocity flow close to the peak of the matter density contrast. The right panel shows the estimated standard deviation of the velocity field, both for the total velocity and for the residue obtained by taking the difference between the total non-linear and the linear component of the velocity field. This quantity corresponds to the second moment of the probability density function conditioned on δ_m. We note that this moment does not correspond to the full marginal distribution and, thus, it is possible that the conditional standard deviation is reduced compared to the marginal standard deviation. We compute this standard deviation in each bin of the matter density field. We note that the ratio of the standard deviation of the non-linear component of the velocity field to that of the total velocity field can go up to ∼(20−30)% at δ_m ∼ 4, indicating that the non-linear component is far from negligible. We point out that the velocity and density field inference is largely independent of the Hubble constant. Our working coordinate system uses the arbitrary value of H_0 = 100 km s^{-1} Mpc^{-1}. In this coordinate system, the value of H_0 is irrelevant for computing the dynamics and the observables in the observer coordinate frame. What matters instead is the redshift evolution of E(z), which only depends on the relative expansion history. For the portion of the Universe considered here, the only relevant parameters are Ω_m = 0.315 and Ω_Λ = 0.685 (Planck Collaboration VI 2020).
The only place where the absolute value of H 0 enters is in the prior power spectrum of primordial fluctuations that was used to run the analysis. This is, however, at least a second-order effect as this only governs the way borg has to fill the missing information in the data space, but not the actual mass distribution that is inferred on large scales.

Algorithm for the peculiar velocity correction for standard sirens
The observed GW signal from a network of GW detectors is a probe of the distance to the source, as discussed earlier. The GW signal parameters comprise the source parameters, such as the chirp mass, the individual masses of the compact objects, their spins, and their tidal deformability, and the parameters that relate the GW source to the observer, such as the luminosity distance d_L, sky location n̂, polarization angle ψ, and inclination angle i. It is this latter set that is useful for estimating the value of H_0 (Dalal et al. 2006; Nissanke et al. 2010; Abbott et al. 2017a). The GW strain components, h_+ and h_×, provide estimates of the luminosity distance and the inclination angle of the GW source. The sky location of the GW source is obtained using the arrival times of the GW signal at different detectors, as well as their antenna functions (Fairhurst 2009, 2011; Nissanke et al. 2011, 2013b; Schutz 2011; Veitch et al. 2012). The redshift of the GW source may be measured by identifying the host galaxy using the electromagnetic counterpart of the GW source. We propose a way of estimating the peculiar velocity of the host galaxy based on the estimation of the large-scale flow, v_h, from borg, and of the small-scale motion as a stochastic velocity dispersion using the fitting form given in Eq. (6). We note that v_h is itself composed of the velocity field at linear order in perturbation theory, with an additional (at least) 30% correction from non-linear evolution provided by borg for densities δ_m > 0.
Fig. 8. Left panel: relative difference between the velocity field derived from the gravity field assuming linearity and the non-linear velocity field produced by borg, with respect to the total, for the ensemble mean fields. Right panel: standard deviation of the velocity field in each density bin (orange curve) and standard deviation of the non-linear velocity, (v_r,BORG − v_r,linear), in each density bin (blue curve). The non-virial, non-linear component of the velocity field can represent a substantial portion of the cosmic velocity field: up to ∼25% of the standard deviation of the total velocity field when the local matter density exceeds δ_m ≈ 4.

Using the location of the identified host galaxy (from the electromagnetic counterpart), we estimate the corresponding posterior probability density function (PDF) of the line-of-sight velocity component, P(v_p | M, û). By combining the measurements from the GW data, d_GW, the redshift of the host galaxy, ẑ, the peculiar velocity estimate, v_p, and the luminosity distance, d_L, we can obtain the posterior of H_0 for n GW sources by using the Bayesian framework:

P(H_0 | {d_GW}, {ẑ}) ∝ Π(H_0) ∏_{i=1}^{n} ∫ dz_i dv_p^i L(d_GW^i | d_L(z_i, H_0), v_p^i, û_i) P(z_i | û_i) P(v_p^i | M, û_i), (7)
where û_i(RA_i, Dec_i) is the sky position (RA and Dec denote the right ascension and declination), L denotes the likelihood, which is assumed to be Gaussian, P(z_i | û_i) is the posterior of the redshift estimate at the GW source position û_i, P(v_p^i | M, û_i) is the peculiar velocity estimate obtained using the cosmological method, and Π(H_0) is the prior on the value of H_0.
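A minimal Monte Carlo sketch of this posterior for a single source, using the low-z relation d_L = cz/H_0, Gaussian likelihoods, and a flat prior (all names and numbers here are illustrative assumptions, not the actual analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

def h0_posterior(h0_grid, d_gw, sigma_d, cz_obs, sigma_cz,
                 vp_samples, sigma_vir=100.0):
    """Posterior of H0 for one source: marginalize the Gaussian GW
    distance likelihood over samples of the peculiar velocity (the
    large-scale flow, e.g. a borg posterior, plus Gaussian virial
    scatter) and over the observed-redshift uncertainty. Flat prior."""
    n = len(vp_samples)
    vp = vp_samples + rng.normal(0.0, sigma_vir, n)  # add virial scatter
    cz = rng.normal(cz_obs, sigma_cz, n)             # redshift uncertainty
    cz_cosmo = cz - vp                               # cosmological part
    post = np.empty_like(h0_grid)
    for j, h0 in enumerate(h0_grid):
        d_model = cz_cosmo / h0                      # low-z: d_L = cz/H0
        like = np.exp(-0.5 * ((d_model - d_gw) / sigma_d)**2)
        post[j] = like.mean()                        # Monte Carlo integral
    return post / (post.sum() * (h0_grid[1] - h0_grid[0]))

h0_grid = np.linspace(40.0, 120.0, 401)
vp_samples = rng.normal(373.0, 82.0, 4000)  # stand-in for a velocity posterior
post = h0_posterior(h0_grid, d_gw=43.8, sigma_d=4.0,
                    cz_obs=3327.0, sigma_cz=72.0, vp_samples=vp_samples)
h0_peak = h0_grid[np.argmax(post)]  # near (3327 - 373) / 43.8
```

Replacing the Gaussian stand-in for `vp_samples` with actual posterior samples of the line-of-sight velocity automatically propagates any non-Gaussianity of the velocity PDF into the H_0 posterior.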

Applying our method to GW170817
Peculiar velocity estimate for NGC 4993

NGC 4993 is the host galaxy of the event GW170817 (the merger of two neutron stars), which was discovered by the LIGO Scientific Collaboration and Virgo Collaboration (LVC; Abbott et al. 2017b,a). The distance to GW170817 inferred from the GW observation is d_L = 43.8^{+2.9}_{−6.9} Mpc (Abbott et al. 2017b,a), and the recession velocity of its host NGC 4993, with respect to the CMB rest frame, is 3327 ± 72 km s^{-1} (Abbott et al. 2017a; Crook et al. 2007). The velocity estimate for NGC 4993 has two components, v_h and v_vir. We estimate the v_h component from borg, and the non-linear part of the velocity is obtained using the form given in Eq. (6). NGC 4993 has a halo of mass about M_h ∼ 10^{12} M_⊙ (Pan et al. 2017; Ebrová & Bílek 2020). The corresponding estimate of σ_vir is about 100 km s^{-1}. The excess velocity component is included as an additional velocity dispersion, assuming a Gaussian PDF with variance σ_vir^2 and zero mean.
The posterior of the peculiar velocity from borg is shown in blue in Fig. 9 for the Planck-2015 best-fit cosmological parameters (Planck Collaboration XIII 2016). The PDF of the velocity field is non-Gaussian. The mean value of the velocity field is v_p = 373 km s−1 and the maximum a posteriori value is v^MP_p = 385 km s−1. The standard deviation of the velocity PDF is σ_h ∼ 82 km s−1. The total standard deviation of the velocity due to both v_h and v_vir is σ_t = (σ²_h + σ²_vir)^{1/2} ∼ 130 km s−1. The corresponding PDF for v_p is shown by the black curve in Fig. 9. The inferred value of the peculiar velocity of NGC 4993 differs from the value assumed by the LVC (Abbott et al. 2017a), who considered a Gaussian velocity distribution with a mean of 310 km s−1 and a standard deviation of 150 km s−1. Our method thus predicts a mean velocity about 24% higher and a standard deviation about 13% smaller. The comparison between the peculiar velocity distributions of the LVC and of our method is shown in red and black, respectively, in Fig. 9.
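The combination of the two velocity components can be sketched as a convolution of the large-scale velocity posterior with the zero-mean virial scatter. In the sketch below the borg posterior is replaced by a Gaussian stand-in with the quoted mean and width (the real PDF is non-Gaussian), so the code only illustrates the quadrature addition of the two dispersions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Gaussian stand-in for the BORG large-scale velocity posterior (the real
# PDF is non-Gaussian); mean and width follow the values quoted for NGC 4993.
v_h = rng.normal(373.0, 82.0, n)      # linear (large-scale) component [km/s]
v_vir = rng.normal(0.0, 100.0, n)     # zero-mean virial scatter [km/s]

v_p = v_h + v_vir                     # convolution by adding samples
sigma_t = v_p.std()                   # ≈ sqrt(82^2 + 100^2) ≈ 129 km/s
```

The recovered total dispersion matches the quadrature sum σ_t = (σ²_h + σ²_vir)^{1/2} ∼ 130 km s−1 quoted in the text.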

6.2. Revised H_0 from GW170817
Using our new, more precise estimate of the peculiar velocity of NGC 4993, we obtain a revised value of H_0 using the Bayesian framework given in Eq. (7), from the data on the luminosity distance and inclination angle, assuming either a high-spin or a low-spin prior on each compact object (Abbott et al. 2019a). Figure 10 shows the corresponding posterior of H_0. The posterior peaks at H_0 = 68.3 km s−1 Mpc−1, instead of 70 +12 −8 km s−1 Mpc−1 from the results of the LVC (Abbott et al. 2017a), with the same 68.3% credible interval as obtained by the LVC. The marginal difference in the shape of the posterior between our method (black line) and the LVC (red dashed line) arises only from the difference in the velocity correction. In addition to calibration noise and peculiar velocity measurements, the main source of error in the measurement of H_0 is the degeneracy between the inclination angle, i, and the distance, d_L (Abbott et al. 2017a). This acts as a limiting factor for the measurement of H_0 from GW170817 in the absence of an independent measurement of the inclination angle. Critically, the addition of Virgo (Acernese et al. 2015) to the two nearly co-aligned LIGO detectors (Abramovici et al. 1992; Martynov et al. 2016) allowed for an improvement in the sky localization of the GW source and offered the possibility of improving the measurement of the inclination angle.
A further improvement in the measurement of H_0 is possible if the uncertainty in the inclination angle, i, can be reduced. A recent study used the observed data of the electromagnetic counterpart to GW170817, namely the superluminal motion measured by Very Long Baseline Interferometry (VLBI) observations (Mooley et al. 2018a) and the afterglow light-curve data (e.g., Mooley et al. 2018b), to constrain the inclination angle of GW170817. Using these constraints on the inclination angle, Hotokezaka et al. (2019) obtained a revised value of H_0. We implement our velocity-field correction method for the combined GW+VLBI measurement by using a flat prior on the inclination angle of 0.25 rad ≤ i (d_L/41 Mpc) ≤ 0.45 rad. The revised H_0 from our method is shown in Fig. 10.

Conclusions and future prospects
Cosmology with gravitational waves is a newly emerging field that holds enormous potential to explore new aspects of the Universe and fundamental physics (Saltas et al.; Belgacem et al. 2018a,b, 2019; Pardo et al. 2018; Abbott et al. 2019b; Mukherjee et al. 2020a,b). One primary science goal is the measurement of the Hubble constant from GW sources when the redshift of the source can be identified from the electromagnetic counterpart or through cross-correlation with galaxy surveys (Oguri 2016; Mukherjee & Wandelt 2018). Using the current GW detectors, such as Advanced LIGO (Martynov et al. 2016) and Advanced Virgo (Acernese et al. 2015), recent forecasts predict the ability to measure H_0 with an accuracy of better than 2% with about 50 binary neutron star (BNS) mergers (Chen et al. 2018; Feeney et al. 2019). To achieve such a precise estimation of H_0, it is essential to accurately correct for the peculiar velocity of the hosts of GW sirens. This is especially the case for very low redshift sources, which will have a high signal-to-noise ratio (S/N) and, hence, contribute significantly to the joint posteriors for ensembles of events. In addition, peculiar velocity corrections will be important for events with a more favorable S/N in Virgo or additional detectors, as well as for neutron star-black hole mergers (Nissanke et al. 2010; Vitale & Chen 2018). The absence of such a correction will affect both the accuracy and precision of the measurement of H_0. For a typical peculiar velocity of ∼300 km s−1 for a GW host at redshift z = 0.01, the peculiar-velocity contribution is a non-negligible fraction (∼10%) of the Hubble-flow term cz, and it becomes less (more) severe at higher (lower) redshift. As a result, we need to estimate the value of the peculiar velocity with sufficient accuracy to avoid any systematic bias and additional variance in the measurement of H_0.
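The size of this effect follows from simple arithmetic: at redshift z the Hubble-flow velocity is cz, so a peculiar velocity of 300 km s−1 is a fractional perturbation of v_p/cz, which maps directly into a fractional shift in a single-event H_0 estimate.

```python
# Fractional contribution of a typical ~300 km/s peculiar velocity to the
# Hubble-flow velocity cz at a few illustrative redshifts.
c = 2.998e5          # speed of light [km/s]
v_pec = 300.0        # typical peculiar velocity [km/s]

frac = {z: v_pec / (c * z) for z in (0.005, 0.01, 0.05)}
for z, f in frac.items():
    print(f"z = {z}: v_p / cz = {f:.1%}")
# ~20% at z = 0.005, ~10% at z = 0.01, ~2% at z = 0.05
```

This makes explicit why the correction matters most for the nearest hosts, where the peculiar velocity can rival the precision target of the whole programme.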
Although averaging over large GW samples can lead to an unbiased estimate of H_0, the absence of a peculiar velocity correction will increase the error budget on H_0 due to the additional variance from the peculiar velocity contribution. In such a case, it is necessary to have a larger number N_gw of GW samples to beat down the variance as N_gw^{-1/2} (Nissanke et al. 2010, 2013a; Chen et al. 2018; Feeney et al. 2019; Mortlock et al. 2019). By incorporating an accurate correction for the peculiar velocity of the hosts of GW sources, we can achieve accurate and precise measurements of H_0 faster and more economically.
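The N_gw^{-1/2} scaling can be turned into a rough event-count estimate. The ∼15% per-event fractional scatter assumed below is purely illustrative (it is not a value from this paper); with it, the naive scaling lands near the ∼50-event forecasts cited above.

```python
import math

# Naive scaling: with an uncorrelated per-event fractional H0 scatter
# sigma_1, the ensemble uncertainty shrinks as sigma_1 / sqrt(N_gw).
sigma_1 = 0.15   # per-event fractional uncertainty on H0 (assumed)
target = 0.02    # target fractional uncertainty on H0

n_needed = math.ceil((sigma_1 / target) ** 2)
print(n_needed)  # 57 events under these illustrative numbers
```

Any extra variance left by an uncorrected peculiar velocity inflates sigma_1 and hence the required number of events quadratically, which is the economic argument for the correction.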
In this paper, we use a statistical reconstruction method to correct for the peculiar velocity of the hosts of GW sources. The peculiar velocity of the host galaxy arises from the gravitational potential of the cosmic density field. We estimate the posterior of the peculiar velocity for both the linear and the non-linear components. The large-scale velocity flow is estimated using the Bayesian framework borg. The stochastic velocity dispersion of the source is obtained using the numerical fitting form given in Eq. (6), based on the mass of the halo of the host. By combining the peculiar velocity estimate for each source, the redshifts, and the luminosity distances inferred from the GW data, we obtain a Bayesian inference of the value of H_0 according to the framework discussed around Eq. (7).
We implemented our method on GW170817 (Abbott et al. 2017b), which resides in the host NGC 4993 (Pan et al. 2017). The corresponding posterior distributions, together with the results from the LVC (Abbott et al. 2017a) and GW+VLBI (Hotokezaka et al. 2019), are shown in Fig. 10 by the solid lines in black and blue, respectively. While our correction marginally reduces the maximum a posteriori value of H_0 to 68.3 +4.6 −4.5 km s−1 Mpc−1 for GW+VLBI, it slightly disfavors very low values of H_0 ≲ 60 km s−1 Mpc−1 compared to the results from the LVC. As we were preparing this manuscript for submission, an analysis was published (Howlett & Davis 2020) that implemented an alternative velocity correction for the host of GW170817 and recovered a value of H_0 = 64.8 +7.3 −7.2 km s−1 Mpc−1. This mean value differs from the value obtained in this work (H_0 = 68.3 +4.6 −4.5 km s−1 Mpc−1) and also from that of Hotokezaka et al. (2019) (H_0 = 70.3 +5.3 −5.0 km s−1 Mpc−1); however, the value of Howlett & Davis (2020) is consistent within the error bars of both estimates. Another recent work (Nicolaou et al. 2020) discussed the possible impact of peculiar velocities on standard siren measurements and obtained a revised value of H_0 = 68.6 +14 −8.5 km s−1 Mpc−1 for the event GW170817. Furthermore, a recent work by Boruah et al. (2020) also discusses the impact of the line-of-sight marginalization for models derived using linear theory applied to spectroscopic data calibrated with supernovae, and for the velocity field derived from distance data obtained with Tully-Fisher or Fundamental Plane methods.
The velocity correction is readily available for GW sources with electromagnetic counterparts (such as binary neutron stars and neutron star-black hole binaries) in the cosmic volumes covered by the 2M++ (Lavaux & Hudson 2011) and SDSS-III/BOSS surveys (Eisenstein et al. 2011; Dawson et al. 2013; Lavaux et al. 2019; see Fig. 7). The 2M++ volume covers galactic latitudes |b| > 10° up to a redshift of z ∼ 0.05; the SDSS-III/BOSS survey spans redshifts of z ∼ 0.2−0.7 for the sky areas (0° < Dec < 60° and 120° < RA < 240°) or (0° < Dec < 30° and |RA| < 30°). We expect that within a year our method will be available for the SDSS-IV/eBOSS survey (Dawson et al. 2016). In the long term, with the availability of nearly full-sky data sets from upcoming cosmology missions such as the Dark Energy Spectroscopic Instrument (DESI, Aghamousa et al. 2016) and the Large Synoptic Survey Telescope (LSST, LSST Science Collaboration 2009), our algorithm will be available for most of the low-redshift GW sources that are to be observed by the currently planned network of ground-based GW detectors (Schutz 2011; Abbott et al. 2018b).