Peering beyond the horizon with standard sirens and redshift drift

An interesting test of the nature of the Universe is to measure the global spatial curvature of the metric in a model-independent way, at a level of |Ω_k| < 10^-4, or, if possible, at the cosmic variance level of the amplitude of the CMB fluctuations, |Ω_k| ≈ 10^-5. A limit of |Ω_k| < 10^-4 would yield stringent tests of several models of inflation. Further, improving the constraint by an order of magnitude would help reduce "model confusion" in standard parameter estimation. Moreover, if the curvature is measured to be at the value of the amplitude of the CMB fluctuations, it would offer a powerful test of the inflationary paradigm and would indicate that our Universe must be significantly larger than the current horizon. On the contrary, in the context of standard inflation, measuring a value above that of the CMB fluctuations would lead us to conclude that the Universe is not much larger than the currently observed horizon; this can also be interpreted as the presence of large fluctuations outside the horizon. However, it has so far proven difficult to find observables that can achieve such a level of accuracy and, most of all, be model-independent. Here we propose a method that can in principle achieve this, by making minimal assumptions and using distance probes that are cosmology-independent: gravitational waves, redshift drift, and cosmic chronometers. We discuss what kind of observations are needed in principle to achieve the desired accuracy.


Introduction
While significant progress (e.g., [1]) has been made in measuring some of the parameters of the standard model of cosmology (ΛCDM), we still lack a physical understanding of what these parameters mean. For example, we have no clue about the nature of the major energy components of the Universe, namely dark matter and dark energy. The situation is even worse, as within the standard model of particle physics we do not even understand why there is matter at all and not only photons; this is the so-called matter-antimatter asymmetry. Clearly, progress needs to be made to understand the physics of the Universe by going beyond simple parameter fitting within a given model.
One route to making progress is to measure properties of the Universe in a model-independent way, as the majority of current parameter determinations from cosmic microwave background (CMB) and large-scale structure (LSS) data assume a cosmological model, which relies on a suite of assumptions. On the theoretical front, there is significant interest in modeling the metric of the Universe without imposing homogeneity and isotropy; one of the pioneering approaches has been the "silent Universes" approach [2,3]. On the experimental side, one attempt was the determination of the standard ruler in a model-independent way [4,5]. In this work, we take this approach one step further and develop a method to measure the spatial curvature of the Universe in a way that is independent of the cosmological model.
While current measurements have established that curvature is dynamically negligible, precise limits, if possible relying on minimal cosmological assumptions, would enable a host of powerful tests of cosmology. Measuring the spatial curvature of the Universe, Ω_k, is a very important challenge of current observational cosmology (e.g., for constraining inflationary models [6–9]).
Precision measurements of spatial curvature can be used to test the cosmological principle of homogeneity and isotropy itself [10]. There has been some recent work (see e.g., [11–13]) proposing model-dependent methods to measure the curvature, and some methods to reduce the measurement's sensitivity to dark energy [14,15]. Measurements of the curvature parameter come from the combination of CMB [16,17] and LSS [13,18,19] data. Current limits on spatial curvature from Planck satellite observations combined with BAO data give an upper limit of |Ω_k| < 5 × 10^-3 [1], while the combination of Planck and BOSS survey data found Ω_k = 0.0010 ± 0.0029 [20]. These constraints are all obtained within the framework of the ΛCDM model (see Ref. [21] for constraints with other cosmological models, yielding a detection of curvature).
It is important to note that, based on statistical arguments, the information coming from joint CMB and future large-scale structure surveys may not allow unambiguous conclusions about cosmology and the geometry of our Universe if the value of the curvature parameter is at or below |Ω_k| ∼ O(10^-4) [11,12,22,23]. Simply fitting the curvature as a parameter in a model-dependent way results in "model confusion", i.e., a Bayesian analysis would wrongly favour a flat Universe in the presence of curvature for models that deviate from the standard ΛCDM. It is this model-dependence of the curvature determination that motivates measuring the curvature of the Universe at the level of primordial curvature perturbations of ∼ 10^-5, e.g., Ref. [24]. The cosmology-independent determination that we advocate here addresses this issue in a complementary and synergistic way.
There are clearly different aims in constraining curvature. One is to improve the current ΛCDM-based determinations of Ω_k below the current value of ∼ 5 × 10^-3 using standard, cosmological-model-dependent approaches; this will not be addressed here. We focus instead on obtaining constraints in a cosmological-model-independent way. In this context there are several target levels for the constraints: confirming the current bounds, improving on them by one or two orders of magnitude, i.e., |Ω_k| < 10^-4, and reaching the cosmic variance floor of |Ω_k| < 10^-5. Constraints at the |Ω_k| < 10^-4 level would offer stringent tests of several models of inflation: while inflation generally predicts the spatial geometry of the Universe to be essentially flat, so that detecting Ω_k ≠ 0 would significantly limit the parameter space of inflationary models, scenarios involving interesting open (see e.g., [25–31]) or closed [6] geometries have been considered. Moreover, a negative curvature arises in false vacuum decay, which occurs naturally in the multiverse [31], while it would require strong fine-tuning in standard cosmological models. Conversely, a measurement of positive curvature would falsify most multiverse models.
Constraints at the |Ω_k| < 10^-5 level would achieve two goals: eliminating model confusion, and offering a powerful test of the inflationary paradigm. If inflation is the correct model to describe the early Universe, then our current observed horizon is just a small patch of a very large (possibly infinite) Universe; hence the curvature of the metric of the Universe should be at the level of the fluctuations, which we know experimentally have amplitude δρ/ρ ∼ 10^-5.
There are two main complications in achieving our goal: first, it is difficult to find observables that are independent of the cosmological model; second, it is difficult to devise a method to measure the curvature at the required precision. Ref. [18] shows the difficulty of obtaining measurements of the curvature that approach the intrinsic limit of |Ω_k| < 10^-5 even within a ΛCDM model, and in [19] it was shown that adding large-scale relativistic effects does not help enough. This paper attempts to take up this challenge and evaluate what kind of observations would be needed to achieve this precision.
As arduous as it may be, a model-independent determination of the value of the curvature will provide a powerful consistency test and would shed light on early Universe physics and what may lie beyond our current horizon.

Method
We start by assuming only homogeneity and isotropy, and hence a Friedmann-Robertson-Walker (FRW) metric; we will explicitly point out when we additionally assume the validity of Einstein's General Relativity (GR). Our considerations and calculations are valid in any metric theory of gravity, as long as homogeneity and isotropy hold, and can easily be generalized to departures from GR. Some considerations are in order here. If the geometry of our Universe deviates slightly from the FLRW geometry, then the definition of global spatial curvature in the FLRW sense does not necessarily hold. One case explored in the literature is that of emerging spatial curvature [32–36], where it is the very non-linear evolution of cosmic structures that causes deviations from the FLRW metric. Since Einstein's equations are non-linear, the evolution of an inhomogeneous non-linear system that is homogeneous and isotropic only in a statistical sense is not the same as the evolution of an exactly homogeneous and isotropic system [37]. A detailed quantitative understanding of this phenomenon requires fully relativistic cosmological simulations. It has been shown [35] that once the evolution of structures enters the non-linear regime, the symmetry between overdensities and underdensities is broken. One can then still measure an effective mean spatial curvature of the Universe, but it slowly drifts towards negative curvature induced by cosmic voids. The emergence of spatial curvature (due to non-linear relativistic corrections) then biases the determination of a global curvature parameter: for our application, the measured curvature parameter, if reinterpreted in an FLRW metric, will be biased, and will have to be interpreted including relativistic corrections due to non-linear cosmic evolution. Keeping all this in mind, we proceed assuming an FRW metric.
The luminosity distance in a FRW metric, as a function of redshift z, the Hubble constant H_0, and the Hubble parameter H(z), is

$$ d_L(z) = \frac{(1+z)}{H_0\sqrt{\Omega_k}}\,\sinh\!\left(\sqrt{\Omega_k}\int_0^z \frac{H_0\,\mathrm{d}z'}{H(z')}\right), \qquad (2.1) $$

written here for Ω_k > 0 in units with c = 1; for Ω_k < 0 the sinh becomes a sin with Ω_k → |Ω_k|, while for Ω_k = 0 it reduces to d_L(z) = (1+z)∫_0^z dz'/H(z'). It is therefore clear that the combination of observational measurements of d_L(z), H_0 and H(z) will provide an estimate of Ω_k (see [4,5]). The complication, however, is to obtain a measurement of Ω_k at the required level of precision; this is discussed later in more detail. Ref. [10] also starts from Eq. (2.1) above, but to test the cosmological principle instead, arguing that in a FRW metric Ω_k should be constant and not depend on redshift; they then proceed by taking redshift derivatives of d_L and H(z) to detect deviations from the null hypothesis, using standard cosmological-model-dependent observable quantities as data. Here we proceed differently. We assume the FRW metric, and we wish to measure the expansion history and luminosity distances using quantities that are not themselves dependent on the cosmological model. The newest development is that it has recently been experimentally demonstrated that a gravitational wave detection with an electromagnetic counterpart provides z and d_L(z) independently of the cosmological model [39–42]. This technique has been named "standard sirens".
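The inversion of Eq. (2.1) can be sketched numerically. The following Python snippet is a minimal illustration: the ΛCDM expansion history at the bottom is a fiducial choice used only to generate mock data, and is not part of the model-independent method itself.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C = 299792.458  # speed of light [km/s]

def comoving_distance(z, hubble):
    """chi(z) = c * integral_0^z dz'/H(z')  [Mpc], for H in km/s/Mpc."""
    return C * quad(lambda zp: 1.0 / hubble(zp), 0.0, z)[0]

def luminosity_distance(z, omega_k, h0, hubble):
    """FLRW luminosity distance, Eq. (2.1), for any sign of the curvature."""
    chi = comoving_distance(z, hubble)
    if abs(omega_k) < 1e-12:          # flat limit
        return (1.0 + z) * chi
    s = np.sqrt(abs(omega_k)) * h0 / C
    if omega_k > 0:                   # open: sinh
        return (1.0 + z) * np.sinh(s * chi) / s
    return (1.0 + z) * np.sin(s * chi) / s  # closed: sin

def infer_omega_k(z, dl_obs, h0, hubble):
    """Solve d_L(z; Omega_k) = dl_obs for Omega_k, given H0 and H(z)."""
    f = lambda ok: luminosity_distance(z, ok, h0, hubble) - dl_obs
    return brentq(f, -0.5, 0.5)       # d_L is monotonic in Omega_k here

# fiducial expansion history, used only to generate a mock data point
H0 = 70.0
hubble = lambda z: H0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)

dl_true = luminosity_distance(2.0, 0.01, H0, hubble)
ok_rec = infer_omega_k(2.0, dl_true, H0, hubble)  # recovers 0.01
```

In practice d_L(z) comes from standard sirens and H(z) from redshift drift or cosmic chronometers, so no cosmological model enters the inversion.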

Observables
We consider two routes for measuring a combination of H_0 and H(z) with high accuracy: cosmic chronometers (CC) and redshift drift; supernovae have been discussed elsewhere [18]. For d_L measurements we will consider standard sirens. The redshift drift Δz, and the corresponding velocity shift Δv, over a time interval Δt_0 for the observer, are related to the Hubble expansion as

$$ \frac{\Delta v}{\Delta t_0} = \frac{c}{1+z}\,\frac{\Delta z}{\Delta t_0} = c\left[H_0 - \frac{H(z)}{1+z}\right], $$

where Δt_0 is usually of the order of tens of years. With an estimate of the uncertainty on Δv over several redshift bins from future surveys, we can obtain forecasts for the uncertainties on H_0 and H(z). Estimates of uncertainties in future redshift drift measurements can be found in e.g. [43,44], which suggest the following formula for the error on the velocity drift (in cm/s):

$$ \sigma_{\Delta v} = 1.35\,\frac{2370}{\mathrm{S/N}}\left(\frac{N_{\rm QSO}}{30}\right)^{-1/2}\left(\frac{1+z_{\rm QSO}}{5}\right)^{-\alpha}\ \mathrm{cm/s}, $$

where N_QSO is the number of observed quasars, S/N is the signal-to-noise ratio of the spectra, and α(z) = 1.7 for z < 4, α(z) = 0.9 beyond that redshift. Taking as an example observations with the E-ELT over a time span of 30 years, we can assume S/N = 3000 for 10 QSOs in each redshift bin of Table 1.
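The drift signal and the error scaling just described can be combined into a small forecasting helper. This is a sketch under the stated assumptions: the unit conversions are standard, the S/N and N_QSO numbers are the E-ELT-like values quoted above, and the ΛCDM H(z) used in testing is purely illustrative.

```python
import math

C_CM_S = 2.99792458e10      # speed of light [cm/s]
KM_PER_MPC = 3.0857e19      # km in one megaparsec
SEC_PER_YR = 3.1557e7

def velocity_drift(z, h0, hubble, delta_t_yr):
    """Sandage-Loeb signal Delta v = c * Delta t0 * [H0 - H(z)/(1+z)],
    with h0 and hubble(z) in km/s/Mpc; returns cm/s."""
    h0_s = h0 / KM_PER_MPC            # convert to 1/s
    hz_s = hubble(z) / KM_PER_MPC
    return C_CM_S * delta_t_yr * SEC_PER_YR * (h0_s - hz_s / (1.0 + z))

def sigma_velocity_drift(snr, n_qso, z):
    """Velocity-drift uncertainty [cm/s], following the scaling of [43,44]."""
    alpha = 1.7 if z < 4.0 else 0.9
    return (1.35 * (2370.0 / snr) * math.sqrt(30.0 / n_qso)
            * ((1.0 + z) / 5.0) ** (-alpha))
```

For S/N = 3000 and 10 QSOs at z = 3 this gives σ_Δv ≈ 2.7 cm/s, to be compared with a 30-year drift signal of roughly −7 cm/s at that redshift for a concordance-like H(z) (negative because H(z)/(1+z) > H_0 there).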
The other alternative for obtaining H_0 and H(z) is to use cosmic chronometers [45–48]. They directly measure the differential age of the Universe, dt/dz, which yields the expansion rate as

$$ H(z) = -\frac{1}{1+z}\,\frac{\mathrm{d}z}{\mathrm{d}t}. $$

Significant work has been done to assess the feasibility of the method, and current measurements of H(z) are at the 6% level [47,48] but are limited by systematic errors. Future measurements from EUCLID [48] will be instrumental in reducing systematic errors and will easily achieve sub-percent statistical precision; further improvements could be achieved with even larger surveys. The advantage of the redshift drift method is that it does not depend on any modelling of the properties of QSOs, while the cosmic chronometer method does depend on modelling of stellar evolution (they are basically atomic clocks). On the other hand, the cosmic chronometer method can collect large (millions of objects) samples of stellar clocks, thus significantly reducing the statistical uncertainties on H(z).
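The chronometer relation H(z) = −(1+z)⁻¹ dz/dt can be illustrated with a toy estimator; the galaxy ages in the example are hypothetical numbers chosen only to show the arithmetic, not real data.

```python
GYR_TO_S = 3.1557e16     # seconds in a gigayear
KM_PER_MPC = 3.0857e19   # km in one megaparsec

def hubble_from_chronometers(z1, z2, t1_gyr, t2_gyr):
    """Estimate H(z) = -(1+z)^(-1) dz/dt from the differential age of two
    passively evolving galaxy populations at nearby redshifts.
    Ages in Gyr; result in km/s/Mpc."""
    z_mid = 0.5 * (z1 + z2)
    # galaxies at higher z are younger, so dt < 0 for dz > 0 and dz/dt < 0
    dz_dt = (z2 - z1) / ((t2_gyr - t1_gyr) * GYR_TO_S)
    h_si = -dz_dt / (1.0 + z_mid)    # [1/s]
    return h_si * KM_PER_MPC
```

For instance, an age difference of about 0.14 Gyr between populations at z = 0.49 and z = 0.51 corresponds to H(0.5) ≈ 92 km/s/Mpc, in line with a concordance-like expansion history.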
In what follows, we consider both the redshift drift error and a 1% error on H(z), in 8 redshift bins up to z = 5 (see Table 1). We will then compute what error on H(z) and d_L(z) is needed to achieve a target error on Ω_k.
In our proposed approach, luminosity distance estimates and their errors come from gravitational wave (GW) measurements with an optical counterpart. The recent discovery of the neutron star-neutron star merger by the LIGO-Virgo collaboration [40–42] shows that it is feasible to measure the luminosity distance d_L even for a single object, with error bars highly competitive with, e.g., the standard candles (supernovae) approach. We propose to combine two measurements, in what is known as the standard sirens approach [39]: the luminosity distance d_L from gravitational waves, and the redshift from the optical counterpart (we will assume GWs from mergers with an EM counterpart).
Even with planned GW detectors, it will be possible to measure d L (z) at 1% [49]. Moreover, lensed GWs have been shown to be even more powerful, allowing a 0.7% measurement of H 0 with only 10 objects [50] (but assuming flatness).
To measure d_L using GWs, we assume 8 redshift bins in the range 0 < z < 5. For this purpose we assume a futuristic instrumental setup based on the Big Bang Observer [51]; errors on d_L are given in Table 1 and, given the current uncertainties in key quantities such as the experimental setup and expected number of sources, should be taken as order-of-magnitude estimates. In a futuristic outlook, they could even be improved (see e.g., [49]).

Observational issues
There are, however, three complications: a) local peculiar velocities (also called redshift-space distortions, RSD [52]); b) gravitational lensing [53,54]; c) cosmological perturbations that modify the GW wavefront, giving rise to an additional uncertainty on the determination of d_L [55] (these include integrated Sachs-Wolfe and Doppler effects). Regarding point (a), on average there will be a peculiar velocity effect of a few × 100 km/s; on large scales these are bulk flows generated by linear RSD (a.k.a. the Kaiser effect [52]), while on small scales peculiar velocities are local and randomized in direction (a.k.a. fingers-of-God). The effect of large-scale flows could in principle be reduced or removed by having a template of the density field and assuming GR to predict the peculiar velocity field from it; it will also average out if enough independent "patches" are observed. The random component is expected to average to zero when a large number of GWs are detected, especially if not too nearby and spread over large patches of the sky (for a review see e.g., [56] and references therein). A linear-order estimate indicates a relative distance error of ∼ 10^-3 per object; hence, especially at small scales, it quickly averages out. This effect is common to supernova determinations of luminosity distances, but there it is highly subdominant to the other effects that contribute to the scatter of the supernova-based luminosity-distance relation. We will therefore neglect this contribution in the following. Effects of peculiar velocities on the template of the GW signal were discussed e.g., in [57]; we assume these effects can also be corrected for.
We refer to the effect of perturbations described in points (b) and (c) as "projection effects". In Table 1 we also report the expected relative error per redshift bin due to these effects, following Ref. [55]. The magnitude of this effect is highly uncertain because the GW equivalent of magnification bias is poorly known. We assume that this error also behaves as a statistical error rather than a systematic one: it does not yield a systematic error floor but an extra source of scatter, which gets reduced by averaging the signal from sources widely separated in the sky. Thus this error is summed with the standard observational one, which we refer to as "instrument". It is important to note that in the redshift range of interest the dominant term in the projection effects is the one due to lensing convergence (see [55]). The magnitude of this contribution depends critically on the equivalent luminosity function of the GW signal, in analogy to the effect of cosmic magnification at optical wavelengths, which depends crucially on the slope of the source galaxy luminosity function. In Table 1 we report the additional projection errors calculated when the GW-equivalent magnification bias is set to s = 0 (recall that s = 0.4 would cancel lensing convergence effects), therefore giving quite pessimistic results. There are in principle two ways to minimize this source of error: i) selecting a sub-sample of the sources (as suggested in [58]) so that the source luminosity function has a magnification bias that cancels convergence effects (as the velocity and ISW-like contributions are almost negligible), or ii) performing a "de-projection" of the observed map, in the same way CMB maps are now routinely de-lensed [59–62], given an available 3D galaxy density map and a bias estimate.
In the same spirit, it is in principle possible to "de-RSD" our observed map, and hence completely correct the estimate of d_L for local and perturbation-induced modifications.
For this reason, in the following we will consider two cases, one where the projection effect is fully present and one where it is subdominant compared to the statistical (instrument) errors and therefore is neglected (see Table 1). These should be considered as two (pessimistic and optimistic) limiting cases.

Forecasts
In this section we first forecast the achievable error on Ω k given the experimental set up of Table 1.
We then investigate what kind of observations and what level of uncertainty are needed to measure the curvature at the cosmic variance level of 10 −5 with the method proposed in the previous section. We finally reflect on whether this is achievable at least in principle or if there is a fundamental limitation that prevents it.
In interpreting our results, one should keep in mind that for this analytical forecast the additional scatter on the determination of d_L due to lensing magnification has been modelled as Gaussian. In reality it is well known that the distribution is non-Gaussian, but its shape can be well approximated by a known analytic formula, see e.g., [38]. In any practical application this will have to be correctly modelled to avoid important biases. With this caveat in mind, for this initial error estimate we still use the Gaussian approximation.

Analytic considerations
It is worth noting that only the luminosity distance carries information on geometry and thus curvature. Let us define the comoving distance as

$$ \chi(z) = \int_0^z \frac{\mathrm{d}z'}{H(z')}. $$

With this definition the luminosity distance is related to χ by

$$ d_L(z) = \frac{(1+z)}{\sqrt{\Omega_k}\,H_0}\,\sinh\!\left(\sqrt{\Omega_k}\,H_0\,\chi(z)\right), \qquad (3.1) $$

written for Ω_k > 0, with sinh → sin and Ω_k → |Ω_k| for Ω_k < 0. Recall that a Taylor expansion of sin(x) around zero yields sin(x) ≈ x − x³/6, and for sinh(x) we have sinh(x) ≈ x + x³/6. Hence we obtain, to leading order,

$$ d_L(z) \simeq (1+z)\,\chi(z)\left[1 + \frac{\Omega_k H_0^2 \chi(z)^2}{6}\right]. \qquad (3.2) $$

Therefore, assuming negligible uncertainty on χ(z), we can evaluate the variation of d_L due to a variation in Ω_k, and vice versa. Figure 1 shows the variation in Ω_k per relative variation in d_L as a function of redshift or, to be more explicit, (∂ log d_L/∂Ω_k)^-1 computed from the expression in Eq. (3.2). It is easy to see why it is so challenging to reduce the error on the curvature below 10^-3: even assuming a perfect reconstruction of the expansion history (H(z), and hence χ(z)), the error on Ω_k in the redshift range of interest is always larger than that on d_L. But while d_L changes with redshift, Ω_k does not. This indicates that many high-precision measurements of d_L at different redshifts are needed and that, for the same precision on d_L, higher redshifts are better suited to measure Ω_k.
As can be seen from Figure 1, at low redshift we need a relative measurement of d_L with precision better than 10^-4 to obtain constraints on the curvature better than 10^-3, while at high redshift the ratio between these two errors tends to ∼ 2. As discussed in Section 2.2, while at high redshift one expects to find more sources and thus reduce the d_L "instrumental" error, the scatter in d_L due to projection effects increases. We have argued that projection effects could, at least in principle, be removed (or made subdominant) either at the expense of reducing the number of sources (tailoring the sample to have a magnification bias that cancels the effect) or by a reconstruction effort. This consideration indicates that there may be a trade-off between target redshift range and number of sources that minimises the combination of the instrumental and projection-effect errors, optimising observational efforts. This will be discussed elsewhere.

Figure 1. Response of the inferred Ω_k value to a variation in Δd_L/d_L ≡ Δ log d_L as a function of redshift (assuming that the expansion history of the Universe is fixed and known with negligible errors). This has been computed using Eq. (3.2) for a fiducial concordance ΛCDM cosmology.
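The response plotted in Figure 1 can be reproduced at leading order directly from Eq. (3.2). The sketch below evaluates (∂ log d_L/∂Ω_k)⁻¹ at Ω_k = 0; the ΛCDM expansion history is a fiducial choice for illustration only.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458  # speed of light [km/s]

def curvature_response(z, h0, hubble):
    """(d log dL / d Omega_k)^(-1) at Omega_k = 0, from the leading-order
    expansion dL ~ (1+z) chi [1 + Omega_k (H0 chi / c)^2 / 6], Eq. (3.2)."""
    chi = C * quad(lambda zp: 1.0 / hubble(zp), 0.0, z)[0]  # [Mpc]
    return 6.0 * (C / (h0 * chi)) ** 2

# fiducial concordance cosmology (illustrative only)
H0 = 70.0
hubble = lambda z: H0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)
```

For this fiducial history the response is of order several hundred at z ≈ 0.1 (a fractional d_L error maps into a much larger Ω_k error) and falls to ∼ 2 by z = 5, matching the trend described above.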

Uncertainty estimates
We use a Monte Carlo numerical method to invert Eq. (2.1) and compute the expected errors on Ω_k, using the errors on the observables in Table 1. We also include a prior on H_0, with the error reported in Table 2. As discussed above, it is possible to use either cosmic chronometers or redshift drift to obtain a cosmological-model-independent measurement of H(z). The results are reported in Table 2, with and without projection effects, as explained in Section 2.2. Table 2 shows that errors of order σ(Ω_k) ∼ 10^-2 can be achieved with a setup similar to that of Table 1. The error on the curvature is dominated by the uncertainties on d_L. A moderate improvement in d_L (an order of magnitude) is required to reach the level σ(Ω_k) ∼ 10^-3; to reach σ(Ω_k) ∼ 10^-4, two orders of magnitude improvement in distance measurements is required.
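A compact version of such a Monte Carlo inversion can be sketched as follows. This is a simplified stand-in for the full analysis: the fiducial ΛCDM history only generates mock data, the 1% fractional errors mimic the Table 1 setup, and the χ⁴ weighting of the bins (down-weighting low redshifts, where the curvature response is poor) is our own simplifying choice for combining per-bin estimates.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

C = 299792.458  # km/s
H0, OM = 70.0, 0.3  # fiducial values, used only to generate mock data
hubble = lambda z: H0 * np.sqrt(OM * (1 + z) ** 3 + 1 - OM)

def dl(z, ok, chi):
    """Luminosity distance given Omega_k and comoving distance chi, Eq. (2.1)."""
    if abs(ok) < 1e-12:
        return (1 + z) * chi
    s = np.sqrt(abs(ok)) * H0 / C
    curved = np.sinh(s * chi) if ok > 0 else np.sin(s * chi)
    return (1 + z) * curved / s

def forecast_sigma_ok(z_bins, sig_dl, sig_h, n_mc=1000, seed=42):
    """Scatter dL and chi by their fractional errors, re-solve Eq. (2.1) for
    Omega_k in every realization, and return the scatter of the combined
    estimate (flat fiducial, so the truth is Omega_k = 0)."""
    rng = np.random.default_rng(seed)
    chi_t = np.array([C * quad(lambda zp: 1 / hubble(zp), 0, z)[0]
                      for z in z_bins])
    dl_t = np.array([dl(z, 0.0, c_) for z, c_ in zip(z_bins, chi_t)])
    w = chi_t ** 4  # per-bin error scales roughly as 1/chi^2
    est = []
    for _ in range(n_mc):
        chi_o = chi_t * (1 + sig_h * rng.standard_normal(len(z_bins)))
        dl_o = dl_t * (1 + sig_dl * rng.standard_normal(len(z_bins)))
        ok = [brentq(lambda k: dl(z, k, c_) - d_, -2, 2)
              for z, c_, d_ in zip(z_bins, chi_o, dl_o)]
        est.append(np.average(ok, weights=w))
    return np.std(est)

# 8 bins up to z = 5 with 1% errors on dL and H(z), as in the text
sigma = forecast_sigma_ok(np.linspace(0.6, 5.0, 8), 0.01, 0.01)
```

With these inputs the recovered scatter is of order σ(Ω_k) ∼ 10^-2, in line with the Table 2 numbers, and it scales roughly linearly with the assumed fractional errors.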
We have shown above that planned experiments will be able to provide cosmological-model-independent measurements of the spatial curvature at the level σ(Ω_k) ∼ O(10^-2). A moderate improvement of such experiments could allow reaching levels of σ(Ω_k) ∼ 10^-3, which would confirm the current conclusions on the level of spatial flatness coming from cosmological-model-dependent measurements. A more significant improvement could allow reaching levels of σ(Ω_k) ∼ 10^-4, which would represent a considerable advancement in the testing of primordial Universe models.
The next step is trying to measure the curvature at the level of the CMB fluctuations, thereby obtaining information on global properties of the Universe in the context of standard inflation. The question thus becomes: what is needed to obtain σ(Ω_k) ≲ 10^-5, comparable to the cosmic variance limit of ≈ 1.5 × 10^-5 [24]? From Table 2 we can see that, with our assumed setup, one could measure the curvature at the perturbation level only by improving the errors on the luminosity distance and the Hubble parameter by factors of 1000 and 10, respectively. This might seem unfeasible if we were simply to scale the errors with the square root of the number of sources; however, error reduction need not come from source statistics alone, e.g., through improvements in the measurement of the orientation of the GW source via multiple GW observatories.

Table 2. Constraints on Ω_k for different assumptions on the uncertainties of the observables, as in Table 1.
We explore what reduction of the uncertainties on d_L and on the expansion history is needed to achieve constraints at the level Ω_k ∼ 10^-5. The "−" in the case with projection effects indicates that reducing the other errors is irrelevant, since this term already dominates.
Our results provide a clear target for what would, in principle, be the requirements needed to obtain such measurements; whether this is achievable in practice will need to be evaluated. In the next, final section, we discuss the implications of having such knowledge of the spatial curvature. The calculation presented here should be considered a proof of principle: the experimental errors required to reach the cosmic variance level for the curvature parameter correspond to a likely idealised case, even if the LISA mission may reach this level of accuracy. In any case, our results can be used as guidance to strengthen the science case for future experiments.

Discussions and Conclusions
In this paper we have proposed a method to measure the global spatial curvature of the Universe independently of the cosmological model. We have shown that a combination of gravitational wave observations (standard sirens) with redshift drift or cosmic chronometer measurements of the expansion history could provide enough accuracy to constrain |Ω_k| below 10^-3, with only moderate improvements on the specifications of currently planned experiments. While the current limit on the curvature provided by the Planck satellite in combination with the BOSS survey is |Ω_k| < 5 × 10^-3 [1], this is obtained within the framework of a cosmological model (ΛCDM); in contrast, our method is independent of the cosmological model. Finally, we argue that with a more futuristic setup it is, in principle, possible to reach a precision in the measurement of the spatial curvature that approaches the limit imposed by primordial fluctuations; a measurement of Ω_k using the method proposed here would therefore provide clues on the size of our Universe and the amount of fluctuations beyond our current horizon, an important and unprobed source of information on the origin of the Universe.
One of the most appealing features of measurements of Ω_k is that the spatial curvature lets information leak in from outside our observable patch; in a sense, it is the window that allows us to peek outside the in-principle observable Universe.
We point out that a measured value of Ω_k > 10^-5 would allow us to speculate on the size of the entire Universe only in the context of standard inflationary scenarios: it is in fact possible to imagine situations in which the spatial curvature is large even though the Universe is considerably larger than the observable patch. If, e.g., the initial curvature were very high, with a suitable number of e-foldings the Universe could be very large even with a value of Ω_k > 10^-5.
Finally, it is interesting to note an often overlooked point: the very concept of cosmic variance is tied to the "fair sample hypothesis" [63], which involves ergodicity and the assumption that the Universe is homogeneous and isotropic.
One could be tempted to state that cosmic variance is a byproduct of inflation; however, this is not true because, at its very core, cosmic variance arises from the assumption that our observable patch of the Universe is a single realization of some stochastic process [64], hence we need to treat our patch as part of a larger ensemble. This idea was in fact introduced in the 1960s, well before inflation was theorized. While it is true that with inflation there are N (i.e., a large number of) realizations that exist in N spatial patches, and we live in one of them (hence the statistical ensemble), even without inflation, under the fair sample hypothesis, our Universe is one particular realization (even in the purely hypothetical case where there is only one patch and nothing outside) out of N possible Universes that could have been created; for an early discussion of primordial fluctuations well before any inflationary models, see Ref. [65].
However, if the entire Universe (i.e., a patch at least much larger than the currently observable Universe) emerged from some deterministic process, or if there is some underlying physical law imposing fixed perturbations (of a certain amplitude), then there is no intrinsic cosmic variance as commonly defined. A careful investigation of which underlying physical models might satisfy this condition is beyond the scope of this paper; while we do not necessarily endorse this possibility, given our attempt to be as model-independent as possible, it makes sense to take into account the possibility that cosmic variance is not an obligatory feature of our Universe.