Erratum: Constraints on deviations from ΛCDM within Horndeski gravity

JCAP06(2016)E01
In the paper "Constraints on deviations from ΛCDM within Horndeski gravity" some of the runs were done with standard numerical precision. However, we have since realised that, with increased numerical precision, some of the results change quantitatively, though not qualitatively. The conclusions of the paper are unchanged. Here we report the plots and results for runs performed with increased precision. We include additional discussion only where there are changes with respect to the published paper, indicating the relevant section.
To understand why we need increased precision, it is important to note that in general Dark Energy/Modified Gravity models the additional degree of freedom may have non-trivial dynamics. Indeed, we noticed that the perturbations of the scalar field can undergo rapid oscillations that potentially affect the observable properties of the universe. These rapid oscillations occur especially on large scales, i.e. on scales that crossed the cosmological horizon only recently, and they have a timescale shorter than any other cosmological timescale. The precision parameters used in standard CLASS (the ones we used in the previous version of the paper) have been tuned to correctly integrate the evolution of the perturbations assuming the usual timescales of a ΛCDM universe. The general idea of this improvement is therefore to modify the precision parameters that regulate the integration step of the perturbations, together with some other parameters that increase the accuracy of the results on large scales. For completeness, we report the list of all the parameters we modified together with the new values we assigned (default values are reported in parentheses):
• perturb_sampling_stepsize = 0.05 (0.10). Factor multiplied by the smallest timescale of the universe to obtain the integration step of the perturbations;
• start_small_k_at_tau_c_over_tau_h = 1e-4 (0.0015). Factor that ensures that the largest wavelengths start being sampled when the universe is sufficiently opaque. Decrease to start earlier in time;
• start_large_k_at_tau_h_over_tau_k = 1e-4 (0.07). Factor that ensures that the shortest wavelengths (largest k) start being sampled when the mode is sufficiently outside the Hubble scale. Decrease to start earlier in time;
• l_logstep = 1.045 (1.12). Maximum logarithmic spacing of the values of ℓ at which Bessel and transfer functions are sampled (beyond some point the spacing becomes linear instead of logarithmic);
• l_linstep = 50 (40). Maximum linear spacing of the values of ℓ at which Bessel and transfer functions are sampled.
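Collected in one place, the settings above can be passed to CLASS/hi_class as a precision file alongside the usual parameter file. The file name below is of course arbitrary; this fragment is simply a convenience restatement of the values listed above, not an official release file:

```ini
# increased_precision.pre -- precision settings used in this erratum
# (run as, e.g.: ./class params.ini increased_precision.pre)

# integration step of the perturbations (default 0.10)
perturb_sampling_stepsize = 0.05

# start integrating large-scale modes earlier in time (default 0.0015)
start_small_k_at_tau_c_over_tau_h = 1.e-4

# start integrating small-scale modes earlier in time (default 0.07)
start_large_k_at_tau_h_over_tau_k = 1.e-4

# denser sampling in l of the Bessel and transfer functions
# (defaults 1.12 and 40)
l_logstep = 1.045
l_linstep = 50
```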
We also checked that a further tightening of these precision parameters does not affect the final results. For future work, we recommend that users of hi_class adopt the new precision parameters rather than the default ones implemented in CLASS. In the public release of hi_class [1] the default precision parameters have been modified to match this improved version.
In section 3, discussion of table 5. The MCMC procedure is not optimised to find the best-fit model that maximises the likelihood; therefore there is an intrinsic error associated with these numbers, which has been estimated to be ∼ 0.7 [2]. Compared to the ΛCDM model, we find that the improvement in the fit to cosmological data due to the extra degrees of freedom provided by the Horndeski parameters is not significant in most cases. A possible exception is the inclusion of RSD, where the improvement in log-likelihood is ∼ 4, but at the "cost" of three extra degrees of freedom. This suggests that the deviations found in our datasets are still consistent with a fluctuation within the ΛCDM scenario, even though (remarkably) the posterior distributions of the MG coefficients presented in table 4 are not always consistent with zero. We have checked that the effect of the RSD is not driven by the data point with the smallest error bars (the BOSS measurement at z = 0.57; 8th entry in table 3).

Table 4. Constraints on the coefficients c_B, c_M, and c_T from different cosmological dataset combinations and for different values of c_K. Quoted limits are 95% CL. A hard prior c_T > −0.9 is applied, as well as a prior −2 < c_M < +2 that has become relevant in some cases.
In section 3, discussion of table 6. In table 6 we report the Bayes factor of the ΛCDM to MG models, computed following [2]; we use a slightly modified version of the Jeffreys scale to interpret the evidence ratios. The Bayes factor either favours the simpler ΛCDM model or does not decide between the two cases, but in no case does it prefer the more complex model.

Table 5. Absolute value of the log-likelihoods (i.e. χ²/2) at the best-fit point from the individual data that comprise each dataset combination explored in our analysis. The column labelled Total displays the maximum likelihood value in the chain. The last column shows the difference in log-likelihood with respect to the ΛCDM model. Red (negative) numbers represent a worse fit, black (positive) numbers a better fit. Given the intrinsic uncertainty of the MCMC in determining the best likelihood value, the improvement in χ² offered by the more complex model is in most cases not significant.

In section 3, discussion of figure 4. The revised constraints on the c_T parameter turn out to be slightly tighter, which has some implications for the toy model discussed in this section. Whereas the n = 1/3 value was previously marginally inside the 99.7% C.L. region, this is no longer true, although by a small margin. This implies that all three cases we studied for this toy model are put under pressure by current cosmological data.
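As a minimal sketch of how evidence ratios of this kind are interpreted: the thresholds below are the conventional Jeffreys-scale boundaries in units of ln B (as popularised by Kass & Raftery), not the slightly modified version used in the paper, whose exact boundaries are not restated in this erratum. The function name and category labels are illustrative only.

```python
import math

def jeffreys_category(bayes_factor):
    """Interpret a Bayes factor B (model 1 over model 2) on the
    conventional Jeffreys scale, working in units of ln B.

    These thresholds are the standard ones; the paper itself uses a
    slightly modified version of the scale.
    """
    ln_b = abs(math.log(bayes_factor))  # symmetric: only strength matters
    if ln_b < 1.0:
        return "inconclusive"
    elif ln_b < 3.0:
        return "positive evidence"
    elif ln_b < 5.0:
        return "strong evidence"
    else:
        return "very strong evidence"
```

For instance, a Bayes factor close to 1 does not decide between the two models, while |ln B| of a few would already count as meaningful evidence for the favoured (here, the simpler ΛCDM) model.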