L Sudharani, N S Kavya, V Venkatesha, Unveiling the effects of coupling extended Proca-Nuevo gravity on cosmic expansion with recent observations, Monthly Notices of the Royal Astronomical Society, Volume 535, Issue 2, December 2024, Pages 1998–2008, https://doi.org/10.1093/mnras/stae2472
ABSTRACT
We study Coupling Extended Proca-Nuevo gravity, a non-linear theory of a massive spin-1 field that extends dRGT massive gravity. This theory is shown to yield reliable, ghost-free cosmological solutions, modelling both the Universe's thermal history and its late-time acceleration. By analysing data from the Dark Energy Spectroscopic Instrument (DESI), cosmic chronometers (CCh), gamma-ray bursts (GRBs), and Type Ia supernovae (SNeIa), we derive parameter constraints at up to the 3|$\sigma$| confidence level, demonstrating good agreement with observations. Our comparison of BAO data from WiggleZ and DESI highlights the constraining power of DESI on the Hubble constant. The analysis of the cosmographic (deceleration) parameter q shows statistical compatibility with the recent data and indicates that the Universe's current accelerated expansion aligns with quintessential behaviour.
1 INTRODUCTION
A central topic in modern cosmology is the Universe's accelerated expansion, which poses a challenge to our knowledge of fundamental physics. This phenomenon, first identified in the late twentieth century through observations of distant supernovae, implies that the Universe is expanding ever more quickly with time rather than decelerating as was long expected. The simplest explanation is the cosmological constant, and the resulting concordance model of cosmology is highly effective in explaining the evolution of the Universe at both the background and perturbation levels. It is based on general relativity (GR) with a cosmological constant, on the particles of the standard model, and on cold dark matter. Two key approaches for constructing extended scenarios emerged from the associated challenges, namely the quantum-field-theoretical calculation of the cosmological constant's value and the possibility of a dynamical nature. The first keeps GR as the fundamental theory of gravity while introducing novel and exotic forms of matter that make up the idea of dark energy (Copeland, Sami & Tsujikawa 2006; Cai et al. 2010). The second involves constructing extended or modified theories of gravity that, despite having GR as a low-energy limit, generally offer the additional degrees of freedom necessary to drive the accelerated expansion of the dynamic Universe (Capozziello & De Laurentis 2011).
Nevertheless, according to recent observations of various origins, ΛCDM predictions seem to be in tension with the data. For instance, the |$H_0$| tension is observed between two measurements: one from the cosmic microwave background (CMB) temperature and polarization data by the Planck Collaboration (Aghanim et al. 2020), which reports |$H_0 = 67.37 \pm 0.54$| km s−1 Mpc−1, and another from local measurements by the Hubble Space Telescope (Riess et al. 2019), yielding |$H_0 = 74.03 \pm 1.42$| km s−1 Mpc−1. Recent analyses combining gravitational lensing and time-delay measurements have reported a significant deviation at |$5.3\sigma$| (Wong et al. 2020). A further possible source of conflict relates to the parameter |$\sigma _8$|, which quantifies the gravitational clustering of matter at a scale of |$8\,h^ {-1}\,{\rm Mpc}$| through the amplitude of the linearly evolved power spectrum (Di Valentino et al. 2021). From the theoretical point of view, |$\Lambda$|CDM also faces the cosmological constant problem, and GR itself does not admit a straightforward quantum description since it is non-renormalizable. As a result, a lot of effort has gone into developing gravitational modifications, that is, theories that offer both theoretical and phenomenological benefits while maintaining GR as a limit.
A primary subset of modified gravity theories arises by extending the Einstein–Hilbert Lagrangian with additional terms. This approach leads to a variety of formulations, including |$f(R)$|, |$f(P)$|, |$f(Q)$|, |$f(T)$|, |$f(G)$|, and Lovelock gravity theories (see Lovelock 1971; Starobinsky 1980; Cai et al. 2016; Erices, Papantonopoulos & Saridakis 2019; Heisenberg 2024). The scientific community is especially engaged with these classes of gravitational theories since they all display rich cosmological features (Bamba et al. 2012; Skugoreva, Saridakis & Toporensky 2015; Pan, Yang & Paliathanasis 2020; Vagnozzi 2020; Ilyas et al. 2021; Naik et al. 2023; Kavya et al. 2024; Mishra et al. 2024; Sudharani et al. 2024b). Considering the graviton to be massive gives rise to an interesting subclass of modified gravity. Drawing from the framework of massive gravity, researchers proposed the generalized Proca action for a vector field that includes derivative self-interactions, which leads to a theory with only three propagating degrees of freedom. This theory, outlined in Allys, Peter & Rodriguez (2016) and De Felice et al. (2016a), provides a consistent, local description of a massive vector field free from ghost-like instabilities. Consequently, its cosmological solutions were examined at both the background and perturbation levels (De Felice et al. 2016a). The generalized Proca theory has demonstrated interesting cosmological phenomenology (Heisenberg, Kase & Tsujikawa 2016; Minamitsuji 2016; De Felice et al. 2016b; Beltran Jimenez & Heisenberg 2017; de Felice, Heisenberg & Tsujikawa 2017; De Felice et al. 2020; Geng et al. 2021). Proca-Nuevo (PN) theory is a recent variant of Proca theory introduced by de Rham & Pozsgay (2020). By incorporating non-linear interactions for a massive spin-1 field, this theory expands on the conventional Proca framework while preserving crucial consistency requirements. When coupled with gravity, PN theory can be extended to models providing consistent and ghost-free cosmological solutions. These models accommodate, in particular, a hot Big Bang thermal history and a late-time self-accelerating phase. In certain variants of this theory, perturbations propagate at the speed of light and satisfy all stability and subluminality conditions. The theory's constraint structure has been thoroughly analysed, and further cosmological solutions have been investigated in Gupta, Saridakis & Sen (2009), Anagnostopoulos, Basilakos & Saridakis (2021), Saridakis (2021), de Rham et al. (2022a, 2023), Anagnostopoulos & Saridakis (2024), and Errasti Díez, Maier & Méndez-Zavaleta (2024). Additionally, studies on the quantum stability of PN interactions (de Rham et al. 2022b) show that the quantum behaviours of PN and generalized Proca theories are comparable, indicating that they might be particular instances of a larger theoretical framework.
In this paper, we use several data sets to investigate covariant PN theory in a cosmological setting. A detailed geometric formulation of extended PN theory coupled with gravity, together with its cosmological background, is given in Section 2. We present the data sets utilized in Section 3, and the methodology is described in Section 4. Section 5 discusses the outcomes, and Section 6 concludes and offers some thoughts for the future.
2 GEOMETRIC FORMULATION OF COUPLING EXTENDED PROCA-NUEVO WITH GRAVITY
2.1 Assessment of Proca-Nuevo and extended Proca-Nuevo theory
Let us consider a vector field |$\mathcal {V}_{\mu }$| on a flat space–time with the Minkowski metric |$\eta _{\mu \nu }$|. The helicity decomposition of massive gravity provides the insight used in the construction of PN theory. We begin with the equation
where |$\Lambda$| is an energy scale that determines the strength of the vector self-interactions. In analogy with the Stückelberg metric of massive gravity, we must keep in mind that, at this stage, gravity is ignored and therefore |$f_{\mu \nu }$| is merely a Lorentz tensor. For later convenience, we present
Therefore, |$f_{\mu \nu }$| can be written as follows
Although it might initially seem that the dependence of |$\phi ^r$| on the coordinates |$x^r$| indicates a possible violation of Poincaré invariance, the actual quantity used as a fundamental component in the Lagrangian is |$f_{\mu \nu }$|.
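For orientation, and assuming the conventions of de Rham & Pozsgay (2020) for the precise normalizations, the flat space–time building blocks introduced above take the schematic form
$$\begin{eqnarray} \phi ^a = x^a + \frac{\mathcal {V}^a}{\Lambda ^2}, \qquad f_{\mu \nu }[\mathcal {V}] = \eta _{ab}\, \partial _\mu \phi ^a\, \partial _\nu \phi ^b = \eta _{\mu \nu } + 2\, \frac{\partial _{(\mu } \mathcal {V}_{\nu)}}{\Lambda ^2} + \frac{\partial _\mu \mathcal {V}^a\, \partial _\nu \mathcal {V}_a}{\Lambda ^4}, \end{eqnarray}$$
so that |$f_{\mu \nu }$| depends on the coordinates only through derivatives of the vector field.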
In addition, we are going to introduce the tensor
and |$\mathcal {X}^{\mu }_{\nu }[\mathcal {V}] = \left(\sqrt{\eta ^{-1} f[\mathcal {V}]} \right)^{\mu }_ {\nu }$|. In four dimensions, the PN theory for the vector field |$\mathcal {V_\mu }$| is then expressed as (de Rham, Gabadadze & Tolley 2011a, b; Ondo & Tolley 2013; de Rham 2014; de Rham & Pozsgay 2020)
Now, the nth-order PN term is defined as
In equation (5), each PN term is multiplied by a coefficient |$\alpha _n(\mathcal {X})$|, which is an arbitrary function of the quantity defined in the following equation
In our analysis, X, |$\alpha _n$|, and |$\mathcal {L}_n$| are dimensionless quantities. Notice that |$\mathcal {L}_{0}$| is simply a constant, which means that the expression |$\alpha _0(X) \mathcal {L}_0 \equiv V(\mathcal {V}^\mu \mathcal {V}_\mu)$| represents the usual potential for the vector field. To ensure that the trivial vacuum state |$\langle \mathcal {V}^\mu \rangle = 0$| is physically consistent, |$\alpha _{0}$| must include a non-zero quadratic term, specifically, |$\alpha _0 \supseteq -\frac{1}{2} \left(\frac{m^2}{\Lambda ^4} \right) \mathcal {V}^\mu \mathcal {V}_\mu .$| PN and GP are two distinct ghost-free theories of a massive vector field, and an interesting question is whether a consistent extension exists that encompasses both of them. These models implement the Proca constraint differently, as observed in the null eigenvector (NEV) of their Hessian matrices. In GP theory, the NEV is (1, |$\tilde{0}$|), indicating that the component |$V_0$| of the vector field is non-dynamical, just as in the linear theory. Conversely, PN theory uses a field-dependent NEV, as demonstrated in de Rham & Pozsgay (2020), so the constraint on the vector field is managed differently, as expressed below
The non-perturbative normalized time-like NEV of the PN Lagrangian, denoted by |$V^{\rm PN}_{a}$|, satisfies the following conditions:
- |$\mathcal {H}^{ab} V^{\rm PN}_{a} = 0$|, where |$\mathcal {H}^{ab}$| is the Hessian matrix of time derivatives defined by$$\begin{eqnarray} \mathcal {H}^{ab} = \frac{\partial ^2 \mathcal {L}_{\rm PN}}{\partial \dot{\mathcal {V}}_{a} \partial \dot{\mathcal {V}}_{b}}. \end{eqnarray}$$
- The normalization condition$$\begin{eqnarray} \eta _{ab} V^{\rm PN}_{a} V^{\rm PN}_{b} = -1, \end{eqnarray}$$
where |$\eta _{ab}$| is the metric tensor.
The PN model is known for its connection to massive gravity, but it is also flexible enough to support additional interactions: any operator that leaves the Hessian invariant can be included without affecting the NEV form. In four dimensions, there are exactly five such operators involving the tensor |$\partial _\mu \mathcal {V}_\nu$|, denoted |$d_n(X) \mathcal {L}_n[\partial \mathcal {V}]$|, as dictated by the structure of the Hessian matrix. These |$\mathcal {L}_n[\partial \mathcal {V}]$| operators are trivial in and of themselves, but when added with a field-dependent coefficient they can introduce significant effects while remaining trivial in terms of the Hessian; these operators give rise to the derivative interactions of the GP theory (except for |$\mathcal {L}_4$|). Notably, operators not built from elementary symmetric polynomials of |$\partial ^\mu \mathcal {V}_\nu$| generally affect the Hessian and thus cannot be added as easily.
Some redundancies arise from the construction described above:
|$\mathcal {L}_0[\partial \mathcal {V}]$| is constant and so its coefficient can be absorbed into |$\alpha _0$|, affecting the non-derivative potential.
Only three of the four remaining terms are linearly independent from the PN operators.
|$f(X)\mathcal {L}_4[\partial \mathcal {V}]$| is always a total derivative for any function f, making this term redundant.
The above properties hold only in flat space–time and do not apply to curved backgrounds. Thus, when developing a covariant theory, all four GP terms (|$\mathcal {L}_1[\partial \mathcal {V}]$| through |$\mathcal {L}_4[\partial \mathcal {V}]$|) must be considered.
Having accounted for these factors, we present the following Lagrangian
which is termed the 'Extended Proca-Nuevo' (EPN) theory. This Lagrangian features four additional arbitrary functions, |$d_n(X)$|, beyond the original functions |$\alpha _n$| in four dimensions. It is important to note that we have permitted the two sets of operators to appear at different scales, |$\Lambda$| and |$\tilde{\Lambda }$|, and that |$\tilde{\mathcal {K}}$| and |$\tilde{\mathcal {X}}$| are the quantities defined earlier, but scaled by |$\tilde{\Lambda }$|.
2.2 Cosmological background to the covariant EPN theory with gravity
The search for the most general theory of a self-interacting massive spin-1 field, and for interacting effective field theories involving fields of different spins, is a fascinating subject that has seen significant advances in the last decade. The integration of these effective field theories into a gravitational framework, especially for astrophysical and cosmological applications, is an interesting task, as it advances the ongoing endeavour to categorize viable extensions of GR.
This section presents the Covariant Extended Proca-Nuevo (CEPN) theory coupled with gravity (de Rham et al. 2022a). The action for the CEPN theory is given by
where R and |$\mathcal {L}_{\rm M}$| represent the Ricci scalar and the standard matter Lagrangian, respectively. The massive spin-1 Lagrangian can now be defined as given below
where,
Non-minimal coupling terms, proportional to R in |$\mathcal {L}_2$| and to the Einstein tensor |$G_{\mu \nu }$| in |$\mathcal {L}_3$|, are included in the Lagrangian. Apart from the question of non-minimal couplings, this Lagrangian excludes the |$\mathcal {L}_4$| term that was present in flat space–time.
We now focus on the FLRW metric
where |$A(t)$| is the scale factor of the Universe and we may set the lapse |$N=1$|. Further, the vector field profile is defined below
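As a point of reference, and with the caveat that the precise sign conventions here are assumptions rather than the paper's exact expressions, the background ansatz described above is usually written as
$$\begin{eqnarray} {\rm d}s^2 = -N(t)^2\, {\rm d}t^2 + A(t)^2\, \delta _{ij}\, {\rm d}x^i {\rm d}x^j, \qquad \mathcal {V}_{\mu }\, {\rm d}x^{\mu } = \phi (t)\, {\rm d}t, \end{eqnarray}$$
so that on the cosmological background the vector field carries only a temporal component |$\phi (t)$|.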
Varying the action (10) with respect to the lapse, we obtain a modified Friedmann equation
In the above equation, |$\rho _{\rm m}$| and |$p_{\rm m}$|, respectively, represent the energy density and the pressure of the matter fluid. Moreover, variation with respect to |$\phi (t)$| gives the following
An effective dark energy sector emerges, with energy density and pressure given by
The field equation (20) is non-dynamical; it is simply a constraint that imposes an algebraic relationship between H and |$\phi$|. Owing to the structure of the action (10), the resulting Friedmann equations therefore depend only on the Hubble function and not on the vector field. Accordingly, the effective dark energy density and pressure can be written as
Here, |$c_{\rm m}\equiv \frac{m^2M^2_{\rm Pl}}{\Lambda ^4}\sim 1$| and |$y=4\sqrt{\frac{6}{c_{\rm m}}}$| (see de Rham et al. 2022a). Moreover, we consider the density parameters |$\Omega _{\rm M}\equiv \rho _{\rm M}/3M^2_{\rm Pl}H^2$| and |$\Omega _{\rm EPN}\equiv \tilde{\rho }_{\rm EPN}/3M^2_{\rm Pl}H^2$|. Now, using the first Friedmann equation (18) at |$z=0$|, equation (23) becomes
Upon substituting (25) into (23) and eliminating |$c_{\rm m} y^{2/3}$| and |$\Lambda$|, we obtain
Utilizing all these equations in (18), we obtain an algebraic equation defined as follows:
where H is the Hubble function. As presented in Anagnostopoulos & Saridakis (2024), the deviation from |$\Lambda$|CDM is evident from the presence of |$(H_0 H)^{-2/3}$| in the third term on the LHS of equation (27). The powered Hubble term |$H^{-2/3}$| becomes less prominent at very large redshifts, and at the present epoch equation (27) mimics the |$\Lambda$|CDM scenario. Hence, |$H^{-2/3}$| becomes significant when analysing the dynamics of the CEPN Universe at intermediate stages, where one can witness a shift from the standard cosmological model. To this end, one cannot ignore its presence in equation (27). However, when this term is retained, it is challenging to find an exact solution for the Hubble parameter. Thus, in our analysis, we account for it numerically to provide a more detailed assessment.
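To illustrate how such an implicit Friedmann constraint can be handled in practice, the following minimal Python sketch solves a schematic version of equation (27) for |$E(z)\equiv H(z)/H_0$| by bracketed root finding. The functional form and the coefficient `beta` below are illustrative assumptions only (the actual coefficients follow from equations (25)–(27)), and `Om0` is merely set to a representative value.

```python
import numpy as np
from scipy.optimize import brentq

# A minimal sketch of how an implicit Friedmann constraint like (27) can be
# treated numerically. The functional form and the coefficient `beta` are
# schematic placeholders, not the actual CEPN coefficients.
Om0 = 0.317            # illustrative present matter density parameter
beta = 1.0 - Om0       # schematic coefficient of the E^(-2/3) term

def friedmann_constraint(E, z):
    """Schematic algebraic relation F(E, z) = 0 with E = H/H0.

    Here: E^2 = Om0 (1+z)^3 + beta * E^(-2/3); the E^(-2/3) piece mimics
    the powered-Hubble term appearing in equation (27)."""
    return E**2 - Om0 * (1.0 + z)**3 - beta * E**(-2.0 / 3.0)

def E_of_z(z):
    """Solve the constraint for E(z) by bracketed root finding."""
    return brentq(friedmann_constraint, 1e-3, 1e3, args=(z,))

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z:.1f}  ->  H/H0 = {E_of_z(z):.4f}")
```

In the full analysis, the numerical solution for |$H(z)$| obtained in this way feeds into the distance and |$\chi ^2$| computations of Sections 3 and 4.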
3 DATA
This section presents the data sets used to investigate the dark energy model obtained in the previous section.
3.1 DESI BAO
Baryon acoustic oscillations (BAO) are variations in the density of baryonic matter that arise as a consequence of acoustic density waves in the primordial plasma of the early cosmos. This probe offers information and correlations for both the distance variable |$(D_{\rm H}/r_{\rm d})$| and the comoving distance |$(D_{\rm M}/r_{\rm d})$| relative to the drag epoch. The drag epoch, denoted by |$z_{\rm d}$|, marks the time at which baryons decoupled from photons, and the sound horizon |$r_{\rm d}$| is the farthest distance sound waves could have travelled between the Big Bang and that time. In cases where there is a poor signal-to-noise ratio, the averaged quantity |$D_{\rm V}/r_{\rm d}$| is employed. It is noteworthy that the BAO signal from galaxy clustering has been detected by the Dark Energy Spectroscopic Instrument (DESI) employing several tracers of matter, including galaxies, quasars, and Lyman-|$\alpha$| forests (Adame et al. 2024a, b). Only the combined quantity |$r_{\rm d}H_0$| can be determined when utilizing solely DESI BAO data; however, we can disentangle |$r_{\rm d}$| and |$H_0$| when we combine DESI BAO data with other observational data sets.
To do this, we used observations from DESI BAO Data Release 1 in the redshift range 0.3 to 2.33 (see table I of Pourojaghi, Malekjani & Davari 2024). There are two isotropic BAO data sets: the Quasar (QSO) sample at an effective redshift of |$z_{\rm eff} = 1.49$| and the Bright Galaxy Survey (BGS) at |$z_{\rm eff} = 0.30$|. Five data points are also included in the anisotropic BAO data sets: Luminous Red Galaxies (LRG) at |$z_{\rm eff}$| of 0.51 and 0.71, LRG + ELG at |$z_{\rm eff} = 0.93$|, Emission Line Galaxies (ELG) at |$z_{\rm eff} = 1.32$|, and Lyman-|$\alpha$| quasars (Lya QSOs) at |$z_{\rm eff} = 2.33$|.
The sound horizon at the baryon drag epoch, |$r_{\rm d}$|, can be written as
where |$c_{\rm s}(z)$| is the sound speed of the baryon–photon fluid. Using this physical scale as a ruler, the transverse comoving distance to the tracers in each redshift bin can be calculated. In a flat Universe, this distance is given by
The Hubble rate at the tracer redshift can likewise be used to define a distance
Here, c is the speed of light and |$H(z)$| is the Hubble parameter. Additionally, for the rest of this work, we simply refer to this data set as DESI.
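For reference, and assuming the conventional definitions adopted in BAO analyses, the quantities entering these measurements can be summarized as
$$\begin{eqnarray} r_{\rm d} = \int _{z_{\rm d}}^{\infty } \frac{c_{\rm s}(z)}{H(z)}\, {\rm d}z, \qquad D_{\rm M}(z) = c\int _{0}^{z} \frac{{\rm d}z^{\prime }}{H(z^{\prime })}, \qquad D_{\rm H}(z) = \frac{c}{H(z)}, \qquad D_{\rm V}(z) = \left[ z\, D_{\rm M}^2(z)\, D_{\rm H}(z) \right]^{1/3}. \end{eqnarray}$$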
3.2 Cosmic chronometers (CCh)
The cosmic chronometers (CCh) method is a conceptually simple technique to measure the Hubble parameter as a function of redshift, |$H(z)$|, independently of the cosmological model adopted (Jimenez & Loeb 2002). While redshift measurements can achieve high precision (|$\delta z/z\le 0.001$|) through spectroscopy of extragalactic objects, the main challenge lies in accurately estimating the differential age evolution |${\rm d}t$|. This challenge necessitates the use of a 'chronometer'. The primary advantage of the CCh approach is its ability to directly estimate the expansion history of the Universe without the need for any prior cosmological assumptions. We applied the methods of Moresco et al. (2020), which take into account both statistical and systematic uncertainties. In particular, we employed CCh data at redshifts between 0.07 and 1.26 (see table 1 of Sudharani et al. 2024a).
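Explicitly, the CCh method rests on the standard differential-age relation, in which the Hubble rate follows from the relative age difference of passively evolving galaxies at nearby redshifts,
$$\begin{eqnarray} H(z) = -\frac{1}{1+z}\, \frac{{\rm d}z}{{\rm d}t} \simeq -\frac{1}{1+z}\, \frac{\Delta z}{\Delta t}. \end{eqnarray}$$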
Table 1. Results of MCMC for parameters |$\Omega _{m_0}$| and |$H_0$| (km s−1 Mpc−1) with |$1\sigma$| errors.

| Data | |$\Omega _{m_0}$| | |$H_0$| |
|---|---|---|
| DESI | |$0.298^{+0.034}_{-0.20}$| | |$72.0^{+1.50}_{-0.93}$| |
| DESI + GRBs | |$0.344^{+0.030}_{-0.027}$| | |$71.1^{+1.1}_{-1.2}$| |
| DESI + GRBs + CCh | |$0.336^{+0.026}_{-0.026}$| | |$71.3^{+1.1}_{-1.1}$| |
| DESI + GRBs + CCh + SNeIa | |$0.317^{+0.011}_{-0.012}$| | |$73.89^{+0.19}_{-0.19}$| |
3.3 Gamma-ray bursts (GRBs)
The extraordinarily powerful and luminous events known as gamma-ray bursts (GRBs) were discovered by the Vela satellites more than fifty years ago (Klebesadel, Strong & Olson 1973). These sources have been observed out to very high redshifts, the highest being z = 8.2 (Tanvir et al. 2009) and z ≈ 9.4 (Cucchiara et al. 2011). They are an important tool to shed new light on the significant cosmological tensions that exist today and to bridge the knowledge gap on the evolution of the Universe between the farthest Type Ia supernovae and the cosmic microwave background radiation. To distinguish between the many potential progenitors, a classification of GRBs according to their measured light curves is essential. Historically, GRBs have been classified into two primary classes, short GRBs (SGRBs) and long GRBs (LGRBs), depending on the duration of the prompt emission. For an extensive review of the GRB prompt correlations, we refer to Dainotti et al. (2017), Dainotti & Amati (2018), and Dainotti et al. (2022, 2023). Referring to table 5 of Demianski et al. (2017), we analyse a sample of 162 LGRBs whose redshift distribution spans the wide range |$0.03 \le z \le 9.3$|.
3.4 Type Ia Supernova
Supernovae of Type Ia (SNeIa) serve as 'standard candles' because of their nearly uniform intrinsic luminosity. They occur when a white dwarf undergoes a thermonuclear explosion that disperses stellar material across space. By analysing their apparent brightness and redshift, researchers can determine precise distances, providing an independent method of measuring the expansion history of the Universe.
In this work, we analyse the Pantheon+ sample of SNeIa (Scolnic et al. 2022). This sample represents the most recent collection of spectroscopically confirmed SNeIa, covering the redshift range from 0 to |$2.3$|. The Pantheon+ compilation contains 1701 SN light curves in the interval |$0.001\le z \le 2.2613$|, 77 of which originate from low-redshift galaxies that host Cepheids. The compilation draws on 18 distinct surveys (Scolnic et al. 2018, 2022). We introduce the following vector
to break the degeneracy between the Hubble constant |$H_0$| and the absolute magnitude M of the SNeIa, where the apparent magnitude and distance modulus of the i-th SNeIa are denoted by |$m_i$| and |$m_{i}-M$|, respectively.
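For completeness, the distance modulus entering this comparison follows the standard definition (assuming a flat background, as throughout this work),
$$\begin{eqnarray} \mu (z) = m - M = 5\log _{10}\left[ \frac{d_{\rm L}(z)}{10\, {\rm pc}} \right], \qquad d_{\rm L}(z) = (1+z)\, c \int _{0}^{z} \frac{{\rm d}z^{\prime }}{H(z^{\prime })}. \end{eqnarray}$$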
4 METHODOLOGY
This section presents the methodology we employ to infer cosmological parameters from the data sets described above, namely the Markov Chain Monte Carlo (MCMC) technique. MCMC is a probabilistic technique that generates a series of samples, each of which is a set of parameter values, in order to explore the parameter space. The method's central tenet is the Markov property, which states that the next state of the sequence depends only on the current state. When MCMC is applied to cosmological parameter inference, it is used to sample the posterior distribution of the parameters given observational data. Essentially, it explores the parameter space by generating a chain of samples whose density reflects the posterior distribution; the chain is then analysed to determine the most probable values and uncertainties of the cosmological parameters. From a Bayesian perspective, parameter estimation treats probability as a measure of belief. Parameters are seen as random variables that are updated based on data and prior beliefs. Initially, a prior distribution is set, and the data refine this through an iterative process to produce a posterior distribution. This posterior reflects all the knowledge gained, integrating prior beliefs with new evidence from the data.
From the likelihood and the prior, we can derive the posterior distribution using Bayes' theorem
Bayesian statistics performs inference using the rules of probability directly and is based on a single tool, Bayes' theorem, which yields the posterior density of the parameters given the data. It combines the prior information encoded in the prior |$g(\theta _1,\ldots ,\theta _{\rm p})$| with the information about the parameters contained in the observed data through the likelihood |$f(y_1,\ldots ,y_n|\theta _1,\ldots ,\theta _{\rm p})$|.
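In this notation, Bayes' theorem takes the familiar form
$$\begin{eqnarray} g(\theta _1,\ldots ,\theta _{\rm p} \mid y_1,\ldots ,y_n) = \frac{f(y_1,\ldots ,y_n \mid \theta _1,\ldots ,\theta _{\rm p})\, g(\theta _1,\ldots ,\theta _{\rm p})}{\int f(y_1,\ldots ,y_n \mid \theta _1,\ldots ,\theta _{\rm p})\, g(\theta _1,\ldots ,\theta _{\rm p})\, {\rm d}\theta _1 \cdots {\rm d}\theta _{\rm p}}, \end{eqnarray}$$
where the denominator is the evidence, a normalization constant that is irrelevant for parameter estimation.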
The primary problem with Bayesian parameter estimation is that it is generally impossible to obtain the posterior distribution analytically, and even numerical evaluation is frequently not feasible. Through the clever utilization of MCMC sampling, this enduring problem was solved, marking a major advancement in statistics in the 20th century. When run for sufficiently long sequences, MCMC yields a sequence of parameter sets (a Markov chain) whose empirical distribution converges to the posterior distribution.
As the sequence lengthens, the empirical distribution of the parameter sample generated by MCMC sampling converges to the true posterior distribution, providing a sophisticated solution for evaluating model parameters, especially when the corresponding posterior distribution cannot be accessed analytically. As a result, any question concerning the posterior parameter distribution can theoretically be answered simply by looking at the corresponding Markov chain.
Monte Carlo methods are designed to approximate a target density |$p(x)$|, where |$x \in X$| and X is a high-dimensional space. This is achieved by generating a set of independent and identically distributed samples |$\lbrace x^{(i)} \rbrace _{i=1}^{N}$|. These samples are then used to estimate integrals or maxima of the target function. While basic sampling techniques can handle simple forms of |$p(x)$|, more advanced methods, such as MCMC, are required for complex real-world problems.
A Markov chain is a sequence of states produced by a stochastic process in which each state depends only on the one that came before it, a feature known as the 'Markov property',
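In standard notation, the Markov property and the associated transition kernel read
$$\begin{eqnarray} p\left(x^{(i)} \mid x^{(i-1)}, \ldots , x^{(1)}\right) = T\left(x^{(i)} \mid x^{(i-1)}\right). \end{eqnarray}$$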
Possible transitions between the states are specified by the transition matrix. A Markov chain is said to be homogeneous if its transition probabilities do not change over time, that is, if |$T \equiv T\left(x^{(i)} \mid x^{(i-1)}\right)$| is independent of the step i. An invariant distribution |$p(x)$| refers to a probability distribution that remains unchanged as the Markov chain evolves. This means that, regardless of the starting state and after numerous transitions, the chain will eventually settle into this distribution. This is exactly the behaviour exploited when MCMC sampling is used to approximate a posterior distribution that cannot be assessed by traditional techniques. To obtain an invariant distribution, one must build a homogeneous, irreducible, and aperiodic stochastic transition matrix T. Aperiodicity ensures that the chain does not get trapped in cycles, while irreducibility ensures that every state can be reached from any other state at some point. Reversibility, often known as the detailed balance condition, is a sufficient but not necessary condition for the invariance of a target distribution |$p(x)$|.
Therefore, it is possible to guarantee the invariance of a target distribution |$p(x)$| by ensuring detailed balance.
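Explicitly, the detailed balance condition can be written as
$$\begin{eqnarray} p\left(x^{(i)}\right) T\left(x^{(i-1)} \mid x^{(i)}\right) = p\left(x^{(i-1)}\right) T\left(x^{(i)} \mid x^{(i-1)}\right), \end{eqnarray}$$
which, together with irreducibility and aperiodicity, guarantees that |$p(x)$| is the invariant distribution of the chain.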
The MCMC analysis is conducted using the emcee (Foreman-Mackey et al. 2013) package. To determine the best-fit parameters, we employ the likelihood function, which is expressed as:
It is important to note that minimizing |$\chi ^2$| is equivalent to maximizing the likelihood and minimizing the negative log-likelihood. In this study, we constrain model parameters using the specified data sets. For the MCMC analysis, we compute the |$\chi ^2$| function for each data set as follows:
where N is the number of data points, |$H_{\rm th}$| represents the theoretical Hubble parameter values for the model with parameters |$\Theta$|, |$H_{\rm obs}$| is the observed Hubble parameter from the data, and |$\sigma _{\rm H}$| denotes the error in the observed Hubble parameter.
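In the notation above, the |$H(z)$| contribution (used for the CCh data) reads
$$\begin{eqnarray} \chi ^2_{H(z)}(\Theta) = \sum _{i=1}^{N} \frac{\left[ H_{\rm th}(z_i, \Theta) - H_{\rm obs}(z_i) \right]^2}{\sigma _{{\rm H},i}^2}, \end{eqnarray}$$
with analogous expressions for the distance-based probes.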
We further analyse the model by combining multiple data sets. To do this, we calculate the total |$\chi ^2$| function, which is the sum of the individual |$\chi ^2$| functions for each data set:
Here, |$D_i$| denotes the different data sets utilized in this analysis.
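As an illustration of this workflow, the following minimal emcee sketch samples a two-parameter posterior from an |$H(z)$|-type likelihood. The data values, the flat priors, and the placeholder expansion history `H_model` are illustrative assumptions only; in the actual analysis, `H_model` would be replaced by the numerical solution of the CEPN constraint (27), and one |$\chi ^2$| term per data set would be summed as described above.

```python
import numpy as np
import emcee

# A minimal sketch of the MCMC workflow described above, assuming an
# illustrative H(z) data set (z, H_obs, sigma_H) and a two-parameter model.
# `H_model` is a placeholder expansion history; in the actual analysis it
# would be obtained by solving the CEPN constraint (27) numerically, and
# one chi^2 term per data set would be added to the log-likelihood.

z_dat = np.array([0.07, 0.40, 0.90, 1.26])     # illustrative redshifts
H_dat = np.array([69.0, 82.0, 117.0, 135.0])   # illustrative H(z) [km/s/Mpc]
sig_H = np.array([19.0, 9.0, 23.0, 17.0])      # illustrative 1-sigma errors

def H_model(z, Om0, H0):
    """Placeholder LambdaCDM-like H(z), used only to exercise the sampler."""
    return H0 * np.sqrt(Om0 * (1.0 + z)**3 + (1.0 - Om0))

def log_likelihood(theta):
    Om0, H0 = theta
    chi2 = np.sum(((H_model(z_dat, Om0, H0) - H_dat) / sig_H)**2)
    return -0.5 * chi2                     # minimizing chi^2 = maximizing L

def log_prior(theta):
    Om0, H0 = theta
    if 0.0 < Om0 < 1.0 and 50.0 < H0 < 100.0:   # flat priors (assumed ranges)
        return 0.0
    return -np.inf

def log_posterior(theta):
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

ndim, nwalkers, nsteps = 2, 32, 3000
p0 = np.array([0.3, 70.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps)
samples = sampler.get_chain(discard=500, flat=True)    # drop burn-in
print(np.percentile(samples, [16, 50, 84], axis=0))    # 1-sigma summaries
```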
5 RESULTS AND DISCUSSION
All the required machinery is now in place to proceed with our observational analysis of Covariant Extended Proca-Nuevo (CEPN) gravity on a cosmic scale. We carry out the analysis described in the preceding section. The constrained model parameters are depicted in Fig. 1, which shows two-dimensional contour plots at the 99.7 per cent confidence level. These plots indicate that the model aligns well with the observed data. The best-fit values are summarized in Table 1. For the CEPN model analysis with DESI alone, we constrained the sound horizon |$r_{\rm d}$| along with |$\Omega _{m_0}$| and |$H_0$|. This yields |$144.0^{+1.9}_{-2.7}$| Mpc for |$r_{\rm d}$| and |$0.287 \pm 0.025$| for |$\Omega _{m_0}$|, showing the correlation with |$H_0$| (see Fig. 1). The sound horizon is measured at the drag epoch, which occurs shortly after recombination when photons and baryons decouple, and manifests as a distinct peak in the correlation function or as a series of damped oscillations in the power spectrum (for a more detailed discussion, see Alam et al. 2017).

Figure 1. Left: contour plot for the DESI data illustrating the constraints on the Hubble parameter |$H_0$|, the sound horizon |$r_{\rm d}$|, and |$\Omega _{m_0}$|. Right: contour plot showing the model parameters |$H_0$| and |$\Omega _{m_0}$| obtained through |$\chi ^2$| analysis for the current model. This plot illustrates the results of a combined analysis of various data sets, with confidence levels up to |$3\sigma$|. For the DESI data, |$r_{\rm d}$| is fixed to the mean value obtained in the left-hand panel.
For the different data sets, namely DESI, DESI + GRBs, DESI + GRBs + CCh, and DESI + GRBs + CCh + SNeIa, the |$H(z)$| data and their corresponding error bars are displayed in the first column of Fig. 2. The theoretical curve predicted by our CEPN model is represented by the red line in each panel. The close alignment of the error bars with the shaded regions indicates good agreement between the model's predictions and the observed data. The shaded areas indicate the different confidence zones (up to 3|$\sigma$|).

Figure 2. Cosmographic parameter analysis: the first column displays |$H(z)$| data with theoretical predictions (solid line) and shaded confidence regions. The second column, with high-confidence error bars, shows the distance modulus |$\mu (z)$| for the 1701 SNeIa and 162 GRBs data points. The third column illustrates the deceleration parameter q.
In Fig. 2 (second column), we focus on the distance modulus function |$\mu (z)$|. Here, we present an error plot for the observed distance modulus of the 1701 SNeIa and 162 GRBs. The red line depicts the mean theoretical curves obtained from our cosmological model with parameters constrained by the different data sets. The shaded regions represent the error bars at a confidence level of up to 99.7 per cent. The agreement between the theoretical predictions and the observed distance modulus provides strong support for the validity of the CEPN model in describing the underlying cosmological processes.
Moreover, utilizing the mean values of the constrained parameters, we examine cosmographic quantities such as the deceleration parameter q. The third column of Fig. 2 illustrates its evolution; the present-day value at |$z=0$|, denoted |$q_{\rm CEPN,0}$|, and the transition redshift |$z_{\rm t}$| obtained for each parameter vector are listed in Table 2. Through this data-driven analysis, the model reveals the quintessential nature of the Universe.
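For reference, the deceleration parameter plotted there follows from the constrained expansion history through the standard cosmographic relation
$$\begin{eqnarray} q(z) = -1 + (1+z)\, \frac{H^{\prime }(z)}{H(z)}, \qquad q_{\rm CEPN,0} \equiv q(z=0), \end{eqnarray}$$
where a prime denotes a derivative with respect to redshift and the transition redshift |$z_{\rm t}$| is defined by |$q(z_{\rm t})=0$|.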
Table 2. Present-day value (|$z=0$|) of the deceleration parameter and the transition redshift.

| Data | |$q_{\rm CEPN,0}$| | |$z_{\rm t}$| |
|---|---|---|
| DESI | |$-0.6364$| | 0.7014 |
| DESI + GRBs | |$-0.6112$| | 0.6573 |
| DESI + GRBs + CCh | |$-0.5858$| | 0.6132 |
| DESI + GRBs + CCh + SNeIa | |$-0.5750$| | 0.5971 |
In Fig. 3, we also present the DESI data discussed in Section 3.1. The figure displays the mean curves (red lines) for |$D_{\rm M}$|, |$D_{\rm H}$|, and |$D_{\rm V}$|, each normalized by |$r_{\rm d}$| and plotted against redshift. These curves align closely with the error bars.

Figure 3. Mean curves for the comoving angular diameter distance |$D_{\rm M}$|, Hubble distance |$D_{\rm H}$|, and volume distance |$D_{\rm V}$|, normalized by the sound horizon |$r_{\rm d}$| and plotted against redshift. The solid curves representing theoretical predictions from the CEPN model align closely with the DESI data error bars, highlighting the model's accuracy and consistency with observational data.
Currently, the DESI BAO data are highly influential, demonstrating superior constraining power compared with the WiggleZ BAO data (Blake et al. 2011). In this study, we utilized these data to constrain the Hubble constant |$H_0$|, highlighting the dominance of DESI. Fig. 4 depicts, through a histogram analysis, the Gaussianity of, and deviations from Gaussianity in, the corresponding |$H_0$| distributions. This underscores the significance of DESI in constraining cosmological models and revealing the nature of the Universe using recent data.

Figure 4. Comparison of the constraining power of BAO data on the model parameter |$H_0$| from the WiggleZ and DESI surveys, based on the distribution of the curves.
For a direct comparison with the results of Anagnostopoulos & Saridakis (2024), we considered the combination of the CCh and SNeIa data sets and obtained the results shown in Fig. 5. It is clearly evident from the values that the scenario presented in Anagnostopoulos & Saridakis (2024) is recovered when CCh + SNeIa is taken into account.

Figure 5. Contour plot showing the model parameters |$H_0$| and |$\Omega _{m_0}$| obtained through |$\chi ^2$| analysis for the current model. This plot illustrates the results of a combined analysis of the CCh and SNeIa P18 data, which consist of 1048 data points.
5.1 Analysing information criteria
As a final step, we utilize the Akaike Information Criterion (AIC; Akaike 1974), the Bayesian Information Criterion (BIC; Schwarz 1978), and the Deviance Information Criterion (DIC; Spiegelhalter et al. 2002) to compare the efficiency of the model with respect to |$\Lambda$|CDM and to evaluate its compatibility with the observational scenarios.
The AIC is defined as
Here, |$\mathcal {L}_{\text{max}}$| represents the maximum likelihood that the model can achieve, and k denotes the number of parameters within the model. The optimal model is the one that minimizes the AIC. The AIC is formulated through an approximate minimization of the Kullback–Leibler information entropy, which quantifies the disparity between the actual data distribution and the distribution predicted by the model. On the other hand, the BIC criterion serves as an estimator of Bayesian evidence, expressed as
where N represents the number of data points utilized in the fitting process. The BIC is derived from approximating the evidence ratios of models, commonly referred to as the Bayes factor (Kass & Raftery 1995).
Finally, the DIC criterion is grounded in principles from both Bayesian statistics and information theory (Liddle 2007), and it can be expressed as
The variable |$\mathcal {P}$| represents the Bayesian complexity, defined as
where the overline indicates the standard mean value. Additionally, |$D(\Theta)$| refers to the Bayesian deviation, a measure closely associated with the effective degrees of freedom. This quantity is expressed as
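For completeness, the standard expressions of these criteria, consistent with the descriptions above, are
$$\begin{eqnarray} {\rm AIC} = -2\ln \mathcal {L}_{\text{max}} + 2k, \qquad {\rm BIC} = -2\ln \mathcal {L}_{\text{max}} + k\ln N, \qquad {\rm DIC} = D(\bar{\Theta }) + 2\mathcal {P}, \end{eqnarray}$$
with |$\mathcal {P} = \overline{D(\Theta)} - D(\bar{\Theta })$| and |$D(\Theta) = -2\ln \mathcal {L}(\Theta) + C$|, where C is a constant that cancels in model comparisons.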
To rank competing models based on their fit to the observational data, we focus on the differences in Information Criterion (IC) values. Specifically, we calculate the difference |$\Delta \text{IC}_{\text{model}} = \text{IC}_{\text{model}}-\text{IC}_{\text{min}}$|, comparing each model's IC to the minimum in the set. According to the Jeffreys scale (Kass & Raftery 1995; Jeffreys 1998), if |$\Delta \text{IC}\le 2$|, the model is statistically compatible with the best one; if |$2 < \Delta \text{IC} < 6$|, there is moderate tension; and |$\Delta \text{IC} \ge 10$| implies strong tension.
From Table 3, it is evident that our model is consistent with the |$\Lambda$|CDM model, which aligns well with observations. All model selection criteria (AIC, BIC, and DIC) support this conclusion, with |$\Delta \text{IC}\le 2$| in all cases except for the combined data set of DESI + GRBs + CCh + SNeIa. This indicates that the CEPN model is statistically well-compatible with the data in all instances under consideration. For the combined DESI + GRBs + CCh + SNeIa data set, however, the model exhibits moderate tension with the |$\Lambda$|CDM model.
Table 3. Comparative statistical analysis of our model and |$\Lambda$|CDM (as the reference model) using multiple data sets.

| Models | |$\chi ^2_{\text{min}}$| | AIC | |$\Delta {\rm AIC}$| | BIC | |$\Delta {\rm BIC}$| | DIC | |$\Delta {\rm DIC}$| |
|---|---|---|---|---|---|---|---|
| |$\Lambda$|CDM model | | | | | | | |
| DESI | 6.975 | 10.9754 | 0.0 | 13.534 | 0.0 | 7.434 | 0.0 |
| DESI + GRBs | 233.587 | 237.587 | 0.0 | 239.948 | 0.0 | 237.000 | 0.0 |
| DESI + GRBs + CCh | 248.628 | 252.628 | 0.0 | 255.082 | 0.0 | 249.117 | 0.0 |
| DESI + GRBs + CCh + SNeIa | 1795.410 | 1799.41 | 0.0 | 1801.969 | 0.0 | 1795.509 | 0.0 |
| CEPN model | | | | | | | |
| DESI | 7.3864 | 11.386 | 0.4106 | 13.945 | 0.411 | 8.052 | 0.618 |
| DESI + GRBs | 233.389 | 237.389 | 0.197 | 240.146 | 0.198 | 234.225 | 2.775 |
| DESI + GRBs + CCh | 248.523 | 252.523 | 0.105 | 255.187 | 0.105 | 249.050 | 0.067 |
| DESI + GRBs + CCh + SNeIa | 1799.331 | 1803.331 | 3.921 | 1805.890 | 3.921 | 1799.475 | 3.966 |
6 CONCLUSION
In this paper, we addressed cosmological observations of massive PN gravity, a recently proposed non-linear theory involving a massive spin-1 field inspired by dRGT massive gravity. It can be extended by including operators of the generalized Proca class without breaking the fundamental primary constraint necessary for consistency. The theory can then be covariantized and coupled to gravity in a way that produces cosmological solutions that are reliable and ghost-free. Furthermore, compared to usual massive gravity scenarios, these cosmological solutions behave well at the perturbative level, showing no signs of instabilities. In a cosmological context, massive PN gravity adds additional terms to the Friedmann equations, which can be combined into an effective dark-energy component.
In this study, we have obtained constraints on the free parameters of the theory through a model-independent and data-driven technique employing DESI, CCh, GRBs, and SNeIa data. Specifically, we provide the corresponding likelihood contours, error bar plots, and best-fit values for the parameters, with confidence levels of up to 3|$\sigma$|, indicating that the scenario agrees with observations. Statistical consistency with the data was further demonstrated by analysing the deceleration parameter at its present value. The results indicate that the Universe's expansion is of quintessential type, as reflected in the obtained values of |$q_{\rm CEPN,0}$|. Furthermore, the efficiency and compatibility of the CEPN model with observational data, compared to the |$\Lambda$|CDM model, are confirmed using the information criteria method.
As per our observation, the DESI BAO data set constrains the Hubble constant to comparatively higher values than other BAO surveys do. This could potentially lead to new milestones in alleviating the |$H_0$| tension. Additionally, as we have shown, its constraining power is also good. Comparing the BAO data from the WiggleZ and DESI surveys, we observe the Gaussianity of the distribution and the corresponding constraining ability of DESI on the Hubble constant. Further, the value of |$H_0$| is somewhat on the higher side, particularly for one combination, with the value being 73.89 km s−1 Mpc−1, obtained for the combined data set of DESI, GRBs, CCh, and Pantheon + SH0ES. This upward pull is mainly due to the presence of the Cepheid-calibrated measurements alongside DESI. We believe, however, that DESI results can play a significant role in alleviating the tension. Some interesting literature in this regard can be found in Pan et al. (2019), Petronikolou, Basilakos & Saridakis (2022), Ren et al. (2022), Banerjee, Petronikolou & Saridakis (2023), Petronikolou & Saridakis (2023), Saridakis et al. (2023), and Basilakos et al. (2024).
Analysing the interactions at a perturbative level, utilizing data from large-scale structures (such as f|$\sigma _8$| measurements) and additional methods, is beyond the scope of this current study. This will be addressed in future research.
ACKNOWLEDGEMENTS
LS, NSK, and VV acknowledge DST, New Delhi, India, for supporting research facilities under DST-FIST-2019. LS acknowledges Kuvempu University for providing University General Fellowship (File no. KU: B.C.M-3/145/2023-24 Dated: 04/08/2023). We would like to express our gratitude to the anonymous referee for providing thoughtful comments and suggestions on our manuscript.
DATA AVAILABILITY
There are no new data associated with this article.