
Improved inference for first-order autocorrelation using likelihood analysis. (English) Zbl 1199.62016

The authors deal with a multiple linear regression model with an autoregressive AR(1) error structure \[ Y_t=\beta_0+\beta_1X_{1t}+\cdots+\beta_kX_{kt}+\epsilon_t,\qquad \epsilon_t=\rho\epsilon_{t-1}+\nu_t,\qquad |\rho|<1, \] where the random variables \(\nu_t\) are independent and normally distributed with \(E[\nu_t]=0\) and \(E[\nu_t^2]=\sigma^2\). This multiple linear regression model with AR(1) Gaussian error structure may be written as \(y=X\beta+\sigma\epsilon\), \(\epsilon\sim N(0,\Omega)\), where \(\Omega=\{\omega_{ij}\}\) with \(\omega_{ij}=\rho^{|i-j|}/(1-\rho^2)\), \(i,j=1,2,\dots,n\), \(y=(y_1,y_2,\dots,y_n)^{\top}\), \(\beta=(\beta_0,\beta_1,\dots,\beta_k)^{\top}\), \(\epsilon=(\epsilon_1,\epsilon_2,\dots,\epsilon_n)^{\top}\), and \(X\) is the corresponding \(n\times(k+1)\) design matrix. It is known that, in the presence of autocorrelation, the ordinary least squares (OLS) estimator \(\hat{\beta}_{OLS}=(X^{\top}X)^{-1}X^{\top}y\) of \(\beta\) is not the best linear unbiased estimator.

To determine whether autocorrelation exists in time-series data, the null hypothesis \(\rho=0\) is tested. Two common tests for the autocorrelation parameter \(\rho\) are an asymptotic test and the Durbin-Watson test. The asymptotic test uses the OLS residuals \(\hat{\epsilon}=y-X\hat{\beta}_{OLS}\) to estimate \(\rho\) from the regression \(\hat{\epsilon}_t=\rho\hat{\epsilon}_{t-1}+\hat{\nu}_t\); the standardized statistic for testing \(\rho=\rho_0\) is \(z=(\hat{\rho}-\rho_0)/\sqrt{(1-\rho_0^2)/n}\), which is asymptotically standard normal. The Durbin-Watson test of the hypothesis \(\rho=0\) uses the same OLS residuals to construct the statistic \(d=\sum_{t=2}^n(\hat{\epsilon}_t-\hat{\epsilon}_{t-1})^2/\sum_{t=1}^n\hat{\epsilon}_t^2\). The distribution of \(d\) under the null hypothesis depends on the design matrix; critical bounds have been tabulated by J. Durbin and G.S. Watson [Biometrika 38, 159–178 (1951; Zbl 0042.38201)].

Asymptotic inference for \(\rho\) can also be obtained from simple likelihood-based asymptotic methods; see, e.g., J.D. Hamilton [Time series analysis. NJ: Princeton Univ. Press (1994; Zbl 0831.62061)]. For the Gaussian AR(1) model, two different likelihood functions for \(\theta=(\beta,\rho,\sigma^2)\) can be constructed from the exact log-likelihood function \[ \log\Big\{ f_{Y_1}(y_1;\beta,\rho,\sigma^2)\prod_{t=2}^n f_{Y_t\mid y_{t-1}}(y_t\mid y_{t-1};\beta,\rho,\sigma^2)\Big\}, \] where \(f_{Y_1}(y_1;\beta,\rho,\sigma^2)\) is the normal density of the first observation; the conditional likelihood omits this first factor.
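For concreteness, the two classical tests described above can be computed in a few lines. The following Python sketch is not taken from the paper; the function name `ar1_autocorrelation_tests` and the argument `rho0` are illustrative, and it is assumed that `y` is the response vector and `X` the design matrix with a leading column of ones.

```python
import numpy as np

def ar1_autocorrelation_tests(y, X, rho0=0.0):
    """Sketch of the OLS fit, the asymptotic z test, and the Durbin-Watson
    statistic for the AR(1) autocorrelation parameter rho."""
    n = len(y)
    # OLS estimator beta_hat = (X'X)^{-1} X'y and residuals e = y - X beta_hat
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    # Estimate rho by regressing e_t on e_{t-1} (no intercept)
    rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
    # Asymptotic statistic z = (rho_hat - rho0) / sqrt((1 - rho0^2)/n)
    z = (rho_hat - rho0) / np.sqrt((1.0 - rho0 ** 2) / n)
    # Durbin-Watson statistic d = sum (e_t - e_{t-1})^2 / sum e_t^2
    d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    return rho_hat, z, d
```

Under \(\rho=\rho_0\) the returned \(z\) is referred to the standard normal distribution, while \(d\) must be compared with the tabulated Durbin-Watson bounds, since its null distribution depends on \(X\).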
In this article, inference concerning the autocorrelation parameter is examined from the viewpoint of likelihood asymptotics. The general theory developed by D.A.S. Fraser and N. Reid [Util. Math. 47, 33–53 (1995; Zbl 0829.62006)] is used to obtain p-values for testing particular values of \(\rho\) with known \(O(n^{-3/2})\) accuracy. The focus of the article is on comparing the results of this approach with the asymptotic \(z\) test and with the signed log-likelihood departure derived from the log-likelihood function above. A numerical example and three simulation studies show that the new likelihood method yields higher-order improvements and is superior in terms of central coverage even for values of the autocorrelation parameter close to unity.
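As a rough illustration of the first-order quantity entering these comparisons, the signed log-likelihood departure for \(\rho\) can be computed from the exact log-likelihood concentrated over \(\beta\) and \(\sigma^2\), using \(\log|\Omega|=-\log(1-\rho^2)\) for the stationary AR(1) covariance. The sketch below is not the authors' implementation and omits the third-order Fraser-Reid adjustment, which requires additional ancillary quantities; the names `profile_loglik` and `signed_loglik_departure` are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profile_loglik(rho, y, X):
    """Exact Gaussian AR(1) log-likelihood concentrated over beta and sigma^2,
    with Omega_{ij} = rho^{|i-j|}/(1 - rho^2) and log|Omega| = -log(1 - rho^2)."""
    n = len(y)
    idx = np.arange(n)
    Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho ** 2)
    Oinv = np.linalg.inv(Omega)
    # GLS estimate of beta and the corresponding MLE of sigma^2 for fixed rho
    beta = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
    resid = y - X @ beta
    sigma2 = (resid @ Oinv @ resid) / n
    return -0.5 * n * np.log(sigma2) + 0.5 * np.log(1.0 - rho ** 2)

def signed_loglik_departure(y, X, rho0):
    """First-order signed log-likelihood root r for testing rho = rho0."""
    neg = lambda r: -profile_loglik(r, y, X)
    rho_hat = minimize_scalar(neg, bounds=(-0.999, 0.999), method="bounded").x
    lp_hat, lp_0 = profile_loglik(rho_hat, y, X), profile_loglik(rho0, y, X)
    return np.sign(rho_hat - rho0) * np.sqrt(max(2.0 * (lp_hat - lp_0), 0.0))
```

The departure \(r\) returned here is treated as approximately standard normal only to first order; the method studied in the paper replaces it with a modified statistic whose accuracy is \(O(n^{-3/2})\).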

MSC:

62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
62F12 Asymptotic properties of parametric estimators
62F05 Asymptotic properties of parametric tests
65C60 Computational problems in statistics (MSC2010)

Keywords:

p-values

References:

[1] DOI: 10.1093/biomet/78.3.557 · Zbl 1192.62052 · doi:10.1093/biomet/78.3.557
[2] DOI: 10.1023/A:1008654023721 · Zbl 0893.90032 · doi:10.1023/A:1008654023721
[3] DOI: 10.2307/2669758 · Zbl 1004.62071 · doi:10.2307/2669758
[4] DOI: 10.1093/biomet/77.1.77 · Zbl 0692.62033 · doi:10.1093/biomet/77.1.77
[5] DOI: 10.2307/2332325 · doi:10.2307/2332325
[6] DOI: 10.1093/biomet/90.2.327 · Zbl 1035.62012 · doi:10.1093/biomet/90.2.327
[7] Fraser D., Statistica Sinica 3 pp 67– (1993)
[8] Fraser D., Utilitas Mathematica 47 pp 33– (1995)
[9] Hamilton J. D., Time series analysis (1994) · Zbl 0831.62061
[10] King M., Specification Analysis in the Linear Model: Essays in Honour of Donald Cochrane (1987) · Zbl 0688.62060
[11] DOI: 10.1093/biomet/59.1.61 · Zbl 0232.62031 · doi:10.1093/biomet/59.1.61
[12] DOI: 10.2307/1426607 · Zbl 0425.60042 · doi:10.2307/1426607
[13] DOI: 10.2307/3315622 · Zbl 0858.62011 · doi:10.2307/3315622
[14] Reinsel G., Statistics & Probability Letters 62 pp 123– (2003) · Zbl 1101.62372 · doi:10.1016/S0167-7152(02)00437-6
[15] DOI: 10.2307/3214212 · Zbl 0638.62018 · doi:10.2307/3214212
[16] Severini T., Likelihood methods in statistics (2000) · Zbl 0984.62002
[17] Tunnicliffe-Wilson G., Journal of the Royal Statistical Society Series B 51 pp 15– (1989)
[18] DOI: 10.1016/0165-1765(86)90118-7 · Zbl 06524977 · doi:10.1016/0165-1765(86)90118-7
[19] Wooldridge J., Introductory Econometrics: A Modern Approach (2006)