Bédard, M.; Fraser, D. A. S.; Wong, A.
Higher accuracy for Bayesian and frequentist inference: large sample theory for small sample likelihood. (English)
Zbl 1246.62027
Stat. Sci. 22, No. 3, 301–321 (2007).

Summary: Recent likelihood theory produces \(p\)-values that have remarkable accuracy and wide applicability. The calculations use familiar tools such as maximum likelihood values (MLEs), observed information and parameter rescaling. The usual evaluation of such \(p\)-values is by simulations, and such simulations do verify that the global distribution of the \(p\)-values is uniform \((0, 1)\), to high accuracy in repeated sampling. The derivation of the \(p\)-values, however, asserts a stronger statement: that they have a uniform \((0, 1)\) distribution conditionally, given identified precision information provided by the data. We take a simple regression example that involves exact precision information and use large sample techniques to extract highly accurate information as to the statistical position of the data point with respect to the parameter: specifically, we examine various \(p\)-values and Bayesian posterior survivor \(s\)-values for validity. With observed data we numerically evaluate the various \(p\)-values and \(s\)-values, and we also record the related general formulas. We then assess the numerical values for accuracy using Markov chain Monte Carlo (McMC) methods. We also propose some third-order likelihood-based procedures for obtaining means and variances of Bayesian posterior distributions, again followed by McMC assessment. Finally, we propose some adaptive McMC methods to improve the simulation acceptance rates. All these methods are based on asymptotic analysis that derives from the effect of additional data. The methods use simple calculations based on familiar maximizing values and related information.
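The repeated-sampling check mentioned in the summary can be illustrated with a minimal, hypothetical sketch (not code from the paper): for the normal location model, the first-order \(p\)-value computed from the signed likelihood root should be uniform on \((0, 1)\) under repeated sampling at the true parameter value. All function names here are illustrative.

```python
# Hypothetical sketch: for y_i ~ N(theta, 1) the signed likelihood root is
#   r = sign(thetahat - theta) * sqrt(2 * (l(thetahat) - l(theta)))
# and the first-order p-value is Phi(r).  Sampling repeatedly at the true
# theta, the resulting p-values should look uniform(0, 1).
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(y, theta0):
    """First-order p-value from the signed likelihood root."""
    n = len(y)
    thetahat = sum(y) / n  # MLE of the location parameter
    # For unit variance, 2*(l(thetahat) - l(theta0)) = n*(thetahat - theta0)^2,
    # so the signed root simplifies to sqrt(n)*(thetahat - theta0).
    r = math.sqrt(n) * (thetahat - theta0)
    return phi(r)

random.seed(1)
theta0, n, reps = 0.0, 10, 2000
pvals = sorted(p_value([random.gauss(theta0, 1.0) for _ in range(n)], theta0)
               for _ in range(reps))
# Crude uniformity check: uniform(0,1) p-values should average about 1/2.
mean_p = sum(pvals) / reps
print(round(mean_p, 3))
```

Because the normal location model is exact, the signed root here coincides with the usual \(z\)-statistic and the uniformity is exact; the interest of the paper lies in models where only higher-order approximations achieve comparable accuracy.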
The example illustrates the general formulas and the ease of the calculations, while the McMC assessments demonstrate the numerical validity of the \(p\)-values as the percentage position of a data point. The example, however, is very simple and transparent, and thus gives little indication that, in a wide generality of models, the formulas do accurately separate information for almost any parameter of interest, and then do give accurate \(p\)-value determinations from that information. As an illustration, an enigmatic problem in the literature is discussed and simulations are recorded; various examples in the literature are cited.

Cited in 5 Documents

MSC:
62F12 Asymptotic properties of parametric estimators
62F15 Bayesian inference
65C40 Numerical analysis or methods applied to Markov chains
65C20 Probabilistic models, generic numerical methods in probability and statistics

Keywords: asymptotics; Bayesian posterior \(s\)-value; canonical parameter; default prior; higher order; likelihood; maximum likelihood departure; Metropolis–Hastings algorithm; \(p\)-value; regression example; third order

References:
[1] Andrews, D. F., Fraser, D. A. S. and Wong, A. C. M. (2005). Computation of distribution functions from likelihood information near observed data. J. Statist. Plann. Inference 134 180–193. · Zbl 1066.62021 · doi:10.1016/j.jspi.2003.12.021
[2] Barndorff-Nielsen, O. E. (1991). Modified signed log likelihood ratio. Biometrika 78 557–563. · Zbl 1192.62052 · doi:10.1093/biomet/78.3.557
[3] Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Phil. Trans. Roy. Soc. London 53 370–418 and 54 296–325; reprinted Biometrika 45 (1958), 293–315. · Zbl 1250.60007
[4] Behrens, W. V. (1929). Ein Beitrag zur Fehlerberechnung bei wenigen Beobachtungen. Landwirtschaftliche Jahresberichte 68 807–837.
[5] Bernardo, J. M., Bayarri, M. J., Berger, J. O., Dawid, A. P., Heckerman, D., Smith, A. F. M.
and West, M., eds. (2003). Bayesian Statistics 7. Clarendon Press. · Zbl 1044.62002
[6] Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference (with discussion). J. Roy. Statist. Soc. Ser. B 41 113–147. · Zbl 0428.62004
[7] Brazzale, A. R. (2000). Practical small sample parametric inference. Ph.D. thesis, École Polytechnique Fédérale de Lausanne.
[8] Brown, L. D., Cai, T. T. and DasGupta, A. (2001). Interval estimation for a binomial proportion (with discussion). Statist. Sci. 16 101–133. · Zbl 1059.62533 · doi:10.1214/ss/1009213286
[9] Cakmak, S., Fraser, D. A. S. and Reid, N. (1994). Multivariate asymptotic model: Exponential and location approximations. Utilitas Math. 46 21–31. · Zbl 0814.62005
[10] Cakmak, S., Fraser, D. A. S., McDunnough, P., Reid, N. and Yuan, X. (1998). Likelihood centered asymptotic model: Exponential and location model versions. J. Statist. Plann. Inference 66 211–222. · Zbl 0953.62017 · doi:10.1016/S0378-3758(97)00085-2
[11] Casella, G., DiCiccio, T. and Wells, M. T. (1995). Discussion of "The roles of conditioning in inference" by N. Reid. Statist. Sci. 10 179–185. · Zbl 0955.62524 · doi:10.1214/ss/1177010027
[12] Cox, D. R. (1958). Some problems connected with statistical inference. Ann. Math. Statist. 29 357–372. · Zbl 0088.11702 · doi:10.1214/aoms/1177706618
[13] Daniels, H. E. (1954). Saddlepoint approximations in statistics. Ann. Math. Statist. 25 631–650. · Zbl 0058.35404 · doi:10.1214/aoms/1177728652
[14] Davison, A., Fraser, D. A. S. and Reid, N. (2006). Improved likelihood inference for discrete data. J. R. Stat. Soc. Ser. B Stat. Methodol. 68 495–508. · Zbl 1110.62028 · doi:10.1111/j.1467-9868.2006.00548.x
[15] Dawid, A. P., Stone, M. and Zidek, J. V. (1973). Marginalization paradoxes in Bayesian and structural inference (with discussion). J. Roy. Statist. Soc. Ser. B 35 189–233. · Zbl 0271.62009
[16] DiCiccio, T. and Martin, M. A. (1991).
Approximations of marginal tail probabilities for a class of smooth functions with applications to Bayesian and conditional inference. Biometrika 78 891–902. · Zbl 0753.62010 · doi:10.1093/biomet/78.4.891
[17] Fisher, R. A. (1935). The logic of inductive inference. J. Roy. Statist. Soc. 98 39–54. · JFM 61.1308.06
[18] Fraser, D. A. S. (1979). Inference and Linear Models. McGraw-Hill, New York. · Zbl 0455.62052
[19] Fraser, D. A. S. (2004). Ancillaries and conditional inference (with discussion). Statist. Sci. 19 333–369. · Zbl 1100.62534 · doi:10.1214/088342304000000323
[20] Fraser, D. A. S. and Reid, N. (1993). Third-order asymptotic models: Likelihood functions leading to accurate approximations for distribution functions. Statist. Sinica 3 67–82. · Zbl 0831.62016
[21] Fraser, D. A. S. and Reid, N. (1995). Ancillaries and third-order significance. Utilitas Math. 47 33–53. · Zbl 0829.62006
[22] Fraser, D. A. S. and Reid, N. (2001). Ancillary information for statistical inference. In Empirical Bayes and Likelihood Inference (S. E. Ahmed and N. Reid, eds.) 185–207. Springer, New York.
[23] Fraser, D. A. S. and Reid, N. (2002). Strong matching of frequentist and Bayesian inference. J. Statist. Plann. Inference 103 263–285. · Zbl 1005.62005 · doi:10.1016/S0378-3758(01)00225-7
[24] Fraser, D. A. S., Reid, N., Li, R. and Wong, A. (2003). \(p\)-value formulas from likelihood asymptotics: Bridging the singularities. J. Statist. Res. 37 1–15.
[25] Fraser, D. A. S., Reid, N. and Wong, A. (2005). What a model with data says about theta. Internat. J. Statist. Sci. 3 163–178.
[26] Fraser, D. A. S., Reid, N. and Wu, J. (1999). A simple general formula for tail probabilities for frequentist and Bayesian inference. Biometrika 86 249–264. · Zbl 0932.62003 · doi:10.1093/biomet/86.2.249
[27] Fraser, D. A. S. and Wong, A. (2004). Algebraic extraction of the canonical asymptotic model: Scalar case. J. Statist. Studies 1 29–49.
[28] Fraser, D. A. S., Wong, A. and Sun, Y.
(2007). Bayes frequentist and enigmatic examples. Report, Dept. Mathematics and Statistics, York Univ.
[29] Fraser, D. A. S., Wong, A. and Wu, J. (1999). Regression analysis, nonlinear or nonnormal: Simple and accurate \(p\)-values from likelihood analysis. J. Amer. Statist. Assoc. 94 1286–1295. · Zbl 0998.62059 · doi:10.2307/2669942
[30] Fraser, D. A. S., Wong, A. and Wu, J. (2004). Simple, accurate and unique: The methods of modern likelihood theory. Pakistan J. Statist. 20 173–192.
[31] Ghosh, M. and Kim, Y.-H. (2001). The Behrens–Fisher problem revisited: A Bayes-frequentist synthesis. Canad. J. Statist. 29 5–17. · Zbl 1015.62023 · doi:10.2307/3316047
[32] Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109. · Zbl 0219.65008 · doi:10.1093/biomet/57.1.97
[33] Jeffreys, H. (1946). An invariant form for the prior distribution in estimation problems. Proc. Roy. Soc. London Ser. A 186 453–461. · Zbl 0063.03050 · doi:10.1098/rspa.1946.0056
[34] Jeffreys, H. (1961). Theory of Probability, 3rd ed. Clarendon Press, Oxford. · Zbl 0116.34904
[35] Laplace, P. S. (1812). Théorie analytique des probabilités. Paris.
[36] Lugannani, R. and Rice, S. (1980). Saddle point approximation for the distribution function of the sum of independent variables. Adv. in Appl. Probab. 12 475–490. · Zbl 0425.60042 · doi:10.2307/1426607
[37] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21 1087–1092.
[38] Robert, C. P. and Casella, G. (2004). Monte Carlo Statistical Methods, 2nd ed. Springer, New York. · Zbl 1096.62003
[39] Strawderman, R. (2000). Higher-order asymptotic approximation: Laplace, saddlepoint, and related methods. J. Amer. Statist. Assoc. 95 1358–1364. · Zbl 1072.62523 · doi:10.2307/2669788
[40] Welch, B. and Peers, H. W. (1963).
On formulae for confidence points based on integrals of weighted likelihoods. J. Roy. Statist. Soc. Ser. B 25 318–329. · Zbl 0117.14205