Pseudo-maximization and self-normalized processes. (English) Zbl 1189.60057

Summary: Self-normalized processes are basic to many probabilistic and statistical studies. They arise naturally in the study of stochastic integrals, martingale inequalities and limit theorems, likelihood-based methods in hypothesis testing and parameter estimation, and Studentized pivots and bootstrap-\(t\) methods for confidence intervals. In contrast to standard normalization, large values of the observations play a lesser role, as they appear both in the numerator and in its self-normalized denominator, thereby making the process scale invariant and contributing to its robustness. Herein we survey a number of results for self-normalized processes in the case of dependent variables and describe a key method called “pseudo-maximization” that has been used to derive these results. In the multivariate case, self-normalization consists of multiplying by the inverse of a positive definite matrix (instead of dividing by a positive random variable as in the scalar case) and is ubiquitous in statistical applications, examples of which are given.
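The scale invariance mentioned in the summary can be illustrated numerically. The following sketch (not taken from the paper; the statistic shown is the standard scalar self-normalized sum \(S_n/V_n\) with \(V_n^2=\sum_{i=1}^n X_i^2\)) checks that rescaling all observations by a common factor leaves the statistic unchanged, since the factor appears in both numerator and denominator:

```python
import math
import random

def self_normalized_sum(xs):
    """Scalar self-normalized sum S_n / V_n, where V_n^2 = sum of X_i^2."""
    s = sum(xs)                                  # numerator S_n
    v = math.sqrt(sum(x * x for x in xs))        # self-normalizing denominator V_n
    return s / v

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100)]

t_original = self_normalized_sum(xs)
t_rescaled = self_normalized_sum([7.3 * x for x in xs])  # scale every observation

# Scale invariance: multiplying the data by c > 0 cancels in S_n / V_n.
assert abs(t_original - t_rescaled) < 1e-12
```

In the multivariate analogue described in the summary, the division by \(V_n\) is replaced by multiplication by the inverse (square root) of a positive definite matrix built from the observations, and the same cancellation argument yields invariance under linear rescaling.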


60F10 Large deviations
60F15 Strong limit theorems
60E15 Inequalities; stochastic orderings
60-02 Research exposition (monographs, survey articles) pertaining to probability theory
Full Text: DOI arXiv EuDML