Kolmogorov’s early work on convergence theory and foundations.

*(English)* Zbl 0687.01008

In a chronological account of Kolmogorov's early work on convergence theory and foundations, his joint paper with Khintchine on the convergence of infinite series of discretely distributed independent random variables (1925) must be mentioned first. In 1928 Kolmogorov dropped the hypothesis of discretely distributed summands and proved that an infinite series of independent mean-zero random variables converges almost surely if the series of the summands' variances converges; furthermore, he proved his famous Three Series Theorem. His basic tool was a generalization of Chebyshev's inequality to maxima of partial sums of independent mean-zero random variables, at present called Kolmogorov's inequality. Around 1930 it was generally understood that the basic manipulations of mathematical probabilities were the same as those of measure theory, yet the relation between the two had not been given a usable formulation. With his 1933 monograph, Kolmogorov transformed the character of the calculus of probability: he showed that there is a probability measure, defined on the smallest \(\sigma\)-algebra of subsets of \(R^T\) making every coordinate function measurable, that assigns the prescribed joint distributions to the finite sets of coordinate functions. However, there are differences between Kolmogorov's measure conventions and the presently accepted ones, which may lead to unexpected contradictions.
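For the reader's convenience, the results alluded to above may be stated in their standard modern formulations (these are not quotations from the work under review):

```latex
% Kolmogorov's inequality: for independent X_1, ..., X_n with
% E[X_k] = 0 and finite variances, and any lambda > 0,
\[
  \Pr\Bigl(\max_{1\le k\le n}\lvert S_k\rvert \ge \lambda\Bigr)
  \;\le\; \frac{1}{\lambda^{2}} \sum_{k=1}^{n} \operatorname{Var}(X_k),
  \qquad S_k = X_1 + \dots + X_k .
\]
% The 1928 criterion: if the X_k are independent with E[X_k] = 0 and
% \sum_k Var(X_k) < \infty, then \sum_k X_k converges almost surely.
%
% Three Series Theorem: for independent X_k and a fixed truncation
% level c > 0, the series \sum_k X_k converges almost surely if and
% only if all three of the following series converge:
\[
  \sum_{k} \Pr\bigl(\lvert X_k\rvert > c\bigr), \qquad
  \sum_{k} \mathbb{E}\bigl[X_k \mathbf{1}_{\{\lvert X_k\rvert \le c\}}\bigr], \qquad
  \sum_{k} \operatorname{Var}\bigl(X_k \mathbf{1}_{\{\lvert X_k\rvert \le c\}}\bigr).
\]
```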

Reviewer: C. Cusmir