# zbMATH — the first resource for mathematics

Confidence intervals for random forests: the jackknife and the infinitesimal jackknife. (English) Zbl 1319.62132
Summary: We study the variability of predictions made by bagged learners and random forests, and show how to estimate standard errors for these methods. Our work builds on variance estimates for bagging proposed by B. Efron [J. R. Stat. Soc., Ser. B 54, No. 1, 83–127 (1992; Zbl 0782.62051); “Estimation and accuracy after model selection”, J. Am. Stat. Assoc. 109, No. 507, 991–1007 (2014; doi:10.1080/01621459.2013.823775)] that are based on the jackknife and the infinitesimal jackknife (IJ). In practice, bagged predictors are computed using a finite number $$B$$ of bootstrap replicates, and working with a large $$B$$ can be computationally expensive. Direct applications of jackknife and IJ estimators to bagging require $$B = \Theta (n^{1.5})$$ bootstrap replicates to converge, where $$n$$ is the size of the training set. We propose improved versions that only require $$B = \Theta (n)$$ replicates. Moreover, we show that the IJ estimator requires 1.7 times fewer bootstrap replicates than the jackknife to achieve a given accuracy. Finally, we study the sampling distributions of the jackknife and IJ variance estimates themselves. We illustrate our findings with multiple experiments and simulation studies.
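The IJ variance estimate referenced in the summary takes, for each training observation $$i$$, the covariance across bootstrap replicates between the count $$N_{bi}$$ (how often observation $$i$$ appears in replicate $$b$$) and the replicate's prediction $$t_b^*$$, then sums the squared covariances. The sketch below is a minimal illustration of that formula, not the authors' code: the base learner is a toy bootstrap mean rather than a tree, and the function name and bias-correction comment are this sketch's own.

```python
import numpy as np

def bagged_ij_variance(y, B=2000, seed=0):
    """Bagged estimate of the mean of y, with the infinitesimal
    jackknife (IJ) variance estimate of Efron (2014):
        V_IJ = sum_i Cov_b(N_{bi}, t*_b)^2
    Toy base learner: the mean of the bootstrap sample. The same
    recipe applies to any bagged learner (e.g. regression trees).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = np.empty(B)           # t*_b: prediction of replicate b
    counts = np.empty((B, n))     # N_{bi}: resampling counts
    for b in range(B):
        idx = rng.integers(0, n, size=n)            # bootstrap sample
        counts[b] = np.bincount(idx, minlength=n)
        preds[b] = y[idx].mean()                    # toy base learner
    theta_hat = preds.mean()
    # Per-observation covariance between counts and predictions
    cov = ((counts - counts.mean(axis=0))
           * (preds - theta_hat)[:, None]).mean(axis=0)
    var_ij = np.sum(cov ** 2)
    # Note: the paper's finite-B bias correction (subtracting a
    # Monte Carlo noise term) is omitted here for brevity.
    return theta_hat, var_ij
```

For a constant response the replicate predictions do not vary, so the covariances, and hence the IJ variance estimate, are exactly zero; for noisy data the estimate is a nonnegative sum of squares by construction.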

##### MSC:
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 62F25 Parametric tolerance and confidence regions
- 62F40 Bootstrap, jackknife and other resampling methods
- 68T05 Learning and adaptive systems in artificial intelligence
##### Keywords:
jackknife methods; Monte Carlo noise; variance estimation
##### Software:
randomForest; tree; UCI-ml