The general statistical problem formulated by the authors is as follows: Let $F_0$ be the true distribution and $F_1$ the empirical distribution (based on a sample). Let further $f(F_0,F_1)$ be a known function of $F_0$ and $F_1$, and $E(\cdot \vert F_0)$ the expectation given that the data came from $F_0$. Many statistical problems have the form: choose $f$ from a well-defined class so that $$ (1)\quad E[f(F_0,F_1)\vert F_0]=0. $$ The characteristic $E(f\vert F_0)$ is coverage error, the derivative of mean interval length, or level error in confidence estimation or testing, or it is bias in point estimation, as is shown in four examples. Resampling methods for solving equation (1) are proposed. Resampling means drawing a sample of the same size at random, with replacement, from the original sample. Let $F_2$ denote its empirical distribution function, conditional on $F_1$. Instead of (1), $$ (2)\quad E[f(F_1,F_2)\vert F_1]=0 $$ is solved, and the resulting $f$ is used as an approximation to the solution of (1). In Chapter 3 it is shown how the solution can be improved by repeated resampling and iteration. The procedure is demonstrated by constructing confidence intervals (usual and Lehmann-type ones) and by shrinking an estimator towards a fixed point ($L^1$-shrinkage).
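To make the resampling principle concrete, here is a minimal sketch (not taken from the paper under review) of the bias-estimation instance of equation (2): with $f(F_0,F_1)=\hat\theta(F_1)-\theta(F_0)-c$, solving $E[f\vert F_0]=0$ for $c$ gives the bias of $\hat\theta$, and its bootstrap analogue replaces $(F_0,F_1)$ by $(F_1,F_2)$. The function name `bootstrap_bias` and the Monte Carlo replication count are illustrative choices, not the authors' notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_bias(sample, estimator, n_boot=2000, rng=rng):
    """Approximate the bias c by solving the resampled equation (2):
    E[estimator(F2) - estimator(F1) - c | F1] = 0, i.e. c is the mean
    of the bootstrap estimates minus the estimate on the original sample."""
    theta1 = estimator(sample)          # estimate under F1 (original sample)
    n = len(sample)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # Draw a same-size sample with replacement from the sample: this is F2.
        resample = rng.choice(sample, size=n, replace=True)
        boot[b] = estimator(resample)
    return boot.mean() - theta1         # Monte Carlo approximation of c

# Example: the plug-in variance estimator (divisor n) is biased downward,
# and the bootstrap bias estimate reflects this.
x = rng.normal(size=50)
bias = bootstrap_bias(x, lambda s: np.var(s))
theta_corrected = np.var(x) - bias      # bias-corrected estimate
```

The iteration of Chapter 3 would repeat this construction on each resample (drawing an $F_3$ from $F_2$, and so on) to reduce the error of the first-level approximation.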