This paper gives a systematic introduction to a number of iterative methods for symmetric positive definite problems. It presents a unified theory for iterative algorithms such as Jacobi and Gauss-Seidel iterations, diagonal preconditioning, domain decomposition techniques, multigrid methods, multilevel nodal basis preconditioners and hierarchical basis methods. By using the notions of space decomposition and subspace correction, all these algorithms are classified into two groups: parallel subspace correction (PSC) and successive subspace correction (SSC) methods. These two types of methods are similar in nature to the familiar Jacobi and Gauss-Seidel methods, respectively.
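The parallel/successive distinction can be made concrete with a pair of abstract iteration formulas. The notation here is assumed for illustration and not quoted from the text: $V = \sum_{i=1}^{J} V_i$ is a decomposition of the underlying space, $Q_i$ is the orthogonal projection onto $V_i$, $R_i$ is an approximate solver for the subspace problem on $V_i$, and $Au = f$ is the equation to be solved.

```latex
% Parallel subspace correction (Jacobi-like):
% every subspace correction is computed from the same residual.
\[
  u^{k+1} = u^{k} + \sum_{i=1}^{J} R_i Q_i \,(f - A u^{k}).
\]

% Successive subspace correction (Gauss-Seidel-like):
% the residual is updated after each subspace correction.
\[
  v \leftarrow v + R_i Q_i \,(f - A v), \qquad i = 1, 2, \ldots, J.
\]
```

In this sketch the PSC corrections are mutually independent and may be computed in parallel, while the SSC sweep uses the most recent iterate at each step, exactly as Gauss-Seidel uses updated components where Jacobi does not.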
This framework is used to establish a quite general abstract convergence theory that can be applied relatively easily to a particular problem: it is only necessary to specify a decomposition of the underlying space and the corresponding subspace solvers. The paper is organized as follows: §2 gives a brief discussion of self-adjoint operators and the conjugate gradient method. In §3 a general framework for linear iterative methods for symmetric positive definite problems is presented. In §4 an abstract theory of convergence is established for the algorithms in the framework of §3. As preparation for applications of the theory, §5 introduces a model finite element method. The rest of the paper is devoted to multilevel and domain decomposition methods.