This paper considers the following kind of stochastic optimal control problem: $$\align \min J \bigl(v (\cdot) \bigr) & = E \int^T_0 l\bigl(t,x,v(t) \bigr) dt+ Eh(x) \\ \text{subject to}\quad dx(t)& =g\bigl(t,x,v(t) \bigr)dt + \sigma \bigl(t,x,v(t) \bigr)dw(t) \\ x(0) & = x_0 \\ Ef(x)& = 0 \endalign$$ where $v(\cdot)$ is the control variable with values in a convex subset $U$ of $\bbfR^m$, $x(\cdot)$ is the state variable with values in $\bbfR^n$, $w(\cdot)$ is a standard $\bbfR^d$-valued Wiener process, and $g,\sigma,l,h,f$ are given mappings that depend on the history as well as the present state of $x(\cdot)$, i.e. they are defined on the function space $C=C([0,T]; \bbfR^n)$ and are nonanticipative.
A necessary condition for optimality, a maximum principle, is obtained for such control problems; the adjoint equation, which appears to be new in form, is derived, and the existence and uniqueness of its solution are proved. The main idea in treating the adjoint equation is the method developed in an earlier work of the authors on the maximum principle for semilinear stochastic evolution control systems, which rests on the use of a feedback law and the stochastic Fubini theorem.
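For orientation, in the classical Markovian case, without path dependence and without the terminal constraint $Ef(x)=0$, the first-order adjoint equation and the maximum condition for a convex control domain take the following well-known form. This is a sketch of the standard setting (sign conventions vary between authors), not the paper's new path-dependent adjoint equation:

```latex
% Hamiltonian for the Markovian problem (standard form; a sketch, not the paper's construction):
%   H(t,x,v,p,q) = \langle p, g(t,x,v)\rangle
%                + \operatorname{tr}\bigl(\sigma(t,x,v)^{\top} q\bigr) + l(t,x,v)
%
% Adjoint (backward) stochastic differential equation for the pair (p,q):
\[
  dp(t) = -H_x\bigl(t, x(t), u(t), p(t), q(t)\bigr)\,dt + q(t)\,dw(t),
  \qquad p(T) = h_x\bigl(x(T)\bigr),
\]
% and, since U is convex, the maximum principle reduces to the variational inequality
\[
  \bigl\langle H_v\bigl(t, x(t), u(t), p(t), q(t)\bigr),\, v - u(t)\bigr\rangle \ge 0
  \quad \text{for all } v \in U,\ \text{a.e. } t \in [0,T],\ \text{a.s.}
\]
```

In the path-dependent, constrained setting of the paper, the terminal condition and the drift of the adjoint equation must account for the functional dependence of $g,\sigma,l,h,f$ on the whole trajectory, which is what makes the adjoint equation new in form.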