Dynamic programming algorithms for solving stochastic discrete control problems.

*(English)* Zbl 1288.93094

Summary: Stochastic versions of some classical discrete optimal control problems are studied. In deterministic control problems, the vector of control parameters from the corresponding feasible set is assumed to be at our disposal at every moment of time and for an arbitrary state, i.e., each dynamical state of the system is assumed to be controllable. Here we consider control problems in which the discrete system may, during the control process, meet dynamic states where the vector of control parameters changes in a random way according to given probability distribution functions on given feasible dynamic states. We call such states uncontrollable dynamic states. Thus, we consider control problems whose dynamics may contain controllable states as well as uncontrollable ones. These versions of the problems can be formulated on stochastic networks, and new approaches for solving them, based on the concept of Markov processes and dynamic programming, can be suggested. An algorithm is developed and justified.
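A minimal sketch of the kind of backward dynamic programming the summary describes, not the authors' exact algorithm: on a finite-horizon stochastic network, the cost-to-go at a controllable state is minimized over outgoing arcs, while at an uncontrollable state it is the expectation over the given probability distribution on outgoing arcs. All state names, costs, probabilities, and the horizon below are illustrative assumptions.

```python
def backward_dp(horizon, ctrl_arcs, rand_arcs, terminal=0.0):
    """Expected minimal cost-to-go on a stochastic network.

    ctrl_arcs[s] = [(successor, cost), ...]        -- controllable state:
                                                       we choose the arc.
    rand_arcs[s] = [(successor, cost, prob), ...]  -- uncontrollable state:
                                                       the arc is chosen at
                                                       random with prob.
    Returns {state: cost-to-go with `horizon` stages remaining}.
    """
    states = set(ctrl_arcs) | set(rand_arcs)
    V = {s: terminal for s in states}      # stage 0: terminal costs
    for _ in range(horizon):               # backward induction over stages
        V_next = {}
        for s in states:
            if s in ctrl_arcs:             # controllable: minimize over arcs
                V_next[s] = min(c + V[t] for t, c in ctrl_arcs[s])
            else:                          # uncontrollable: take expectation
                V_next[s] = sum(p * (c + V[t]) for t, c, p in rand_arcs[s])
        V = V_next
    return V


# Hypothetical three-state network: 'a' is controllable, 'b' is
# uncontrollable, 'c' is an absorbing state with a zero-cost self-loop.
ctrl = {'a': [('b', 1.0), ('c', 4.0)],
        'c': [('c', 0.0)]}
rand = {'b': [('a', 2.0, 0.5), ('c', 0.0, 0.5)]}
V = backward_dp(horizon=2, ctrl_arcs=ctrl, rand_arcs=rand)
```

For this toy network, two-stage backward induction gives the expected minimal cost 2.0 from state 'a' (take the cheap arc to 'b', then absorb the random stage); the same recursion extends to any horizon.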