Gesteuerte Markovsche Ketten [Controlled Markov chains]. (Czech) Zbl 0187.17902
Kybernetika, Praha 5, Supplement, 74 p. (1969).
References:
[1] R. Bellman: Dynamic programming. Princeton 1957. · Zbl 0995.90618
[2] R. Bellman, S. Dreyfus: Applied dynamic programming. Princeton 1962. · Zbl 0106.34901
[3] W. Feller: An introduction to probability theory and its applications. New York, London 1957. · Zbl 0077.12201
[4] F. R. Gantmakher: The theory of matrices (Russian). Moscow 1966. · Zbl 1155.78304
[5] T. E. Harris: The theory of branching processes. Berlin, Göttingen, Heidelberg 1963. · Zbl 0117.13002
[6] R. A. Howard: Dynamic programming and Markov processes. New York, London 1960. · Zbl 0091.16001
[7] Kai Lai Chung: Markov chains with stationary transition probabilities. Berlin, Göttingen, Heidelberg 1960. · Zbl 0092.34304
[8] J. G. Kemeny, J. L. Snell: Finite Markov chains. Princeton 1960. · Zbl 0089.13704
[9] T. A. Sarymsakov: Fundamentals of the theory of Markov processes (Russian). Moscow 1954. · Zbl 0995.90535
[10] K. J. Åström: Optimal control of Markov processes with incomplete state information. J. of Math. Analysis and Appl. 10 (1965), 174-204. · Zbl 0137.35803
[11] R. Bellman: A Markovian decision process. J. of Math. and Mech. 6 (1957), 679-684. · Zbl 0078.34101
[12] D. Blackwell: Discrete dynamic programming. Ann. Math. Statist. 33 (1962), 719-726. · Zbl 0133.12906
[13] D. Blackwell: Memoryless strategies in finite stage dynamic programming. Ann. Math. Statist. 35 (1964), 863-865. · Zbl 0127.36406
[14] D. Blackwell: Discounted dynamic programming. Ann. Math. Statist. 36 (1965), 226-235. · Zbl 0133.42805
[15] C. Derman: Stable sequential control rules and Markov chains. J. of Math. Analysis and Appl. 6 (1963), 257-265. · Zbl 0129.30703
[16] C. Derman: Markovian sequential decision processes. Proc. of Symp. on Applied Math. XVI. Providence 1964, 281-289. · Zbl 0203.17801
[17] C. Derman: On sequential control processes. Ann. Math. Statist. 35 (1964), 341-349. · Zbl 0142.14402
[18] C. Derman: Markovian sequential control processes with denumerable state space. J. of Math. Analysis and Appl. 10 (1965), 295-301. · Zbl 0132.12706
[19] C. Derman: Denumerable state Markovian decision processes. Average cost criterion. Ann. Math. Statist. 37 (1966), 1545-1553. · Zbl 0144.43102
[20] C. Derman, G. J. Lieberman: A Markovian decision model for a joint replacement and stocking problem. Management Sc. 13 (1967), 609-617. · Zbl 0158.38701
[21] C. Derman, R. E. Strauch: A note on memoryless rules for controlling sequential control processes. Ann. Math. Statist. 37 (1966), 276-278. · Zbl 0138.13604
[22] E. B. Dynkin: Controlled random sequences (Russian). Teor. Veroyatnost. i Primenen. X (1965), 3-18.
[23] J. H. Eaton, L. A. Zadeh: Optimal pursuit strategies in discrete state probabilistic systems. Transactions of ASME Ser. D, J. of Basic Engineering 84 (1961), 23-29.
[24] H. Hatori: On Markov chains with rewards. Kodai math. seminar reports 18 (1966), 184-192. · Zbl 0139.34504
[25] H. Hatori: On continuous-time Markov processes with rewards I. Ibid., 212-218. · Zbl 0147.16403
[26] H. Hatori, T. Mori: On continuous-time Markov processes with rewards II. Ibid., 353-356. · Zbl 0168.16406
[27] R. A. Howard: Research in semi-Markovian decision structures. J. of Operations Res. Soc. of Japan 6 (1964), 163-199.
[28] R. A. Howard: System analysis of semi-Markov processes. IEEE Transactions, Milit. Electron. 8 (1964), 114-124.
[29] R. A. Howard: Dynamic inference. Operations Res. 13 (1965), 712-733. · Zbl 0137.39002
[30] M. Iosifescu, P. Mandl: Application des systèmes à liaisons complètes à un problème de réglage. Rev. roum. Math. pures et appl. XI (1966), 533-539. · Zbl 0154.43001
[31] W. S. Jewell: Markov renewal programming I, II. Operations Res. 11 (1963), 938-971. · Zbl 0126.15905
[32] Z. Koutský: On a problem of optimal decision-making in Markov processes (Czech). Ekon. matem. obzor 1 (1965), 370-382.
[33] N. V. Krylov: Construction of an optimal strategy for a finite controlled chain (Russian). Teor. Veroyatnost. i Primenen. X (1965), 51-60.
[34] A. Maitra: Dynamic programming for countable state systems. Sankhyā A 27 (1965), 241-248. · Zbl 0171.40805
[35] P. Mandl: An iterative method for maximizing the characteristic root of positive matrices. Rev. roum. Math. pures et appl. XII (1967), 1305-1310. · Zbl 0189.03003
[36] I. V. Romanovskii: Existence of an optimal stationary control in a Markov decision process (Russian). Teor. Veroyatnost. i Primenen. X (1965), 130-133.
[37] V. V. Rykov: Controlled Markov processes with finite state and control spaces (Russian). Teor. Veroyatnost. i Primenen. XI (1966), 343-351.
[38] R. E. Strauch: Negative dynamic programming. Ann. Math. Statist. 37 (1966), 871-890. · Zbl 0144.43201
[39] A. N. Shiryaev: On the theory of decision functions and the control of an observation process with incomplete data (Russian). Transactions of the Third Prague Conference on Information Theory etc., Prague 1964, 657-682.
[40] A. N. Shiryaev: Sequential analysis and controlled random processes (discrete time) (Russian). Kibernetika (1965), 1-24.
[41] A. N. Shiryaev: Some new results in the theory of controlled random processes (Russian). Transactions of the Fourth Prague Conference on Information Theory etc., Prague 1967, 131-203.
[42] O. V. Viskov, A. N. Shiryaev: On controls leading to optimal stationary regimes (Russian). Trudy Mat. Inst. im. V. A. Steklova LXXI, Moscow 1964, 35-45.
[43] P. Wolfe, G. B. Dantzig: Linear programming in a Markov chain. Operations Res. 10 (1962), 702-710. · Zbl 0124.36403
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.