On adaptive control of a partially observed Markov chain
Giovanni Di Masi; Łukasz Stettner
Applicationes Mathematicae (1994)
- Volume: 22, Issue: 2, page 165-180
- ISSN: 1233-7234
Abstract
A control problem with long run average cost for a partially observable Markov chain depending on a parameter is studied. Using uniform ergodicity arguments it is shown that, for values of the parameter varying in a compact set, it is possible to consider only a finite number of nearly optimal controls based on the values of actually computable approximate filters. This leads to an algorithm that guarantees nearly self-optimizing properties without identifiability conditions. The algorithm is based on probing control, whose cost is additionally assumed to be periodically observable.
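For orientation, the long run average cost criterion referred to in the abstract can be written, in standard notation (the symbols below are illustrative and are not taken from the paper itself), as

J(u) = \limsup_{n \to \infty} \frac{1}{n} \, E^{u}_{\theta} \left[ \sum_{t=0}^{n-1} c(x_t, u_t) \right],

where (x_t) is the unobserved chain whose transition law depends on the unknown parameter θ, (u_t) is a control adapted to the observation process, and c is a bounded one-step cost. A nearly self-optimizing strategy û is then one satisfying J(û) ≤ inf_u J(u) + ε for a prescribed ε > 0, which is the usual sense of near self-optimality.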
How to cite
topDi Masi, Giovanni, and Stettner, Łukasz. "On adaptive control of a partially observed Markov chain." Applicationes Mathematicae 22.2 (1994): 165-180. <http://eudml.org/doc/219089>.
@article{DiMasi1994,
abstract = {A control problem for a partially observable Markov chain depending on a parameter with long run average cost is studied. Using uniform ergodicity arguments it is shown that, for values of the parameter varying in a compact set, it is possible to consider only a finite number of nearly optimal controls based on the values of actually computable approximate filters. This leads to an algorithm that guarantees nearly self-optimizing properties without identifiability conditions. The algorithm is based on probing control, whose cost is additionally assumed to be periodically observable.},
author = {Di Masi, Giovanni and Stettner, Łukasz},
journal = {Applicationes Mathematicae},
keywords = {uniform ergodicity; long run average cost; filtering process; adaptive control; approximate filter; partially observed systems; partially observable Markov chain depending on a parameter; optimal controls; approximate filters},
language = {eng},
number = {2},
pages = {165-180},
title = {On adaptive control of a partially observed Markov chain},
url = {http://eudml.org/doc/219089},
volume = {22},
year = {1994},
}
References
- [1] A. Arapostathis and S. I. Marcus, Analysis of an identification algorithm arising in the adaptive estimation of Markov chains, Math. Control Signals Systems 3 (1990), 1-29. Zbl0685.93063
- [2] V. V. Baranov, A recursive algorithm in Markovian decision processes, Cybernetics 18 (1982), 499-506. Zbl0517.90089
- [3] D. P. Bertsekas, Dynamic Programming and Stochastic Control, Academic Press, New York, 1976.
- [4] J. L. Doob, Stochastic Processes, Wiley, New York, 1953. Zbl0053.26802
- [5] W. Feller, An Introduction to Probability Theory and Its Applications II, Wiley, New York, 1971. Zbl0219.60003
- [6] E. Fernández-Gaucherand, A. Arapostathis and S. I. Marcus, On the adaptive control of a partially observable Markov decision process, in: Proc. 27th IEEE Conf. on Decision and Control, 1988, 1204-1210.
- [7] E. Fernández-Gaucherand, A. Arapostathis and S. I. Marcus, On the adaptive control of a partially observable binary Markov decision process, in: Advances in Computing and Control, W. A. Porter, S. C. Kak and J. L. Aravena (eds.), Lecture Notes in Control and Inform. Sci. 130, Springer, New York, 1989, 217-228. Zbl0712.93063
- [8] L. G. Gubenko and E. S. Shtatland, On discrete-time Markov decision processes, Theory Probab. Math. Statist. 7 (1975), 47-61.
- [9] O. Hernández-Lerma, Adaptive Markov Control Processes, Springer, New York, 1989.
- [10] O. Hernández-Lerma and S. I. Marcus, Adaptive control of Markov processes with incomplete state information and unknown parameters, J. Optim. Theory Appl. 52 (1987), 227-241. Zbl0585.90090
- [11] O. Hernández-Lerma and S. I. Marcus, Nonparametric adaptive control of discrete-time partially observable stochastic systems, J. Math. Anal. Appl. 137 (1989), 312-334. Zbl0675.93055
- [12] A. H. Jazwinski, Stochastic Processes and Filtering Theory, Academic Press, New York, 1970. Zbl0203.50101
- [13] N. W. Kartashov, Criteria for uniform ergodicity and strong stability of Markov chains in general state space, Theory Probab. Math. Statist. 30 (1985), 71-89. Zbl0586.60058
- [14] P. R. Kumar and P. Varaiya, Stochastic Systems: Estimation, Identification and Adaptive Control, Prentice-Hall, Englewood Cliffs, 1986. Zbl0706.93057
- [15] H. J. Kushner and H. Huang, Approximation and limit results for nonlinear filters with wide bandwidth observation noise, Stochastics 16 (1986), 65-96. Zbl0595.60046
- [16] G. E. Monahan, A survey of partially observable Markov decision processes: theory, models and algorithms, Management Sci. 28 (1982), 1-16. Zbl0486.90084
- [17] W. J. Runggaldier and Ł. Stettner, Nearly optimal controls for stochastic ergodic problems with partial observation, SIAM J. Control Optim. 31 (1993), 180-218. Zbl0770.93092
- [18] Ł. Stettner, On nearly self-optimizing strategies for a discrete-time uniformly ergodic adaptive model, Appl. Math. Optim. 27 (1993), 161-177. Zbl0769.93084