# Ergodic control of partially observed Markov processes with equivalent transition probabilities

Applicationes Mathematicae (1993)

- Volume: 22, Issue: 1, Pages: 25-38
- ISSN: 1233-7234

## Abstract

Optimal control with long run average cost functional of a partially observed Markov process is considered. Under the assumption that the transition probabilities are equivalent, the existence of the solution to the Bellman equation is shown, with the use of which optimal strategies are constructed.
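For orientation, the long-run-average Bellman equation for the filtering (belief) process typically takes the following form; this is a generic sketch, and the notation below is not taken from the paper itself:

```latex
% Average-cost Bellman equation for the belief (filter) process.
% \lambda : optimal long run average cost (a constant)
% w       : relative value function on the space of beliefs \mathcal{P}(E)
% c       : one-step cost, M^{a} : transition kernel of the filter under control a
% (generic notation, assumed for illustration)
\lambda + w(\pi) = \inf_{a \in A}\Big[\, c(\pi, a)
  + \int_{\mathcal{P}(E)} w(\pi')\, M^{a}(\pi, d\pi') \Big]
```

A measurable selector attaining the infimum then defines a stationary strategy, which is the general mechanism behind the abstract's statement that optimal strategies are constructed from a solution of the Bellman equation.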

## How to cite

Stettner, Łukasz. "Ergodic control of partially observed Markov processes with equivalent transition probabilities." Applicationes Mathematicae 22.1 (1993): 25-38. <http://eudml.org/doc/219080>.
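As a numerical illustration of how a solution (λ, w) of an average-cost Bellman equation yields a stationary strategy, the sketch below runs relative value iteration on a small finite model (e.g. a discretization of the belief space). All transition matrices and costs are illustrative random data; relative value iteration is a standard method and not necessarily the paper's construction.

```python
import numpy as np

# Relative value iteration for an average-cost finite MDP.
# P[a] is an n x n stochastic matrix, c[a] an n-vector of one-step
# costs -- purely illustrative data, not from the paper.
rng = np.random.default_rng(0)
n, m = 5, 2                     # number of states and actions
P = [rng.dirichlet(np.ones(n), size=n) for _ in range(m)]
c = [rng.uniform(0.0, 1.0, size=n) for _ in range(m)]

w = np.zeros(n)                 # relative value function, pinned at state 0
lam = 0.0                       # running estimate of the optimal average cost
for _ in range(5000):
    Tw = np.min([c[a] + P[a] @ w for a in range(m)], axis=0)
    lam = Tw[0]                 # with w(0) = 0, Tw(0) converges to lambda
    w = Tw - lam                # renormalize so that w(0) stays 0

# The minimizing action at each state gives a stationary strategy.
policy = np.argmin([c[a] + P[a] @ w for a in range(m)], axis=0)
print(lam, policy)
```

Because all transition probabilities here are strictly positive (a discrete analogue of the equivalence assumption in the title), the chain is ergodic under every stationary strategy and the iteration converges to a pair (λ, w) satisfying λ + w = min_a [c_a + P_a w].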

```bibtex
@article{Stettner1993,
  abstract = {Optimal control with long run average cost functional of a partially observed Markov process is considered. Under the assumption that the transition probabilities are equivalent, the existence of the solution to the Bellman equation is shown, with the use of which optimal strategies are constructed.},
  author   = {Stettner, Łukasz},
  journal  = {Applicationes Mathematicae},
  keywords = {partial observation; long run average cost; stochastic control; Bellman equation; optimal control; long run average cost functional; partially observed Markov process},
  language = {eng},
  number   = {1},
  pages    = {25-38},
  title    = {Ergodic control of partially observed Markov processes with equivalent transition probabilities},
  url      = {http://eudml.org/doc/219080},
  volume   = {22},
  year     = {1993},
}
```

```
TY  - JOUR
AU  - Stettner, Łukasz
TI  - Ergodic control of partially observed Markov processes with equivalent transition probabilities
JO  - Applicationes Mathematicae
PY  - 1993
VL  - 22
IS  - 1
SP  - 25
EP  - 38
AB  - Optimal control with long run average cost functional of a partially observed Markov process is considered. Under the assumption that the transition probabilities are equivalent, the existence of the solution to the Bellman equation is shown, with the use of which optimal strategies are constructed.
LA  - eng
KW  - partial observation; long run average cost; stochastic control; Bellman equation; optimal control; long run average cost functional; partially observed Markov process
UR  - http://eudml.org/doc/219080
ER  -
```

## References

- [1] G. B. Di Masi and Ł. Stettner, On adaptive control of a partially observed Markov chain, Applicationes Math., to appear. Zbl0808.93070
- [2] E. Fernández-Gaucherand, A. Arapostathis and S. I. Marcus, Adaptive control of a partially observed controlled Markov chain, in: Stochastic Theory and Adaptive Control, T. E. Duncan and B. Pasik-Duncan (eds.), Lecture Notes in Control and Inform. Sci. 184, Springer, 1992, 161-171. Zbl0813.93073
- [3] E. Fernández-Gaucherand, A. Arapostathis and S. I. Marcus, On partially observable Markov decision processes with an average cost criterion, Proc. 28th CDC, Tampa, Florida, 1989, 1267-1272.
- [4] O. Hernández-Lerma, Adaptive Markov Control Processes, Springer, New York, 1989.
- [5] H. Korezlioglu and G. Mazziotto, Estimation récursive en transmission numérique [Recursive estimation in digital transmission], Proc. Neuvième Colloque sur le Traitement du Signal et ses Applications, Nice, 1983.
- [6] M. Kurano, On the existence of an optimal stationary J-policy in non-discounted Markovian decision processes with incomplete state information, Bull. Math. Statist. 17 (1977), 75-81. Zbl0374.90079
- [7] H. L. Royden, Real Analysis, Macmillan, New York, 1968.
- [8] W. J. Runggaldier and Ł. Stettner, Nearly optimal controls for stochastic ergodic problems with partial observation, SIAM J. Control Optim. 31 (1993), 180-218. Zbl0770.93092
- [9] K. Wakuta, Semi-Markov decision processes with incomplete state observation - average cost criterion, J. Oper. Res. Soc. Japan 24 (1981), 95-108. Zbl0488.90073
