On undiscounted Markovian decision processes with compact action spaces

Paul J. Schweitzer

RAIRO - Operations Research - Recherche Opérationnelle (1985)

  • Volume: 19, Issue: 1, page 71-86
  • ISSN: 0399-0559

How to cite

Schweitzer, Paul J. "On undiscounted Markovian decision processes with compact action spaces." RAIRO - Operations Research - Recherche Opérationnelle 19.1 (1985): 71-86. <http://eudml.org/doc/104871>.

@article{Schweitzer1985,
author = {Schweitzer, Paul J.},
journal = {RAIRO - Operations Research - Recherche Opérationnelle},
keywords = {average optimality; multichain undiscounted, stationary semi-Markov decision model; finite state space; compact action sets; continuous reward and transition functions; optimal stationary deterministic policy; necessary and sufficient pair of conditions; maximal-gain policy; higher order optimality criteria},
language = {eng},
number = {1},
pages = {71-86},
publisher = {EDP-Sciences},
title = {On undiscounted Markovian decision processes with compact action spaces},
url = {http://eudml.org/doc/104871},
volume = {19},
year = {1985},
}

TY - JOUR
AU - Schweitzer, Paul J.
TI - On undiscounted Markovian decision processes with compact action spaces
JO - RAIRO - Operations Research - Recherche Opérationnelle
PY - 1985
PB - EDP-Sciences
VL - 19
IS - 1
SP - 71
EP - 86
LA - eng
KW - average optimality; multichain undiscounted, stationary semi-Markov decision model; finite state space; compact action sets; continuous reward and transition functions; optimal stationary deterministic policy; necessary and sufficient pair of conditions; maximal-gain policy; higher order optimality criteria
UR - http://eudml.org/doc/104871
ER -
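
The keyword fields above ("necessary and sufficient pair of conditions", "maximal-gain policy") refer to the coupled gain/bias optimality equations whose solvability characterizes maximal-gain stationary policies in the multichain undiscounted case; see references 12 and 15 below. As orientation only, a minimal LaTeX sketch of this pair of equations in commonly used notation, which is assumed here rather than quoted from the paper itself:

% Gain/bias (average-reward) optimality equations for a finite-state,
% compact-action, multichain Markov renewal program; a sketch in standard
% notation: g_i = gain, v_i = bias (relative value), r_i(a) = expected
% one-step reward, tau_i(a) = expected holding time, p_{ij}(a) = transition
% probabilities, A(i) = compact action set in state i.
\begin{align}
  g_i &= \max_{a \in A(i)} \sum_{j} p_{ij}(a)\, g_j,
      && i = 1, \dots, N, \\
  v_i &= \max_{a \in B(i)} \Bigl[\, r_i(a) - g_i\, \tau_i(a)
         + \sum_{j} p_{ij}(a)\, v_j \Bigr],
      && i = 1, \dots, N,
\end{align}
% where B(i) \subseteq A(i) denotes the actions attaining the maximum in
% the first equation. With tau_i(a) = 1 this reduces to the familiar
% undiscounted MDP form g_i + v_i = max_a [ r_i(a) + sum_j p_{ij}(a) v_j ].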

References

  1. E. V. DENARDO and B. FOX, Multichain Markov Renewal Programs, S.I.A.M. J. Appl. Math., Vol. 16, 1968, pp. 468-487. Zbl0201.19303 MR234721
  2. E. V. DENARDO, Markov Renewal Programs with Small Interest Rates, Ann. Math. Statist., Vol. 42, 1971, pp. 477-496. Zbl0234.60106 MR290784
  3. J. DOOB, Stochastic Processes, Wiley, New York, 1953. Zbl0696.60003 MR58896
  4. N. FURUKAWA, Markovian Decision Processes with Compact Action Spaces, Ann. Math. Statist., Vol. 43, 1972, pp. 1612-1622. Zbl0277.90083 MR371418
  5. A. HORDIJK, A Sufficient Condition for the Existence of an Optimal Policy with Respect to the Average Cost Criterion in Markovian Decision Processes, Transactions Sixth Prague Conf. on Information Theory, Statistical Decision Functions, Random Processes, Academia, Prague, 1971, pp. 263-274. Zbl0291.90072 MR368792
  6. A. HORDIJK, Dynamic Programming and Markov Potential Theory, Mathematical Centre Tract, Vol. 51, Amsterdam, 1974. Zbl0284.49012 MR432227
  7. R. A. HOWARD, Dynamic Programming and Markov Processes, Wiley, New York, 1960. Zbl0091.16001 MR118514
  8. A. MAITRA, Discounted Dynamic Programming on Compact Metric Spaces, Sankhya, Ser. A, Vol. 30, 1968, pp. 211-216. Zbl0187.17702 MR237172
  9. B. L. MILLER and A. F. VEINOTT Jr., Discrete Dynamic Programming with a Small Interest Rate, Ann. Math. Statist., Vol. 40, 1969, pp. 366-370. Zbl0175.47302 MR238561
  10. M. SCHAL, Stationary Policies in Dynamic Programming Models Under Compactness Assumptions, Math. of O.R., Vol. 8, 1983, pp. 366-372. Zbl0533.90093 MR716118
  11. P. SCHWEITZER, Perturbation Theory and Finite Markov Chains, J. Appl. Prob., Vol. 5, 1968, pp. 401-413. Zbl0196.19803 MR234527
  12. P. J. SCHWEITZER, On the Solvability of Bellman's Functional Equations for Markov Renewal Programming, J. Math. Anal. Appl., Vol. 96, 1983, pp. 13-23. Zbl0526.90094 MR717490
  13. P. J. SCHWEITZER, On the Existence of Relative Values for Undiscounted Markovian Decision Processes with a Scalar Gain Rate, University of Rochester, Graduate School of Management, Working Paper Series No. QM8225, December 1982. To appear in J. Math. Anal. Appl. Zbl0598.90091 MR765040
  14. P. J. SCHWEITZER, Solving MDP Functional Equations by Lexicographic Optimization, Revue Française d'Automatique et de Recherche Opérationnelle, Vol. 16, 1982, pp. 91-98. Zbl0485.90085 MR679631
  15. P. J. SCHWEITZER and A. FEDERGRUEN, The Functional Equations of Undiscounted Markov Renewal Programming, Math. of Operations Research, Vol. 3, 1978, pp. 308-322. Zbl0388.90083 MR509667
  16. S. S. SHEU and K.-J. FARN, A Sufficient Condition for the Existence of a Stationary 1-Optimal Plan in Compact Action Markovian Decision Processes, in Recent Developments in Markov Decision Processes, R. HARTLEY, L. C. THOMAS and D. J. WHITE Eds., Academic Press, London, 1980, pp. 111-126.
  17. A. F. VEINOTT Jr., On Finding Optimal Policies in Discrete Dynamic Programming with no Discounting, Ann. Math. Statist., Vol. 37, 1966, pp. 1284-1294. Zbl0149.16301 MR208992
  18. A. F. VEINOTT Jr., Discrete Dynamic Programming with Sensitive Discount Optimality Criteria, Ann. Math. Statist., Vol. 40, 1969, pp. 1635-1660. Zbl0183.49102 MR256712
  19. A. HORDIJK and M. L. PUTERMAN, On the Convergence of Policy Iteration in Finite State Average Reward Markov Decision Processes; the Unichain Case, Working Paper No. 948, Faculty of Commerce and Business Administration, University of British Columbia, Vancouver, British Columbia, May 1983.
