Optimization of Discrete-Time, Stochastic Systems
Serdica Mathematical Journal (1995)
- Volume: 21, Issue: 4, pages 267-282
- ISSN: 1310-6600
How to cite
topPapageorgiou, Nikolaos. "Optimization of Discrete-Time, Stochastic Systems." Serdica Mathematical Journal 21.4 (1995): 267-282. <http://eudml.org/doc/11671>.
@article{Papageorgiou1995,
abstract = {* This research was supported by a grant from the Greek Ministry of Industry and Technology.
In this paper we study discrete-time, finite-horizon stochastic systems
with multivalued dynamics and obtain a necessary and sufficient condition for
optimality using the dynamic programming method. Then we examine a nonlinear
stochastic discrete-time system with feedback control constraints, derive for it
a necessary and sufficient condition for optimality, and use it to
establish the existence of an optimal policy.},
author = {Papageorgiou, Nikolaos},
journal = {Serdica Mathematical Journal},
keywords = {Bellman Function; Dynamic Programming; Conditional Expectation; Measurable Selection; Induction; discrete-time; multivalued dynamics; performance; stochastic optimal control},
language = {eng},
number = {4},
pages = {267-282},
publisher = {Institute of Mathematics and Informatics, Bulgarian Academy of Sciences},
title = {Optimization of Discrete-Time, Stochastic Systems},
url = {http://eudml.org/doc/11671},
volume = {21},
year = {1995},
}
TY - JOUR
AU - Papageorgiou, Nikolaos
TI - Optimization of Discrete-Time, Stochastic Systems
JO - Serdica Mathematical Journal
PY - 1995
PB - Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
VL - 21
IS - 4
SP - 267
EP - 282
AB - * This research was supported by a grant from the Greek Ministry of Industry and Technology.
In this paper we study discrete-time, finite-horizon stochastic systems
with multivalued dynamics and obtain a necessary and sufficient condition for
optimality using the dynamic programming method. Then we examine a nonlinear
stochastic discrete-time system with feedback control constraints, derive for it
a necessary and sufficient condition for optimality, and use it to
establish the existence of an optimal policy.
LA - eng
KW - Bellman Function; Dynamic Programming; Conditional Expectation; Measurable Selection; Induction; discrete-time; multivalued dynamics; performance; stochastic optimal control
UR - http://eudml.org/doc/11671
ER -
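
The abstract refers to the dynamic programming (Bellman) method for finite-horizon stochastic optimal control. As a rough illustration of the backward-induction scheme behind that method, here is a minimal sketch in a finite state/control setting; it is not the paper's model (the paper treats multivalued dynamics on general probability spaces), and all names below (N, S, A, P, cost, terminal) are hypothetical.

import numpy as np

# Minimal finite-horizon dynamic programming sketch (backward induction).
# Hypothetical finite model: S states, A controls, horizon N.
rng = np.random.default_rng(0)

N, S, A = 5, 4, 3

# P[a, s, t] = probability of moving from state s to state t under control a
P = rng.dirichlet(np.ones(S), size=(A, S))
cost = rng.uniform(0.0, 1.0, size=(N, S, A))   # stage costs g_k(x, u)
terminal = rng.uniform(0.0, 1.0, size=S)       # terminal cost g_N(x)

V = terminal.copy()                  # V_N, the terminal Bellman function
policy = np.zeros((N, S), dtype=int)

for k in reversed(range(N)):         # backward induction: k = N-1, ..., 0
    # Q[s, a] = g_k(s, a) + E[ V_{k+1}(x_{k+1}) | x_k = s, u_k = a ]
    Q = cost[k] + np.einsum("ast,t->sa", P, V)
    policy[k] = Q.argmin(axis=1)     # optimal feedback control at stage k
    V = Q.min(axis=1)                # Bellman value function V_k

print("optimal expected cost from each initial state:", V)
print("optimal first-stage feedback policy:", policy[0])

The recursion above is the standard sufficiency argument in dynamic programming: a policy is optimal exactly when it attains the minimum in the Bellman equation at every stage, which is the kind of necessary and sufficient condition the paper establishes in its more general measurable-selection setting.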