Robust Control of Linear Stochastic Systems with Fully Observable State
Applicationes Mathematicae (1996)
- Volume: 24, Issue: 1, page 35-46
- ISSN: 1233-7234
How to cite
Poznyak, Alexander, and Taksar, M. "Robust Control of Linear Stochastic Systems with Fully Observable State." Applicationes Mathematicae 24.1 (1996): 35-46. <http://eudml.org/doc/219150>.
@article{Poznyak1996,
abstract = {We consider a multidimensional linear system with additive inputs (control) and Brownian noise. There is a cost associated with each control. The aim is to minimize the cost. However, we work with the model in which the parameters of the system may change in time and in addition the exact form of these parameters is not known, only intervals within which they vary are given. In the situation where minimization of a functional over the class of admissible controls makes no sense since the value of such a functional is different for different systems within the class, we should deal not with a single problem but with a family of problems. The objective in such a setting is twofold. First, we intend to establish existence of a state feedback linear robust control which stabilizes any system within the class. Then among all robust controls we find the one which yields the lowest bound on the cost within the class of all systems under consideration. We give the answer in terms of a solution to a matrix Riccati equation and we present necessary and sufficient conditions for such a solution to exist. We also state a criterion when the obtained bound on the cost is sharp, that is, the control we construct is actually a solution to the minimax problem.},
author = {Poznyak, Alexander and Taksar, M.},
journal = {Applicationes Mathematicae},
keywords = {robust control; Riccati equation; stochastic differential equations; stochastic control; optimal stochastic control; linear system; matrix Riccati equation},
language = {eng},
number = {1},
pages = {35-46},
title = {Robust Control of Linear Stochastic Systems with Fully Observable State},
url = {http://eudml.org/doc/219150},
volume = {24},
year = {1996},
}
TY - JOUR
AU - Poznyak, Alexander
AU - Taksar, M.
TI - Robust Control of Linear Stochastic Systems with Fully Observable State
JO - Applicationes Mathematicae
PY - 1996
VL - 24
IS - 1
SP - 35
EP - 46
AB - We consider a multidimensional linear system with additive inputs (control) and Brownian noise. There is a cost associated with each control. The aim is to minimize the cost. However, we work with the model in which the parameters of the system may change in time and in addition the exact form of these parameters is not known, only intervals within which they vary are given. In the situation where minimization of a functional over the class of admissible controls makes no sense since the value of such a functional is different for different systems within the class, we should deal not with a single problem but with a family of problems. The objective in such a setting is twofold. First, we intend to establish existence of a state feedback linear robust control which stabilizes any system within the class. Then among all robust controls we find the one which yields the lowest bound on the cost within the class of all systems under consideration. We give the answer in terms of a solution to a matrix Riccati equation and we present necessary and sufficient conditions for such a solution to exist. We also state a criterion when the obtained bound on the cost is sharp, that is, the control we construct is actually a solution to the minimax problem.
LA - eng
KW - robust control; Riccati equation; stochastic differential equations; stochastic control; optimal stochastic control; linear system; matrix Riccati equation
UR - http://eudml.org/doc/219150
ER -
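The abstract's construction answers the robust problem through a matrix Riccati equation. As a minimal illustration of the nominal (non-robust) version of that computation, the sketch below solves the continuous algebraic Riccati equation A'P + PA - PBR⁻¹B'P + Q = 0 for a small hypothetical linear system and forms the stabilizing state-feedback gain. The system matrices are illustrative values, not taken from the paper, and the paper's minimax control over interval-bounded parameters uses a modified Riccati equation rather than this standard one.

```python
# Sketch: nominal LQ state feedback via the continuous algebraic Riccati
# equation (CARE), solved with SciPy's standard solver. Hypothetical data.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-1.0, 0.5]])   # open-loop unstable drift (eigenvalues have Re = 0.25)
B = np.array([[0.0],
              [1.0]])          # control input matrix
Q = np.eye(2)                  # state-cost weight
R = np.array([[1.0]])          # control-cost weight

P = solve_continuous_are(A, B, Q, R)   # stabilizing CARE solution
K = np.linalg.solve(R, B.T @ P)        # feedback gain for u = -K x

closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop).real)  # all negative: closed loop is stable
```

The gain K stabilizes the nominal system; the paper's contribution is finding one such linear feedback that stabilizes every system in an interval class and attains the least cost bound over that class.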
References
- [1] L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, New York, 1974. Zbl0278.60039
- [2] W. H. Fleming and R. Rishel, Deterministic and Stochastic Optimal Control, Springer, New York, 1975. Zbl0323.49001
- [3] B. A. Francis, A Course in Control Theory, Lecture Notes in Control and Inform. Sci., Springer, New York, 1987.
- [4] K. Glover and D. Mustafa, Derivation of the maximum entropy H∞-controller and a state-space formula for its entropy, Internat. J. Control 50 (1989), 899-916. Zbl0681.93028
- [5] N. V. Krylov, Controlled Diffusion Processes, Springer, New York, 1980. Zbl0459.93002
- [6] A. P. Kurdyukov and A. S. Poznyak, Sensitivity of H∞-functionals to internal perturbations in controllable linear systems, Avtomat. i Telemekh. 1993 (4), 128-136 (in Russian).
- [7] A. Shiryayev, Probability, Springer, New York, 1984.
- [8] I. R. Petersen and C. V. Hollot, A Riccati equation approach to the stabilization of uncertain linear systems, Automatica 22 (1986), 397-411. Zbl0602.93055
- [9] H. Robbins and D. Siegmund, A convergence theorem for nonnegative almost supermartingales and some applications, in: Optimizing Methods in Statistics, J. S. Rustagi (ed.), Academic Press, New York, 1971. Zbl0286.60025
- [10] J. C. Willems, Least squares stationary optimal control and the algebraic Riccati equation, IEEE Trans. Automat. Control AC-16 (1971), 621-634.
- [11] G. Zames, Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms and approximate inverses, ibid. AC-26 (1981), 301-320. Zbl0474.93025
- [12] K. Zhou and P. P. Khargonekar, Robust stabilization of linear systems with norm bounded time varying uncertainty, Systems Control Lett. 10 (1988), 17-20.