Displaying similar documents to “Growth rates and average optimality in risk-sensitive Markov decision chains”

Monotonicity and comparison results for nonnegative dynamic systems. Part I: Discrete-time case

Nico M. van Dijk, Karel Sladký (2006)

Kybernetika

Similarity:

In two subsequent parts, Part I and Part II, monotonicity and comparison results will be studied, as a generalization of the pure stochastic case, for arbitrary dynamic systems governed by nonnegative matrices. Part I covers the discrete-time case and Part II the continuous-time case. The research was initially motivated by a reliability application contained in Part II. In the present Part I it is shown that monotonicity and comparison results, as known for Markov chains, do carry over rather...
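A minimal numerical sketch of the type of comparison result described above, assuming discrete-time dynamics x_{t+1} = A x_t with entrywise nonnegative matrices; the matrices and vectors below are made up for illustration and are not taken from the paper:

    import numpy as np

    # Nonnegative matrices with A <= B entrywise and ordered nonnegative
    # initial vectors x0 <= y0.  Nonnegativity preserves the ordering, so
    # A^t x0 <= B^t y0 for every t -- the discrete-time comparison property.
    A = np.array([[0.5, 0.2],
                  [0.1, 0.6]])
    B = np.array([[0.6, 0.3],
                  [0.2, 0.7]])
    x = np.array([1.0, 0.5])
    y = np.array([1.2, 0.8])

    for t in range(10):
        x = A @ x
        y = B @ y
        assert np.all(x <= y + 1e-12), f"ordering violated at step {t}"

    print("iterates remained ordered:", x, "<=", y)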

Monotonicity of minimizers in optimization problems with applications to Markov control processes

Rosa M. Flores–Hernández, Raúl Montes-de-Oca (2007)

Kybernetika

Similarity:

Firstly, in this paper a certain class of possibly unbounded optimization problems on Euclidean spaces is considered, for which conditions that permit obtaining monotone minimizers are given. Secondly, the theory developed in the first part of the paper is applied to Markov control processes (MCPs) on real spaces with a possibly unbounded cost function and with possibly noncompact control sets, considering both the discounted and the average cost as the optimality criterion. In the...
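As a toy illustration of a monotone minimizer (the cost function below is hypothetical and only stands in for the structured costs treated in the paper), one can check numerically that the pointwise minimizer a*(x) = argmin_a c(x, a) is nondecreasing in the state x:

    import numpy as np

    # Hypothetical cost; its exact form is chosen only so that the
    # minimizer can be computed on a grid and inspected for monotonicity.
    def c(x, a):
        return (a - x) ** 2 + 0.1 * a

    states = np.linspace(0.0, 5.0, 11)
    actions = np.linspace(0.0, 5.0, 501)

    # Pointwise minimizer a*(x) over the action grid.
    minimizers = np.array([actions[np.argmin(c(x, actions))] for x in states])

    # Monotonicity of the minimizer in the state variable.
    assert np.all(np.diff(minimizers) >= -1e-12)
    print(dict(zip(states.round(2), minimizers.round(3))))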

Identification of optimal policies in Markov decision processes

Karel Sladký (2010)

Kybernetika

Similarity:

In this note we focus attention on identifying optimal policies and on eliminating suboptimal policies minimizing optimality criteria in discrete-time Markov decision processes with finite state space and compact action set. We present a unified approach to value iteration algorithms that enables generating lower and upper bounds on optimal values, as well as on the current policy. Using the modified value iterations it is possible to eliminate suboptimal actions and to identify an optimal...
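A small sketch of discounted value iteration with MacQueen-style lower and upper bounds and action elimination, written from general knowledge of such schemes rather than from the note itself; the 3-state, 2-action MDP data are made up:

    import numpy as np

    beta = 0.9                               # discount factor
    r = np.array([[1.0, 0.5],                # r[s, a], hypothetical rewards
                  [0.0, 2.0],
                  [0.5, 0.8]])
    P = np.array([                           # P[a, s, j] = p(j | s, a)
        [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]],
        [[0.2, 0.5, 0.3], [0.4, 0.4, 0.2], [0.1, 0.1, 0.8]],
    ])

    v = np.zeros(3)
    active = np.ones((3, 2), dtype=bool)     # actions not yet eliminated

    for n in range(500):
        Q = r + beta * np.einsum('asj,j->sa', P, v)
        v_new = np.where(active, Q, -np.inf).max(axis=1)
        delta = v_new - v
        lower = v_new + beta / (1 - beta) * delta.min()   # lower bound on v*
        upper = v_new + beta / (1 - beta) * delta.max()   # upper bound on v*

        # Eliminate an action once even its optimistic value (computed with
        # the upper bound) falls below the guaranteed lower bound on v*.
        Q_opt = r + beta * np.einsum('asj,j->sa', P, upper)
        active &= Q_opt >= lower[:, None]

        v = v_new
        if (upper - lower).max() < 1e-8:
            break

    print("value bounds:", lower, upper)
    print("actions still candidates for optimality:",
          [np.flatnonzero(a).tolist() for a in active])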

Risk-sensitive average optimality in Markov decision processes

Karel Sladký (2018)

Kybernetika

Similarity:

In this note attention is focused on finding policies optimizing risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio of the first two moments depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient equals zero we speak of risk-neutral models. Observe that the first...
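The exponential utility evaluation referred to above is usually of the form U_gamma(xi) = (sign gamma) exp(gamma * xi), with certainty equivalent (1/gamma) log E[exp(gamma * xi)]; the Monte Carlo sketch below uses a made-up reward distribution to show how the criterion weights the second moment through gamma and reduces to the risk-neutral expectation as gamma tends to 0:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical distribution of the total reward xi (illustration only).
    xi = rng.normal(loc=10.0, scale=2.0, size=200_000)

    def certainty_equivalent(xi, gamma):
        """(1/gamma) * log E[exp(gamma * xi)] -- risk-sensitive evaluation."""
        if gamma == 0.0:
            return xi.mean()                  # risk-neutral limit
        return np.log(np.mean(np.exp(gamma * xi))) / gamma

    for gamma in (-0.2, -0.05, 0.0, 0.05, 0.2):
        ce = certainty_equivalent(xi, gamma)
        # Second-order expansion CE ~ E[xi] + (gamma / 2) * Var(xi): the weight
        # of the second moment depends on the risk-sensitivity coefficient.
        approx = xi.mean() + 0.5 * gamma * xi.var()
        print(f"gamma={gamma:+.2f}  CE={ce:8.4f}  approx={approx:8.4f}")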