Displaying similar documents to “Markov decision processes with time-varying discount factors and random horizon”

Partially observable Markov decision processes with partially observable random discount factors

E. Everardo Martinez-Garcia, J. Adolfo Minjárez-Sosa, Oscar Vega-Amaya (2022)

Kybernetika

Similarity:

This paper deals with a class of partially observable discounted Markov decision processes defined on Borel state and action spaces, under unbounded one-stage cost. The discount rate is a stochastic process evolving according to a difference equation, which is also assumed to be partially observable. Introducing a suitable control model and filtering processes, we prove the existence of optimal control policies. In addition, we illustrate our results in a class of GI/GI/1 queueing systems...
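The filtering idea mentioned in this abstract can be illustrated on a toy discrete model. The sketch below is a hypothetical example, not from the paper (which works on Borel spaces): the unobserved discount factor takes one of three values, evolves as a Markov chain with kernel `Q`, and is observed through a noisy channel `L`; a belief over its value is updated by one prediction step and one Bayes step per observation.

```python
import numpy as np

# Hypothetical discrete filter for a partially observed discount factor.
alphas = np.array([0.90, 0.95, 0.99])   # possible discount values (assumed)
Q = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])      # transition kernel of the discount process
L = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])         # L[i, y] = P(observation y | discount value i)

def filter_step(belief, y):
    """One prediction + Bayes-update step of the discrete filter."""
    predicted = Q.T @ belief            # push the belief through the dynamics
    posterior = L[:, y] * predicted     # reweight by the observation likelihood
    return posterior / posterior.sum()  # renormalize to a probability vector

belief = np.array([1/3, 1/3, 1/3])      # uniform prior
for y in [0, 0, 1, 2]:                  # a short observation sequence
    belief = filter_step(belief, y)
print(belief, belief @ alphas)          # filtered belief and estimated discount
```

The filtered estimate `belief @ alphas` is what a control model built on the belief state would feed into the discounted dynamic program.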

Risk-sensitive average optimality in Markov decision processes

Karel Sladký (2018)

Kybernetika

Similarity:

In this note attention is focused on finding policies optimizing risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio of the first two moments depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient is equal to zero we speak of risk-neutral models. Observe that the first...
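The exponential-utility evaluation described above can be sketched numerically. The example below uses a hypothetical two-state Markov reward chain (not from the paper) and computes the certainty equivalent (1/γ) log E[exp(γR)] of the total reward R over a finite horizon by Monte Carlo; γ = 0 recovers the risk-neutral expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state Markov reward chain (illustrative only).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # transition matrix
r = np.array([1.0, 5.0])           # reward earned in each state

def total_rewards(n_steps, n_paths):
    """Simulate the total reward over a finite horizon for many sample paths."""
    totals = np.zeros(n_paths)
    state = np.zeros(n_paths, dtype=int)
    for _ in range(n_steps):
        totals += r[state]
        u = rng.random(n_paths)
        state = np.where(u < P[state, 0], 0, 1)
    return totals

def certainty_equivalent(totals, gamma):
    """Exponential-utility certainty equivalent (1/gamma) * log E[exp(gamma * R)]."""
    if gamma == 0.0:               # risk-neutral limit
        return totals.mean()
    return np.log(np.mean(np.exp(gamma * totals))) / gamma

R = total_rewards(n_steps=20, n_paths=100_000)
for gamma in (-0.1, 0.0, 0.1):
    print(gamma, certainty_equivalent(R, gamma))
```

By Jensen's inequality the certainty equivalent is below the mean for a risk-averse coefficient (γ < 0) and above it for a risk-seeking one (γ > 0), which is the moment trade-off the note alludes to.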

Uniqueness of optimal policies as a generic property of discounted Markov decision processes: Ekeland's variational principle approach

R. Israel Ortega-Gutiérrez, Raúl Montes-de-Oca, Enrique Lemus-Rodríguez (2016)

Kybernetika

Similarity:

Many examples in optimization, ranging from Linear Programming to Markov Decision Processes (MDPs), present more than one optimal solution. The study of this non-uniqueness is of great mathematical interest. In this paper the authors show that in a specific family of discounted MDPs non-uniqueness is a “fragile” property: via Ekeland's variational principle, for each problem with at least two optimal policies a perturbed model is produced that has a unique optimal policy. This result not only supersedes...

Constrained optimality problem of Markov decision processes with Borel spaces and varying discount factors

Xiao Wu, Yanqiu Tang (2021)

Kybernetika

Similarity:

This paper focuses on the constrained optimality of discrete-time Markov decision processes (DTMDPs) with state-dependent discount factors, Borel state and compact Borel action spaces, and possibly unbounded costs. By means of the properties of so-called occupation measures of policies, and by transforming the original constrained optimality problem for DTMDPs into a convex programming one, we prove the existence of an optimal randomized stationary policy under reasonable conditions. ...
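The state-dependent discounting in this abstract is easy to see in the unconstrained dynamic program. The sketch below is a hypothetical finite model (the paper's constrained, Borel-space setting is handled via occupation measures and convex programming instead): value iteration where each state carries its own discount factor α(s).

```python
import numpy as np

# Hypothetical finite DTMDP: 3 states, 2 actions (illustrative only).
n_s, n_a = 3, 2
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a] is a distribution over next states
c = rng.uniform(0.0, 1.0, size=(n_s, n_a))        # one-stage costs
alpha = np.array([0.90, 0.95, 0.85])              # state-dependent discount factors

def value_iteration(tol=1e-10, max_iter=10_000):
    """Value iteration with a state-dependent discount factor alpha(s)."""
    V = np.zeros(n_s)
    for _ in range(max_iter):
        # Q[s, a] = c(s, a) + alpha(s) * sum_{s'} P(s'|s, a) V(s')
        Q = c + alpha[:, None] * (P @ V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new
    return V, Q.argmin(axis=1)

V, policy = value_iteration()
print(V, policy)
```

Since max α(s) < 1 the Bellman operator is still a contraction, so the iteration converges to the unique fixed point; the constrained problem the paper studies additionally restricts the feasible occupation measures.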

Another set of verifiable conditions for average Markov decision processes with Borel spaces

Xiaolong Zou, Xianping Guo (2015)

Kybernetika

Similarity:

In this paper we give a new set of verifiable conditions for the existence of average optimal stationary policies in discrete-time Markov decision processes with Borel spaces and unbounded reward/cost functions. More precisely, we provide another set of conditions, consisting only of a Lyapunov-type condition and the common continuity-compactness conditions. These conditions are imposed on the primitive data of the Markov decision process model and are thus easy to verify. We also...
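A Lyapunov-type condition of the kind mentioned above is mechanical to check on a finite model. The sketch below uses a hypothetical 3-state, 2-action chain (not the paper's condition verbatim) and tests the drift inequality E[W(x_{t+1}) | x_t = x, a] ≤ λ W(x) + b for every state-action pair.

```python
import numpy as np

# Hypothetical 3-state, 2-action model (illustrative only); P[x, a, x'].
P = np.array([[[0.8, 0.15, 0.05], [0.6, 0.3, 0.1]],
              [[0.5, 0.40, 0.10], [0.3, 0.5, 0.2]],
              [[0.2, 0.30, 0.50], [0.1, 0.4, 0.5]]])
W = np.array([1.0, 2.0, 4.0])   # candidate Lyapunov function, W >= 1

def drift_holds(P, W, lam, b):
    """Check E[W(x') | x, a] <= lam * W(x) + b for all state-action pairs."""
    expected_W = P @ W                 # shape (n_states, n_actions)
    bound = lam * W[:, None] + b
    return bool(np.all(expected_W <= bound))

print(drift_holds(P, W, lam=0.9, b=1.0))
```

Because the condition is stated directly on the transition kernel and a candidate function W, verifying it needs no knowledge of the optimal policy, which is the sense in which such conditions are "imposed on the primitive data".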

Second order optimality in Markov decision chains

Karel Sladký (2017)

Kybernetika

Similarity:

The article is devoted to Markov reward chains in a discrete-time setting with finite state spaces. Unfortunately, the usual optimization criteria examined in the literature on Markov decision chains, such as total discounted reward, total reward up to reaching some specific state (the so-called first passage models), or mean (average) reward optimality, may be quite insufficient to characterize the problem from the point of view of a decision maker. To this end it seems that it may be preferable if not...