Displaying similar documents to “Another set of verifiable conditions for average Markov decision processes with Borel spaces”

First passage risk probability optimality for continuous time Markov decision processes

Haifeng Huo, Xian Wen (2019)

Kybernetika

Similarity:

In this paper, we study continuous time Markov decision processes (CTMDPs) with a denumerable state space, a Borel action space, unbounded transition rates and a nonnegative reward function. The optimality criterion to be considered is the first passage risk probability criterion. To ensure the non-explosion of the state processes, we first introduce a so-called drift condition, which is weaker than the well-known regularity condition for semi-Markov decision processes (SMDPs). Furthermore,...

Risk-sensitive average optimality in Markov decision processes

Karel Sladký (2018)

Kybernetika

Similarity:

In this note attention is focused on finding policies optimizing risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio of the first two moments depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient is equal to zero we speak of risk-neutral models. Observe that the first...

Mean-variance optimality for semi-Markov decision processes under first passage criteria

Xiangxiang Huang, Yonghui Huang (2017)

Kybernetika

Similarity:

This paper deals with a first passage mean-variance problem for semi-Markov decision processes in Borel spaces. The goal is to minimize the variance of a total discounted reward up to the system's first entry to some target set, where the optimization is over a class of policies with a prescribed expected first passage reward. The reward rates are assumed to be possibly unbounded, while the discount factor may vary with states of the system and controls. We first develop some suitable...

Constrained optimality problem of Markov decision processes with Borel spaces and varying discount factors

Xiao Wu, Yanqiu Tang (2021)

Kybernetika

Similarity:

This paper focuses on the constrained optimality of discrete-time Markov decision processes (DTMDPs) with state-dependent discount factors, Borel state and compact Borel action spaces, and possibly unbounded costs. By means of the properties of so-called occupation measures of policies and the technique of transforming the original constrained optimality problem of DTMDPs into a convex programming one, we prove the existence of an optimal randomized stationary policy under reasonable conditions. ...

Semi-Markov control processes with non-compact action spaces and discontinuous costs

Anna Jaśkiewicz (2009)

Applicationes Mathematicae

Similarity:

We establish the average cost optimality equation and show the existence of an (ε-)optimal stationary policy for semi-Markov control processes without compactness and continuity assumptions. The only condition we impose on the model is the V-geometric ergodicity of the embedded Markov chain governed by a stationary policy.

Average cost Markov control processes with weighted norms: existence of canonical policies

Evgueni Gordienko, Onésimo Hernández-Lerma (1995)

Applicationes Mathematicae

Similarity:

This paper considers discrete-time Markov control processes on Borel spaces, with possibly unbounded costs, and the long run average cost (AC) criterion. Under appropriate hypotheses on weighted norms for the cost function and the transition law, the existence of solutions to the average cost optimality inequality and the average cost optimality equation is shown, which in turn yields the existence of AC-optimal and AC-canonical policies, respectively.

Second order optimality in Markov decision chains

Karel Sladký (2017)

Kybernetika

Similarity:

The article is devoted to Markov reward chains in a discrete-time setting with finite state spaces. Unfortunately, the usual optimization criteria examined in the literature on Markov decision chains, such as total discounted reward, total reward up to reaching some specific state (the so-called first passage models) or mean (average) reward optimality, may be quite insufficient to characterize the problem from the point of view of a decision maker. To this end it seems that it may be preferable if not...