Displaying similar documents to “Note on stability estimation in average Markov control processes”

Estimates of stability of Markov control processes with unbounded costs

Evgueni I. Gordienko, Francisco Salem-Silva (2000)

Kybernetika

Similarity:

For a discrete-time Markov control process with transition probability $p$, we compare the total discounted costs $V_\beta(\pi_\beta)$ and $V_\beta(\tilde\pi_\beta)$ obtained when applying the optimal control policy $\pi_\beta$ and its approximation $\tilde\pi_\beta$. The policy $\tilde\pi_\beta$ is optimal for an approximating process with transition probability $\tilde p$. The cost per stage for the considered processes can be unbounded. Under certain ergodicity assumptions we establish an upper bound for the relative stability index $[V_\beta(\tilde\pi_\beta) - V_\beta(\pi_\beta)]/V_\beta(\pi_\beta)$. This bound does not depend...
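
A minimal numerical sketch of this index, written in Python purely for illustration: the two-state kernels p and p_tilde, the cost matrix and the discount factor below are assumptions, not data from the article. The script solves the discounted problem for both kernels, evaluates the two resulting policies under the true kernel p, and prints the relative stability index for each initial state.

import numpy as np

beta = 0.9                                  # discount factor (assumed)
# cost per stage c(x, a), states x = 0, 1 and actions a = 0, 1 (assumed values)
c = np.array([[1.0, 2.0],
              [4.0, 0.5]])
# transition kernels p[a, x, y] = P(y | x, a); p_tilde perturbs p slightly
p = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.9, 0.1]]])
p_tilde = np.array([[[0.7, 0.3], [0.4, 0.6]],
                    [[0.6, 0.4], [0.8, 0.2]]])

def optimal_policy(kernel):
    # value iteration for the discounted-cost problem under the given kernel
    V = np.zeros(2)
    while True:
        Q = c + beta * np.einsum('axy,y->xa', kernel, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-12:
            return Q.argmin(axis=1)
        V = V_new

def evaluate(policy, kernel):
    # exact total discounted cost of a stationary policy under the given kernel
    P = np.array([kernel[policy[x], x] for x in range(2)])
    cost = np.array([c[x, policy[x]] for x in range(2)])
    return np.linalg.solve(np.eye(2) - beta * P, cost)

pi = optimal_policy(p)               # pi_beta: optimal for the true model
pi_tilde = optimal_policy(p_tilde)   # pi~_beta: optimal for the approximation
V_opt, V_approx = evaluate(pi, p), evaluate(pi_tilde, p)
print((V_approx - V_opt) / V_opt)    # relative stability index per initial state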

Estimates for perturbations of general discounted Markov control chains

Raúl Montes-de-Oca, Alexander Sakhanenko, Francisco Salem-Silva (2003)

Applicationes Mathematicae

Similarity:

We extend previous results of the same authors ([11]) on the effects of perturbation of the transition probability of a Markov cost chain for discounted Markov control processes. Assuming that Lyapunov- and Harris-type conditions hold for each stationary policy, we obtain upper bounds for the index of perturbations, defined as the difference of the total expected discounted costs for the original Markov control process and the perturbed one. We present examples that satisfy our conditions. ...
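
In symbols, with $V_\alpha(x)$ and $\widetilde V_\alpha(x)$ used here as assumed (not quoted) notation for the total expected discounted costs of the original and the perturbed Markov control chain, the perturbation index described above can be written as

\[
  \Delta_\alpha(x) \;=\; \bigl|\, V_\alpha(x) - \widetilde{V}_\alpha(x) \,\bigr| ,
\]

where the precise definition (the absolute value, and any supremum over initial states $x$) follows the paper; the results bound this quantity from above under the Lyapunov- and Harris-type conditions.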

On risk sensitive control of regular step Markov processes

Roman Sadowy (2001)

Applicationes Mathematicae

Similarity:

The risk-sensitive control problem for regular step Markov processes is considered, first when the control parameters are changed at shift times and then in the general case.

Semi-Markov control processes with non-compact action spaces and discontinuous costs

Anna Jaśkiewicz (2009)

Applicationes Mathematicae

Similarity:

We establish the average cost optimality equation and show the existence of an (ε-)optimal stationary policy for semi-Markov control processes without compactness and continuity assumptions. The only condition we impose on the model is the V-geometric ergodicity of the embedded Markov chain governed by a stationary policy.
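
For orientation, a standard form of the average cost optimality equation for semi-Markov control processes is recalled below; the notation (one-stage cost c, mean sojourn time τ, transition law Q, optimal average cost g, bias function h) is generic and not taken verbatim from the paper.

\[
  h(x) \;=\; \inf_{a \in A(x)} \Bigl\{ c(x,a) \;-\; g\,\tau(x,a)
  \;+\; \int_X h(y)\, Q(dy \mid x,a) \Bigr\}, \qquad x \in X .
\]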

Asymptotic stability condition for stochastic Markovian systems of differential equations

Efraim Shmerling (2010)

Mathematica Bohemica

Similarity:

Asymptotic stability of the zero solution for stochastic jump-parameter systems of differential equations given by $dX(t) = A(\xi(t))X(t)\,dt + H(\xi(t))X(t)\,dw(t)$, where $\xi(t)$ is a finite-valued Markov process and $w(t)$ is a standard Wiener process, is considered. It is proved that the existence of a unique positive solution of the system of coupled Lyapunov matrix equations derived in the paper is a necessary asymptotic stability condition.
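
For reference, the coupled Lyapunov matrix equations associated with such jump-parameter systems are usually written in the following generic form, where $\xi$ takes values in $\{1,\dots,N\}$ with generator $Q = (q_{ij})$; the exact system derived in the paper may differ in details.

\[
  A_i^{\top} P_i + P_i A_i + H_i^{\top} P_i H_i
  + \sum_{j=1}^{N} q_{ij} P_j \;=\; -W_i , \qquad i = 1, \dots, N,
\]

where $A_i = A(i)$, $H_i = H(i)$, the $W_i$ are given symmetric positive definite matrices, and asymptotic (mean-square) stability is tied to the existence of a unique symmetric positive definite solution $(P_1,\dots,P_N)$.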

Deterministic optimal policies for Markov control processes with pathwise constraints

Armando F. Mendoza-Pérez, Onésimo Hernández-Lerma (2012)

Applicationes Mathematicae

Similarity:

This paper deals with discrete-time Markov control processes in Borel spaces with unbounded rewards. Under suitable hypotheses, we show that a randomized stationary policy is optimal for a certain expected constrained problem (ECP) if and only if it is optimal for the corresponding pathwise constrained problem (pathwise CP). Moreover, we show that a certain parametric family of unconstrained optimality equations yields convergence properties that lead to an approximation scheme which...
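
As a rough guide to the two problems contrasted in the abstract, and with generic average-reward notation that is assumed here rather than quoted from the paper (reward r, cost functions c_k, constraint constants θ_k), the expected constrained problem can be stated as

\[
  \text{(ECP)}\qquad
  \sup_{\pi}\ \liminf_{n\to\infty}\frac{1}{n}\,
  \mathbb{E}^{\pi}_{x}\Bigl[\sum_{t=0}^{n-1} r(x_t,a_t)\Bigr]
  \quad\text{subject to}\quad
  \limsup_{n\to\infty}\frac{1}{n}\,
  \mathbb{E}^{\pi}_{x}\Bigl[\sum_{t=0}^{n-1} c_k(x_t,a_t)\Bigr] \le \theta_k ,
\]

with the pathwise CP obtained by replacing the expected time averages by the corresponding sample-path averages and requiring the constraints to hold almost surely; the paper's exact criteria may differ from this generic formulation.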