Displaying 161 – 180 of 324

Modelling and optimal control of networked systems with stochastic communication protocols

Chaoqun Zhu, Bin Yang, Xiang Zhu (2020)

Kybernetika

This paper is concerned with the finite- and infinite-horizon optimal control problem for a class of networked control systems with stochastic communication protocols. Because of limited network bandwidth, only a limited number of sensors and actuators are allowed to access the network medium at any time, according to stochastic access protocols. A discrete-time Markov chain with a known transition probability matrix is employed to describe the scheduling behavior of the stochastic access protocols,...
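
As a rough illustration of the scheduling model described in this abstract (not the authors' formulation), a discrete-time Markov chain with a known transition matrix can be simulated to decide which node is granted access at each step; the node count and the matrix below are made-up assumptions.

```python
import numpy as np

# Illustrative sketch only (node count and matrix are hypothetical, not taken
# from the paper): a discrete-time Markov chain over which of three nodes
# (sensors/actuators) is granted access to the shared network medium.
# Row i of P gives the distribution of the next access grant when node i
# currently holds the medium.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

rng = np.random.default_rng(0)

def schedule(P, steps, start=0):
    """Simulate the stochastic access protocol for a given number of steps."""
    node = start
    history = [node]
    for _ in range(steps):
        node = rng.choice(len(P), p=P[node])   # next node to get the medium
        history.append(node)
    return history

print(schedule(P, 10))
```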

Monotone optimal policies in discounted Markov decision processes with transition probabilities independent of the current state: existence and approximation

Rosa María Flores-Hernández (2013)

Kybernetika

This paper considers Markov decision processes (MDPs) with the discounted cost as the objective function and with state and decision spaces that are subsets of the real line but not necessarily finite or denumerable. The cost function is possibly unbounded, the dynamics are independent of the current state, and the decision sets are possibly non-compact. In this context, conditions to obtain either an increasing or a decreasing optimal stationary...
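
A toy numerical sketch of the setting described above (grids, cost, and transition law are my own assumptions, not the paper's model): value iteration for a discounted MDP whose transition probabilities depend only on the chosen action, so that the monotonicity of the resulting stationary policy can be inspected directly.

```python
import numpy as np

# Toy sketch (assumptions mine, not the paper's model): finite grids standing
# in for the real-line state and decision spaces, discount factor beta, and a
# transition law that depends on the action only, not on the current state.
S, A, beta = 20, 5, 0.9
states = np.linspace(0.0, 1.0, S)
actions = np.linspace(0.0, 1.0, A)

# Hypothetical cost with decreasing differences in (state, action).
cost = (states[:, None] - actions[None, :]) ** 2          # shape (S, A)

# One transition distribution per action, independent of the current state.
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(S), size=A)                      # shape (A, S)

V = np.zeros(S)
for _ in range(500):                                       # value iteration
    Q = cost + beta * (P @ V)[None, :]                     # Q[s, a]
    V = Q.min(axis=1)
policy = Q.argmin(axis=1)
print(policy)   # under monotonicity conditions of this kind, the policy is monotone in the state
```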

Monotonicity of minimizers in optimization problems with applications to Markov control processes

Rosa M. Flores–Hernández, Raúl Montes-de-Oca (2007)

Kybernetika

First, this paper considers a class of possibly unbounded optimization problems on Euclidean spaces and gives conditions under which monotone minimizers can be obtained. Second, the theory developed in the first part is applied to Markov control processes (MCPs) on real spaces with a possibly unbounded cost function and possibly noncompact control sets, considering both the discounted and the average cost as the optimality criterion. In the context described,...
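
The paper's exact hypotheses are not reproduced in this listing; for orientation, a classical sufficient condition in the same spirit (Topkis-type monotone comparative statics) reads as follows.

```latex
% A classical condition of this type (not necessarily the hypotheses of the
% paper): if f(a,x) has decreasing differences in (a,x), i.e.
\[
  f(a',x') - f(a,x') \;\le\; f(a',x) - f(a,x)
  \qquad \text{whenever } a' \ge a \text{ and } x' \ge x,
\]
% then the smallest element of \arg\min_{a} f(a,x) is nondecreasing in x.
```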

Note on stability estimation in average Markov control processes

Jaime Martínez Sánchez, Elena Zaitseva (2015)

Kybernetika

We study the stability of average optimal control of general discrete-time Markov processes. Under certain ergodicity and Lipschitz conditions, the stability index is bounded by a constant times the Prokhorov distance between the distributions of the random vectors determining the original and the perturbed control processes.
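
For orientation, estimates of this kind typically take the following shape (the notation is mine; the constant and the precise ergodicity and Lipschitz hypotheses are those of the paper and are not reproduced here).

```latex
\[
  \Delta \;\le\; K \,\pi\bigl(\mathrm{Dist}(\xi),\, \mathrm{Dist}(\tilde{\xi})\bigr),
\]
% where \Delta is the stability index (the excess average cost incurred when the
% policy optimal for the perturbed model is applied to the original one), \pi is
% the Prokhorov metric, and \xi, \tilde{\xi} are the random vectors driving the
% original and the perturbed control processes.
```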

On adaptive control of a partially observed Markov chain

Giovanni Di Masi, Łukasz Stettner (1994)

Applicationes Mathematicae

A control problem with long-run average cost for a partially observable Markov chain depending on a parameter is studied. Using uniform ergodicity arguments, it is shown that, for values of the parameter varying in a compact set, it is possible to consider only a finite number of nearly optimal controls based on the values of actually computable approximate filters. This leads to an algorithm that guarantees nearly self-optimizing properties without identifiability conditions. The algorithm is based...
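
Below is a minimal sketch of the filtering recursion that such computable filters build on (the chain, observation model, and observation sequence are hypothetical; the paper's approximation scheme and adaptive-control layer are not shown).

```python
import numpy as np

# Minimal sketch of the Bayes filter for a partially observed finite Markov
# chain.  The matrices and the observation sequence are hypothetical; the
# paper's approximate filters and adaptive-control machinery are not shown.
P = np.array([[0.9, 0.1],          # state transition matrix
              [0.2, 0.8]])
Q = np.array([[0.7, 0.3],          # Q[x, y] = probability of observing y in state x
              [0.4, 0.6]])

def filter_step(belief, y):
    """One filter step: predict with P, then correct with the likelihood of y."""
    predicted = belief @ P          # prior over the next state
    unnormalized = predicted * Q[:, y]
    return unnormalized / unnormalized.sum()

belief = np.array([0.5, 0.5])       # initial conditional distribution
for y in [0, 1, 1, 0, 1]:           # a made-up observation sequence
    belief = filter_step(belief, y)
print(belief)                       # conditional distribution of the hidden state
```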

On additive and multiplicative (controlled) Poisson equations

G. B. Di Masi, Ł. Stettner (2006)

Banach Center Publications

Assuming that a Markov process satisfies the minorization property, existence and properties of the solutions to the additive and multiplicative Poisson equations are studied using splitting techniques. The problem is then extended to the study of risk-sensitive and risk-neutral control problems and the corresponding Bellman equations.
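
For orientation, the two equations referred to in this abstract typically take the following forms (notation mine; see the paper for the precise setting under the minorization property).

```latex
% Additive Poisson equation (associated with risk-neutral, average-cost control):
\[
  h(x) + \lambda \;=\; c(x) + \int h(y)\, P(x, dy),
\]
% multiplicative Poisson equation (associated with risk-sensitive control):
\[
  e^{\,w(x) + \lambda} \;=\; e^{\,c(x)} \int e^{\,w(y)}\, P(x, dy).
\]
```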

On nearly selfoptimizing strategies for multiarmed bandit problems with controlled arms

Ewa Drabik (1996)

Applicationes Mathematicae

Two kinds of strategies for a multiarmed Markov bandit problem with controlled arms are considered: a strategy with forcing and a strategy with randomization. The choice of arm and control function in both cases is based on the current value of the average cost per unit time functional. Some simulation results are also presented.
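
An illustrative sketch of the "forcing" idea (not the strategies analysed in the paper): each arm is forced at a sparse deterministic set of times so that its estimate keeps improving, and at all other times the arm with the smallest current average cost per unit time is played. The arm costs and the forcing schedule below are assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's strategies): a "strategy with
# forcing" for a three-armed bandit with hypothetical mean costs.
rng = np.random.default_rng(2)
true_mean_cost = [1.0, 0.7, 1.3]                 # hypothetical arm costs
n_arms = len(true_mean_cost)
totals = np.zeros(n_arms)
counts = np.zeros(n_arms)
forced = 0                                       # number of forced plays so far

for t in range(1, 2001):
    if int(np.sqrt(t)) ** 2 == t:                # sparse forcing schedule (an assumption)
        arm = forced % n_arms                    # cycle through the arms when forcing
        forced += 1
    else:
        avg = totals / np.maximum(counts, 1)
        arm = int(np.argmin(avg))                # exploit the current estimates
    cost = true_mean_cost[arm] + rng.normal(0.0, 0.1)
    totals[arm] += cost
    counts[arm] += 1

print(counts, totals / counts)                   # play counts and estimated average costs
```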

On near-optimal necessary and sufficient conditions for forward-backward stochastic systems with jumps, with applications to finance

Mokhtar Hafayed, Petr Veverka, Syed Abbas (2014)

Applications of Mathematics

We establish necessary and sufficient conditions of near-optimality for nonlinear systems governed by forward-backward stochastic differential equations with controlled jump processes (FBSDEJs for short). The set of controls under consideration is necessarily convex. The proof of our result is based on Ekeland's variational principle and on a form of continuity of the state and adjoint processes with respect to the control variable. We prove that under an additional hypothesis, the near-maximum...
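
For readers unfamiliar with the term, near-optimality is usually understood in the following sense (a standard definition, not quoted from the paper).

```latex
% A control u^{\varepsilon} is called near-optimal (of order \varepsilon) if
\[
  J\bigl(u^{\varepsilon}\bigr) \;\le\; \inf_{u \in \mathcal{U}} J(u) + \varepsilon,
\]
% and Ekeland's variational principle is the standard tool for producing such
% controls together with the perturbed optimality conditions they satisfy.
```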
