Displaying 1 – 20 of 120
A consumption-investment problem modelled as a discounted Markov decision process

Hugo Cruz-Suárez, Raúl Montes-de-Oca, Gabriel Zacarías (2011)

Kybernetika

In this paper a consumption-investment problem is presented as a discrete-time discounted Markov decision process. It is assumed that wealth is affected by a production function, which gives the investor a chance to increase his wealth before investing. To solve the problem, a suitable version of the Euler equation (EE) is established which completely characterizes the optimal policy; that is, conditions are provided...
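The paper's model is not reproduced here, but the role of the Euler equation can be illustrated with a deterministic textbook special case (log utility, Cobb-Douglas production; all parameter values below are invented), where the known optimal policy consumes the fraction 1 − αβ of produced wealth and satisfies the EE exactly:

```python
# Deterministic illustration (not the paper's model): log utility
# u(c) = log(c), production f(k) = k**alpha, discount factor beta.
# The known optimal policy is c = (1 - alpha*beta) * f(k).
alpha, beta = 0.3, 0.95
k = 2.0                                    # current wealth (arbitrary)

f = lambda k: k**alpha                     # production function
f_prime = lambda k: alpha * k**(alpha - 1)
policy = lambda k: (1 - alpha * beta) * f(k)

c = policy(k)
k_next = f(k) - c                          # wealth left for investment
c_next = policy(k_next)

# Euler equation u'(c_t) = beta * f'(k_{t+1}) * u'(c_{t+1}),
# with u'(c) = 1/c for log utility:
lhs = 1 / c
rhs = beta * f_prime(k_next) * (1 / c_next)
print(abs(lhs - rhs))   # ~0: the policy satisfies the EE
```

The residual is zero up to floating-point error, which is the sense in which an EE "characterizes" the optimal policy: a candidate policy can be verified state by state.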

A Markov chain model for traffic equilibrium problems

Giandomenico Mastroeni (2010)

RAIRO - Operations Research

We consider a stochastic approach to defining an equilibrium model for a traffic-network problem. In particular, we assume Markovian behaviour of the users in their movements throughout the zones of the traffic area. This assumption turns out to be effective at least in the context of urban traffic, where, in general, users tend to choose the path they find most convenient, not necessarily depending on the part already travelled. The developed model is a homogeneous...
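As a minimal sketch of the underlying idea (the zone structure and transition matrix below are invented, not taken from the paper): in a homogeneous Markov chain model of movements between zones, the long-run share of users in each zone is the stationary distribution π solving π = πP, which plays the role of an equilibrium:

```python
import numpy as np

# Hypothetical 3-zone urban area; P[i, j] = probability that a user in
# zone i moves to zone j on the next trip (each row sums to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# For an irreducible aperiodic chain, power iteration from any initial
# distribution converges to the stationary distribution pi = pi @ P.
pi = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    pi = pi @ P

print(pi)            # long-run occupation of the zones
print(pi @ P - pi)   # ~0: the equilibrium condition
```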

A Markov chain model for traffic equilibrium problems

Giandomenico Mastroeni (2002)

RAIRO - Operations Research - Recherche Opérationnelle

We consider a stochastic approach to defining an equilibrium model for a traffic-network problem. In particular, we assume Markovian behaviour of the users in their movements throughout the zones of the traffic area. This assumption turns out to be effective at least in the context of urban traffic, where, in general, users tend to choose the path they find most convenient, not necessarily depending on the part already travelled. The developed model is a homogeneous Markov...

A Separation Theorem for Expected Value and Feared Value Discrete Time Control

Pierre Bernhard (2010)

ESAIM: Control, Optimisation and Calculus of Variations

We show how the use of a parallel between the ordinary (+, ×) and the (max, +) algebras, the Maslov measures that exploit this parallel, and more specifically their specialization to probabilities and the corresponding cost measures of Quadrat, offers a completely parallel treatment of stochastic and minimax control of disturbed nonlinear discrete-time systems with partial information. This paper is based upon, and improves, the discrete-time part of the earlier paper [9].
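The parallel can be made concrete with a one-line comparison (the numbers below are invented): in the ordinary (+, ×) algebra a probability measure weights outcomes and the criterion is the expectation; in the (max, +) algebra a cost measure (normalized so its maximum is 0, the max-plus analogue of summing to 1) shifts outcomes and the criterion is the "feared value":

```python
# Ordinary (+, x) algebra: probabilities p weight outcome values v,
# and the criterion is the expectation sum_i p_i * v_i.
p = [0.2, 0.5, 0.3]     # probability measure (sums to 1)
v = [1.0, 4.0, 2.0]     # outcome values
expected_value = sum(pi * vi for pi, vi in zip(p, v))

# (max, +) algebra: a cost measure c with max_i c_i = 0, and the
# "feared value" max_i (c_i + v_i), i.e. a worst case in which each
# outcome is handicapped by its disturbance cost.
c = [-3.0, 0.0, -1.0]   # cost measure (maximum is 0)
feared_value = max(ci + vi for ci, vi in zip(c, v))

print(expected_value)   # 2.8
print(feared_value)     # 4.0
```

Replacing (sum, product) by (max, sum) throughout a dynamic-programming recursion is what turns the stochastic treatment into the minimax one.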

A stopping rule for discounted Markov decision processes with finite action sets

Raúl Montes-de-Oca, Enrique Lemus-Rodríguez, Daniel Cruz-Suárez (2009)

Kybernetika

In a Discounted Markov Decision Process (DMDP) with finite action sets, the Value Iteration Algorithm, under suitable conditions, leads to an optimal policy in a finite number of steps. Determining an upper bound on the number of steps needed until convergence is of great theoretical and practical interest, as it provides a computationally feasible stopping rule for value iteration as an algorithm for finding an optimal policy. In this paper we find such a bound depending only...

About stability of risk-seeking optimal stopping

Raúl Montes-de-Oca, Elena Zaitseva (2014)

Kybernetika

We offer a quantitative estimation of the stability of risk-sensitive cost optimization in the problem of optimal stopping of a Markov chain on a Borel space X. It is supposed that the transition probability p(·|x), x ∈ X, is approximated by the transition probability p̃(·|x), x ∈ X, and that the stopping rule f̃*, which is optimal for the process with the transition probability p̃, is applied to the process with the transition probability p. We give an upper bound (expressed in terms of the total variation distance sup_{x∈X} ‖p(·|x) − p̃(·|x)‖) for...

Adaptive control for discrete-time Markov processes with unbounded costs: Discounted criterion

Evgueni I. Gordienko, J. Adolfo Minjárez-Sosa (1998)

Kybernetika

We study the adaptive control problem for discrete-time Markov control processes with Borel state and action spaces and possibly unbounded one-stage costs. The processes are given by the recurrence equations x_{t+1} = F(x_t, a_t, ξ_t), t = 0, 1, ..., with i.i.d. ℝ^k-valued random vectors ξ_t whose density ρ is unknown. Assuming observability of ξ_t, we propose a procedure of statistical estimation of ρ that allows us to prove the discounted asymptotic optimality of two types of adaptive policies used earlier for processes with bounded costs.
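This abstract (like the later Minjárez-Sosa entry) rests on estimating the unknown disturbance density ρ from the observed ξ_t. A generic kernel density estimator, not necessarily the estimator these papers use, gives the flavour (scalar disturbances and all numbers below are invented for illustration; the papers treat ℝ^k-valued disturbances):

```python
import numpy as np

# Observed i.i.d. disturbances xi_0, ..., xi_{t-1}; the true density rho
# (here a standard normal, unknown to the controller) is to be estimated.
rng = np.random.default_rng(0)
xi = rng.normal(loc=0.0, scale=1.0, size=500)

def rho_hat(x, samples, h=0.3):
    """Gaussian-kernel estimate of the density at x with bandwidth h."""
    u = (x - samples) / h
    return np.exp(-0.5 * u**2).sum() / (len(samples) * h * np.sqrt(2 * np.pi))

# After enough observations the estimate at 0 should be close to the
# true standard normal density 1/sqrt(2*pi) ~ 0.3989.
print(rho_hat(0.0, xi))
```

An adaptive policy then plugs the current estimate ρ̂_t into the optimality equation in place of ρ; the asymptotic-optimality results concern what happens as the estimate improves.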

An optimality system for finite average Markov decision chains under risk-aversion

Alfredo Alanís-Durán, Rolando Cavazos-Cadena (2012)

Kybernetika

This work concerns controlled Markov chains with finite state space and compact action sets. The decision maker is risk-averse with constant risk-sensitivity, and the performance of a control policy is measured by the long-run average cost criterion. Under standard continuity-compactness conditions, it is shown that the (possibly non-constant) optimal value function is characterized by a system of optimality equations which allows one to obtain an optimal stationary policy. Also, it is shown that the...

An SMDP model for a multiclass multi-server queueing control problem considering conversion times

Zhicong Zhang, Na Li, Shuai Li, Xiaohui Yan, Jianwen Guo (2014)

RAIRO - Operations Research - Recherche Opérationnelle

We address a queueing control problem in which service times and conversion times follow normal distributions. We formulate the multiclass multi-server queueing control problem by constructing a semi-Markov decision process (SMDP) model. The mechanism of state transitions is developed through mathematical derivation of the transition probabilities and transition times. We also study the properties of the queueing control system and show that optimizing the objective function of the addressed queueing control...

An unbounded Berge's minimum theorem with applications to discounted Markov decision processes

Raúl Montes-de-Oca, Enrique Lemus-Rodríguez (2012)

Kybernetika

This paper deals with a certain class of unbounded optimization problems which depend on a parameter. First, conditions are established which guarantee the continuity, with respect to the parameter, of the minimum of the optimization problems under consideration, and the upper semicontinuity of the multifunction which maps each parameter to its set of minimizers. Moreover, under the additional condition of uniqueness of the minimizer, its...

Another set of verifiable conditions for average Markov decision processes with Borel spaces

Xiaolong Zou, Xianping Guo (2015)

Kybernetika

In this paper we give a new set of verifiable conditions for the existence of average optimal stationary policies in discrete-time Markov decision processes with Borel spaces and unbounded reward/cost functions. More precisely, we provide another set of conditions consisting only of a Lyapunov-type condition and the common continuity-compactness conditions. These conditions are imposed on the primitive data of the Markov decision process model and are thus easy to verify. We also give two...

Approximation and estimation in Markov control processes under a discounted criterion

J. Adolfo Minjárez-Sosa (2004)

Kybernetika

We consider a class of discrete-time Markov control processes with Borel state and action spaces, and ℝ^k-valued i.i.d. disturbances with unknown density ρ. Supposing possibly unbounded costs, we combine suitable density estimation methods for ρ with approximation procedures for the optimal cost function to show the existence of a sequence {f̂_t} of minimizers converging to an optimal stationary policy f.