### A connection between controlled Markov chains and martingales


In this paper, a problem of consumption and investment is presented as a model of a discounted discrete-time Markov decision process. It is assumed that wealth is affected by a production function, which gives the investor a chance to increase his wealth before investing. To solve the problem, a suitable version of the Euler Equation (EE) is established, which characterizes the optimal policy completely; that is, conditions are provided...
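A generic way to see how a production function enters such a dynamic programming recursion is value iteration on a wealth grid. This is a standard technique, not the paper's Euler-equation approach, and the utility and production functions below are illustrative assumptions:

```python
import numpy as np

# Illustrative discounted consumption-investment problem: wealth x is split
# into consumption c and investment x - c; the invested part passes through
# a production function f before the next period. The Bellman equation is
#   V(x) = max_c  u(c) + beta * V(f(x - c)).
alpha, beta = 0.5, 0.95            # production exponent, discount factor
f = lambda k: k ** alpha           # production function (assumption)
u = lambda c: np.log(c)            # log utility (assumption)

grid = np.linspace(1e-3, 2.0, 200)  # wealth grid
V = np.zeros_like(grid)

for _ in range(600):                # value iteration
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        c = np.linspace(1e-4, x, 100)          # feasible consumption levels
        x_next = f(x - c)                      # wealth after production
        V_next = np.interp(x_next, grid, V)    # continuation value
        V_new[i] = np.max(u(c) + beta * V_next)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new
```

Since the Bellman operator is a beta-contraction in the sup norm, the iteration converges; the Euler-equation approach of the paper characterizes the same optimal policy through first-order conditions instead.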

This paper analyses the implementation of the generalized finite differences method for the HJB equation of stochastic control, introduced by two of the authors in [Bonnans and Zidani, SIAM J. Numer. Anal. 41 (2003) 1008–1021]. The computation of the coefficients requires solving, at each point of the grid (and for each control), a linear programming problem. We show here that, for two-dimensional problems, this linear programming problem can be solved in $O\left({p}_{max}\right)$ operations, where ${p}_{max}$ is the size of the stencil....

We provide a generalization of Ueno's inequality for n-step transition probabilities of Markov chains in a general state space. Our result is relevant to the study of adaptive control problems and approximation problems in the theory of discrete-time Markov decision processes and stochastic games.
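For intuition on bounds of this kind, a finite-state illustration (not the paper's general-state-space setting) is the Dobrushin ergodicity coefficient, which controls how fast n-step transition probabilities from different initial states coalesce in total variation; the transition matrix below is an arbitrary example:

```python
import numpy as np

# Dobrushin ergodicity coefficient of a finite transition matrix:
#   delta(P) = max_{i,k} (1/2) * sum_j |P[i,j] - P[k,j]|.
# It is submultiplicative, so delta(P^n) <= delta(P)^n, giving a geometric
# rate at which rows of the n-step matrix approach each other.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # illustrative transition matrix

def dobrushin(P):
    """Maximum total-variation distance between two rows of P."""
    n = len(P)
    return max(0.5 * np.abs(P[i] - P[k]).sum()
               for i in range(n) for k in range(n))

Pn = np.linalg.matrix_power(P, 5)  # 5-step transition probabilities
```

Here `dobrushin(P)` equals 0.7 and `dobrushin(Pn)` is bounded by `0.7**5`, so the two rows of the 5-step matrix are already close.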

The dual attainment of the Monge–Kantorovich transport problem is analyzed in a general setting. The spaces X, Y are assumed to be Polish and equipped with Borel probability measures μ and ν. The transport cost function c : X × Y → [0,∞] is assumed to be Borel measurable. We show that a dual optimizer always exists, provided we interpret it as a projective limit of certain finitely additive measures. Our methods are functional analytic and rely on Fenchel’s perturbation technique.
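The subtlety addressed by the paper is attainment in the general Polish-space setting; in a finite instance, a maximizing dual pair exists trivially and its value matches the optimal transport cost. A toy 2×2 example (all numbers are illustrative) makes the primal/dual relationship concrete:

```python
import numpy as np

# Finite toy instance of Monge-Kantorovich duality.
mu = np.array([0.5, 0.5])              # marginal on X
nu = np.array([0.4, 0.6])              # marginal on Y
c = np.array([[0.0, 1.0],
              [1.0, 0.0]])             # cost c(x_i, y_j)

# Every coupling of (mu, nu) has the form pi(t) for t in [0, 0.4]:
def coupling(t):
    return np.array([[t, 0.5 - t],
                     [0.4 - t, 0.1 + t]])

costs = [(coupling(t) * c).sum() for t in np.linspace(0.0, 0.4, 41)]
primal_opt = min(costs)                # optimal transport cost

# A feasible dual pair: phi(x) + psi(y) <= c(x, y) everywhere.
phi = np.array([0.0, -1.0])
psi = np.array([0.0, 1.0])
feasible = np.all(phi[:, None] + psi[None, :] <= c + 1e-12)
dual_value = mu @ phi + nu @ psi       # equals primal_opt here
```

In this finite case the dual value 0.1 is attained by ordinary functions; the paper's finitely additive projective limits are needed precisely because such attainment can fail for general Borel costs.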

A singular stochastic control problem in n dimensions with time-dependent coefficients on a finite time horizon is considered. We show that the value function for this problem is a generalized solution of the corresponding HJB equation, with locally bounded second derivatives with respect to the space variables and first derivative with respect to time. Moreover, we prove that an optimal control exists and is unique.

We deal with the optimal portfolio problem in a discrete-time setting. Employing the discrete Itô formula developed by Fujita, we establish the discrete Hamilton–Jacobi–Bellman (d-HJB) equation for the value function. Simple examples of the d-HJB equation are also discussed.
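For log utility in a binomial market, the discrete-time Bellman recursion of this type separates as V_t(x) = log(x) + (T − t)·m, where m is the optimal one-step expected log-growth; the whole problem reduces to a single one-period maximization. The parameters below are illustrative assumptions, not examples from the paper:

```python
import numpy as np

# One-step reduction of a discrete-time log-utility portfolio problem:
# maximize E[log(1 + r + pi * (R - r))] over the risky weight pi, where
# R takes the value `up` with probability p and `dn` otherwise.
r = 0.01                                   # riskless return per period
up, dn, p = 0.15, -0.10, 0.5               # risky returns, up-probability
pis = np.linspace(0.0, 1.0, 1001)          # admissible weights (no shorting)

growth = (p * np.log(1 + r + pis * (up - r))
          + (1 - p) * np.log(1 + r + pis * (dn - r)))
m = growth.max()                           # optimal one-step log-growth
pi_star = pis[growth.argmax()]             # optimal constant weight
```

The optimal weight is constant over time, which is the discrete analogue of the Merton-style solutions the d-HJB equation produces in simple examples.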

In this paper we propose and study a continuous time stochastic model of optimal allocation for a defined contribution pension fund in the accumulation phase. The level of wealth is constrained to stay above a "solvency level". The fund manager can invest in a riskless asset and in a risky asset, but borrowing and short selling are prohibited. The model is naturally formulated as an optimal stochastic control problem with state constraints and is treated by the dynamic programming approach. We show...

In this paper, we present a result on relaxability of partially observed control problems for infinite dimensional stochastic systems in a Hilbert space. This is motivated by the fact that measure-valued controls, also known as relaxed controls, are difficult to construct in practice, so one must ask whether the solutions corresponding to measure-valued controls can be approximated by those corresponding to ordinary controls. Our main result is the relaxation theorem, which states that...