Displaying similar documents to “Uniform value in dynamic programming”

Stochastic dynamic programming with random disturbances

Regina Hildenbrandt (2003)

Discussiones Mathematicae Probability and Statistics

Similarity:

Several peculiarities of stochastic dynamic programming problems in which random vectors are observed before the decision is made at each stage are discussed in the first part of this paper. Surrogate problems are given for such problems with distance properties (for instance, transportation problems) in the second part.

Interactive compromise hypersphere method and its applications

Sebastian Sitarz (2012)

RAIRO - Operations Research - Recherche Opérationnelle

Similarity:

The paper focuses on multi-criteria problems. It presents the interactive compromise hypersphere method with sensitivity analysis as a decision tool in multi-objective programming problems. The method is based on finding a hypersphere (in the criteria space) which is closest to the set of chosen nondominated solutions. The proposed modifications of the compromise hypersphere method are based on using various metrics and analyzing their influence on the original method. Applications of...

Another set of verifiable conditions for average Markov decision processes with Borel spaces

Xiaolong Zou, Xianping Guo (2015)

Kybernetika

Similarity:

In this paper we give a new set of verifiable conditions for the existence of average optimal stationary policies in discrete-time Markov decision processes with Borel spaces and unbounded reward/cost functions. More precisely, we provide another set of conditions, which consists only of a Lyapunov-type condition and the common continuity-compactness conditions. These conditions are imposed on the primitive data of the Markov decision process model and are thus easy to verify. We also...

Markov decision processes with time-varying discount factors and random horizon

Rocio Ilhuicatzi-Roldán, Hugo Cruz-Suárez, Selene Chávez-Rodríguez (2017)

Kybernetika

Similarity:

This paper is related to Markov decision processes. The optimal control problem is to minimize the expected total discounted cost with a non-constant discount factor. The discount factor is time-varying and may depend on the state and the action. Furthermore, the horizon of the optimization problem is given by a discrete random variable; that is, a random horizon is assumed. Under general conditions on the Markov control model, using the dynamic programming approach,...

Dynamic programming for an investment/consumption problem in illiquid markets with regime-switching

Paul Gassiat, Fausto Gozzi, Huyên Pham (2015)

Banach Center Publications

Similarity:

We consider an illiquid financial market with different regimes modeled by a continuous time finite-state Markov chain. The investor can trade a stock only at the discrete arrival times of a Cox process with intensity depending on the market regime. Moreover, the risky asset price is subject to liquidity shocks, which change its rate of return and volatility, and induce jumps on its dynamics. In this setting, we study the problem of an economic agent optimizing her expected utility from...