Displaying 41 – 60 of 324


An explicit solution for optimal investment problems with autoregressive prices and exponential utility

Sándor Deák, Miklós Rásonyi (2015)

Applicationes Mathematicae

We calculate explicitly the optimal strategy for an investor with an exponential utility function when the price of a single risky asset (stock) follows a discrete-time autoregressive Gaussian process. We also calculate its performance and analyse it as the trading horizon tends to infinity. The dependence of the asymptotic performance on the autoregression parameter is determined. This provides, to the best of our knowledge, the first instance of a theorem linking directly the memory of the asset price...
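The setting above involves two simple ingredients that can be illustrated concretely: a discrete-time Gaussian AR(1) price process and an exponential (CARA) utility of wealth. The following is a minimal sketch; the parameter names rho (autoregression coefficient), sigma (noise scale) and gamma (risk aversion) are illustrative and not the paper's notation, and no claim is made about the paper's optimal strategy itself.

```python
import math
import random

def simulate_ar1_prices(p0, rho, sigma, horizon, rng):
    """Simulate a Gaussian AR(1) price path: p_{t+1} = rho * p_t + sigma * eps_t,
    with eps_t i.i.d. standard normal."""
    prices = [p0]
    for _ in range(horizon):
        prices.append(rho * prices[-1] + sigma * rng.gauss(0.0, 1.0))
    return prices

def exponential_utility(wealth, gamma):
    """CARA utility U(w) = -exp(-gamma * w); gamma > 0 is the risk aversion."""
    return -math.exp(-gamma * wealth)
```

With sigma = 0 the recursion is deterministic and the price decays geometrically at rate rho, which is a quick sanity check on the simulation.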

An optimality system for finite average Markov decision chains under risk-aversion

Alfredo Alanís-Durán, Rolando Cavazos-Cadena (2012)

Kybernetika

This work concerns controlled Markov chains with finite state space and compact action sets. The decision maker is risk-averse with constant risk-sensitivity, and the performance of a control policy is measured by the long-run average cost criterion. Under standard continuity-compactness conditions, it is shown that the (possibly non-constant) optimal value function is characterized by a system of optimality equations which allows one to obtain an optimal stationary policy. Also, it is shown that the...

An unbounded Berge's minimum theorem with applications to discounted Markov decision processes

Raúl Montes-de-Oca, Enrique Lemus-Rodríguez (2012)

Kybernetika

This paper deals with a certain class of unbounded optimization problems. The optimization problems taken into account depend on a parameter. First, conditions are established which guarantee the continuity, with respect to the parameter, of the minimum of the optimization problems under consideration, and the upper semicontinuity of the multifunction which maps each parameter to its set of minimizers. Moreover, under the additional condition of uniqueness of the minimizer, its...

Another set of verifiable conditions for average Markov decision processes with Borel spaces

Xiaolong Zou, Xianping Guo (2015)

Kybernetika

In this paper we give a new set of verifiable conditions for the existence of average optimal stationary policies in discrete-time Markov decision processes with Borel spaces and unbounded reward/cost functions. More precisely, we provide another set of conditions, which consists only of a Lyapunov-type condition and the common continuity-compactness conditions. These conditions are imposed on the primitive data of the model of Markov decision processes and are thus easy to verify. We also give two...

Approximation, estimation and control of stochastic systems under a randomized discounted cost criterion

Juan González-Hernández, Raquiel R. López-Martínez, J. Adolfo Minjárez-Sosa (2009)

Kybernetika

The paper deals with a class of discrete-time stochastic control processes under a discounted optimality criterion with random discount rate, and possibly unbounded costs. The state process x_t and the discount process α_t evolve according to the coupled difference equations x_{t+1} = F(x_t, α_t, a_t, ξ_t), α...
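The coupled recursion above can be sketched as a rollout that accumulates a randomly discounted cost. Since the abstract is truncated, the discount dynamics G below are a hypothetical placeholder, and F, G, cost, and policy are all supplied by the caller; this is purely illustrative of the model structure, not the paper's method.

```python
import random

def rollout_discounted_cost(F, G, cost, policy, x0, alpha0, horizon, rng):
    """Roll out x_{t+1} = F(x_t, alpha_t, a_t, xi_t) alongside a random
    discount process alpha_t (dynamics G are a placeholder), accumulating
    the randomly discounted total cost  sum_t (prod_{s<t} alpha_s) c(x_t, a_t)."""
    x, alpha = x0, alpha0
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        a = policy(x)
        total += discount * cost(x, a)     # stage cost, discounted so far
        xi, eta = rng.gauss(0.0, 1.0), rng.random()
        discount *= alpha                  # random discount rate applied
        x = F(x, alpha, a, xi)             # state transition (uses current alpha_t)
        alpha = G(alpha, eta)              # hypothetical discount dynamics
    return total
```

With a constant discount process (G the identity in alpha) and unit stage cost, the rollout reduces to a finite geometric sum, which makes the accumulation easy to verify by hand.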

Asymptotic properties and optimization of some non-Markovian stochastic processes

Evgueni I. Gordienko, Antonio Garcia, Juan Ruiz de Chavez (2009)

Kybernetika

We study the limit behavior of certain classes of dependent random sequences (processes) which do not possess the Markov property. Assuming that these processes depend on a control parameter, we show that the optimization of the control can be reduced to a problem of nonlinear optimization. Under certain hypotheses we establish the stability of such optimization problems.

Average cost Markov control processes with weighted norms: existence of canonical policies

Evgueni Gordienko, Onésimo Hernández-Lerma (1995)

Applicationes Mathematicae

This paper considers discrete-time Markov control processes on Borel spaces, with possibly unbounded costs, and the long run average cost (AC) criterion. Under appropriate hypotheses on weighted norms for the cost function and the transition law, the existence of solutions to the average cost optimality inequality and the average cost optimality equation is shown, which in turn yields the existence of AC-optimal and AC-canonical policies, respectively.

Average cost Markov control processes with weighted norms: value iteration

Evgueni Gordienko, Onésimo Hernández-Lerma (1995)

Applicationes Mathematicae

This paper shows the convergence of the value iteration (or successive approximations) algorithm for average cost (AC) Markov control processes on Borel spaces, with possibly unbounded cost, under appropriate hypotheses on weighted norms for the cost function and the transition law. It is also shown that the aforementioned convergence implies strong forms of AC-optimality and the existence of forecast horizons.
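Value iteration for the average cost criterion is often implemented in its relative (re-centred) form to keep the iterates bounded. The following is a minimal sketch for a finite MDP only; the paper above works on Borel spaces with unbounded costs, so this finite-state version is an illustration of the algorithmic idea, not the paper's result.

```python
def relative_value_iteration(costs, trans, n_iter=200):
    """Relative value iteration for a finite average-cost MDP.
    costs[s][a] is the stage cost; trans[s][a][sp] the transition probability.
    Returns (g, h): estimated average cost and relative value function,
    normalised so that h[0] == 0."""
    n_states = len(costs)
    h = [0.0] * n_states
    g = 0.0
    for _ in range(n_iter):
        # One step of the dynamic-programming (Bellman) operator.
        Th = [min(costs[s][a] + sum(p * h[sp] for sp, p in enumerate(trans[s][a]))
                  for a in range(len(costs[s])))
              for s in range(n_states)]
        g = Th[0]                 # at the fixed point, (Th)(s_ref) = g + h(s_ref) = g
        h = [v - g for v in Th]   # re-centre so h(s_ref) = 0
    return g, h
```

For a two-state chain with stage costs 1 and 3 and uniform transitions, the stationary distribution is (1/2, 1/2), so the average cost is 2, which the iteration recovers quickly.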

Bayesian estimation of the mean holding time in average semi-Markov control processes

J. Adolfo Minjárez-Sosa, José A. Montoya (2015)

Applicationes Mathematicae

We consider semi-Markov control models with Borel state and action spaces, possibly unbounded costs, and holding times with a generalized exponential distribution with unknown mean θ. Assuming that such a distribution does not depend on the state-action pairs, we introduce a Bayesian estimation procedure for θ, which combined with a variant of the vanishing discount factor approach yields average cost optimal policies.
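The Bayesian step above can be illustrated in the simplest conjugate setting. The paper uses a generalized exponential family; the sketch below assumes plain exponential holding times with a Gamma prior on the rate 1/θ, which is an assumption made here for illustration, not the paper's model.

```python
def posterior_mean_holding_time(samples, a0=2.0, b0=1.0):
    """Conjugate Bayesian update assuming exponential holding times with
    unknown mean theta and a Gamma(shape=a0, rate=b0) prior on the rate
    1/theta.  The posterior on the rate is Gamma(a0 + n, b0 + S), where
    n = len(samples) and S = sum(samples), so the posterior mean of
    theta is (b0 + S) / (a0 + n - 1), valid whenever a0 + n > 1."""
    n, s = len(samples), sum(samples)
    return (b0 + s) / (a0 + n - 1.0)
```

As more holding times are observed, the estimate is dominated by the sample mean S/n, so the prior hyperparameters a0, b0 matter mainly for small samples.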

Bellman approach to some problems in harmonic analysis

Alexander Volberg (2001/2002)

Séminaire Équations aux dérivées partielles

Stochastic optimal control uses Bellman's differential equation and its solution, the Bellman function. Recently the Bellman function has proved to be an efficient tool for solving some (sometimes old) problems in harmonic analysis.
