On the adaptive control of countable Markov chains
We consider the collection of all ε-optimal solutions for a stochastic process with locally bounded trajectories defined on a topological space. For sequences of such stochastic processes and of nonnegative random variables, we give sufficient conditions for the corresponding (closed) random sets to converge in distribution with respect to the Fell topology and to the coarser Missing topology.
The asymptotics of the utility from terminal wealth is studied. First, a finite-horizon problem for an arbitrary utility function is considered. To study the long-run infinite-horizon problem, a certain positive homogeneity (PH) assumption is imposed. It is then shown that assumption (PH) is satisfied essentially only by power and logarithmic utility functions.
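The positive homogeneity property underlying assumption (PH) can be illustrated for the power utility U(x) = x^γ/γ, which satisfies U(λx) = λ^γ U(x) for λ > 0. The sketch below is ours, not the paper's; the function name and parameter values are arbitrary.

```python
# Illustrative check (not from the paper): the power utility
# U(x) = x**gamma / gamma is positively homogeneous of degree gamma,
# i.e. U(lam * x) = lam**gamma * U(x) for lam > 0.

def power_utility(x, gamma=0.5):
    """Power (CRRA-type) utility; gamma in (0, 1) chosen for concreteness."""
    return x**gamma / gamma

lam, x, gamma = 2.0, 3.0, 0.5
lhs = power_utility(lam * x, gamma)
rhs = lam**gamma * power_utility(x, gamma)
print(abs(lhs - rhs) < 1e-12)  # prints True: homogeneity up to rounding
```

The logarithmic utility satisfies the analogous additive relation U(λx) = U(x) + log λ, which is why it appears alongside the power utilities in the statement above.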
In the present paper, optimal time-invariant state feedback controllers are designed for a class of discrete-time, time-varying control systems with Markov jump parameters and a quadratic performance index. We assume that the coefficients have limits as time tends to infinity and that the boundary system is absolutely observable and stabilizable. Moreover, following the same line of reasoning, an adaptive controller is proposed for the case when the system parameters are unknown but their strongly consistent estimators...
In this paper we solve the basic fractional analogue of the classical infinite time horizon linear-quadratic Gaussian regulator problem. For a completely observable controlled linear system driven by a fractional Brownian motion, we describe explicitly the optimal control policy which minimizes an asymptotic quadratic performance criterion.
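As a rough numerical illustration of the driving noise in the abstract above (not the paper's method), a fractional Brownian motion with Hurst index H can be sampled on a grid from its standard covariance E[B_H(s)B_H(t)] = ½(s^{2H} + t^{2H} − |t − s|^{2H}) via a Cholesky factorization; all function names and parameters below are ours.

```python
# Illustrative sketch: sampling fractional Brownian motion on a grid
# from its covariance
#   E[B_H(s) B_H(t)] = 0.5 * (s**(2H) + t**(2H) - |t - s|**(2H)).
# For H = 1/2 this reduces to standard Brownian motion, cov = min(s, t).

import numpy as np

def fbm_covariance(times, hurst):
    """Covariance matrix of fBm with Hurst index `hurst` at the given times."""
    t = np.asarray(times, dtype=float)
    s, u = np.meshgrid(t, t, indexing="ij")
    return 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))

def sample_fbm(times, hurst, rng):
    """One fBm sample path via the Cholesky factor of the covariance."""
    cov = fbm_covariance(times, hurst)
    # A tiny diagonal jitter keeps the factorization stable on fine grids.
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(len(times)))
    return chol @ rng.standard_normal(len(times))

times = np.linspace(0.1, 1.0, 10)
path = sample_fbm(times, hurst=0.7, rng=np.random.default_rng(0))
```

The Cholesky approach is exact but O(n³); for long grids one would instead use a circulant-embedding method, which is outside the scope of this sketch.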
We are concerned with the optimal control of a nonlinear stochastic heat equation on a bounded real interval with Neumann boundary conditions. The distinctive feature here is that both the control and the noise act on the boundary. We start by reformulating the state equation as an infinite dimensional stochastic evolution equation. The first main result of the paper is the proof of existence and uniqueness of a mild solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation. The C1 regularity...
In this paper, we consider optimal feedback control for stochastic infinite dimensional systems. We present some new results on the solution of the associated HJB equations in infinite dimensional Hilbert spaces. In the process, we have also developed some new mathematical tools involving distributions on Hilbert spaces which may have interesting applications in other fields. We conclude with an application to optimal stationary feedback control.
In this paper, we consider a class of infinite dimensional stochastic impulsive evolution inclusions driven by vector measures. We use stochastic vector measures as controls adapted to an increasing family of complete sigma algebras and prove the existence of optimal controls.
In this paper we study the existence of the optimal (minimizing) control for a tracking problem, as well as for a quadratic cost problem, subject to linear stochastic evolution equations with unbounded coefficients in the drift. The backward differential Riccati equation (BDRE) associated with these problems (see [chen] for finite dimensional stochastic equations, or [UC] for infinite dimensional equations with bounded coefficients) is in general different from the conventional BDRE (see [1990], [ukl])...
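For orientation, the "conventional" Riccati equation referred to above specializes, in the scalar deterministic discrete-time case, to the familiar backward recursion of LQ control; the sketch below iterates it to its stationary (algebraic) solution. This is a textbook illustration under our own arbitrary parameters, not the paper's BDRE.

```python
# Illustrative sketch of the conventional discrete-time Riccati recursion
# for a scalar LQ problem x_{k+1} = a x_k + b u_k with stage cost
# q x_k**2 + r u_k**2. The parameters below are arbitrary.

def riccati_step(p, a, b, q, r):
    """One backward step: P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA."""
    return q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)

a, b, q, r = 0.9, 1.0, 1.0, 1.0
p = 0.0
for _ in range(200):  # iterate backward until numerically stationary
    p = riccati_step(p, a, b, q, r)

# At the fixed point, p solves the scalar algebraic Riccati equation.
residual = abs(p - riccati_step(p, a, b, q, r))
```

In this scalar case the fixed point can be checked by hand: eliminating denominators reduces the stationarity condition to p² − a²r·p/... (here, with b = q = r = 1, to p² − 0.81p − 1 = 0), whose positive root the iteration approaches geometrically.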