In this paper, we present a result on the relaxability of partially observed control problems for infinite-dimensional stochastic systems in a Hilbert space. This is motivated by the fact that measure-valued controls, also known as relaxed controls, are difficult to construct in practice, so one must ask whether the solutions corresponding to measure-valued controls can be approximated by those corresponding to ordinary controls. Our main result is the relaxation theorem, which states that...
We show how the use of a parallel between the ordinary (+, ×) and the (max, +) algebras, Maslov measures that exploit this parallel, and more specifically their specialization to probabilities and the corresponding cost measures of Quadrat, offers a completely parallel treatment of stochastic and minimax control of disturbed nonlinear discrete-time systems with partial information. This paper is based upon, and improves, the discrete-time part of the earlier paper [9].
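The parallel between the two algebras can be illustrated on a single stage: the same recursion that computes an expected cost in the ordinary (+, ×) algebra computes a worst-case cost in the (max, +) algebra. The following is a minimal sketch with illustrative data (the disturbance values, probabilities, and losses are not from the paper); the max-plus "cost density" plays the role of a Quadrat-style cost measure, normalized so its maximum is 0.

```python
import numpy as np

# Hypothetical one-stage problem: a disturbance w takes two values.
p = np.array([0.5, 0.5])       # (+, ×) view: probabilities of w
c = np.array([0.0, 0.0])       # (max, +) view: cost density of w (max == 0)
loss = np.array([1.0, 3.0])    # stage loss incurred under each disturbance

# Ordinary algebra: "integrate" loss against p  ->  expected cost.
expected_cost = np.sum(p * loss)            # sum of products

# Max-plus algebra: same formula with (sum -> max, product -> +).
worst_case_cost = np.max(c + loss)          # max of sums

print(expected_cost)    # 2.0  (stochastic criterion)
print(worst_case_cost)  # 3.0  (minimax criterion)
```

Replacing (sum, product) by (max, +) turns the stochastic (average-cost) evaluation into the minimax (worst-case) one, which is the structural parallel the abstract refers to.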
In a Discounted Markov Decision Process (DMDP) with finite action sets, the Value Iteration Algorithm, under suitable conditions, leads to an optimal policy in a finite number of steps. Determining an upper bound on the number of steps needed for convergence is of great theoretical and practical interest, as it would provide a computationally feasible stopping rule for value iteration as an algorithm for finding an optimal policy. In this paper we find such a bound depending only...
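For context, standard value iteration on a finite DMDP with the usual ε-based stopping rule can be sketched as follows. The two-state, two-action model and the tolerance are illustrative assumptions, not the bound derived in the paper.

```python
import numpy as np

# Hypothetical DMDP: P[a, s, s'] transition probabilities, R[a, s] rewards.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.7, 0.3]],
])
R = np.array([[1.0, 0.0], [0.5, 2.0]])
gamma = 0.9          # discount factor
eps = 1e-8           # target accuracy for the value function

V = np.zeros(2)
for _ in range(100_000):
    Q = R + gamma * (P @ V)          # one-step lookahead values Q[a, s]
    V_new = Q.max(axis=0)            # Bellman optimality update
    # Standard stopping rule: guarantees V is within eps of V* in sup norm.
    if np.abs(V_new - V).max() < eps * (1 - gamma) / (2 * gamma):
        V = V_new
        break
    V = V_new

policy = (R + gamma * (P @ V)).argmax(axis=0)  # greedy policy per state
```

The abstract's point is that, with finite action sets, a sharper a priori bound on the iteration count can replace this residual-based test as the stopping rule.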
In this paper we solve the basic fractional analogue of the classical linear-quadratic Gaussian regulator problem in continuous time. For a completely observable controlled linear system driven by a fractional Brownian motion, we describe explicitly the optimal control policy which minimizes a quadratic performance criterion.
Some discrete time controlled Markov processes in a locally compact metric space whose transition operators depend on an unknown parameter are described. The adaptive controls are constructed using the large deviations of empirical distributions which are uniform in the parameter that takes values in a compact set. The adaptive procedure uses a finite family of continuous, almost optimal controls. Using the large deviations property it is shown that an adaptive control which is a fixed almost optimal...
This paper is dedicated to the analysis of backward stochastic differential equations (BSDEs) with jumps, subject to an additional global constraint involving all the components of the solution. We study the existence and uniqueness of a minimal solution for these so-called constrained BSDEs with jumps via a penalization procedure. This new type of BSDE offers a nice and practical unifying framework to the notions of constrained BSDEs presented in [S. Peng and M. Xu, Preprint. (2007)] and BSDEs...
In this lecture, we present the essential ideas underlying the classical problem of managing a portfolio so as to maximize expected utility. The typical methods of stochastic control are compared with the ideas of infinite-dimensional convex duality.