We consider a financial market with memory effects in which wealth processes are driven by mean-field stochastic Volterra equations. In this market, the classical dynamic programming method cannot be used to study the optimal investment problem, because the solution of a mean-field stochastic Volterra equation is not a Markov process. In this paper, a new method based on Malliavin calculus, introduced in [1], is used to obtain the optimal investment in a Volterra-type financial market....
This paper deals with Markov decision processes. The optimal control problem is to minimize the expected total discounted cost with a non-constant discount factor: the discount factor is time-varying and may depend on the state and the action. Furthermore, the horizon of the optimization problem is given by a discrete random variable; that is, a random horizon is assumed. Under general conditions on the Markov control model, using the dynamic programming approach, an optimality...
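The dynamic programming approach described above can be illustrated with a minimal sketch. The model below (states, actions, costs, transitions, and the discount array `alpha`) is entirely hypothetical and finite purely for illustration; the only point it demonstrates is value iteration when the discount factor depends on the state and the action, as in the abstract.

```python
import numpy as np

# Hypothetical finite MDP: 4 states, 2 actions (illustration only).
n_states, n_actions = 4, 2
rng = np.random.default_rng(0)

# One-step costs c(x, a) and transition kernel P[a, x, y] (rows sum to 1).
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
P = rng.uniform(size=(n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

# State- and action-dependent discount factor, bounded away from 1
# so the Bellman operator remains a contraction.
alpha = rng.uniform(0.5, 0.9, size=(n_states, n_actions))

def value_iteration(tol=1e-10, max_iter=10_000):
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Q(x, a) = c(x, a) + alpha(x, a) * sum_y P(y | x, a) V(y)
        EV = np.einsum('axy,y->xa', P, V)  # expected next value, shape (x, a)
        Q = cost + alpha * EV
        V_new = Q.min(axis=1)              # minimize expected discounted cost
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmin(axis=1)

V_star, policy = value_iteration()
```

Because `alpha` multiplies the continuation value inside the Bellman operator, the fixed point exists as long as the discount factor is uniformly bounded below 1, which is the kind of condition the abstract's "general conditions on the Markov control model" would cover.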
The maximum principle for optimal control problems of fully coupled forward-backward doubly stochastic differential equations (FBDSDEs for short) is obtained in the global form, under the assumptions that the diffusion coefficients do not contain the control variable, while the control domain need not be convex. We apply our stochastic maximum principle (SMP for short) to investigate the optimal control problems of a class of stochastic partial differential equations (SPDEs for short). As an example...
This paper deals with the optimal control problem in which the controlled system is described by a fully coupled anticipated forward-backward stochastic differential delayed equation. The maximum principle for this problem is obtained under the assumption that the diffusion coefficient does not contain the control variables and the control domain is not necessarily convex. Both necessary and sufficient conditions of optimality are proved. As illustrative examples, two kinds of linear quadratic...
This paper presents an overview of some recent results concerning the emerging theory of minimax LQG control for uncertain systems with a relative entropy constraint uncertainty description. This is an important new robust control system design methodology providing minimax optimal performance in terms of a quadratic cost functional. The paper first considers some standard uncertainty descriptions to motivate the relative entropy constraint uncertainty description. The minimax LQG problem under...
This paper is concerned with the finite- and infinite-horizon optimal control problem for a class of networked control systems with stochastic communication protocols. Due to limited network bandwidth, only a limited number of sensors and actuators are allowed to access the network medium according to stochastic access protocols. A discrete-time Markov chain with a known transition probability matrix is employed to describe the scheduling behavior of the stochastic access protocols,...
This paper considers Markov decision processes (MDPs) with the discounted cost as the objective function, and with state and decision spaces that are subsets of the real line but not necessarily finite or denumerable. The MDPs considered have a possibly unbounded cost function and dynamics independent of the current state. The decision sets considered are possibly non-compact. In this context, conditions to obtain either an increasing or a decreasing optimal stationary...
First, this paper considers a certain class of possibly unbounded optimization problems on Euclidean spaces, for which conditions that permit obtaining monotone minimizers are given. Second, the theory developed in the first part of the paper is applied to Markov control processes (MCPs) on real spaces with a possibly unbounded cost function and possibly noncompact control sets, considering both the discounted and the average cost as the optimality criterion. In this context,...
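The monotone-minimizer phenomenon referred to in the last two abstracts can be seen in a toy example. The cost function below is hypothetical and not taken from either paper; it merely exhibits decreasing differences in the state-action pair, the standard structural condition under which the minimizing action is nondecreasing in the state.

```python
import numpy as np

# Discretized state and action grids (illustration only).
states = np.linspace(0.0, 1.0, 11)
actions = np.linspace(0.0, 1.0, 21)

def cost(x, a):
    # (x - a)^2 has decreasing differences in (x, a): its cross-difference is
    # negative, so increasing the state lowers the incremental cost of a
    # larger action, pushing the minimizer upward with the state.
    return (x - a) ** 2

# Optimal action at each state.
argmin_a = np.array([actions[np.argmin(cost(x, actions))] for x in states])

# argmin_a is nondecreasing in the state: a monotone minimizer.
```

Conditions of this kind, extended to possibly unbounded costs and noncompact decision sets, are what allow the papers above to conclude that an optimal stationary policy is monotone.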