Displaying 61 – 80 of 862
In this paper we investigate the behavior of discrete-time AR (autoregressive) representations over a finite time interval, in terms of the finite and infinite spectral structure of the polynomial matrix involved in the AR equation. A boundary mapping equation and a closed formula for the determination of the solution, in terms of the boundary conditions, are also given.
In a Discounted Markov Decision Process (DMDP) with finite action sets, the Value Iteration Algorithm, under suitable conditions, leads to an optimal policy in a finite number of steps. Determining an upper bound on the number of steps needed for convergence is of great theoretical and practical interest, as it would provide a computationally feasible stopping rule for value iteration as an algorithm for finding an optimal policy. In this paper we find such a bound depending only...
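The stopping-rule idea behind this abstract can be illustrated with the standard discount-based residual bound for value iteration (a minimal sketch on a made-up two-state MDP; the tolerance used here is the classical span bound, not necessarily the sharper bound derived in the paper):

```python
import numpy as np

def value_iteration(P, R, gamma, eps=1e-8):
    """P[a, s, s'] are transition probabilities, R[a, s] expected rewards.
    Iterates the Bellman operator and stops when the greedy policy is
    guaranteed eps-optimal for the discounted criterion."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s]: one-step lookahead values
        V_new = Q.max(axis=0)
        # Classical stopping rule: a residual below eps*(1-gamma)/(2*gamma)
        # guarantees the greedy policy is within eps of optimal.
        if np.max(np.abs(V_new - V)) < eps * (1 - gamma) / (2 * gamma):
            return V_new, Q.argmax(axis=0)
        V = V_new

# Hypothetical two-state, two-action DMDP, for illustration only:
# action 0 stays put and pays 1, action 1 swaps states and pays 0.
P = np.array([[[1., 0.], [0., 1.]],
              [[0., 1.], [1., 0.]]])
R = np.array([[1., 1.],
              [0., 0.]])
V, policy = value_iteration(P, R, gamma=0.9)
```

On this toy model the optimal policy always picks action 0, so both state values converge to 1/(1-0.9) = 10.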
In this paper we solve the basic fractional analogue of the classical linear-quadratic Gaussian regulator problem in continuous time. For a completely observable controlled linear system driven by a fractional Brownian motion, we describe explicitly the optimal control policy which minimizes a quadratic performance criterion.
The focus of this paper is on stochastic change detection applied in connection with active fault diagnosis (AFD). In AFD, an auxiliary input signal is applied; injecting this signal into the system in general allows fast change detection/isolation based on the output, or an error output, of the system. The classical cumulative sum (CUSUM) test is modified with respect to the AFD approach applied. The CUSUM method is altered such that it is able to detect a change...
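For reference, the classical (unmodified) CUSUM recursion that the paper starts from can be sketched as follows, for a mean shift in Gaussian observations (a generic illustration with made-up parameters, not the AFD-specific variant developed in the paper):

```python
import numpy as np

def cusum_alarm(x, mu0, mu1, sigma, h):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in i.i.d. Gaussian data.
    Returns the index of the first alarm, or None if threshold h is never crossed."""
    # Log-likelihood-ratio increments for N(mu1, sigma^2) vs N(mu0, sigma^2).
    s = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
    g = 0.0
    for k, sk in enumerate(s):
        g = max(0.0, g + sk)   # CUSUM recursion, reflected at zero
        if g > h:
            return k           # change declared at sample k
    return None

# Synthetic illustration: the mean shifts from 0 to 2 at sample 50.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(2.0, 1.0, 50)])
alarm = cusum_alarm(x, mu0=0.0, mu1=2.0, sigma=1.0, h=8.0)
```

Before the change the increments have negative drift, so the statistic stays near zero; after the change it climbs steadily and the alarm fires a few samples past the change point.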
We study the adaptive control problem for discrete-time Markov control processes with Borel state and action spaces and possibly unbounded one-stage costs. The processes are given by recurrent equations with i.i.d. -valued random vectors whose density is unknown. Assuming observability of we propose a procedure of statistical estimation of that allows us to prove discounted asymptotic optimality of two types of adaptive policies used earlier for processes with bounded costs.
Some discrete time controlled Markov processes in a locally compact metric space whose transition operators depend on an unknown parameter are described. The adaptive controls are constructed using the large deviations of empirical distributions which are uniform in the parameter that takes values in a compact set. The adaptive procedure uses a finite family of continuous, almost optimal controls. Using the large deviations property it is shown that an adaptive control which is a fixed almost optimal...