We present a general method which allows the use of Malliavin calculus for additive functionals of stochastic equations with irregular drift. This method uses the Girsanov theorem combined with an Itô–Taylor expansion in order to obtain regularity properties for the density of such functionals. We apply the methodology to the case of the Lebesgue integral of a diffusion with bounded and measurable drift.
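As an illustration of the object studied (not of the Girsanov/Itô–Taylor argument itself), here is a minimal Monte Carlo sketch in Python of the additive functional F = ∫_0^T X_t dt for a one-dimensional diffusion dX_t = b(X_t) dt + dW_t with a bounded, merely measurable drift. The drift choice b(x) = sign(x), the function name and all numerical parameters are hypothetical.

import numpy as np

def simulate_additive_functional(b, x0=0.0, T=1.0, n_steps=1000, n_paths=10000, seed=0):
    """Euler-Maruyama approximation of dX_t = b(X_t) dt + dW_t together with
    the additive functional F = int_0^T X_t dt (left-point Riemann sum)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    F = np.zeros(n_paths)
    for _ in range(n_steps):
        F += x * dt                                            # accumulate the Lebesgue integral
        x = x + b(x) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return F

# bounded, merely measurable drift: b(x) = sign(x)
samples = simulate_additive_functional(np.sign)
print(samples.mean(), samples.std())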
This paper deals with the relationship between two-dimensional parameter Gaussian random fields verifying a particular Markov property and the solutions of stochastic differential equations. In the non-Gaussian case, some diffusion conditions are introduced, and a backward equation for the evolution of the transition probability functions is obtained.
We present a Monte Carlo technique for sampling from the canonical distribution in molecular dynamics. The method is built upon the Nosé-Hoover constant temperature formulation and the generalized hybrid Monte Carlo method. In contrast to standard hybrid Monte Carlo methods, only the thermostat degree of freedom is stochastically resampled during a Monte Carlo step.
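The Nosé-Hoover-specific construction of the paper is not reproduced here; the Python sketch below only illustrates the generalized hybrid Monte Carlo building block it relies on (partial momentum refreshment, a leapfrog proposal, a Metropolis test with momentum flip on rejection), applied to a toy Gaussian target. All function names and numerical parameters are hypothetical.

import numpy as np

def ghmc_step(q, p, grad_U, U, dt=0.05, n_leapfrog=10, alpha=0.9, rng=None):
    """One generalized hybrid Monte Carlo step: partial momentum refreshment,
    leapfrog proposal, Metropolis test, momentum flip on rejection.
    Target density: exp(-U(q) - |p|^2 / 2)."""
    if rng is None:
        rng = np.random.default_rng()
    # partial refreshment: mix the old momentum with fresh Gaussian noise
    p = alpha * p + np.sqrt(1.0 - alpha**2) * rng.standard_normal(p.shape)
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of Hamiltonian dynamics
    p_new -= 0.5 * dt * grad_U(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += dt * p_new
        p_new -= dt * grad_U(q_new)
    q_new += dt * p_new
    p_new -= 0.5 * dt * grad_U(q_new)
    # Metropolis test on the total energy
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    if np.log(rng.uniform()) < -dH:
        return q_new, p_new          # accept
    return q, -p                     # reject: keep state, flip momentum

# toy example: standard Gaussian target, U(q) = |q|^2 / 2
U = lambda q: 0.5 * q @ q
grad_U = lambda q: q
rng = np.random.default_rng(0)
q, p = np.zeros(2), rng.standard_normal(2)
samples = []
for _ in range(5000):
    q, p = ghmc_step(q, p, grad_U, U, rng=rng)
    samples.append(q.copy())
print(np.mean(samples, axis=0), np.var(samples, axis=0))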
In this article, we study the numerical approximation of stochastic differential equations driven by a multidimensional fractional Brownian motion (fBm) with Hurst parameter greater than 1/3. We introduce an implementable scheme for these equations, which is based on a second-order Taylor expansion, where the usual Lévy area terms are replaced by products of increments of the driving fBm. The convergence of our scheme is shown by means of a combination of rough paths techniques and error bounds...
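A minimal Python sketch of such a scheme, under the simplifying assumption of a scalar state driven by d independent fBm components: the iteration Y_{k+1} = Y_k + sum_i sigma_i(Y_k) dB^i_k + (1/2) sum_{i,j} sigma_i(Y_k) sigma_j'(Y_k) dB^i_k dB^j_k, with the fBm increments sampled exactly via a Cholesky factorization of their covariance. The coefficient functions, the Hurst parameter H = 0.4 and all other parameters are illustrative, not taken from the paper.

import numpy as np

def fbm_increments(n, h, H, d, rng):
    """Increments of d independent fractional Brownian motions on a uniform
    grid of n steps of size h, sampled exactly via Cholesky factorization
    of the increment covariance (adequate for moderate n)."""
    k = np.arange(n)
    dist = np.abs(k[:, None] - k[None, :])
    cov = 0.5 * h**(2 * H) * ((dist + 1.0)**(2 * H)
                              + np.abs(dist - 1.0)**(2 * H)
                              - 2.0 * dist**(2 * H))
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal((n, d))             # shape (n, d)

def modified_milstein(y0, sigma, dsigma, T=1.0, n=500, H=0.4, seed=0):
    """Second-order Taylor scheme for the scalar equation
        dY_t = sum_i sigma_i(Y_t) dB^i_t,
    with the Levy-area terms replaced by products of fBm increments."""
    rng = np.random.default_rng(seed)
    d, h = len(sigma), T / n
    dB = fbm_increments(n, h, H, d, rng)
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        s  = np.array([f(y[k]) for f in sigma])         # sigma_i(Y_k)
        ds = np.array([g(y[k]) for g in dsigma])        # sigma_i'(Y_k)
        second_order = 0.5 * (s[:, None] * ds[None, :] * np.outer(dB[k], dB[k])).sum()
        y[k + 1] = y[k] + s @ dB[k] + second_order
    return y

# toy example: two-dimensional driving fBm with H = 0.4 > 1/3
sigma  = [np.sin, np.cos]
dsigma = [np.cos, lambda y: -np.sin(y)]
path = modified_milstein(0.1, sigma, dsigma)
print(path[-1])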
A singular stochastic control problem in n dimensions with time-dependent coefficients on a finite time horizon is considered. We show that the value function for this problem is a generalized solution of the corresponding HJB equation with locally bounded second derivatives with respect to the space variables and first derivative with respect to time. Moreover, we prove that an optimal control exists and is unique.
In this paper, we present a new proof of the celebrated theorem of Kellerer, stating that every integrable process that increases in the convex order has the same one-dimensional marginals as a martingale. Our proof proceeds by approximations, and calls upon martingales constructed as solutions of stochastic differential equations. It relies on a uniqueness result, due to Pierre, for a Fokker-Planck equation.
The purpose of this paper is to introduce a new noise, denoted by P'(u). It has a space parameter u, in contrast to the usual noise, which depends on the time t. We first explain why such a noise arises naturally. Then, we come to the analysis of functionals of this new noise. We shall emphasize the significance of generalized functionals of P'(u), in particular linear and quadratic ones.
We study a model of motion of a passive tracer particle in a turbulent flow that is strongly mixing in the time variable. In [8] we have shown that there exists a probability measure equivalent to the underlying physical probability under which the quasi-Lagrangian velocity process, i.e. the velocity of the flow observed from the vantage point of the moving particle, is stationary and ergodic. As a consequence, we proved the existence of the mean of the quasi-Lagrangian velocity, the so-called Stokes...