A method for solving second-order matrix differential equations without increasing the dimension of the problem is presented. Explicit approximate solutions and an error bound for them in terms of the data are given.
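The abstract does not spell out the scheme; as a hedged illustration of the general idea of integrating a second-order matrix equation X''(t) = F(t, X) directly, without rewriting it as a first-order system of doubled dimension, the sketch below uses the classical Störmer (central-difference) one-step recursion. The scheme, the test equation and all data are illustrative placeholders, not the paper's method.

```python
import numpy as np

def stormer_matrix_ode(F, X0, V0, t0, t1, n_steps):
    """Integrate the matrix ODE X''(t) = F(t, X) by the central-difference
    (Stormer) scheme, working directly with the matrix unknown X so the
    dimension of the problem is never doubled."""
    h = (t1 - t0) / n_steps
    X_prev = X0
    # Taylor start-up step: X_1 ~ X_0 + h V_0 + (h^2 / 2) F(t_0, X_0)
    X_curr = X0 + h * V0 + 0.5 * h**2 * F(t0, X0)
    for k in range(1, n_steps):
        t_k = t0 + k * h
        X_next = 2.0 * X_curr - X_prev + h**2 * F(t_k, X_curr)
        X_prev, X_curr = X_curr, X_next
    return X_curr

# Example: X'' = -A X with A symmetric positive definite (a matrix oscillator).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
F = lambda t, X: -A @ X
X_end = stormer_matrix_ode(F, np.eye(2), np.zeros((2, 2)), 0.0, 1.0, 1000)
print(X_end)
```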
In this paper we give some new results concerning the solvability of the one-dimensional differential equation with initial conditions. We study the basic theorem due to Picard. First we prove that the existence and uniqueness result remains true if the right-hand side is a Lipschitz function with respect to the first argument. In the second part we give a contractive method for the proof of the Picard theorem. These considerations allow us to develop two new methods for finding an approximation sequence for the solution....
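Since the abstract is built around the Picard theorem and an approximation sequence for the solution, a minimal sketch of the classical Picard iteration y_{k+1}(t) = y0 + ∫_{t0}^{t} f(s, y_k(s)) ds may help orient the reader. The trapezoidal quadrature, grid, and test problem below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def picard_iteration(f, t0, y0, t_grid, n_iter=20):
    """Classical Picard iteration y_{k+1}(t) = y0 + int_{t0}^{t} f(s, y_k(s)) ds
    for the IVP y' = f(t, y), y(t0) = y0, evaluated on a fixed grid starting at t0.
    The integral is approximated by the cumulative trapezoidal rule."""
    y = np.full_like(t_grid, y0, dtype=float)      # y_0(t) = y0 (constant start)
    for _ in range(n_iter):
        vals = f(t_grid, y)                        # f(s, y_k(s)) on the grid
        # cumulative trapezoidal integral from t0 to each grid point
        increments = 0.5 * np.diff(t_grid) * (vals[1:] + vals[:-1])
        y = y0 + np.concatenate(([0.0], np.cumsum(increments)))
    return y

# Example: y' = y, y(0) = 1, whose exact solution is exp(t).
t = np.linspace(0.0, 1.0, 201)
approx = picard_iteration(lambda t, y: y, 0.0, 1.0, t)
print(abs(approx[-1] - np.e))   # residual error is dominated by the quadrature
```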
The present paper does not introduce a new approximation; instead it modifies a certain known method. This method, for obtaining a periodic approximation of a periodic solution of a linear nonhomogeneous differential equation with periodic coefficients and a periodic right-hand side, is used in technical practice. However, the conditions ensuring the existence of a periodic solution may be violated, and the purpose of this paper is therefore to modify the method so that these conditions remain valid....
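The abstract does not name the known method it modifies. Purely for orientation, one generic scheme used in practice for x'(t) = a(t) x(t) + f(t) with T-periodic a and f is to seek a truncated Fourier series and fit its coefficients by collocation; the sketch below implements that generic idea and is not necessarily the method the paper refers to. The harmonic count, collocation grid and sample coefficients are illustrative assumptions.

```python
import numpy as np

def periodic_collocation(a, f, T, M=10, N=64):
    """Approximate the T-periodic solution of x'(t) = a(t) x(t) + f(t)
    by a truncated Fourier series with M harmonics, fitting the ODE at
    N equally spaced collocation points over one period."""
    w = 2.0 * np.pi / T
    t = np.linspace(0.0, T, N, endpoint=False)
    # Basis values and their derivatives at the collocation points.
    B = [np.ones(N)]
    dB = [np.zeros(N)]
    for m in range(1, M + 1):
        B += [np.cos(m * w * t), np.sin(m * w * t)]
        dB += [-m * w * np.sin(m * w * t), m * w * np.cos(m * w * t)]
    B, dB = np.column_stack(B), np.column_stack(dB)
    # Least-squares fit of x'(t) - a(t) x(t) = f(t) in the Fourier coefficients.
    coeffs, *_ = np.linalg.lstsq(dB - a(t)[:, None] * B, f(t), rcond=None)
    return lambda s: _eval(coeffs, s, w, M)

def _eval(c, s, w, M):
    x = c[0] * np.ones_like(s)
    for m in range(1, M + 1):
        x += c[2 * m - 1] * np.cos(m * w * s) + c[2 * m] * np.sin(m * w * s)
    return x

# Example: x' = (-1 + 0.5*cos(t)) x + sin(t), seeking the 2*pi-periodic solution.
x = periodic_collocation(lambda t: -1.0 + 0.5 * np.cos(t), np.sin, 2.0 * np.pi)
print(x(np.array([0.0, np.pi])))
```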
A generalized quasilinearization technique is applied to obtain a sequence of approximate solutions converging monotonically and quadratically to the unique solution of the forced Duffing equation with nonlocal discontinuous-type integral boundary conditions.
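As a hedged sketch of the quasilinearization idea only (each sweep solves the problem linearized about the previous iterate, u_{k+1}'' = f(t, u_k) + f_u(t, u_k)(u_{k+1} - u_k)), the code below applies it to an undamped Duffing-type equation u'' + u + u^3 = cos t discretized by central differences. For brevity it uses plain Dirichlet data instead of the nonlocal integral boundary conditions treated in the paper; the equation, interval and grid are illustrative assumptions.

```python
import numpy as np

def quasilinearization_duffing(a, b, ua, ub, n=200, n_iter=8):
    """Quasilinearization for the Duffing-type BVP  u'' = f(t, u),
    f(t, u) = cos(t) - u - u**3, with Dirichlet data u(a) = ua, u(b) = ub.
    Each sweep solves the linearized BVP
        u_{k+1}'' = f(t, u_k) + f_u(t, u_k) * (u_{k+1} - u_k)
    by second-order central differences (a tridiagonal linear system)."""
    f  = lambda t, u: np.cos(t) - u - u**3
    fu = lambda t, u: -1.0 - 3.0 * u**2
    h = (b - a) / n
    t = a + h * np.arange(1, n)              # interior nodes
    u = np.linspace(ua, ub, n + 1)[1:-1]     # initial guess: straight line
    for _ in range(n_iter):
        q = fu(t, u)
        rhs = f(t, u) - q * u
        rhs[0]  -= ua / h**2                 # fold boundary values into the rhs
        rhs[-1] -= ub / h**2
        A = (np.diag(np.full(n - 1, -2.0 / h**2) - q)
             + np.diag(np.full(n - 2, 1.0 / h**2), 1)
             + np.diag(np.full(n - 2, 1.0 / h**2), -1))
        u = np.linalg.solve(A, rhs)
    return np.concatenate(([ua], u, [ub])), np.concatenate(([a], t, [b]))

u, t = quasilinearization_duffing(0.0, 1.0, 0.0, 0.0)
print(u[len(u) // 2])   # approximate mid-interval value of the final iterate
```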
We present an approximation method for Picard second-order boundary value problems with Carathéodory right-hand side. The method is based on the idea of replacing a measurable function in the right-hand side of the problem with its Kantorovich polynomial. We will show that this approximation scheme recovers essential solutions to the original BVP. We also consider the corresponding finite-dimensional problem. We suggest a suitable mapping of solutions to finite-dimensional problems to piecewise constant...
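For orientation, the Kantorovich polynomial of an integrable function g on [0, 1] is K_n g(x) = (n+1) ∑_{k=0}^{n} C(n,k) x^k (1-x)^{n-k} ∫_{k/(n+1)}^{(k+1)/(n+1)} g(t) dt, which is well defined for merely measurable right-hand sides. The sketch below computes this standard operator; the midpoint quadrature and the sample discontinuous function are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from math import comb

def kantorovich_polynomial(g, n, num_quad=50):
    """Return x -> K_n g(x), the n-th Kantorovich polynomial of g on [0, 1]:
        K_n g(x) = (n+1) * sum_k C(n,k) x^k (1-x)^(n-k) * int_{k/(n+1)}^{(k+1)/(n+1)} g(t) dt.
    Cell integrals are approximated by the midpoint rule, so g only needs to be
    defined almost everywhere (Caratheodory-type right-hand sides)."""
    cell_means = np.empty(n + 1)
    for k in range(n + 1):
        lo, hi = k / (n + 1), (k + 1) / (n + 1)
        pts = lo + (hi - lo) * (np.arange(num_quad) + 0.5) / num_quad
        cell_means[k] = np.mean(g(pts))   # equals (n+1) * integral over the cell
    binom = np.array([comb(n, k) for k in range(n + 1)])

    def K(x):
        x = np.asarray(x, dtype=float)
        k = np.arange(n + 1)
        basis = binom * x[..., None]**k * (1.0 - x[..., None])**(n - k)
        return basis @ cell_means
    return K

# Example: a discontinuous (step) right-hand side smoothed by its Kantorovich polynomial.
g = lambda t: np.where(t < 0.5, 1.0, -1.0)
K = kantorovich_polynomial(g, n=30)
print(K(np.array([0.25, 0.5, 0.75])))
```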
In this paper we propose a procedure to construct approximations of the inverse of a class of differentiable mappings. First of all, we determine, in terms of the data, a neighbourhood where the inverse mapping is well defined. It is then proved that the theoretical inverse can be expressed in terms of the solution of a differential equation depending on parameters. Finally, using one-step matrix methods, we construct approximate inverse mappings of prescribed accuracy.
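A plausible, hedged reading of "expressing the inverse through a differential equation" is the standard continuation formulation: the path x(s) defined by f(x(s)) = f(x0) + s (y - f(x0)) satisfies x'(s) = Df(x(s))^{-1} (y - f(x0)), x(0) = x0, and reaches x(1) = f^{-1}(y). The sketch below integrates that ODE with the classical fourth-order Runge-Kutta one-step scheme standing in for the paper's one-step matrix methods; the mapping, Jacobian and starting point are illustrative assumptions.

```python
import numpy as np

def approximate_inverse(f, Df, x0, y, n_steps=50):
    """Approximate f^{-1}(y) by integrating the continuation ODE
        x'(s) = Df(x(s))^{-1} (y - f(x0)),   x(0) = x0,   s in [0, 1],
    with the classical 4th-order Runge-Kutta one-step method."""
    r = y - f(x0)                       # constant right-hand-side direction
    rhs = lambda x: np.linalg.solve(Df(x), r)
    h = 1.0 / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * h * k1)
        k3 = rhs(x + 0.5 * h * k2)
        k4 = rhs(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Example: invert f(x) = (x0 + x1**3, x1 + x0**3) near the origin.
f  = lambda x: np.array([x[0] + x[1]**3, x[1] + x[0]**3])
Df = lambda x: np.array([[1.0, 3 * x[1]**2], [3 * x[0]**2, 1.0]])
x_inv = approximate_inverse(f, Df, np.zeros(2), np.array([0.2, 0.1]))
print(x_inv, f(x_inv))                  # f(x_inv) should be close to [0.2, 0.1]
```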
We consider a differential equation with a random rapidly varying coefficient. The random coefficient is a Gaussian process with slowly decaying correlations that competes with a periodic component. In the asymptotic framework corresponding to the separation of scales present in the problem, we prove that the solution of the differential equation converges in distribution to the solution of a stochastic differential equation driven by a classical Brownian motion in some cases, by a fractional Brownian motion...