-approximation of the likelihood function
For general Bayes decision rules, perceptron approximations based on sufficient-statistic inputs are considered. Particular attention is paid to Bayes discrimination and classification. In the case of exponentially distributed data with a known model, it is shown that a perceptron with one hidden layer is sufficient and that learning is restricted to the synaptic weights of the output neuron. If only the dimension of the exponential model is known, then the number of hidden layers increases...
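The key point of the abstract above, that a Bayes rule over an exponential-family model is linear in the sufficient statistic and hence computable by a single output neuron, can be illustrated on a toy case. The sketch below is not from the paper; it assumes two univariate Gaussian classes with equal, known variance (an exponential family with sufficient statistic T(x) = x) and checks that one neuron on T(x) reproduces the Bayes decision:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes with equal known variance: an exponential-family
# model whose sufficient statistic is simply T(x) = x.
mu0, mu1, sigma2 = -1.0, 1.5, 1.0
p0, p1 = 0.5, 0.5

# The Bayes log-posterior ratio is linear in T(x), so a single neuron
# (weight w, bias b) acting on the sufficient statistic realizes it.
w = (mu1 - mu0) / sigma2
b = (mu0**2 - mu1**2) / (2 * sigma2) + np.log(p1 / p0)

x = rng.normal(0.0, 2.0, size=1000)
perceptron_decision = (w * x + b > 0).astype(int)

# Reference Bayes rule: compare class-conditional log-densities directly.
log_lik0 = -(x - mu0) ** 2 / (2 * sigma2) + np.log(p0)
log_lik1 = -(x - mu1) ** 2 / (2 * sigma2) + np.log(p1)
bayes_decision = (log_lik1 > log_lik0).astype(int)
```

Here the two decision vectors coincide, which is the one-neuron case of the paper's claim; the hidden layer only becomes necessary when the sufficient statistic itself must be learned.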
Bayesian probability theory provides a framework for data modeling. In this framework it is possible to find models that are well matched to the data and to use these models to make nearly optimal predictions. In connection with neural networks, and especially with neural network learning, the theory is interpreted as inference of the most probable parameters for the model given the training data. This article describes an application of neural networks with Bayesian training to the problem...
We summarize the main results on probabilistic neural networks recently published in a series of papers. Within the framework of statistical pattern recognition, we assume approximation of class-conditional distributions by finite mixtures of product components. The probabilistic neurons correspond to mixture components and can be interpreted in neurophysiological terms. In this way we may find a possible theoretical background for the functional properties of neurons. For example, the general...
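A finite mixture of product components, as assumed in the abstract above, can be fitted by the standard EM algorithm. The following is a minimal sketch, not code from the papers summarized: it assumes Gaussian product components (a diagonal-covariance mixture) and toy two-cluster data, with each mixture component playing the role of one probabilistic neuron:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two clusters in 2-D.
X = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(4, 1, (150, 2))])
n, d = X.shape
K = 2

# Mixture of product (diagonal-covariance) Gaussian components.
w = np.full(K, 1.0 / K)        # mixture weights
mu = X[[0, -1]].copy()         # component means (deterministic init)
var = np.ones((K, d))          # per-dimension variances

def log_comp(X, mu, var):
    """Log-density of each product component at each point (n x K)."""
    diff = X[:, None, :] - mu[None, :, :]
    return -0.5 * (np.log(2 * np.pi * var)[None] + diff**2 / var[None]).sum(-1)

for _ in range(50):
    # E-step: responsibilities r[i, k] proportional to w_k * f_k(x_i).
    logp = log_comp(X, mu, var) + np.log(w)
    logp -= logp.max(1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(1, keepdims=True)
    # M-step: weighted updates, computed independently per dimension
    # because the components factorize over coordinates.
    nk = r.sum(0)
    w = nk / n
    mu = (r.T @ X) / nk[:, None]
    var = (r.T @ X**2) / nk[:, None] - mu**2 + 1e-6
```

The per-dimension M-step is exactly where the product form pays off: each coordinate of each component is updated separately.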
To filter perturbed local measurements on a random medium, a dynamic model together with an observation transfer equation is needed. Some media governed by PDEs may have a local probabilistic representation by a Lagrangian stochastic process with mean-field interactions. In this case, we define the acquisition process of a locally homogeneous medium along a random path by a Lagrangian Markov process conditioned to lie in a domain following the path and conditioned on the observations. The nonlinear...
This paper investigates conditions under which a linear process is non-negative. The linear process is defined in terms of a strict white noise. Explicit results are also presented for the AR(1) and AR(2) models.
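One elementary sufficient condition of the kind studied above can be checked by simulation: for AR(1) with coefficient 0 ≤ a < 1 and non-negative i.i.d. innovations, every trajectory started from a non-negative value stays non-negative, since each term in the recursion is non-negative. A minimal sketch (exponential innovations chosen for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1): X_t = a * X_{t-1} + e_t with 0 <= a < 1 and e_t >= 0
# (a strict white noise with non-negative support).
a = 0.6
e = rng.exponential(1.0, size=10_000)   # E[e_t] = 1, e_t >= 0

x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = a * x[t - 1] + e[t]
```

With mean-one innovations the stationary mean is 1 / (1 - a) = 2.5 here, which the sample mean approximates.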
This paper is concerned with nonparametric estimation of the Lévy density of a pure-jump Lévy process. The sample path is observed at n discrete instants with a fixed sampling interval. We construct a collection of estimators obtained by deconvolution methods and deduced from appropriate estimators of the characteristic function and its first derivative. We obtain a bound for the -risk under general assumptions on the model. We then propose a penalty function that allows us to build an adaptive estimator....
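The building blocks named in the abstract above, the empirical characteristic function of the increments and its first derivative, are straightforward to compute; the full deconvolution estimator is not reproduced here. A sketch under the assumption that the increments are i.i.d. (compound Poisson data used only as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(3)

# Increments Z_k = X_{k*delta} - X_{(k-1)*delta} of a discretely observed
# Lévy process are i.i.d.; a compound Poisson surrogate for illustration.
delta, n, lam = 0.1, 5000, 2.0
counts = rng.poisson(lam * delta, size=n)
Z = np.array([rng.normal(0.5, 0.3, c).sum() for c in counts])

def ecf(u, Z):
    """Empirical characteristic function and its first derivative."""
    phase = np.exp(1j * np.outer(u, Z))   # shape (len(u), len(Z))
    phi = phase.mean(axis=1)              # hat(phi)(u)
    dphi = (1j * Z * phase).mean(axis=1)  # hat(phi)'(u)
    return phi, dphi

u = np.linspace(-5, 5, 101)
phi, dphi = ecf(u, Z)
```

At u = 0 the estimates satisfy the exact identities hat(phi)(0) = 1 and hat(phi)'(0) = i times the sample mean, a quick sanity check on the implementation.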
This paper is devoted to nonparametric estimation of the jump rate and the cumulative rate for a general class of non-homogeneous marked renewal processes defined on a separable metric space. In our framework, the estimation requires only one observation of the process over a long time period. Our approach is based on a generalization of the multiplicative intensity model introduced by Aalen in the seventies. We provide consistent estimators of these two functions under some assumptions related to...
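In its simplest form, Aalen's multiplicative intensity model leads to the Nelson–Aalen estimator of the cumulative rate: sum 1/Y(s) over event times s up to t, where Y(s) is the number at risk just before s. The sketch below is this classical special case (i.i.d. exponential event times), not the generalized setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Nelson-Aalen-type estimator of the cumulative rate:
#   A_hat(t) = sum over event times s <= t of 1 / Y(s),
# where Y(s) is the number of individuals at risk just before s.
T = rng.exponential(1.0, size=200)       # i.i.d. event times, rate 1
times = np.sort(T)
Y = np.arange(len(times), 0, -1)         # at-risk counts before each event
A_hat = np.cumsum(1.0 / Y)               # cumulative rate at event times
```

For rate-1 exponential data the true cumulative rate is A(t) = t, so A_hat tracks the event times themselves, which is easy to verify on the early part of the sample.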
Given a sample from a discretely observed Lévy process X = (Xt)t≥0 of finite jump activity, we study the problem of nonparametric estimation of the Lévy density ρ corresponding to the process X. An estimator of ρ is proposed, based on a suitable inversion of the Lévy–Khintchine formula and a plug-in device. The main results of the paper are upper risk bounds for estimation of ρ over suitable classes of Lévy triplets. The corresponding lower bounds are also discussed.
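To make the estimation target concrete without reproducing the paper's Lévy–Khintchine inversion, one can use a much cruder small-interval heuristic: for a finite-activity (compound Poisson) process with small sampling interval delta, almost every nonzero increment carries a single jump, so a kernel density of the nonzero increments rescaled by 1/(n·delta) approximates ρ. This is a simplified surrogate for illustration only, with all parameters chosen here, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(5)

# Compound Poisson process: intensity lam, jump sizes ~ N(1, 0.2^2),
# so the Levy density is rho(x) = lam * N(x; 1, 0.2^2).
lam, delta, n, h = 1.0, 0.01, 200_000, 0.05
counts = rng.poisson(lam * delta, size=n)
jumps_per_increment = counts[counts > 0]
Z = np.array([rng.normal(1.0, 0.2, c).sum() for c in jumps_per_increment])

# Naive small-delta plug-in: Gaussian kernel estimate of rho,
#   rho_hat(x) = (1 / (n * delta)) * sum_k K_h(x - Z_k).
x = np.linspace(0.0, 2.0, 201)
K = np.exp(-0.5 * ((x[:, None] - Z[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
rho_hat = K.sum(axis=1) / (n * delta)
```

The total mass of rho_hat approximates lam (here 1) and the estimate peaks near the mean jump size, which is what finite jump activity makes possible in the first place.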