In the present paper we investigate the performance of the k-depth-nearest classifier. This classifier, proposed recently by Vencálek, uses the concept of data depth to improve the classification method known as the k-nearest neighbour. The simulation study presented here deals with the two-class classification problem in which the considered distributions belong to the family of skewed normal distributions.
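As a rough illustration of the depth-based idea, the sketch below maps each observation to its vector of depths with respect to the two training classes and then applies a k-nearest-neighbour vote in that depth space. This is only a minimal sketch under assumptions not stated in the abstract: Mahalanobis depth is used purely because it is easy to compute, and the function names (mahalanobis_depth, k_depth_nearest) are hypothetical rather than taken from Vencálek's paper.

```python
import numpy as np

def mahalanobis_depth(x, sample):
    """Mahalanobis depth of a point x with respect to a sample (rows = observations)."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    d2 = (x - mu) @ cov_inv @ (x - mu)
    return 1.0 / (1.0 + d2)

def depth_transform(points, class_samples):
    """Map each point to its vector of depths with respect to every class."""
    return np.array([[mahalanobis_depth(x, s) for s in class_samples] for x in points])

def k_depth_nearest(x_new, X_train, y_train, k=5):
    """Majority vote among the k training points closest to x_new in depth space."""
    classes = np.unique(y_train)
    class_samples = [X_train[y_train == c] for c in classes]
    depths_train = depth_transform(X_train, class_samples)
    depths_new = depth_transform(np.atleast_2d(x_new), class_samples)[0]
    nearest = np.argsort(np.linalg.norm(depths_train - depths_new, axis=1))[:k]
    votes = y_train[nearest]
    return classes[np.argmax([np.sum(votes == c) for c in classes])]

# Toy usage with two Gaussian classes (illustration only; the paper simulates skew-normal data).
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 2))
X1 = rng.normal(1.5, 1.0, size=(50, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)
print(k_depth_nearest(np.array([1.0, 1.0]), X_train, y_train, k=5))
```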
The dynamic linear model with a non-linear non-Gaussian observation relation is considered in this paper. Masreliez's theorem (see Masreliez (1975)) on approximate non-Gaussian filtering with linear state and observation relations is extended to the case of a non-linear observation relation that can be approximated by a second-order Taylor expansion.
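For orientation only, the linear-observation form of Masreliez's filter (the case the abstract says is being extended) can be written as below; the notation for the state prediction, its covariance and the observation matrix is assumed here rather than taken from the paper, and the second-order extension itself is not reproduced.

```latex
% Background: Masreliez's filter for y_t = H x_t + v_t with non-Gaussian v_t,
% assuming the predictive density of x_t given Y^{t-1} is Gaussian with mean
% \hat{x}_{t|t-1} and covariance M_t (notation assumed for this sketch).
\begin{aligned}
\hat{x}_{t|t} &= \hat{x}_{t|t-1} + M_t H^{\top} g_t(y_t), \\
P_{t|t}       &= M_t - M_t H^{\top} G_t(y_t)\, H M_t, \\
g_t(y)        &= -\frac{\partial}{\partial y}\,\log p\!\left(y \mid Y^{t-1}\right),
\qquad
G_t(y)         = \frac{\partial g_t(y)}{\partial y^{\top}}.
\end{aligned}
% The extension discussed in the paper replaces H x_t by a nonlinear h(x_t)
% approximated by a second-order Taylor expansion about \hat{x}_{t|t-1}.
```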
High-dimensional data models abound in genomics studies, where often inadequately small sample sizes create impasses for the use of standard statistical tools. Conventional assumptions of linearity of regression, homoscedasticity and (multi-)normality of errors may not be tenable in many such interdisciplinary setups. In this study, Kendall's tau-type rank statistics are employed for statistical inference, largely avoiding such parametric assumptions. The proposed procedures...
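As a pointer to the kind of statistic involved (not the paper's actual inference procedures), Kendall's tau for a single gene-response pair can be computed with scipy.stats.kendalltau; the data simulated below, with heavy-tailed errors, are purely illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 20  # small sample size, as is typical in genomics settings
expression = rng.normal(size=n)                              # hypothetical gene expression
response = 0.5 * expression + rng.standard_t(df=3, size=n)   # heavy-tailed, non-Gaussian errors

# Rank-based association measure: no linearity or normality assumptions needed.
tau, p_value = kendalltau(expression, response)
print(f"Kendall's tau = {tau:.3f}, p-value = {p_value:.3f}")
```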
This paper proposes a deterministic model for the spread of an epidemic. We extend the classical Kermack–McKendrick model by allowing a more general contact rate and adding vaccination. The model is governed by a differential equation (DE) for the time dynamics of the susceptible, infective and removed subpopulations. We present conditions for the existence and uniqueness of a solution to the nonlinear DE. The existence of limits and the uniqueness of the maximum of the number of infected individuals are...
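The abstract does not give the general contact rate or the form of the vaccination, so the sketch below integrates only one plausible special case: the classical bilinear contact rate beta*S*I together with a constant per-capita vaccination rate v that moves susceptibles directly into the removed class. Parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters (not from the paper): transmission rate, recovery rate, vaccination rate.
beta, gamma, v = 0.5, 0.1, 0.02

def sir_with_vaccination(t, y):
    S, I, R = y
    dS = -beta * S * I - v * S        # susceptibles: infection and vaccination outflow
    dI = beta * S * I - gamma * I     # infectives: infection inflow, recovery outflow
    dR = gamma * I + v * S            # removed: recovered plus vaccinated
    return [dS, dI, dR]

sol = solve_ivp(sir_with_vaccination, t_span=(0.0, 200.0), y0=[0.99, 0.01, 0.0],
                dense_output=True)
t_grid = np.linspace(0.0, 200.0, 5)
print(sol.sol(t_grid)[1])  # fraction of infectives at a few time points
```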
It turns out that for standard kernel estimators no inequality of the Dvoretzky-Kiefer-Wolfowitz type can be constructed, and as a result it is impossible to answer the question of how many observations are needed to guarantee a prescribed level of accuracy of the estimator. A remedy is to adapt the bandwidth to the sample at hand.
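For reference, the inequality in question, stated here for the empirical distribution function F_n of n i.i.d. observations from F (with Massart's sharp constant), is:

```latex
% Dvoretzky-Kiefer-Wolfowitz inequality for the empirical distribution function:
\Pr\Bigl( \sup_{x} \bigl| F_n(x) - F(x) \bigr| > \varepsilon \Bigr)
  \le 2\, e^{-2 n \varepsilon^{2}}, \qquad \varepsilon > 0.
% Inverting the bound gives n \ge \log(2/\alpha) / (2 \varepsilon^{2}) observations
% for accuracy \varepsilon with probability at least 1 - \alpha; the point of the
% abstract is that no analogous distribution-free bound is available for
% fixed-bandwidth kernel estimators.
```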
This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory....
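For context, a minimal sketch of the classical Ho-Kashyap procedure (linear, squared-error) that the paper builds on is given below; the kernel extension, the absolute-error criterion and the structural-risk-minimization control described in the abstract are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def ho_kashyap(X, y, rho=0.1, n_iter=1000, tol=1e-6):
    """Classical Ho-Kashyap procedure for two-class data.

    X: (n, d) feature matrix, y: labels in {-1, +1}.
    Returns an augmented weight vector a; predict with sign(a @ [x, 1]).
    """
    # Margin system Y a > 0: augment with a bias term and flip the negative class.
    Y = np.hstack([X, np.ones((X.shape[0], 1))]) * y[:, None]
    b = np.ones(Y.shape[0])            # target margins, kept positive
    Y_pinv = np.linalg.pinv(Y)
    a = Y_pinv @ b
    for _ in range(n_iter):
        e = Y @ a - b                  # error vector
        b = b + rho * (e + np.abs(e))  # increase margins only where e > 0
        a = Y_pinv @ b                 # least-squares weights for the updated margins
        if np.linalg.norm(e) < tol:
            break
    return a
```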
We derive the two-sample Kolmogorov-Smirnov type test when a nuisance linear regression is present. The test is based on regression rank scores and provides a natural extension of the classical Kolmogorov-Smirnov test. Its asymptotic distributions under the hypothesis and the local alternatives coincide with those of the classical test.
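For the case with no nuisance regression, the classical two-sample Kolmogorov-Smirnov test that the abstract extends is available as scipy.stats.ks_2samp; the simulated samples below are for illustration only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, size=50)   # first sample
y = rng.normal(loc=0.3, size=60)   # second sample, shifted in location

statistic, p_value = ks_2samp(x, y)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
```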