Displaying 141 – 160 of 257


Intrinsic dimensionality and small sample properties of classifiers

Šarūnas Raudys (1998)

Kybernetika

Small learning-set properties of the Euclidean distance, Parzen window, minimum empirical error and nonlinear single-layer perceptron classifiers depend on an “intrinsic dimensionality” of the data, whereas the Fisher linear discriminant function is sensitive to all dimensions. There is no unique definition of the “intrinsic dimensionality”. The dimensionality of the subspace in which the data points are situated is not a sufficient definition of the “intrinsic dimensionality”. An exact...

k-Depth-nearest Neighbour Method and its Performance on Skew-normal Distributions

Ondřej Vencálek (2013)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

In the present paper we investigate the performance of the k-depth-nearest classifier. This classifier, proposed recently by Vencálek, uses the concept of data depth to improve the classification method known as the k-nearest neighbour. The simulation study presented here deals with the two-class classification problem in which the considered distributions belong to the family of skew-normal distributions.
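To fix ideas, here is a minimal sketch of the plain k-nearest-neighbour rule that the paper builds on; the function name and toy data are illustrative, and the depth-based ranking of Vencálek's k-depth-nearest variant is not reproduced.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k training points closest to x (Euclidean)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
    nearest = np.argsort(d)[:k]               # indices of the k closest
    votes = y_train[nearest]
    return np.bincount(votes).argmax()        # most frequent class label

# Illustrative two-class toy data (two clusters in the plane).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.1])))  # query near the class-0 cluster
```

The depth-based variant would replace the raw Euclidean distance with a ranking derived from a data-depth function, which is where the skew-normal simulations of the paper come in.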

Kernel Ho-Kashyap classifier with generalization control

Jacek Łęski (2004)

International Journal of Applied Mathematics and Computer Science

This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory....
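A hedged sketch of the classical (linear) Ho-Kashyap procedure that the paper kernelizes may help: the algorithm alternates a margin update for b with a pseudoinverse solve for w until Yw > 0. The function name, learning rate, and toy data below are illustrative assumptions; the kernel extension and absolute-error robustification of the paper are not reproduced.

```python
import numpy as np

def ho_kashyap(Y, rho=0.5, iters=200):
    """Classical Ho-Kashyap: seek w with Y @ w > 0 by growing margins b."""
    n, d = Y.shape
    b = np.ones(n)                      # initial positive margins
    w = np.linalg.pinv(Y) @ b           # least-squares solve for the weights
    for _ in range(iters):
        e = Y @ w - b                   # residual against current margins
        b = b + rho * (e + np.abs(e))   # only positive residuals grow b
        w = np.linalg.pinv(Y) @ b
    return w

# Linearly separable toy data: class-1 rows kept, class-2 rows negated,
# with a bias column appended (the standard augmented-pattern convention).
X1 = np.array([[2.0, 2.0], [3.0, 1.5]])
X2 = np.array([[-1.0, -1.0], [-2.0, 0.0]])
Y = np.vstack([np.hstack([X1, np.ones((2, 1))]),
               -np.hstack([X2, np.ones((2, 1))])])
w = ho_kashyap(Y)
print(np.all(Y @ w > 0))   # all training points on the correct side
```

The kernel version replaces the explicit pattern matrix with a kernel Gram matrix, and the paper further swaps the squared error for an absolute-error approximation to gain robustness against outliers.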

The articulation of the quantitative and the qualitative: from large-scale surveys to intensive data collection.

Vicent Borrás, Pedro López, Carlos Lozares (1999)

Qüestiió

Almost innumerable reflections have been made in the field of social science methodology on the dichotomy, real or nonexistent, between the quantitative and qualitative analytical perspectives, whereas theoretical works attempting to reconcile and/or complement the two perspectives have been less abundant, although empirical ones are becoming more common. In many cases qualitative studies cover only the first steps of social research,...

Least empirical risk procedures in statistical inference

Wojciech Niemiro (1993)

Applicationes Mathematicae

We consider the empirical risk function Q_n(α) = (1/n) ∑_{i=1}^{n} f(α, Z_i) (for i.i.d. Z_i's) under the assumption that f(α, z) is convex with respect to α. The asymptotics of the minimum of Q_n(α) are investigated. Tests for linear hypotheses are derived. Our results generalize some of those concerning LAD estimators and related tests.
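The empirical risk function above can be illustrated with the simplest convex loss, f(α, z) = |z − α|, whose minimiser is the sample median (the LAD location estimate the abstract alludes to). The grid search below is a hedged sketch of the minimisation, not the paper's asymptotic analysis.

```python
import numpy as np

def empirical_risk(alpha, z):
    """Q_n(alpha) = (1/n) * sum_i f(alpha, z_i) with f(alpha, z) = |z - alpha|."""
    return np.mean(np.abs(z - alpha))

rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, scale=1.0, size=1000)   # i.i.d. sample

# Q_n is convex in alpha, so a crude grid search already locates its minimum;
# for this loss the minimiser coincides with the sample median.
grid = np.linspace(-1.0, 5.0, 2001)
risks = np.array([empirical_risk(a, z) for a in grid])
alpha_hat = grid[np.argmin(risks)]
print(alpha_hat, np.median(z))   # the two estimates nearly coincide
```

Other convex choices of f(α, z) recover other least-empirical-risk procedures; the convexity assumption is what drives the asymptotics in the paper.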

Linear discriminant analysis with a generalization of the Moore-Penrose pseudoinverse

Tomasz Górecki, Maciej Łuczak (2013)

International Journal of Applied Mathematics and Computer Science

The Linear Discriminant Analysis (LDA) technique is an important and well-developed area of classification, and to date many linear (and also nonlinear) discrimination methods have been put forward. A complication in applying LDA to real data occurs when the number of features exceeds that of observations. In this case, the covariance estimates do not have full rank, and thus cannot be inverted. There are a number of ways to deal with this problem. In this paper, we propose improving LDA in this...
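The rank-deficiency problem described above, and the standard Moore-Penrose workaround, can be sketched as follows; the toy data, names, and the plain `np.linalg.pinv` call are illustrative assumptions, not the paper's proposed generalization of the pseudoinverse.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 50                                   # fewer observations than features
X0 = rng.normal(size=(n, p))                    # class 0 sample
X1 = rng.normal(loc=0.5, size=(n, p))           # class 1 sample, shifted mean

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled covariance estimate: with p > n its rank is at most 2*(n-1) < p,
# so it is singular and the classical LDA inverse does not exist.
S = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2

# Moore-Penrose pseudoinverse in place of the impossible inverse.
w = np.linalg.pinv(S) @ (mu1 - mu0)

# Classify by projecting onto w and thresholding at the class-mean midpoint.
threshold = w @ (mu0 + mu1) / 2
pred = X1 @ w > threshold                       # class-1 training points
print(np.linalg.matrix_rank(S), pred.mean())
```

The paper's contribution is a generalization of this pseudoinverse step; the sketch only shows why some replacement for the inverse is needed when features outnumber observations.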
