
Effect of choice of dissimilarity measure on classification efficiency with nearest neighbor method

Tomasz Górecki (2005)

Discussiones Mathematicae Probability and Statistics

In this paper we analyze the nearest neighbor method in detail for different dissimilarity measures, both classical and weighted, for which weighting schemes were worked out. We propose searching for the weights in the space of discriminant coordinates. Experimental results based on a number of real data sets are presented and analyzed to illustrate the benefits of the proposed methods. As classical dissimilarity measures we use the Euclidean metric, the Manhattan metric, and the post office metric....
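The abstract above compares nearest neighbor classification under interchangeable dissimilarity measures. A minimal sketch of that idea (the function names and data are illustrative, not the paper's code; the weighted variant only hints at the discriminant-coordinate weighting the paper proposes):

```python
def euclidean(x, y):
    """Classical Euclidean metric."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def manhattan(x, y):
    """Manhattan (city-block) metric."""
    return sum(abs(a - b) for a, b in zip(x, y))

def weighted_euclidean(x, y, w):
    """Weighted variant: per-coordinate weights w (hypothetical;
    the paper finds such weights in discriminant coordinates)."""
    return sum(wi * (a - b) ** 2 for wi, a, b in zip(w, x, y)) ** 0.5

def nn_classify(train, labels, query, dissim):
    """1-NN rule: assign the label of the training point nearest
    to `query` under the chosen dissimilarity measure."""
    best = min(range(len(train)), key=lambda i: dissim(train[i], query))
    return labels[best]

train = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
labels = ["a", "a", "b"]
print(nn_classify(train, labels, (4.0, 4.5), euclidean))  # → b
```

Swapping `euclidean` for `manhattan` (or a weighted measure) changes only the `dissim` argument, which is exactly the comparison the paper studies empirically.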

Eigenanalysis and metric multidimensional scaling on hierarchical structures.

Carles Maria Cuadras, Josep-Maria Oller (1987)

Qüestiió

The known hierarchical clustering scheme is equivalent to the concept of ultrametric distance. Every distance can be represented in a spatial model using multidimensional scaling. We relate both classes of representations of proximity data in an algebraic way, obtaining some results and relations on clusters and the eigenvalues of the inner product matrix for an ultrametric distance. Principal coordinate analysis on an ultrametric distance gives two classes of independent coordinates, describing...
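The equivalence between hierarchical clustering and ultrametric distances mentioned above rests on the strong triangle inequality d(i,k) ≤ max(d(i,j), d(j,k)). A small sketch checking that property for a distance matrix (illustrative code, not from the paper):

```python
from itertools import permutations

def is_ultrametric(d):
    """Check the strong (ultrametric) triangle inequality
    d[i][k] <= max(d[i][j], d[j][k]) for every triple of points."""
    n = len(d)
    return all(d[i][k] <= max(d[i][j], d[j][k])
               for i, j, k in permutations(range(n), 3))

# A 3x3 ultrametric: the two closest points (0 and 1) merge at
# height 1, and both join point 2 at the same height 2.
d = [[0, 1, 2],
     [1, 0, 2],
     [2, 2, 0]]
print(is_ultrametric(d))  # → True
```

Every dendrogram induces such a matrix via merge heights, which is why hierarchical clustering schemes and ultrametric distances can be identified with one another.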

Empirical significance test of the goodness-of-fit for some pyramidal clustering procedures.

Carles Capdevila Marquès, Antoni Arcas Pons (1995)

Qüestiió

Through a series of Monte Carlo simulation tests, some aspects of the inference concerning pyramidal trees built by the maximum and minimum methods are considered. In this sense, the quantiles of the γ Goodman-Kruskal statistic allow us to tabulate a significance test of the goodness-of-fit of a pyramidal clustering procedure. On the other hand, the maximum pyramidal method is observed to be clearly better (more efficient) than the minimum method in terms of the expected...
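The Goodman-Kruskal γ statistic used above for the goodness-of-fit test is a rank correlation over concordant and discordant pairs. A minimal sketch of its computation (illustrative; in the paper it would compare, e.g., original dissimilarities with the distances induced by the pyramidal tree):

```python
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    """γ = (C - D) / (C + D), where C counts concordant pairs,
    D counts discordant pairs, and ties contribute to neither."""
    c = d = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            c += 1
        elif s < 0:
            d += 1
    return (c - d) / (c + d)

# One discordant pair out of six: γ = (5 - 1) / (5 + 1)
print(goodman_kruskal_gamma([1, 2, 3, 4], [1, 2, 4, 3]))  # → 0.666...
```

Values near 1 indicate that the fitted tree preserves the ordering of the input proximities, which is what the tabulated quantiles turn into a significance test.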

Employing different loss functions for the classification of images via supervised learning

Radu Boţ, André Heinrich, Gert Wanka (2014)

Open Mathematics

Supervised learning methods are powerful techniques to learn a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce a conjugate dual problem to it and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions....

Existence, consistency and computer simulation for selected variants of minimum distance estimators

Václav Kůs, Domingo Morales, Jitka Hrabáková, Iva Frýdlová (2018)

Kybernetika

The paper deals with sufficient conditions for the existence of the general approximate minimum distance estimator (AMDE) of a probability density function f₀ on the real line. It shows that the AMDE always exists when the bounded φ-divergence, Kolmogorov, Lévy, Cramér, or discrepancy distance is used. Consequently, the n^{-1/2} consistency rate in any bounded φ-divergence is established for the Kolmogorov, Lévy, and discrepancy estimators under the condition that the degree of variations of the corresponding family...
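A minimum distance estimator picks the parameter whose model distribution is closest to the empirical one. A toy sketch for the Kolmogorov distance, assuming a N(μ, 1) location family and a simple grid search (an illustrative approximation scheme, not the paper's construction):

```python
import math

def norm_cdf(x, mu):
    """CDF of N(mu, 1) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / math.sqrt(2.0)))

def kolmogorov_distance(sample, mu):
    """sup-distance between the empirical CDF of `sample`
    and the CDF of N(mu, 1), evaluated at the jump points."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - norm_cdf(x, mu)),
                   abs(i / n - norm_cdf(x, mu)))
               for i, x in enumerate(xs))

def amde_location(sample, grid):
    """Approximate minimum-distance estimate of the location
    parameter: the grid point minimizing the Kolmogorov distance."""
    return min(grid, key=lambda mu: kolmogorov_distance(sample, mu))

sample = [-0.3, 0.1, 0.4, 0.9, 1.6, 2.1, 2.4, 2.9]
grid = [i / 100 for i in range(-100, 401)]
print(amde_location(sample, grid))
```

The "approximate" in AMDE corresponds here to minimizing over a grid rather than exactly; the paper's existence results concern when such (approximate) minimizers exist at all.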

Exponential rates for the error probabilities in selection procedures

Friedrich Liese, Klaus J. Miescke (1999)

Kybernetika

For a sequence of statistical experiments with a finite parameter set, the asymptotic behavior of the maximum risk is studied for the problem of classification into disjoint subsets. The exponential rates of the optimal decision rule are determined and expressed in terms of the normalized limit of moment generating functions of likelihood ratios. Necessary and sufficient conditions for the existence of adaptive classification rules in the sense of Rukhin [Ru1] are given. The results are applied to...
