
On a distance between estimable functions.

Concepción Arenas Solá (1989)

Qüestiió

In this paper we study the main properties of a distance introduced by C.M. Cuadras (1974). This distance generalizes the well-known Mahalanobis distance between populations to a distance between parametric estimable functions within the multivariate analysis of variance model. Dimension-reduction properties, invariance under linear automorphisms, estimation of the distance, its distribution under normality, as well as its interpretation as a geodesic distance are studied and...
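As background for the abstract above, a minimal sketch of the classical Mahalanobis distance that Cuadras' construction generalizes; the covariance matrix and mean vectors below are hypothetical illustration data, not from the paper.

```python
import numpy as np

def mahalanobis_distance(mu1, mu2, sigma):
    """Mahalanobis distance between two population mean vectors
    mu1 and mu2 under a common covariance matrix sigma."""
    diff = np.asarray(mu1, float) - np.asarray(mu2, float)
    return float(np.sqrt(diff @ np.linalg.inv(sigma) @ diff))

# Hypothetical bivariate populations sharing one covariance matrix.
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
d = mahalanobis_distance([0.0, 0.0], [1.0, 1.0], sigma)
```

Note the invariance property the abstract mentions: applying any invertible linear map A to both mean vectors while transforming the covariance to A·sigma·Aᵀ leaves the distance unchanged.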

On metric divergences of probability measures

Igor Vajda (2009)

Kybernetika

Standard properties of φ-divergences of probability measures are widely applied in various areas of information processing. Among the desirable supplementary properties facilitating employment of mathematical methods is the metricity of φ-divergences, or the metricity of their powers. This paper extends the previously known family of φ-divergences with these properties. The extension consists of a continuum of φ-divergences which are squared metric distances and which are mostly new but include...
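As an assumed illustration (one classical member, not the new family the paper constructs): the squared Hellinger divergence is a φ-divergence whose square root is a metric on probability distributions.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions; its
    square is the phi-divergence with phi(t) = (sqrt(t) - 1)**2 / 2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Hypothetical three-point distributions for a metricity check.
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
r = [0.1, 0.1, 0.8]
```

Symmetry, identity of indiscernibles, and the triangle inequality all hold for this square root, which is exactly the "squared metric distance" property discussed in the abstract.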

On selecting the best features in a noisy environment

Jan Flusser, Tomáš Suk (1998)

Kybernetika

This paper introduces a novel method for selecting a feature subset that yields an optimal trade-off between class separability and feature space dimensionality. We assume the following feature properties: (a) the features are ordered into a sequence, (b) robustness of the features decreases with increasing order, and (c) higher-order features supply more detailed information about the objects. We present a general algorithm for finding the optimal feature subset under these assumptions. Its performance...
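A generic sketch of the prefix-selection idea under assumptions (a)–(c); the scoring rule and the numbers are hypothetical illustrations, not the paper's algorithm.

```python
def best_prefix(separability, penalty_per_dim):
    """Given hypothetical separability scores for the first k features
    (k = 1..n) of an ordered feature sequence, return the prefix length
    that maximizes separability minus a per-dimension penalty."""
    scores = [s - penalty_per_dim * (k + 1) for k, s in enumerate(separability)]
    return max(range(len(scores)), key=scores.__getitem__) + 1

# Separability saturates with more features while the penalty keeps
# growing, so an intermediate prefix wins the trade-off.
best = best_prefix([0.50, 0.70, 0.80, 0.82, 0.83], penalty_per_dim=0.05)
```

Because the features are ordered and robustness decays monotonically, only prefixes of the sequence need be compared, not all 2ⁿ subsets.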

On the optimality of the max-depth and max-rank classifiers for spherical data

Ondřej Vencálek, Houyem Demni, Amor Messaoud, Giovanni C. Porzio (2020)

Applications of Mathematics

The main goal of supervised learning is to construct a function from labeled training data which assigns arbitrary new data points to one of the labels. Classification tasks may be solved by using some measures of data point centrality with respect to the labeled groups considered. Such a measure of centrality is called data depth. In this paper, we investigate conditions under which depth-based classifiers for directional data are optimal. We show that such classifiers are equivalent to the Bayes...
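A minimal sketch of the max-depth rule for directional (unit-vector) data, using a simple cosine-based depth as a stand-in; the actual depth functions the paper studies may differ, and all data here are hypothetical.

```python
import numpy as np

def cosine_depth(x, sample):
    """Mean cosine similarity of unit vector x to a labeled sample
    of unit vectors (rows of `sample`) -- a stand-in depth function."""
    return float(np.mean(sample @ x))

def max_depth_classify(x, samples_by_label):
    """Assign x to the label whose training sample gives it maximal depth."""
    return max(samples_by_label,
               key=lambda lbl: cosine_depth(x, samples_by_label[lbl]))

def unit(theta):
    """Unit vector on the circle at angle theta."""
    return np.array([np.cos(theta), np.sin(theta)])

# Two hypothetical groups clustered around opposite poles of the circle.
groups = {
    "east": np.stack([unit(t) for t in (-0.2, 0.0, 0.2)]),
    "west": np.stack([unit(t) for t in (np.pi - 0.2, np.pi, np.pi + 0.2)]),
}
label = max_depth_classify(unit(0.1), groups)
```

A new point near a cluster's center attains its greatest depth with respect to that cluster, so the max-depth rule recovers the intuitive assignment.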

On the order equivalence relation of binary association measures

Mariusz Paradowski (2015)

International Journal of Applied Mathematics and Computer Science

Over a century of research has resulted in a set of more than a hundred binary association measures. Many of them share similar properties. An overview of binary association measures is presented, focused on their order equivalences. Association measures are grouped according to their relations. Transformations between these measures are shown, both formally and visually. A generalization coefficient is proposed, based on joint probability and marginal probabilities. Combining association measures...
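To illustrate order equivalence on two classical binary association measures (an assumed example, not the paper's generalization coefficient): the Dice coefficient is a monotone transform of the Jaccard coefficient, D = 2J/(1 + J), so the two measures rank all item pairs identically.

```python
def jaccard(a, b, c):
    """Jaccard coefficient from 2x2 contingency counts: a = joint
    occurrences, b and c = one-sided occurrences."""
    return a / (a + b + c)

def dice(a, b, c):
    """Dice coefficient from the same counts; equals 2J / (1 + J)."""
    return 2 * a / (2 * a + b + c)

# Hypothetical contingency counts (a, b, c) for three item pairs.
pairs = [(8, 2, 2), (5, 5, 2), (3, 1, 8)]
rank_j = sorted(range(len(pairs)), key=lambda i: jaccard(*pairs[i]))
rank_d = sorted(range(len(pairs)), key=lambda i: dice(*pairs[i]))
```

The values differ, but the induced orderings coincide, which is the order equivalence relation the abstract groups measures by.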

