Data depth is an important concept in the nonparametric approach to multivariate data analysis. The main aim of this paper is to review possible applications of data depth, including outlier detection, robust and affine-equivariant estimates of location, rank tests for multivariate scale differences, control charts for multivariate processes, and depth-based classifiers for the discrimination problem.

In the present paper we investigate the performance of the $k$-depth-nearest classifier. This classifier, recently proposed by Vencálek, uses the concept of data depth to improve the classification method known as $k$-nearest neighbours. The simulation study presented here deals with the two-class classification problem in which the considered distributions belong to the family of skew-normal distributions.

We propose a new nonparametric procedure to solve the problem of classifying objects represented by $d$-dimensional vectors into $K\ge 2$ groups. The newly proposed classifier was inspired by the $k$ nearest neighbour (kNN) method. It is based on the idea of a depth-based distributional neighbourhood and is called $k$ nearest depth neighbours (kNDN) classifier. The kNDN classifier has several desirable properties: in contrast to the classical kNN, it can utilize global properties of the considered distributions...
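The core idea behind such depth-based classifiers can be illustrated with a minimal sketch. The code below is not the kNDN method itself; it is a simpler max-depth classifier that assigns a point to the group in which it attains the largest Mahalanobis depth, a standard example of a depth function. All function names here are illustrative, not from the paper.

```python
import numpy as np

def mahalanobis_depth(x, data):
    """Mahalanobis depth of x w.r.t. a sample: 1 / (1 + squared
    Mahalanobis distance of x from the sample mean)."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = x - mu
    return 1.0 / (1.0 + diff @ cov_inv @ diff)

def max_depth_classify(x, groups):
    """Assign x to the group in which it is deepest (max-depth rule)."""
    depths = [mahalanobis_depth(x, g) for g in groups]
    return int(np.argmax(depths))

rng = np.random.default_rng(0)
g0 = rng.normal(loc=0.0, size=(100, 2))   # class 0 centred at the origin
g1 = rng.normal(loc=4.0, size=(100, 2))   # class 1 centred at (4, 4)
print(max_depth_classify(np.array([0.2, -0.1]), [g0, g1]))  # -> 0
print(max_depth_classify(np.array([4.1, 3.8]), [g0, g1]))   # -> 1
```

Unlike local kNN voting, the depth rule compares a point against the global shape of each class sample, which is the property the kNDN classifier builds on.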

A generalised halfspace depth function is proposed. Basic properties of this depth function, including strong consistency, are studied. We show, on several examples, that our depth function may be more appropriate for nonsymmetric distributions or for mixtures of distributions.
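For reference, the classical (ungeneralised) halfspace depth that this work extends can be approximated numerically. The sketch below, an assumption-laden illustration rather than the paper's construction, estimates the Tukey halfspace depth of a point by minimising, over random unit directions, the fraction of sample points in the corresponding closed halfspace.

```python
import numpy as np

def halfspace_depth(x, data, n_dir=500, seed=0):
    """Monte-Carlo approximation of the Tukey (halfspace) depth of x
    w.r.t. the rows of `data`: the minimum, over random unit directions u,
    of the fraction of points z with u . (z - x) >= 0."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dir, data.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit directions
    proj = (data - x) @ u.T                         # signed projections
    frac = (proj >= 0).mean(axis=0)                 # mass in each halfspace
    return frac.min()

rng = np.random.default_rng(1)
sample = rng.normal(size=(200, 2))
print(halfspace_depth(np.zeros(2), sample))          # large: near the centre
print(halfspace_depth(np.array([3.0, 3.0]), sample)) # small: in the tail
```

The approximation improves with more directions; exact algorithms exist in low dimensions but are considerably more involved.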

The main goal of supervised learning is to construct a function from labeled training data which assigns new data points to one of the labels. Classification tasks may be solved by using a measure of the centrality of a data point with respect to the labeled groups considered. Such a measure of centrality is called data depth. In this paper, we investigate conditions under which depth-based classifiers for directional data are optimal. We show that such classifiers are equivalent to the Bayes...
