Adaptive non-asymptotic confidence balls in density estimation

Matthieu Lerasle (2012)

ESAIM: Probability and Statistics

We build confidence balls for the common density s of a real-valued sample X1,...,Xn. We use resampling methods to estimate the projection of s onto finite-dimensional linear spaces and a model selection procedure to choose an optimal approximation space. The covering property is ensured for all n ≥ 2 and the balls are adaptive over a collection of linear spaces.
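
As a rough, self-contained illustration of the two ingredients named in the abstract (a projection estimator on a finite-dimensional space and a resampling step), the sketch below projects a sample onto a histogram basis and calibrates an L2 radius by the bootstrap. It is not Lerasle's construction; the basis, the bootstrap radius and the helper projection_estimate are illustrative choices.

```python
# Hedged sketch: projection density estimate on a histogram basis plus a
# bootstrap-calibrated L2 radius.  This illustrates the general idea
# (projection estimator + resampling), not the paper's actual procedure.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.5, 0.15, size=500)        # sample supported (mostly) in [0, 1]
x = x[(x >= 0) & (x <= 1)]
n, D = len(x), 16                          # D piecewise-constant basis functions

def projection_estimate(sample, D):
    """L2 projection of the density onto the histogram space with D bins on [0, 1]."""
    counts, _ = np.histogram(sample, bins=D, range=(0.0, 1.0))
    return counts / len(sample) * D        # bin height = empirical mass / bin width

s_hat = projection_estimate(x, D)

# Bootstrap the L2 distance between the estimate and its resampled copies
# to calibrate a radius for an approximate 95% confidence ball.
B = 500
dists = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    dists[b] = np.sqrt(np.sum((projection_estimate(xb, D) - s_hat) ** 2) / D)
radius = np.quantile(dists, 0.95)
print(f"estimated 95% L2 radius around the projection estimate: {radius:.3f}")
```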

Concept of Data Depth and Its Applications

Ondřej Vencálek (2011)

Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica

Data depth is an important concept in the nonparametric approach to multivariate data analysis. The main aim of the paper is to review possible applications of data depth, including outlier detection, robust and affine-equivariant estimates of location, rank tests for multivariate scale difference, control charts for multivariate processes, and depth-based classifiers solving the discrimination problem.
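
A minimal sketch of one application listed above, outlier detection via a depth function. Mahalanobis depth is used here because it is the shortest to code; the paper also covers halfspace, simplicial and other depths. The helper mahalanobis_depth and the cut-off 0.05 are illustrative choices.

```python
# Hedged sketch: Mahalanobis depth, one of the simplest depth functions,
# used for outlier flagging.  Low depth marks a peripheral observation.
import numpy as np

def mahalanobis_depth(points, data):
    """Depth of each row of `points` w.r.t. the empirical distribution of `data`."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = points - mu
    md2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distances
    return 1.0 / (1.0 + md2)

rng = np.random.default_rng(1)
clean = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=200)
data = np.vstack([clean, [[6.0, -6.0]]])                  # one planted outlier
depth = mahalanobis_depth(data, data)
print("flagged as outliers:", np.where(depth < 0.05)[0])  # low depth = far from the bulk
```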

Density estimation with quadratic loss: a confidence intervals method

Pierre Alquier (2008)

ESAIM: Probability and Statistics

We propose a feature selection method for density estimation with quadratic loss. This method relies on the study of unidimensional approximation models and on the definition of confidence regions for the density thanks to these models. It is quite general and includes cases of interest like detection of relevant wavelet coefficients or selection of support vectors in SVM. In the general case, we prove that every selected feature actually improves the performance of the estimator. In the case...
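
The following sketch conveys the flavour of the selection rule described above: estimate the coefficient of the density on each dictionary element and keep the feature only when a confidence interval around the coefficient excludes zero. The cosine dictionary and the plain normal interval are stand-ins, not the confidence regions analysed in the paper.

```python
# Hedged sketch: keep a feature only if its coefficient's confidence interval
# excludes 0.  Dictionary and interval construction are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.beta(2, 5, size=400)                       # sample on [0, 1]
n, J = len(x), 12

selected = []
for j in range(1, J + 1):
    phi = np.sqrt(2) * np.cos(np.pi * j * x)       # orthonormal cosine feature on [0, 1]
    beta_hat = phi.mean()                          # estimate of the j-th coefficient
    half = stats.norm.ppf(0.975) * phi.std(ddof=1) / np.sqrt(n)
    if abs(beta_hat) > half:                       # interval excludes 0 -> keep feature
        selected.append((j, round(float(beta_hat), 3)))
print("kept coefficients:", selected)
```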

Estimates of the covariance matrix of vectors of U-statistics and confidence regions for vectors of Kendall's tau

František Rublík (2016)

Kybernetika

Consistent estimators of the asymptotic covariance matrix of vectors of U-statistics are used in constructing asymptotic confidence regions for vectors of Kendall’s correlation coefficients corresponding to various pairs of components of a random vector. The regions are products of intervals computed by means of a critical value from the multivariate normal distribution. The regularity of the asymptotic covariance matrix of the vector of Kendall’s sample coefficients is proved in the case of sampling...
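
As an illustration of a product-of-intervals region for a vector of Kendall's tau values, the sketch below uses a jackknife variance in place of the paper's consistent U-statistic covariance estimator, together with a Bonferroni-adjusted normal critical value; all tuning choices are for demonstration only.

```python
# Hedged sketch: product-of-intervals confidence region for a vector of
# Kendall's tau values, with a jackknife variance standing in for the
# U-statistic covariance estimator analysed in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
z = rng.multivariate_normal([0, 0, 0], [[1, .6, .3], [.6, 1, .2], [.3, .2, 1]], size=n)
pairs = [(0, 1), (0, 2), (1, 2)]
alpha = 0.05
# Bonferroni-style per-interval level so the product region has joint level >= 1 - alpha.
crit = stats.norm.ppf(1 - alpha / (2 * len(pairs)))

for i, j in pairs:
    tau = stats.kendalltau(z[:, i], z[:, j])[0]
    # Leave-one-out jackknife variance of the tau estimator.
    loo = np.array([stats.kendalltau(np.delete(z[:, i], k), np.delete(z[:, j], k))[0]
                    for k in range(n)])
    var_jack = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    half = crit * np.sqrt(var_jack)
    print(f"tau({i},{j}) = {tau: .3f}, interval [{tau - half: .3f}, {tau + half: .3f}]")
```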

Iterative feature selection in least square regression estimation

Pierre Alquier (2008)

Annales de l'I.H.P. Probabilités et statistiques

This paper presents a new algorithm to perform regression estimation, in both the inductive and transductive settings. The estimator is defined as a linear combination of functions in a given dictionary. Coefficients of the combination are computed sequentially using projections onto some simple sets. These sets are defined as confidence regions provided by a deviation (PAC) inequality on an estimator in one-dimensional models. We prove that every projection performed by the algorithm actually improves the performance...
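
The sketch below mimics the sequential, coordinate-wise structure described above: each dictionary element is refitted on the current residuals and its coefficient is kept only if it falls outside a simple confidence band around zero. The band is a crude plug-in quantity, not the PAC-type deviation bound used in the paper.

```python
# Hedged sketch: one-feature-at-a-time refitting with a crude confidence band,
# in the spirit (but not the letter) of the algorithm described above.
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 20
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

beta = np.zeros(p)
for sweep in range(10):                            # a few passes over the dictionary
    residual = y - X @ beta
    for j in range(p):
        xj = X[:, j]
        partial = residual + beta[j] * xj          # residual with feature j added back
        coef = xj @ partial / (xj @ xj)            # one-dimensional least-squares fit
        half = 1.96 * partial.std(ddof=1) / np.sqrt(xj @ xj)  # crude confidence radius
        new = coef if abs(coef) > half else 0.0               # project onto the region
        residual += (beta[j] - new) * xj
        beta[j] = new
print("nonzero coefficients:", {j: round(float(b), 2) for j, b in enumerate(beta) if b != 0.0})
```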

On construction of confidence intervals for a mean of dependent data

Jan Ćwik, Jan Mielniczuk (2001)

Discussiones Mathematicae Probability and Statistics

In the report, the performance of several methods of constructing confidence intervals for the mean of a stationary sequence is investigated using an extensive simulation study. The studied approaches are sample-reuse block methods which do not resort to the bootstrap. It turns out that the performance of some known methods strongly depends on the model under consideration and on whether a two-sided or one-sided interval is used. Among the methods studied, the block method based on a weak convergence result by...
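
A minimal example of a sample-reuse block method that avoids the bootstrap is the non-overlapping block (batch means) interval sketched below for an AR(1) sequence; the block length is an arbitrary tuning choice, and the paper compares several more refined constructions.

```python
# Hedged sketch: a non-overlapping block ("batch means") confidence interval
# for the mean of a stationary sequence; dependence is absorbed by blocking.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, phi = 2000, 0.6
eps = rng.normal(size=n)
x = np.empty(n)                                    # AR(1) series with mean 0
x[0] = eps[0] / np.sqrt(1 - phi**2)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

block_len = 50
k = n // block_len
block_means = x[:k * block_len].reshape(k, block_len).mean(axis=1)
mean_hat = block_means.mean()
se = block_means.std(ddof=1) / np.sqrt(k)
half = stats.t.ppf(0.975, df=k - 1) * se
print(f"95% interval for the mean: [{mean_hat - half:.3f}, {mean_hat + half:.3f}]")
```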

On the asymptotic form of convex hulls of Gaussian random fields

Youri Davydov, Vygantas Paulauskas (2014)

Open Mathematics

We consider a centered Gaussian random field X = {X_t : t ∈ T} with values in a Banach space 𝔹, defined on a parametric set T equal to ℝ^m or ℤ^m. It is supposed that the distribution of X_t is independent of t. We consider the asymptotic behavior of closed convex hulls W_n = conv{X_t : t ∈ T_n}, where (T_n) is an increasing sequence of subsets of T. We show that, under some conditions of weak dependence for the random field under consideration and for some sequence (b_n)_{n≥1}, with probability 1 (in the sense...

On the consistency of sieve bootstrap prediction intervals for stationary time series

Roman Różański, Adam Zagdański (2004)

Discussiones Mathematicae Probability and Statistics

In the article, we consider the construction of prediction intervals for stationary time series using Bühlmann's [8], [9] sieve bootstrap approach. Basic theoretical properties concerning consistency are proved. We extend the results obtained earlier by Stine [21], Masarotto and Grigoletto [13] for an autoregressive time series of finite order to the rich class of linear and invertible stationary models. Finite sample performance of the constructed intervals is investigated by computer simulations.
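
The stripped-down sketch below shows the mechanics of a sieve-bootstrap one-step prediction interval: fit an AR(p) approximation by least squares, resample centred residuals to rebuild bootstrap series, refit, and predict from the last observed lags. Order selection and the handling of estimation error are simplified relative to the paper; fit_ar and all constants are illustrative.

```python
# Hedged sketch: a simplified sieve-bootstrap one-step prediction interval.
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) fit with an intercept; returns (coefficients, residuals)."""
    y = series[p:]
    X = np.column_stack([np.ones(len(y))] + [series[p - j:-j] for j in range(1, p + 1)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs, y - X @ coefs

rng = np.random.default_rng(6)
n, p, B = 300, 3, 500
x = np.zeros(n)
for t in range(2, n):                              # an AR(2) series as toy data
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

coefs, resid = fit_ar(x, p)
resid = resid - resid.mean()                       # centred residuals for resampling
last = x[-p:][::-1]                                # most recent lag first

future = np.empty(B)
for b in range(B):
    eps = rng.choice(resid, size=n, replace=True)
    xb = np.zeros(n)
    xb[:p] = x[:p]
    for t in range(p, n):                          # rebuild a bootstrap series
        xb[t] = coefs[0] + coefs[1:] @ xb[t - p:t][::-1] + eps[t]
    cb, _ = fit_ar(xb, p)                          # refit on the bootstrap series
    future[b] = cb[0] + cb[1:] @ last + rng.choice(resid)
lo, hi = np.quantile(future, [0.025, 0.975])
print(f"95% one-step prediction interval: [{lo:.2f}, {hi:.2f}]")
```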

One Bootstrap suffices to generate sharp uniform bounds in functional estimation

Paul Deheuvels (2011)

Kybernetika

We consider, in the framework of multidimensional observations, nonparametric functional estimators, which include, as special cases, the Akaike–Parzen–Rosenblatt kernel density estimators ([1, 18, 20]), and the Nadaraya–Watson kernel regression estimators ([16, 22]). We evaluate the sup-norm, over a given set 𝐈, of the difference between the estimator and a non-random functional centering factor (which reduces to the estimator mean for kernel density estimation). We show that, under suitable general...
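
The "one bootstrap suffices" idea can be illustrated very roughly as follows: a single bootstrap resample is drawn and the sup-norm distance between the resampled kernel density estimator and the original one is used as a proxy for the sup-norm fluctuation around the centering. The bandwidth and evaluation grid are arbitrary choices made for the demonstration.

```python
# Hedged sketch: sup-norm deviation of a KDE approximated from one bootstrap resample.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
x = rng.normal(size=500)
grid = np.linspace(-3, 3, 400)

kde = gaussian_kde(x, bw_method=0.3)
xb = rng.choice(x, size=x.size, replace=True)      # a single bootstrap resample
kde_b = gaussian_kde(xb, bw_method=0.3)

sup_dev = np.max(np.abs(kde_b(grid) - kde(grid)))  # bootstrap sup-norm deviation
print(f"bootstrap estimate of the sup-norm fluctuation: {sup_dev:.4f}")
```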

Ordenes de convergencia para las aproximaciones normal y bootstrap en estimación no paramétrica de la función de densidad.

Ricardo Cao Abad (1990)

Trabajos de Estadística

This article concerns the distributions used to construct confidence intervals for the density function in a nonparametric setting. The convergence rates of the normal limit, its plug-in approximation, and the bootstrap method are compared. It is concluded that the bootstrap performs better than the other two approximations, both in its classical form and with the bootstrap normal approximation.
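
To make the comparison concrete, the sketch below builds a pointwise confidence interval for the density at a single point in two of the ways discussed: the normal approximation with a plug-in variance and the percentile bootstrap. Bias is ignored and the bandwidth is fixed, so the example only shows what is being compared, not the convergence rates the paper studies.

```python
# Hedged sketch: pointwise density CI by (i) a normal plug-in approximation and
# (ii) the percentile bootstrap, for a Gaussian-kernel estimator with fixed bandwidth.
import numpy as np
from scipy import stats

def kde_at(sample, x0, h):
    """Gaussian-kernel density estimate at the single point x0."""
    return np.mean(stats.norm.pdf((x0 - sample) / h)) / h

rng = np.random.default_rng(8)
x = rng.normal(size=400)
n, h, x0 = len(x), 0.3, 0.5
f_hat = kde_at(x, x0, h)

# (i) Normal approximation: Var(f_hat) ~ f(x0) * R(K) / (n h), with R(K) = 1/(2 sqrt(pi)).
se = np.sqrt(f_hat / (2 * np.sqrt(np.pi)) / (n * h))
normal_ci = (f_hat - 1.96 * se, f_hat + 1.96 * se)

# (ii) Percentile bootstrap.
boot = np.array([kde_at(rng.choice(x, size=n, replace=True), x0, h) for _ in range(1000)])
boot_ci = tuple(np.quantile(boot, [0.025, 0.975]))

print(f"normal CI:    ({normal_ci[0]:.3f}, {normal_ci[1]:.3f})")
print(f"bootstrap CI: ({boot_ci[0]:.3f}, {boot_ci[1]:.3f})")
```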

Toward the best constant factor for the Rademacher-Gaussian tail comparison

Iosif Pinelis (2007)

ESAIM: Probability and Statistics

It is proved that the best constant factor in the Rademacher-Gaussian tail comparison is between two explicitly defined absolute constants c1 and c2 such that c2 ≈ 1.01 c1. A discussion of the relative merits of this result versus limit theorems is given.
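
A small numeric illustration of what the tail comparison refers to: the ratio P(Sn/√n ≥ x) / P(Z ≥ x) for an equal-weights normalized Rademacher sum Sn and a standard normal Z, computed exactly from the binomial distribution. The paper's constants apply to arbitrary weights; this only shows the ratio staying bounded in one simple case.

```python
# Hedged numeric illustration of the Rademacher-Gaussian tail ratio for equal weights.
import numpy as np
from scipy import stats

n = 40
k = np.arange(n + 1)
pmf = stats.binom.pmf(k, n, 0.5)           # S_n = 2 * Bin(n, 1/2) - n
s_over_sqrt_n = (2 * k - n) / np.sqrt(n)

for x in [1.0, 1.5, 2.0, 2.5]:
    rademacher_tail = pmf[s_over_sqrt_n >= x].sum()
    gaussian_tail = stats.norm.sf(x)
    print(f"x = {x:3.1f}   ratio = {rademacher_tail / gaussian_tail:.3f}")
```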
