Displaying 121 – 140 of 160


Penalized estimators for non linear inverse problems

Jean-Michel Loubes, Carenne Ludeña (2010)

ESAIM: Probability and Statistics

In this article we tackle nonlinear ill-posed inverse problems from a statistical point of view. We discuss the problem of estimating an indirectly observed function, without prior knowledge of its regularity, from noisy observations. For this we consider two approaches: one based on the Tikhonov regularization procedure, and another based on model selection methods for both ordered and non-ordered subsets. In each case we prove consistency of the estimators and show...
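(As a concrete aside: the Tikhonov step mentioned in this abstract can be sketched in a discretized setting. Everything below — the integration-type operator K, the noise level, and the penalty weight lam — is an illustrative assumption, not taken from the paper.)

```python
import numpy as np

def tikhonov(K, y, lam):
    """Tikhonov-regularized least squares: argmin_f ||K f - y||^2 + lam ||f||^2.

    Closed form: f = (K^T K + lam I)^{-1} K^T y.
    """
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Hypothetical ill-posed example: a discrete integration operator,
# whose small singular values make naive inversion noise-amplifying.
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0, 1, n)
K = np.tril(np.ones((n, n))) / n                 # y_j ~ integral of f up to t_j
f_true = np.sin(2 * np.pi * t)                   # indirectly observed function
y = K @ f_true + 0.01 * rng.standard_normal(n)   # noisy observations

f_hat = tikhonov(K, y, lam=1e-3)                 # lam chosen for illustration only
```

The penalty damps the components associated with small singular values of K, trading a small bias on the smooth signal for a large reduction in noise amplification.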

Plug-in estimators for higher-order transition densities in autoregression

Anton Schick, Wolfgang Wefelmeyer (2009)

ESAIM: Probability and Statistics

In this paper we obtain root-n consistency and functional central limit theorems in weighted L1-spaces for plug-in estimators of the two-step transition density in the classical stationary linear autoregressive model of order one, assuming essentially only that the innovation density has bounded variation. We also show that plugging in a properly weighted residual-based kernel estimator for the unknown innovation density improves on plugging in an unweighted residual-based kernel estimator....

Poisson sampling for spectral estimation in periodically correlated processes

Vincent Monsan (1994)

Applicationes Mathematicae

We study estimation problems for periodically correlated, non-Gaussian processes. We estimate the correlation functions and the spectral densities from continuous-time samples. From a random time sample, we construct three types of estimators for the spectral densities and prove their consistency.

Recursive bias estimation for multivariate regression smoothers

Pierre-André Cornillon, N. W. Hengartner, E. Matzner-Løber (2014)

ESAIM: Probability and Statistics

This paper presents a practical and simple fully nonparametric multivariate smoothing procedure that adapts to the underlying smoothness of the true regression function. Our estimator is easily computed by successive application of existing base smoothers (without the need to select an optimal smoothing parameter), such as thin-plate spline or kernel smoothers. The resulting smoother has better out-of-sample predictive capabilities than the underlying base smoother, or competing structurally...
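(A minimal sketch of the recursive bias-correction idea — fit a base smoother, then repeatedly smooth the residuals and add the estimated bias back, i.e. L2-boosting of a linear smoother. The Gaussian-kernel base smoother, data, bandwidth, and iteration count are illustrative assumptions, not the paper's construction.)

```python
import numpy as np

def kernel_smoother_matrix(x, h):
    """Linear base smoother S: Gaussian kernel weights with rows normalized."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def bias_corrected_smooth(x, y, h, n_iter):
    """Recursive bias correction: the smoothed residuals S (y - m_hat)
    estimate the bias of the current fit and are added back each round."""
    S = kernel_smoother_matrix(x, h)
    m_hat = S @ y
    for _ in range(n_iter):
        m_hat = m_hat + S @ (y - m_hat)
    return m_hat

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 100)
y = np.sin(4 * np.pi * x) + 0.1 * rng.standard_normal(100)
m0 = kernel_smoother_matrix(x, 0.1) @ y    # oversmoothed base fit
m5 = bias_corrected_smooth(x, y, 0.1, 5)   # bias-corrected fit
```

With a deliberately oversmoothing base fit, each correction round recovers some of the signal the base smoother flattened out, so the residual sum of squares drops across iterations.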

Redescending M-estimators in regression analysis, cluster analysis and image analysis

Christine H. Müller (2004)

Discussiones Mathematicae Probability and Statistics

We give a review on the properties and applications of M-estimators with redescending score function. For regression analysis, some of these redescending M-estimators can attain the maximum breakdown point which is possible in this setup. Moreover, some of them are the solutions of the problem of maximizing the efficiency under bounded influence function when the regression coefficient and the scale parameter are estimated simultaneously. Hence redescending M-estimators satisfy several outlier robustness...
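(A standard example of the redescending score functions this review covers is Tukey's biweight; the tuning constant c = 4.685, commonly quoted for high efficiency at the normal model, is an assumption here, not a value from the review.)

```python
import numpy as np

def tukey_psi(u, c=4.685):
    """Tukey biweight score: psi(u) = u (1 - (u/c)^2)^2 for |u| <= c, else 0.

    'Redescending' means psi returns to exactly zero for large |u|,
    so gross outliers contribute nothing to the estimating equation.
    """
    u = np.asarray(u, dtype=float)
    inside = np.abs(u) <= c
    return np.where(inside, u * (1 - (u / c) ** 2) ** 2, 0.0)

scores = tukey_psi(np.array([0.0, 1.0, 10.0]))  # the outlier at 10 scores 0
```

Contrast with Huber's score, which merely bounds the influence of outliers; a redescending psi removes it entirely beyond c.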

Remarks on optimum kernels and optimum boundary kernels

Jitka Poměnková (2008)

Applications of Mathematics

Kernel smoothers are among the most popular nonparametric functional estimates used for describing data structure. They can be applied to the fixed design regression model as well as to the random design regression model. The main idea of this paper is to present a construction of the optimum kernel and optimum boundary kernel by means of the Gegenbauer and Legendre polynomials.
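(For orientation, a basic kernel smoother with the Epanechnikov kernel — a standard second-order kernel that is MSE-optimal in its class — applied in a Nadaraya-Watson regression. This does not reproduce the paper's Gegenbauer/Legendre construction; data and bandwidth are illustrative.)

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) on [-1, 1], zero outside."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def nadaraya_watson(x0, x, y, h):
    """m_hat(x0) = sum_i K((x0 - x_i)/h) y_i / sum_i K((x0 - x_i)/h)."""
    w = epanechnikov((x0 - x) / h)
    return np.sum(w * y) / np.sum(w)

# Random-design example; the smoother applies to fixed designs the same way.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
m_hat = nadaraya_watson(0.25, x, y, h=0.1)   # true m(0.25) = sin(pi/2) = 1
```

Near the boundary of the design interval this estimator is biased, which is exactly what the boundary kernels of the paper are designed to repair.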

Risk bounds for mixture density estimation

Alexander Rakhlin, Dmitry Panchenko, Sayan Mukherjee (2005)

ESAIM: Probability and Statistics

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Estimator (MLE) and the greedy procedure described by Li and Barron (1999) under the additional assumption of boundedness of densities. We prove an O(1/n) bound on the estimation error which does not depend on the number of densities in the estimated combination. Under the boundedness assumption, this improves the bound of Li and Barron by...

Risk bounds for mixture density estimation

Alexander Rakhlin, Dmitry Panchenko, Sayan Mukherjee (2010)

ESAIM: Probability and Statistics

In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the Maximum Likelihood Estimator (MLE) and the greedy procedure described by Li and Barron (1999) under the additional assumption of boundedness of densities. We prove an O(1/n) bound on the estimation error which does not depend on the number of densities in the estimated combination. Under the boundedness assumption, this improves the bound of Li and Barron...

Segmentation of the Poisson and negative binomial rate models: a penalized estimator

Alice Cleynen, Emilie Lebarbier (2014)

ESAIM: Probability and Statistics

We consider the segmentation problem of Poisson and negative binomial (i.e. overdispersed Poisson) rate distributions. In segmentation, an important issue remains the choice of the number of segments. To this end, we propose a penalized log-likelihood estimator where the penalty function is constructed in a non-asymptotic context following the works of L. Birgé and P. Massart. The resulting estimator is proved to satisfy an oracle inequality. The performance of our criterion is assessed using simulated...

Selección de la ventana en suavización tipo núcleo de la parte no paramétrica de un modelo parcialmente lineal con errores autorregresivos.

Germán Aneiros Pérez (2000)

Qüestiió

Suppose that y_i = ζ_i^T β + m(t_i) + ε_i, i = 1, ..., n, where the (p × 1) vector β and the function m(·) are unknown, and the errors ε_i come from a stationary first-order autoregressive (AR(1)) process. We discuss the problem of selecting the bandwidth parameter of a kernel-type estimator of the function m(·) based on a Generalized Least Squares estimator of β. We obtain the asymptotic expression of an optimal bandwidth and propose a method for estimating it, so that it yields...

Semiparametric deconvolution with unknown noise variance

Catherine Matias (2002)

ESAIM: Probability and Statistics

This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Non-parametric convolution models are concerned with the case of an entirely known distribution for the noise sequence, and they have been widely studied in the past decade. The main property of those models is the following one: the more regular the distribution of the noise is, the worse the rate of convergence for the estimation of the signal’s density...

Semiparametric deconvolution with unknown noise variance

Catherine Matias (2010)

ESAIM: Probability and Statistics

This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Non-parametric convolution models are concerned with the case of an entirely known distribution for the noise sequence, and they have been widely studied in the past decade. The main property of those models is the following one: the more regular the distribution of the noise is, the worse the rate of convergence for the estimation of the signal's density...

Semiparametric estimation of the parameters of multivariate copulas

Eckhard Liebscher (2009)

Kybernetika

In the paper we investigate properties of maximum pseudo-likelihood estimators for the copula density and minimum distance estimators for the copula. We derive statements on the consistency and the asymptotic normality of the estimators for the parameters.
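(The pseudo-likelihood idea can be sketched as follows: replace the unknown margins by rescaled ranks, then maximize the copula log-density in the parameter. The bivariate Clayton family, the simulated data, and the grid search below are all illustrative assumptions, not the paper's setting.)

```python
import numpy as np

def pseudo_observations(x):
    """Rescaled ranks U_ij = R_ij / (n + 1): nonparametric estimates of the
    marginal CDFs at the data, used in place of the unknown margins."""
    x = np.asarray(x, dtype=float)
    n = x.shape[0]
    ranks = x.argsort(axis=0).argsort(axis=0) + 1
    return ranks / (n + 1)

def clayton_log_pseudolik(theta, u):
    """Log pseudo-likelihood of a bivariate Clayton copula (theta > 0):
    c(a,b) = (1+theta) (ab)^(-theta-1) (a^-theta + b^-theta - 1)^(-2-1/theta)."""
    a, b = u[:, 0], u[:, 1]
    s = a ** (-theta) + b ** (-theta) - 1
    return np.sum(np.log(1 + theta) - (theta + 1) * (np.log(a) + np.log(b))
                  - (2 + 1 / theta) * np.log(s))

# Hypothetical positively dependent pair; estimate theta by grid search.
rng = np.random.default_rng(5)
z = rng.standard_normal((500, 2))
x = np.column_stack([z[:, 0], 0.8 * z[:, 0] + 0.6 * z[:, 1]])  # correlation 0.8
u = pseudo_observations(x)
grid = np.linspace(0.1, 10, 200)
theta_hat = grid[np.argmax([clayton_log_pseudolik(t, u) for t in grid])]
```

Using ranks rather than parametric marginal fits is what makes the procedure semiparametric: only the copula carries a finite-dimensional parameter.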

Smoothing and preservation of irregularities using local linear fitting

Irène Gijbels (2008)

Applications of Mathematics

For nonparametric estimation of a smooth regression function, local linear fitting is a widely-used method. The goal of this paper is to briefly review how to use this method when the unknown curve possibly has some irregularities, such as jumps or peaks, at unknown locations. It is then explained how the same basic method can be used when estimating unsmooth probability densities and conditional variance functions.
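(A minimal sketch of local linear fitting at a single point: a weighted least-squares fit of y on an intercept and x - x0, whose fitted intercept is the estimate. The Gaussian weights, bandwidth, and test function with a kink are illustrative assumptions.)

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate at x0: weighted LS fit of y on (1, x - x0);
    the fitted intercept beta[0] is m_hat(x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.abs(x - 0.5) + 0.05 * rng.standard_normal(200)  # irregularity (kink) at 0.5
m_hat = local_linear(0.2, x, y, h=0.05)                # true m(0.2) = 0.3
```

Away from the kink the fit is essentially unbiased (the truth is locally linear); handling the neighborhood of the irregularity itself is the subject of the methods the paper reviews.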

Smoothness of Metropolis-Hastings algorithm and application to entropy estimation

Didier Chauveau, Pierre Vandekerkhove (2013)

ESAIM: Probability and Statistics

The transition kernel of the well-known Metropolis-Hastings (MH) algorithm has a point mass at the chain’s current position, which prevents smoothness properties from being derived directly for the successive marginal densities produced by this algorithm. We show here that under mild smoothness assumptions on the MH algorithm’s “input” densities (the initial, proposal and target distributions), propagation of a Lipschitz condition for the iterated densities can be proved. This allows us to build a consistent...
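(The point mass this abstract refers to arises whenever a proposal is rejected and the chain stays put. A minimal random-walk MH sampler makes this visible; the standard normal target and proposal scale below are illustrative, not from the paper.)

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, scale, rng):
    """Random-walk Metropolis: propose x' = x + scale * N(0,1), accept with
    probability min(1, pi(x')/pi(x)). On rejection the chain repeats its
    current state -- the point mass in the transition kernel."""
    x = x0
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + scale * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain[i] = x
    return chain

rng = np.random.default_rng(3)
# Target: standard normal, log pi(x) = -x^2/2 up to a constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, 2.0, rng)
```

The repeated values in `chain` are exactly the atoms of the transition kernel; the marginal density of the chain is nonetheless smooth under the conditions the paper establishes.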
