Displaying 41 – 60 of 76


New M-estimators in semi-parametric regression with errors in variables

Cristina Butucea, Marie-Luce Taupin (2008)

Annales de l'I.H.P. Probabilités et statistiques

In the regression model with errors in variables, we observe n i.i.d. copies of (Y, Z) satisfying Y = f_θ0(X) + ξ and Z = X + ɛ, involving independent and unobserved random variables X, ξ, ɛ and a regression function f_θ0, known up to a finite-dimensional parameter θ0. The common densities of the X_i’s and of the ξ_i’s are unknown, whereas the distribution of ɛ is completely known. We aim at estimating the parameter θ0 by using the observations (Y_1, Z_1), …, (Y_n, Z_n). We propose an estimation procedure based on the least...
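
As a toy illustration of this observation scheme (not of the authors' estimator), the sketch below simulates Y = f_θ0(X) + ξ, Z = X + ɛ with a linear f_θ0 chosen for convenience, and shows the attenuation bias that a naive least-squares fit of Y on Z incurs, together with the classical moment correction available when the measurement-error variance is known; all numerical choices are assumptions made here.

```python
import numpy as np

# A minimal simulation of the errors-in-variables regression model
#   Y = f_theta0(X) + xi,   Z = X + eps,
# with f_theta0(x) = theta0 * x (hypothetical choice for illustration).
rng = np.random.default_rng(0)
n, theta0 = 5000, 2.0
X = rng.normal(0.0, 1.0, n)          # unobserved covariate (density unknown in the paper)
xi = rng.normal(0.0, 0.5, n)         # regression noise (density unknown in the paper)
eps = rng.normal(0.0, 0.7, n)        # measurement error (distribution assumed known)
Y = theta0 * X + xi
Z = X + eps                          # only (Y, Z) is observed

# Naive least squares of Y on Z is biased towards zero (attenuation):
theta_naive = np.sum(Y * Z) / np.sum(Z ** 2)

# With a *known* error variance, the classical moment correction removes the bias;
# this simple correction is specific to the linear case, not the paper's M-estimator.
theta_corrected = np.sum(Y * Z) / (np.sum(Z ** 2) - n * 0.7 ** 2)

print(f"naive: {theta_naive:.3f}, corrected: {theta_corrected:.3f}, true: {theta0}")
```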

Nonparametric regression estimation based on spatially inhomogeneous data: minimax global convergence rates and adaptivity

Anestis Antoniadis, Marianna Pensky, Theofanis Sapatinas (2014)

ESAIM: Probability and Statistics

We consider the nonparametric regression estimation problem of recovering an unknown response function f on the basis of spatially inhomogeneous data when the design points follow a known density g with a finite number of well-separated zeros. In particular, we consider two different cases: when g has zeros of a polynomial order and when g has zeros of an exponential order. These two cases correspond to moderate and severe data losses, respectively. We obtain asymptotic (as the sample size increases)...

On orthogonal series estimation of bounded regression functions

Waldemar Popiński (2001)

Applicationes Mathematicae

The problem of nonparametric estimation of a bounded regression function f ∈ L²([a,b]^d), [a,b] ⊂ ℝ, d ≥ 1, using an orthonormal system of functions e_k, k=1,2,..., is considered in the case when the observations follow the model Y_i = f(X_i) + η_i, i=1,...,n, where X_i and η_i are i.i.d. copies of independent random variables X and η, respectively, the distribution of X has density ϱ, and η has mean zero and finite variance. The estimators are constructed by proper truncation of the function f̂(x) = ∑_{k=1}^{N(n)} ĉ_k e_k(x), where the coefficients ĉ_1, ..., ĉ_{N(n)} are determined...
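
A minimal sketch of an estimator of this type, under assumptions made purely for illustration (uniform design on [0,1], cosine basis, projection-type coefficients ĉ_k = n⁻¹ ∑ Y_i e_k(X_i)); the estimate is clipped to a known bound on f, in the spirit of the truncation described above.

```python
import numpy as np

def cosine_basis(k, x):
    """Orthonormal cosine basis on [0, 1]: e_1 = 1, e_k = sqrt(2) cos((k-1) pi x)."""
    return np.ones_like(x) if k == 1 else np.sqrt(2.0) * np.cos((k - 1) * np.pi * x)

def series_estimator(X, Y, N, bound):
    """Truncated orthogonal series estimator, clipped to |f| <= bound."""
    coeffs = [np.mean(Y * cosine_basis(k, X)) for k in range(1, N + 1)]
    def f_hat(x):
        x = np.asarray(x, dtype=float)
        val = sum(c * cosine_basis(k, x) for k, c in enumerate(coeffs, start=1))
        return np.clip(val, -bound, bound)   # truncation to the known bound on f
    return f_hat

# Toy usage: f(x) = sin(2 pi x), uniform design, truncation level N(n) ~ n^{1/3}.
rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.3, n)
f_hat = series_estimator(X, Y, N=int(n ** (1 / 3)), bound=1.0)
print(f_hat(np.array([0.1, 0.25, 0.5])))
```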

On pointwise adaptive curve estimation based on inhomogeneous data

Stéphane Gaïffas (2007)

ESAIM: Probability and Statistics

We want to recover a signal based on noisy inhomogeneous data (the amount of data can vary strongly on the estimation domain). We model the data using nonparametric regression with random design, and we focus on the estimation of the regression at a fixed point x0 with little or much data. We propose a method which adapts both to the local amount of data (the design density is unknown) and to the local smoothness of the regression function. The procedure consists of a local polynomial...
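
The sketch below illustrates a plain (non-adaptive) local polynomial fit at a point x0 with a fixed, hand-picked bandwidth; the data-driven choice of that bandwidth, which is the point of the paper, is not reproduced. Kernel, degree, and the toy inhomogeneous design are assumptions of this example.

```python
import numpy as np

def local_poly_estimate(X, Y, x0, h, degree=1):
    """Local polynomial estimate of f(x0): weighted polynomial fit in a window of width h."""
    w = np.maximum(1.0 - ((X - x0) / h) ** 2, 0.0)        # Epanechnikov-type kernel weights
    keep = w > 0
    if keep.sum() <= degree:                               # too little local data
        return np.nan
    # np.polyfit applies its weights to the unsquared residuals,
    # so pass sqrt(kernel weights) to weight the squared residuals by the kernel.
    coefs = np.polyfit(X[keep] - x0, Y[keep], degree, w=np.sqrt(w[keep]))
    return coefs[-1]                                       # fitted polynomial evaluated at x0

# Toy usage with an inhomogeneous design (many points near 0.1, few near 0.9).
rng = np.random.default_rng(2)
X = np.concatenate([rng.uniform(0.0, 0.3, 900), rng.uniform(0.7, 1.0, 100)])
Y = np.cos(4 * X) + rng.normal(0.0, 0.2, X.size)
for x0, h in [(0.1, 0.05), (0.9, 0.15)]:                   # larger bandwidth where data is scarce
    print(x0, local_poly_estimate(X, Y, x0, h))
```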

On robust GMM estimation with applications in economics and finance

Ansgar Steland (2000)

Discussiones Mathematicae Probability and Statistics

Generalized Method of Moments (GMM) estimators have been a popular tool in econometrics since their introduction by Hansen (1982), because this approach provides feasible solutions for many problems present in economic data where least squares or maximum likelihood methods fail when naively applied. These problems may arise in errors-in-variables regression, estimation of labor demand curves, and asset pricing in finance, which are discussed here. In this paper we study a GMM estimator for the rank modeling approach...
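
As a generic illustration of the GMM machinery (not of the robust rank-modeling estimator studied in the paper), the sketch below implements a two-step GMM estimator for a linear instrumental-variables moment condition E[z(y − xβ)] = 0; the model and instruments are invented for the example.

```python
import numpy as np

def gmm_iv(y, x, z):
    """Two-step GMM for the moment condition E[z * (y - x @ beta)] = 0 (linear IV model)."""
    n = y.size
    def moments(beta):
        return z * (y - x @ beta)[:, None]          # n x q matrix of moment contributions
    def objective(beta, W):
        gbar = moments(beta).mean(axis=0)
        return gbar @ W @ gbar
    # Step 1: identity weighting; the moments are linear in beta, so the minimizer
    # solves A beta = b in the least-squares sense, with A = E_n[z x'], b = E_n[z y].
    A, b = z.T @ x / n, z.T @ y / n
    beta1 = np.linalg.lstsq(A, b, rcond=None)[0]
    # Step 2: optimal weighting matrix from the step-1 moment contributions.
    g = moments(beta1)
    W = np.linalg.inv(g.T @ g / n)
    beta2 = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return beta2, objective(beta2, W)

# Toy usage: one endogenous regressor, two instruments.
rng = np.random.default_rng(3)
n = 2000
u = rng.normal(size=n)
z = rng.normal(size=(n, 2))
x = (z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n))[:, None]
y = x[:, 0] * 1.5 + u
beta_hat, J = gmm_iv(y, x, z)
print(beta_hat, J)
```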

On the adaptive wavelet estimation of a multidimensional regression function under α-mixing dependence: Beyond the standard assumptions on the noise

Christophe Chesneau (2013)

Commentationes Mathematicae Universitatis Carolinae

We investigate the estimation of a multidimensional regression function f from n observations of an α-mixing process (Y, X), where Y = f(X) + ξ, X represents the design and ξ the noise. We concentrate on wavelet methods. In most papers considering this problem, either the proposed wavelet estimator is not adaptive (i.e., it depends on the knowledge of the smoothness of f in its construction) or it is supposed that ξ is bounded and/or has a known distribution. In this paper, we go far beyond this classical framework....
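
A minimal sketch of a (non-adaptive) wavelet regression estimator by hard thresholding on an equispaced design, assuming the PyWavelets package is available; the α-mixing dependence and the weak noise assumptions treated in the paper are not addressed by this toy version, and the wavelet, decomposition level, and threshold are choices made here.

```python
import numpy as np
import pywt  # PyWavelets; assumed available

def wavelet_regression(y, wavelet="db4", level=4, sigma=None):
    """Hard-threshold the wavelet coefficients of equispaced observations y_i = f(i/n) + xi_i."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    if sigma is None:
        # Robust noise-scale estimate from the finest detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(y)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

# Toy usage on a piecewise-smooth signal.
rng = np.random.default_rng(4)
n = 1024
t = np.arange(n) / n
f = np.where(t < 0.5, np.sin(4 * np.pi * t), 0.3)
y = f + 0.1 * rng.standard_normal(n)
f_hat = wavelet_regression(y)
print(np.mean((f_hat - f) ** 2))
```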

On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer

Peter L. Bartlett, Shahar Mendelson, Petra Philips (2010)

ESAIM: Probability and Statistics

We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates...

On the optimality of the empirical risk minimization procedure for the convex aggregation problem

Guillaume Lecué, Shahar Mendelson (2013)

Annales de l'I.H.P. Probabilités et statistiques

We study the performance of empirical risk minimization (ERM), with respect to the quadratic risk, in the context of convex aggregation, in which one wants to construct a procedure whose risk is as close as possible to the best function in the convex hull of an arbitrary finite class F. We show that ERM performed in the convex hull of F is an optimal aggregation procedure for the convex aggregation problem. We also show that if this procedure is used for the problem of model selection aggregation,...
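
A minimal sketch of ERM over the convex hull of a finite dictionary F for the quadratic risk: the aggregate is the simplex-constrained least-squares combination of the dictionary's predictions. Solving the program with SciPy's SLSQP routine, and the toy dictionary itself, are implementation choices of this example.

```python
import numpy as np
from scipy.optimize import minimize

def erm_convex_aggregation(preds, y):
    """Minimize the empirical quadratic risk over convex combinations of the columns of `preds`.

    preds : (n, M) array of predictions of the M dictionary functions at the n design points.
    y     : (n,) array of responses.
    """
    M = preds.shape[1]
    risk = lambda w: np.mean((preds @ w - y) ** 2)
    res = minimize(
        risk,
        x0=np.full(M, 1.0 / M),                            # start at the uniform mixture
        method="SLSQP",
        bounds=[(0.0, 1.0)] * M,
        constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    )
    return res.x

# Toy usage: aggregate three crude regression functions of x.
rng = np.random.default_rng(5)
n = 500
x = rng.uniform(-1, 1, n)
y = 0.7 * x + 0.3 * x ** 2 + rng.normal(0, 0.1, n)
dictionary = np.column_stack([x, x ** 2, np.sign(x)])      # F = {f1, f2, f3}
w = erm_convex_aggregation(dictionary, y)
print(w, w.sum())
```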

One Bootstrap suffices to generate sharp uniform bounds in functional estimation

Paul Deheuvels (2011)

Kybernetika

We consider, in the framework of multidimensional observations, nonparametric functional estimators, which include, as special cases, the Akaike–Parzen–Rosenblatt kernel density estimators ([1, 18, 20]), and the Nadaraya–Watson kernel regression estimators ([16, 22]). We evaluate the sup-norm, over a given set 𝐈, of the difference between the estimator and a non-random functional centering factor (which reduces to the estimator mean for kernel density estimation). We show that, under suitable general...
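
A toy numerical illustration of the bootstrap idea in the kernel density case: a single bootstrap resample is used as a proxy for the sup-norm deviation of the kernel estimator from its non-random centering E f̂, which happens to be available in closed form here because both data and kernel are Gaussian. Bandwidth, grid, and sample size are assumptions of the example; this comparison is not the paper's sharp bound.

```python
import numpy as np

def kde(sample, grid, h):
    """Gaussian kernel density estimator evaluated on `grid`."""
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
n, h = 2000, 0.2
grid = np.linspace(-3, 3, 301)
sample = rng.normal(0.0, 1.0, n)
f_hat = kde(sample, grid, h)

# Non-random centering E f_hat = (K_h * f)(x); for a Gaussian kernel and N(0,1) data
# this is the N(0, 1 + h^2) density, so the "true" sup-norm deviation is known here.
center = np.exp(-0.5 * grid ** 2 / (1 + h ** 2)) / np.sqrt(2 * np.pi * (1 + h ** 2))
true_dev = np.max(np.abs(f_hat - center))

# One bootstrap resample: the sup-norm distance between the bootstrapped estimator
# and the original one serves as a proxy for the deviation above.
boot = rng.choice(sample, size=n, replace=True)
boot_dev = np.max(np.abs(kde(boot, grid, h) - f_hat))

print(f"sup|f_hat - E f_hat| = {true_dev:.4f},  bootstrap proxy = {boot_dev:.4f}")
```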

On-line nonparametric estimation.

Rafail Khasminskii (2004)

SORT

A survey of some recent results on nonparametric on-line estimation is presented. The first result deals with on-line estimation of a smooth signal S(t) in the classic 'signal plus Gaussian white noise' model. Then an analogous on-line estimator for the regression estimation problem with equidistant design is described and justified. Finally, some preliminary results related to on-line estimation for an observed diffusion process are described.

Optimal estimators in learning theory

V. N. Temlyakov (2006)

Banach Center Publications

This paper is a survey of recent results on some problems of supervised learning in the setting formulated by Cucker and Smale. Supervised learning, or learning-from-examples, refers to a process that builds, on the basis of available data of inputs x_i and outputs y_i, i = 1,...,m, a function that best represents the relation between the inputs x ∈ X and the corresponding outputs y ∈ Y. The goal is to find an estimator f_z on the basis of given data z := ((x_1, y_1), ..., (x_m, y_m)) that approximates well the regression function f_ρ of...

Orthogonal series estimation of band-limited regression functions

Waldemar Popiński (2014)

Applicationes Mathematicae

The problem of nonparametric function fitting using the complete orthogonal system of Whittaker cardinal functions s_k, k = 0, ±1, ..., for the observation model y_j = f(u_j) + η_j, j = 1,...,n, is considered, where f ∈ L²(ℝ) ∩ BL(Ω) for Ω > 0 is a band-limited function, u_j are independent random variables uniformly distributed in the observation interval [-T,T], η_j are uncorrelated or correlated random variables with zero mean value and finite variance, independent of the observation points. Conditions for convergence...
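
A minimal sketch of least-squares fitting in a truncated Whittaker cardinal (sinc) basis s_k(u) = sinc(Ωu/π − k); here np.sinc denotes the normalized sinc sin(πx)/(πx), and the band-width Ω, the truncation level, and the test signal are choices made for this example.

```python
import numpy as np

def sinc_design(u, Omega, K):
    """Design matrix of Whittaker cardinal functions s_k(u) = sinc(Omega*u/pi - k), k = -K..K."""
    k = np.arange(-K, K + 1)
    return np.sinc(Omega * u[:, None] / np.pi - k[None, :])

def fit_bandlimited(u, y, Omega, K):
    """Least-squares fit of y_j = f(u_j) + eta_j in the truncated cardinal basis."""
    S = sinc_design(u, Omega, K)
    coefs, *_ = np.linalg.lstsq(S, y, rcond=None)
    return lambda t: sinc_design(np.atleast_1d(t), Omega, K) @ coefs

# Toy usage: a band-limited signal observed at uniform random points in [-T, T].
rng = np.random.default_rng(7)
T, Omega, n = 5.0, np.pi, 400
u = rng.uniform(-T, T, n)
f = lambda t: np.sinc(t - 1.0) + 0.5 * np.sinc(t + 2.0)    # band-limited with Omega = pi
y = f(u) + rng.normal(0.0, 0.1, n)
f_hat = fit_bandlimited(u, y, Omega, K=8)
t = np.array([-2.0, 0.0, 1.0])
print(f_hat(t), f(t))
```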

Orthogonal series regression estimation under long-range dependent errors

Waldemar Popiński (2001)

Applicationes Mathematicae

This paper is concerned with general conditions for convergence rates of nonparametric orthogonal series estimators of the regression function. The estimators are obtained by the least squares method on the basis of an observation sample Y_i = f(X_i) + η_i, i=1,...,n, where X_i ∈ A ⊂ ℝ^d are independently chosen from a distribution with density ϱ ∈ L¹(A) and η_i are zero mean stationary errors with long-range dependence. Convergence rates of the error n⁻¹ ∑_{i=1}^n (f(X_i) − f̂_N(X_i))² for the estimator f̂_N(x) = ∑_{k=1}^N ĉ_k e_k(x), constructed using an orthonormal system e_k, k=1,2,...,...

Orthogonal series regression estimators for an irregularly spaced design

Waldemar Popiński (2000)

Applicationes Mathematicae

Nonparametric orthogonal series regression function estimation is investigated in the case of a fixed point design where the observation points are irregularly spaced in a finite interval [a,b] ⊂ ℝ. Convergence rates for the integrated mean-square error and pointwise mean-square error are obtained in the case of estimators constructed using the Legendre polynomials and Haar functions for regression functions satisfying the Lipschitz condition.
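
A minimal sketch of a Legendre-series regression fit on an irregularly spaced fixed design, using NumPy's Legendre utilities; the affine mapping of [a,b] onto [-1,1], the degree, and the toy design are choices of this example, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_series_fit(x, y, a, b, degree):
    """Least-squares Legendre-series regression estimate on an irregular fixed design in [a, b]."""
    t = 2.0 * (x - a) / (b - a) - 1.0             # map the design points to [-1, 1]
    coefs = legendre.legfit(t, y, degree)
    return lambda s: legendre.legval(2.0 * (np.asarray(s) - a) / (b - a) - 1.0, coefs)

# Toy usage: an irregular (clustered) fixed design on [0, 1] and a Lipschitz regression function.
rng = np.random.default_rng(8)
x = np.sort(np.concatenate([np.linspace(0.0, 0.4, 60), np.linspace(0.6, 1.0, 20)]))
f = lambda s: np.abs(s - 0.5)
y = f(x) + rng.normal(0.0, 0.05, x.size)
f_hat = legendre_series_fit(x, y, 0.0, 1.0, degree=6)
print(f_hat(np.array([0.1, 0.5, 0.9])))
```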

Partition-based conditional density estimation

S. X. Cohen, E. Le Pennec (2013)

ESAIM: Probability and Statistics

We propose a general partition-based strategy to estimate conditional density with candidate densities that are piecewise constant with respect to the covariate. Capitalizing on a general penalized maximum likelihood model selection result, we prove, on two specific examples, that the penalty of each model can be chosen roughly proportional to its dimension. We first study a classical strategy in which the densities are chosen piecewise conditional according to the variable. We then consider Gaussian...
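
A minimal sketch of the simplest partition-based candidate: the covariate range is cut into bins and, within each covariate bin, the conditional density of Y is a histogram, so the estimate is piecewise constant in both variables. The penalized model selection over partitions analyzed in the paper is not implemented; the bin grids below are arbitrary.

```python
import numpy as np

def partition_cond_density(x, y, x_edges, y_edges):
    """Histogram-type estimator of the conditional density of Y given X, piecewise constant in both variables."""
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    col_widths = np.diff(y_edges)
    row_totals = counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        dens = np.where(row_totals > 0, counts / (row_totals * col_widths), 0.0)
    def f_hat(x0, y0):
        i = np.clip(np.searchsorted(x_edges, x0, side="right") - 1, 0, len(x_edges) - 2)
        j = np.clip(np.searchsorted(y_edges, y0, side="right") - 1, 0, len(y_edges) - 2)
        return dens[i, j]
    return f_hat

# Toy usage: Y | X = x ~ N(x, 0.2^2) with X uniform on [0, 1].
rng = np.random.default_rng(9)
n = 20000
x = rng.uniform(0.0, 1.0, n)
y = rng.normal(x, 0.2)
f_hat = partition_cond_density(x, y, np.linspace(0, 1, 11), np.linspace(-1, 2, 31))
print(f_hat(0.5, 0.5), f_hat(0.5, 1.2))   # high near the conditional mean, low in the tail
```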

Penalization versus Goldenshluger–Lepski strategies in warped bases regression

Gaëlle Chagny (2013)

ESAIM: Probability and Statistics

This paper deals with the problem of estimating a regression function f, in a random design framework. We build and study two adaptive estimators based on model selection, applied with warped bases. We start with a collection of finite dimensional linear spaces, spanned by orthonormal bases. Instead of expanding directly the target function f on these bases, we rather consider the expansion of h = f ∘ G-1, where G is the cumulative distribution function of the design, following Kerkyacharian and...
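
A minimal sketch of the warping device, with choices made here for illustration (cosine basis, fixed dimension): each design point is replaced by its empirical c.d.f. value Ĝ(X_i), h = f ∘ G⁻¹ is estimated by an ordinary series estimator on [0,1], and f̂ = ĥ ∘ Ĝ. Neither the penalized model selection nor the Goldenshluger-Lepski strategy compared in the paper is implemented.

```python
import numpy as np

def warped_series_estimator(X, Y, dim):
    """Series estimator of f in a basis warped by the empirical design c.d.f. G_hat."""
    n = X.size
    X_sorted = np.sort(X)
    G_hat = lambda x: np.searchsorted(X_sorted, x, side="right") / n   # empirical c.d.f.
    U = G_hat(X)                                                       # warped design points in [0, 1]
    # Cosine basis on [0, 1]; h = f o G^{-1} is expanded in this basis.
    basis = lambda u: np.column_stack(
        [np.ones_like(u)] + [np.sqrt(2.0) * np.cos(k * np.pi * u) for k in range(1, dim)]
    )
    coefs = basis(U).T @ Y / n                                         # empirical coefficients of h
    return lambda x: basis(G_hat(np.asarray(x, dtype=float))) @ coefs  # f_hat = h_hat o G_hat

# Toy usage with a non-uniform design.
rng = np.random.default_rng(10)
n = 2000
X = rng.beta(2.0, 5.0, n)
Y = np.sin(3 * X) + rng.normal(0.0, 0.2, n)
f_hat = warped_series_estimator(X, Y, dim=8)
print(f_hat(np.array([0.1, 0.3, 0.6])))
```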

Probabilistic methods for semilinear partial differential equations. Applications to finance

Dan Crisan, Konstantinos Manolarakis (2010)

ESAIM: Mathematical Modelling and Numerical Analysis

With the pioneering work of [Pardoux and Peng, Syst. Contr. Lett. 14 (1990) 55–61; Pardoux and Peng, Lecture Notes in Control and Information Sciences 176 (1992) 200–217], we have at our disposal stochastic processes which solve the so-called backward stochastic differential equations. These processes provide us with a Feynman-Kac representation for the solutions of a class of nonlinear partial differential equations (PDEs) which appear in many applications in the field of Mathematical Finance....
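
A minimal regression Monte Carlo sketch of the BSDE route to a toy semilinear PDE: the conditional expectations in a backward Euler scheme are approximated by polynomial regression on simulated forward paths. The driver, terminal condition, regression basis, and discretization are all illustrative assumptions, not the authors' scheme or its error analysis.

```python
import numpy as np

# Backward Euler / regression Monte Carlo for the toy BSDE
#   Y_t = g(X_T) + \int_t^T f(Y_s) ds - \int_t^T Z_s dW_s,   X = x0 + W (Brownian motion),
# whose value Y_0 gives u(0, x0) for the semilinear PDE u_t + (1/2) u_xx + f(u) = 0, u(T, x) = g(x).
rng = np.random.default_rng(11)
n_paths, n_steps, T, x0 = 50_000, 50, 1.0, 0.0
dt = T / n_steps
g = lambda x: np.maximum(x, 0.0)          # terminal condition (illustrative)
f = lambda y: -0.05 * y                   # driver, e.g. discounting at 5% (illustrative)

# Simulate forward paths X = x0 + W.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = x0 + np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Backward induction: Y_{t_i} ~ E[Y_{t_{i+1}} | X_{t_i}] + f(.) dt, with the conditional
# expectation approximated by a degree-4 polynomial regression on X_{t_i}.
Y = g(X[:, -1])
for i in range(n_steps - 1, 0, -1):
    basis = np.vander(X[:, i], 5)                       # regression basis 1, x, ..., x^4
    coefs, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    cond_exp = basis @ coefs
    Y = cond_exp + f(cond_exp) * dt                     # explicit step in the driver
Y0 = Y.mean() + f(Y.mean()) * dt                        # plain average at t = 0 (X_0 deterministic)
print(Y0)
```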
