Displaying similar documents to “$L_1$-penalization in functional linear regression with subgaussian design”

Orthogonal series regression estimation under long-range dependent errors

Waldemar Popiński (2001)

Applicationes Mathematicae

Similarity:

This paper is concerned with general conditions for convergence rates of nonparametric orthogonal series estimators of the regression function. The estimators are obtained by the least squares method on the basis of an observation sample $Y_i = f(X_i) + \eta_i$, $i = 1,\dots,n$, where the $X_i \in A \subset \mathbb{R}^d$ are independently chosen from a distribution with density $\varrho \in L^1(A)$ and the $\eta_i$ are zero mean stationary errors with long-range dependence. Convergence rates of the error $n^{-1}\sum_{i=1}^{n}(f(X_i) - \hat{f}_N(X_i))^2$ for the estimator $\hat{f}_N(x) = \sum_{k=1}^{N} \hat{c}_k e_k(x)$, constructed using an orthonormal system...
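As a rough illustration of this kind of estimator (a sketch under assumed choices of basis, design, and noise; the paper's setting has long-range dependent errors, not the i.i.d. noise used here), the code below computes the least squares coefficients $\hat{c}_k$ of a cosine system and evaluates the empirical error above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 400, 12                        # sample size and series length (arbitrary)
X = rng.uniform(0.0, 1.0, size=n)     # design points X_i drawn from a density on A = [0,1]
f = lambda x: np.sin(2 * np.pi * x)   # "unknown" regression function (toy choice)
Y = f(X) + 0.3 * rng.normal(size=n)   # Y_i = f(X_i) + eta_i (i.i.d. noise here, unlike the paper)

# design matrix of the orthonormal cosine system e_k(x) = sqrt(2) cos(pi k x) on [0,1]
k = np.arange(1, N + 1)
E = np.sqrt(2.0) * np.cos(np.pi * np.outer(X, k))

c_hat, *_ = np.linalg.lstsq(E, Y, rcond=None)   # least squares coefficients c_hat_k
f_hat = E @ c_hat                               # fitted values f_hat_N(X_i)
print(np.mean((f(X) - f_hat) ** 2))             # empirical error n^{-1} sum_i (f - f_hat_N)^2
```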

Estimator selection in the Gaussian setting

Yannick Baraud, Christophe Giraud, Sylvie Huet (2014)

Annales de l'I.H.P. Probabilités et statistiques

Similarity:

We consider the problem of estimating the mean $f$ of a Gaussian vector $Y$ with independent components of common unknown variance $\sigma^2$. Our estimation procedure is based on estimator selection. More precisely, we start with an arbitrary and possibly infinite collection $\mathbb{F}$ of estimators of $f$ based on $Y$ and, with the same data $Y$, aim at selecting an estimator among $\mathbb{F}$ with the smallest Euclidean risk. No assumptions on the estimators are made and their dependencies with respect to $Y$ may be unknown....
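The paper's procedure works for arbitrary collections of estimators with unknown variance; as a much cruder stand-in, the sketch below selects among projection estimators of increasing dimension with a Mallows-$C_p$-style criterion and a plug-in variance estimate. Everything here (basis, criterion, variance estimate) is an illustrative assumption, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
f = np.sin(np.linspace(0, 3 * np.pi, n))   # unknown mean vector (toy choice)
Y = f + 0.5 * rng.normal(size=n)           # Gaussian observation Y

# orthonormal DCT-II basis; a smooth mean concentrates on low-frequency columns
t = np.arange(n)
U = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n)
U /= np.linalg.norm(U, axis=0)

coords = U.T @ Y
sigma2 = np.var(coords[n // 2:])           # crude variance estimate from high frequencies

# candidate estimators: projections onto the first D basis vectors, D = 1..n/2
crit = [np.sum((Y - U[:, :D] @ coords[:D]) ** 2) + 2 * sigma2 * D
        for D in range(1, n // 2)]
best_D = 1 + int(np.argmin(crit))
print("selected dimension:", best_D)
```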

Instrumental weighted variables under heteroscedasticity Part I – Consistency

Jan Ámos Víšek (2017)

Kybernetika

Similarity:

A proof of consistency of instrumental weighted variables, the robust version of the classical instrumental variables estimator, is given. It is proved that all solutions of the corresponding normal equations are contained, with high probability, in a ball whose radius can be selected, asymptotically, arbitrarily small. Then $\sqrt{n}$-consistency is also proved. An extended numerical study (Part II of the paper) offers a picture of the behavior of the estimator for finite samples under various...
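To convey the flavor of instrumental weighted variables (a deliberately simplified caricature; the paper's weight function and asymptotics are more involved), the sketch below takes one weighting step: compute a classical IV estimate, rank the squared residuals, and downweight the observations with the largest ones before re-solving the normal equation. The model and weights are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)
x = z + 0.5 * u                               # regressor, correlated with the error below
e = (1 + np.abs(z)) * (0.8 * u + 0.6 * rng.normal(size=n))  # endogenous, heteroscedastic error
y = 2.0 * x + e

def iv(zz):
    # solve the normal equation zz'(y - x b) = 0 for a scalar coefficient b
    return (zz @ y) / (zz @ x)

b0 = iv(z)                                    # classical instrumental variables estimate
ranks = np.argsort(np.argsort((y - b0 * x) ** 2)) / n  # ranks of squared residuals in [0, 1)
w = 1.0 - ranks                               # nonincreasing weights: big residuals, small weight
b1 = iv(w * z)                                # one instrumental *weighted* variables step
print(b0, b1)
```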

Orthogonal series estimation of band-limited regression functions

Waldemar Popiński (2014)

Applicationes Mathematicae

Similarity:

The problem of nonparametric function fitting using the complete orthogonal system of Whittaker cardinal functions $s_k$, $k = 0, \pm 1, \dots$, for the observation model $y_j = f(u_j) + \eta_j$, $j = 1,\dots,n$, is considered, where $f \in L^2(\mathbb{R}) \cap BL(\Omega)$ for $\Omega > 0$ is a band-limited function, the $u_j$ are independent random variables uniformly distributed in the observation interval $[-T,T]$, and the $\eta_j$ are uncorrelated or correlated random variables with zero mean value and finite variance, independent of the observation points. Conditions...
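The Whittaker cardinal functions are shifted sinc functions, $s_k(t) = \mathrm{sinc}(\Omega t/\pi - k)$; below is a rough sketch of fitting a truncated cardinal series by least squares to noisy samples at random points. All constants and the target function are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, Omega, K = 300, 5.0, 2 * np.pi, 20   # sample size, interval, band, truncation (assumed)
u = rng.uniform(-T, T, size=n)             # random observation points u_j in [-T, T]
f = lambda t: np.sinc(t) ** 2              # band-limited toy target (band 2*pi)
y = f(u) + 0.1 * rng.normal(size=n)        # y_j = f(u_j) + eta_j

# truncated system of cardinal functions s_k(t) = sinc(Omega t / pi - k), |k| <= K
k = np.arange(-K, K + 1)
S = np.sinc(Omega * u[:, None] / np.pi - k[None, :])
c_hat, *_ = np.linalg.lstsq(S, y, rcond=None)   # least squares coefficients

t = np.linspace(-T, T, 7)
f_hat = np.sinc(Omega * t[:, None] / np.pi - k[None, :]) @ c_hat
print(np.c_[f(t), f_hat])                  # compare target and fitted values
```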

Polynomials associated with exponential regression

J. Bukac (2001)

Applicationes Mathematicae

Similarity:

Fitting exponentials $a + be^{cx}$ to data by the least squares method is discussed. It is shown how the polynomials associated with this problem can be factored. The closure of the set of this type of functions defined on a finite domain is characterized and an existence theorem derived.
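One elementary way to compute such a fit (a sketch of standard profile least squares, not the paper's polynomial factorization technique): for fixed $c$ the model $a + be^{cx}$ is linear in $(a,b)$, so one can solve for $(a,b)$ by linear least squares and search over $c$. The data below are made up.

```python
import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])     # made-up decaying data
y = np.array([3.10, 2.40, 2.00, 1.75, 1.60, 1.52])

def profiled_sse(c):
    # for fixed c, the model a + b*exp(c*x) is linear in (a, b)
    A = np.column_stack([np.ones_like(x), np.exp(c * x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ np.array([a, b])) ** 2), a, b

c_grid = np.linspace(-5.0, -0.01, 500)            # search grid for the rate c
c_best = min(c_grid, key=lambda c: profiled_sse(c)[0])
sse, a, b = profiled_sse(c_best)
print(f"a={a:.3f}, b={b:.3f}, c={c_best:.3f}, SSE={sse:.5f}")
```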

On orthogonal series estimation of bounded regression functions

Waldemar Popiński (2001)

Applicationes Mathematicae

Similarity:

The problem of nonparametric estimation of a bounded regression function $f \in L^2([a,b]^d)$, $[a,b] \subset \mathbb{R}$, $d \geq 1$, using an orthonormal system of functions $e_k$, $k = 1,2,\dots$, is considered in the case when the observations follow the model $Y_i = f(X_i) + \eta_i$, $i = 1,\dots,n$, where $X_i$ and $\eta_i$ are i.i.d. copies of independent random variables $X$ and $\eta$, respectively, the distribution of $X$ has density $\varrho$, and $\eta$ has mean zero and finite variance. The estimators are constructed by proper truncation of the function $\hat{f}(x) = \sum_{k=1}^{N(n)} \hat{c}_k e_k(x)$, where the coefficients $\hat{c}_1, \dots, \hat{c}_{N(n)}$...
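A minimal sketch of such a truncated series estimator, assuming a uniform design, sample-mean coefficient estimates $\hat{c}_k = n^{-1}\sum_i Y_i e_k(X_i)$, and clipping at a known bound $L$ (the paper's exact coefficient formula and truncation rule may differ):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N, L = 500, 10, 1.0                     # sample size, series length, known bound (assumed)
X = rng.uniform(0.0, 1.0, size=n)          # i.i.d. design, uniform density on [0,1]
f = lambda x: np.sin(2 * np.pi * x)        # bounded toy regression function, |f| <= L
Y = f(X) + 0.3 * rng.normal(size=n)        # Y_i = f(X_i) + eta_i

# sample coefficients w.r.t. the orthonormal cosine system on [0,1]
k = np.arange(1, N + 1)
c_hat = (np.sqrt(2.0) * np.cos(np.pi * np.outer(X, k))).T @ Y / n

x = np.linspace(0.0, 1.0, 5)
series = np.sqrt(2.0) * np.cos(np.pi * np.outer(x, k)) @ c_hat
f_trunc = np.clip(series, -L, L)           # proper truncation at the known bound
print(np.c_[f(x), f_trunc])
```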

Optimal estimators in learning theory

V. N. Temlyakov (2006)

Banach Center Publications

Similarity:

This paper is a survey of recent results on some problems of supervised learning in the setting formulated by Cucker and Smale. Supervised learning, or learning from examples, refers to a process that builds, on the basis of available data of inputs $x_i$ and outputs $y_i$, $i = 1,\dots,m$, a function that best represents the relation between the inputs $x \in X$ and the corresponding outputs $y \in Y$. The goal is to find an estimator $f_z$, on the basis of the given data $z := ((x_1,y_1),\dots,(x_m,y_m))$, that approximates well the regression function...
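In this framework the estimator $f_z$ is usually produced by minimizing empirical risk over some hypothesis class; here is a minimal sketch of that idea (the class, data, and loss below are arbitrary illustrative choices, not taken from the survey).

```python
import numpy as np

rng = np.random.default_rng(5)
m = 100
x = rng.uniform(-1.0, 1.0, size=m)
y = np.cos(np.pi * x) + 0.2 * rng.normal(size=m)  # data z = ((x_1, y_1), ..., (x_m, y_m))

# empirical risk minimization over polynomials of degree <= 3:
# f_z minimizes (1/m) sum_i (f(x_i) - y_i)^2 within the class
f_z = np.poly1d(np.polyfit(x, y, deg=3))
print(np.mean((f_z(x) - y) ** 2))                  # empirical risk of the estimator f_z
```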

Method of averaging for the system of functional-differential inclusions

Teresa Janiak, Elżbieta Łuczak-Kumorek (1996)

Discussiones Mathematicae, Differential Inclusions, Control and Optimization

Similarity:

The basic idea of this paper is to give an existence theorem and the method of averaging for systems of functional-differential inclusions of the form $\dot{x}(t) \in F(t, x_t, y_t)$, $\dot{y}(t) \in G(t, x_t, y_t)$.

Solutions for the p-order Feigenbaum’s functional equation $h(g(x)) = g^p(h(x))$

Min Zhang, Jianguo Si (2014)

Annales Polonici Mathematici

Similarity:

This work deals with Feigenbaum’s functional equation $h(g(x)) = g^p(h(x))$ with $g(0) = 1$ and $-1 \leq g(x) \leq 1$ for $x \in [-1,1]$, where $p \geq 2$ is an integer, $g^p$ is the $p$-fold iteration of $g$, and $h$ is a strictly monotone odd continuous function on $[-1,1]$ with $h(0) = 0$ and $|h(x)| < |x|$ ($x \in [-1,1]$, $x \neq 0$). Using a constructive method, we discuss the existence of continuous unimodal even solutions of the above equation.

The law of large numbers and a functional equation

Maciej Sablik (1998)

Annales Polonici Mathematici

Similarity:

We deal with the linear functional equation (E) $g(x) = \sum_{i=1}^{r} p_i g(c_i x)$, where $g:(0,\infty) \to (0,\infty)$ is unknown, $(p_1,\dots,p_r)$ is a probability distribution, and the $c_i$’s are positive numbers. The equation (or some equivalent forms) was considered earlier under different assumptions (cf. [1], [2], [4], [5] and [6]). Using Bernoulli’s Law of Large Numbers we prove that $g$ has to be constant provided it has a limit at one end of the domain and is bounded at the other end.
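As a numerical illustration consistent with this result (not part of the paper's argument), one can iterate the operator $(Tg)(x) = \sum_{i=1}^{r} p_i g(c_i x)$ on a grid and watch the iterates flatten toward a constant. The distribution, constants, and starting function below are invented, and the interpolation's clamping at the grid ends plays the role of the limit/boundedness conditions at the ends of the domain.

```python
import numpy as np

x = np.linspace(0.01, 10.0, 2000)
p = np.array([0.3, 0.7])          # probability distribution (p_1, ..., p_r)
c = np.array([0.5, 1.8])          # positive constants c_i
g = np.log1p(x)                   # arbitrary positive starting function

for _ in range(300):
    # (T g)(x) = sum_i p_i * g(c_i x); np.interp clamps at the grid ends
    g = sum(pi * np.interp(ci * x, x, g) for pi, ci in zip(p, c))

print(g.max() - g.min())          # spread shrinks as the iterates approach a constant
```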

Minimax nonparametric prediction

Maciej Wilczyński (2001)

Applicationes Mathematicae

Similarity:

Let $U_0$ be a random vector taking its values in a measurable space and having an unknown distribution $P$, and let $U_1,\dots,U_n$ and $V_1,\dots,V_m$ be independent, simple random samples from $P$ of size $n$ and $m$, respectively. Further, let $z_1,\dots,z_k$ be real-valued functions defined on the same space. Assuming that only the first sample is observed, we find a minimax predictor $d^0(n, U_1,\dots,U_n)$ of the vector $Y_m = \sum_{j=1}^{m} (z_1(V_j),\dots,z_k(V_j))^T$ with respect to a quadratic error loss function.
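For intuition only, a natural baseline is the plug-in predictor that replaces $P$ by the empirical distribution of the observed sample, predicting $Y_m$ by $(m/n)\sum_{i=1}^{n}(z_1(U_i),\dots,z_k(U_i))^T$; the sketch below estimates its quadratic loss by simulation. This baseline is an assumption for illustration and need not coincide with the paper's minimax predictor $d^0$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 50, 30
z_funcs = [np.sin, np.cos]                     # the functions z_1, ..., z_k (toy, k = 2)

losses = []
for _ in range(2000):
    U = rng.exponential(size=n)                # observed sample U_1, ..., U_n from P
    V = rng.exponential(size=m)                # unobserved sample V_1, ..., V_m from P
    Y_m = np.array([zf(V).sum() for zf in z_funcs])          # target vector Y_m
    d = (m / n) * np.array([zf(U).sum() for zf in z_funcs])  # plug-in predictor
    losses.append(np.sum((d - Y_m) ** 2))                    # quadratic loss
print(np.mean(losses))                         # Monte Carlo risk of the plug-in predictor
```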