
Marginal problem, statistical estimation, and Möbius formula

Martin Janžura (2007)

Kybernetika

A solution to the marginal problem is obtained in the form of a parametric exponential (Gibbs–Markov) distribution, where the unknown parameters are obtained by an optimization procedure that agrees with the maximum likelihood (ML) estimate. Since the method is computationally demanding, we also propose an alternative approach, provided the original basis of marginals can be appropriately extended. Then the (numerically feasible) solution can be obtained either by the maximum pseudo-likelihood...
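
A minimal sketch of the kind of parametric exponential (Gibbs–Markov) family meant here, with hypothetical potential functions f_a and parameters θ_a (notation ours, not the paper's):

\[
p_\theta(x) \;=\; \exp\Big(\sum_a \theta_a f_a(x) \;-\; \Psi(\theta)\Big),
\qquad
\Psi(\theta) \;=\; \log \sum_x \exp\Big(\sum_a \theta_a f_a(x)\Big),
\]

where θ is tuned so that the model reproduces the prescribed marginals; within such a family this matching coincides with the maximum likelihood fit.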

Mean square error of the estimator of the conditional hazard function

Abbes Rabhi, Samir Benaissa, El Hadj Hamel, Boubaker Mechab (2013)

Applicationes Mathematicae

This paper deals with a scalar response conditioned by a functional random variable. The main goal is to estimate the conditional hazard function. An asymptotic formula for the mean square error of this estimator is derived, accounting as usual for both bias and variance.
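
For orientation, the standard definitions behind this setting (not specific to the paper): the conditional hazard of a scalar response Y given X = x, and the usual decomposition underlying asymptotic mean square error formulas, are

\[
h(y \mid x) \;=\; \frac{f(y \mid x)}{1 - F(y \mid x)}, \quad F(y \mid x) < 1,
\qquad
\mathrm{MSE}\big(\hat h(y \mid x)\big) \;=\; \mathrm{Bias}\big(\hat h(y \mid x)\big)^{2} + \mathrm{Var}\big(\hat h(y \mid x)\big),
\]

with f and F the conditional density and distribution function of Y given X = x.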

Minimax and Bayes estimation in deconvolution problem

Mikhail Ermakov (2008)

ESAIM: Probability and Statistics

We consider a deconvolution problem of estimating a signal blurred with a random noise. The noise is assumed to be a stationary Gaussian process multiplied by a weight function εh, where h ∈ L2(R1) and ε is a small parameter. The underlying solution is assumed to be infinitely differentiable. For this model we find asymptotically minimax and Bayes estimators. In the case of solutions having a finite number of derivatives, similar results were obtained in [G.K. Golubev and R.Z. Khasminskii,...
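
A schematic form of such an observation model (our notation; the kernel K and the exact noise structure are assumptions for illustration) is

\[
y(t) \;=\; \int K(t - u)\, f(u)\, du \;+\; \varepsilon\, h(t)\, \xi(t),
\]

where f is the unknown infinitely differentiable signal, K a known blurring kernel, ξ a stationary Gaussian process, and εh the small weight function described above.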

Minimax nonparametric hypothesis testing for ellipsoids and Besov bodies

Yuri I. Ingster, Irina A. Suslina (2010)

ESAIM: Probability and Statistics

We observe an infinite-dimensional Gaussian random vector x = ξ + v, where ξ is a sequence of standard Gaussian variables and v ∈ l₂ is an unknown mean. We consider the hypothesis testing problem H₀: v = 0 versus the alternatives H_{ε,τ}: v ∈ V_ε for sets V_ε = V_ε(τ, ρ_ε) ⊂ l₂. The sets V_ε are l_q-ellipsoids of semi-axes a_i = i^{-s}R/ε with an l_p-ellipsoid of semi-axes b_i = i^{-r}ρ_ε/ε removed, or similar Besov bodies B_{q,t;s}(R/ε) with Besov bodies B_{p,h;r}(ρ_ε/ε) removed. Here τ = (κ, R) or τ = (κ, h, t, R); κ = (p, q, r, s) are the parameters which define the sets V_ε for given radii...
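
For reference, the standard definition behind the terminology (normalization may differ from the paper's): an l_q-ellipsoid with semi-axes a = (a_i) is the set

\[
E_q(a) \;=\; \Big\{ v \in l_2 \;:\; \sum_{i \ge 1} \big( |v_i| / a_i \big)^{q} \le 1 \Big\},
\]

so the alternatives consist of means lying in a large ellipsoid but outside a smaller removed one, which keeps them separated from the null v = 0.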

Minimax nonparametric prediction

Maciej Wilczyński (2001)

Applicationes Mathematicae

Let U₀ be a random vector taking its values in a measurable space and having an unknown distribution P, and let U₁,...,Uₙ and V₁,...,V_m be independent simple random samples from P of sizes n and m, respectively. Further, let z₁,...,z_k be real-valued functions defined on the same space. Assuming that only the first sample is observed, we find a minimax predictor d⁰(n,U₁,...,Uₙ) of the vector Y_m = ∑_{j=1}^{m} (z₁(V_j),...,z_k(V_j))ᵀ with respect to a quadratic error loss function.

Minimax results for estimating integrals of analytic processes

Karim Benhenni, Jacques Istas (2010)

ESAIM: Probability and Statistics

The problem of predicting integrals of stochastic processes is considered. Linear estimators have been constructed by means of samples at N discrete times for processes having a fixed Hölderian regularity s > 0 in quadratic mean. It is known that the rate of convergence of the mean squared error is of order N^{-(2s+1)}. In the class of analytic processes H_p, p ≥ 1, we show that among all estimators, the linear ones are optimal. Moreover, using optimal coefficient estimators derived through...
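
A schematic form of such a linear estimator (our notation, not the paper's specific sampling design): the integral I(X) = ∫₀¹ X(t) dt is approximated from the N observations by

\[
\widehat{I}_N(X) \;=\; \sum_{i=1}^{N} c_i\, X(t_i),
\qquad
\mathbb{E}\Big[\big(\widehat{I}_N(X) - I(X)\big)^{2}\Big] \;=\; O\big(N^{-(2s+1)}\big),
\]

for suitable sampling points t_i and weights c_i, the stated rate holding under the quadratic-mean Hölder regularity assumption.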

Model selection and estimation of a component in additive regression

Xavier Gendre (2014)

ESAIM: Probability and Statistics

Let Y ∈ ℝⁿ be a random vector with mean s and covariance matrix σ²Pₙ ᵗPₙ, where Pₙ is some known n × n matrix. We construct a statistical procedure to estimate s under a moment condition on Y or under a Gaussian hypothesis. Both cases are developed for known or unknown σ². Our approach is free from any prior assumption on s and is based on non-asymptotic model selection methods. Given some collection of linear spaces {S_m, m ∈ ℳ}, we consider, for any m ∈ ℳ, the least-squares estimator ŝ_m of s in S_m....
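
A generic sketch of the penalized model selection step referred to here (standard form; the paper's exact penalty is not reproduced):

\[
\hat m \;=\; \arg\min_{m \in \mathcal{M}} \Big\{ \|Y - \hat s_m\|^{2} + \mathrm{pen}(m) \Big\},
\qquad
\hat s \;=\; \hat s_{\hat m},
\]

where pen(m) grows with the dimension of S_m so as to balance approximation bias against estimation variance.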

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2001)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...

Model selection for (auto-)regression with dependent data

Yannick Baraud, F. Comte, G. Viennet (2010)

ESAIM: Probability and Statistics

In this paper, we study the problem of nonparametric estimation of an unknown regression function from dependent data with sub-Gaussian errors. As a particular case, we handle the autoregressive framework. For this purpose, we consider a collection of finite-dimensional linear spaces (e.g. linear spaces spanned by wavelets or piecewise polynomials on a possibly irregular grid) and we estimate the regression function by a least-squares estimator built on a data-driven selected linear space among...

Model selection for estimating the non zero components of a Gaussian vector

Sylvie Huet (2006)

ESAIM: Probability and Statistics

We propose a method based on a penalised likelihood criterion for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart on Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.
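
For reference, a hedged sketch of the quantities involved (standard definitions; the paper's penalty is not reproduced): writing p_s for the Gaussian density with mean s, a Kullback risk and a penalised likelihood criterion over candidate numbers k of non-zero components take the form

\[
R(\hat s) \;=\; \mathbb{E}\big[\mathcal{K}(p_s, p_{\hat s})\big],
\qquad
\hat k \;=\; \arg\min_{k} \big\{ -\log L_k + \mathrm{pen}(k) \big\},
\]

where 𝒦 is the Kullback–Leibler divergence and L_k the maximised likelihood over mean vectors with k non-zero components.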
