Displaying 141 – 160 of 298


L₁-penalization in functional linear regression with subgaussian design

Vladimir Koltchinskii, Stanislav Minsker (2014)

Journal de l’École polytechnique — Mathématiques

We study functional regression with random subgaussian design and real-valued response. The focus is on problems in which the regression function can be well approximated by a functional linear model with the slope function being “sparse” in the sense that it can be represented as a sum of a small number of well separated “spikes”. This can be viewed as an extension of now classical sparse estimation problems to the case of infinite dictionaries. We study an estimator of the regression function...

L²-type contraction for systems of conservation laws

Denis Serre, Alexis F. Vasseur (2014)

Journal de l’École polytechnique — Mathématiques

The semi-group associated with the Cauchy problem for a scalar conservation law is known to be a contraction in L¹. However, it is not a contraction in Lᵖ for any p > 1. Leger showed in [20] that for a convex flux it is, however, a contraction in L² up to a suitable shift. We investigate in this paper whether such a contraction may happen for systems. Our approach is based on the relative entropy method. Our general analysis leads us to the new geometrical notion of Genuinely non-Temple systems. We treat in...

Mean square error of the estimator of the conditional hazard function

Abbes Rabhi, Samir Benaissa, El Hadj Hamel, Boubaker Mechab (2013)

Applicationes Mathematicae

This paper deals with a scalar response conditioned by a functional random variable. The main goal is to estimate the conditional hazard function. An asymptotic formula for the mean square error of this estimator is calculated considering as usual the bias and variance.
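The bias–variance decomposition of the mean square error invoked in this abstract can be checked numerically. A minimal Monte Carlo sketch with a toy estimator (the sample mean of exponential variables, not the paper's conditional hazard estimator) illustrates the identity MSE = bias² + variance:

```python
import numpy as np

# Monte Carlo check of the decomposition MSE = bias^2 + variance for a toy
# estimator: the sample mean of n exponential variables as an estimator of
# the true mean 1/lam.  The paper's estimator is a kernel estimator of the
# conditional hazard function, but the decomposition is the same.
rng = np.random.default_rng(2)
lam, n, reps = 2.0, 50, 20000
estimates = rng.exponential(scale=1.0 / lam, size=(reps, n)).mean(axis=1)

target = 1.0 / lam
mse = np.mean((estimates - target) ** 2)
bias = np.mean(estimates) - target
var = np.var(estimates)          # ddof=0, matching the algebraic identity
# mse equals bias**2 + var exactly (up to floating-point rounding)
```

With `np.var` at its default `ddof=0`, the identity holds exactly for the empirical quantities, not just asymptotically.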

Minimax and Bayes estimation in a deconvolution problem

Mikhail Ermakov (2008)

ESAIM: Probability and Statistics

We consider a deconvolution problem of estimating a signal blurred with a random noise. The noise is assumed to be a stationary Gaussian process multiplied by a weight function εh, where h ∈ L²(ℝ¹) and ε is a small parameter. The underlying solution is assumed to be infinitely differentiable. For this model we find asymptotically minimax and Bayes estimators. In the case of solutions having a finite number of derivatives, similar results were obtained in [G.K. Golubev and R.Z. Khasminskii,...

Minimax nonparametric prediction

Maciej Wilczyński (2001)

Applicationes Mathematicae

Let U₀ be a random vector taking its values in a measurable space and having an unknown distribution P, and let U₁,...,Uₙ and V₁,...,Vₘ be independent, simple random samples from P of size n and m, respectively. Further, let z₁,...,zₖ be real-valued functions defined on the same space. Assuming that only the first sample is observed, we find a minimax predictor d⁰(n,U₁,...,Uₙ) of the vector Yₘ = ∑ⱼ₌₁ᵐ (z₁(Vⱼ),...,zₖ(Vⱼ))ᵀ with respect to a quadratic error loss function.
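Since the unobserved Vⱼ share the distribution P of the observed Uᵢ, a natural baseline is the plug-in predictor that estimates each coordinate E zₗ(V) by the corresponding sample mean over the first sample, scaled by m. The sketch below implements this obvious unbiased predictor; it is not the minimax predictor d⁰ constructed in the paper, and the functions `z` are illustrative choices:

```python
import numpy as np

def plugin_predictor(u_sample, z_funcs, m):
    """Predict Y_m = sum_{j=1}^m (z_1(V_j), ..., z_k(V_j))^T by m times the
    vector of sample means of z_1, ..., z_k over the observed sample."""
    u = np.asarray(u_sample, dtype=float)
    return m * np.array([np.mean(f(u)) for f in z_funcs])

rng = np.random.default_rng(3)
u = rng.normal(size=1000)                  # observed sample U_1, ..., U_n
z = [lambda x: x, lambda x: x ** 2]        # k = 2 illustrative functions
y_hat = plugin_predictor(u, z, m=10)       # close to (0, 10) for N(0, 1) data
```

The minimax predictor of the paper shrinks relative to this naive choice; the sketch only fixes the object being predicted.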

Minimax results for estimating integrals of analytic processes

Karim Benhenni, Jacques Istas (2010)

ESAIM: Probability and Statistics

The problem of predicting integrals of stochastic processes is considered. Linear estimators have been constructed by means of samples at N discrete times for processes having a fixed Hölderian regularity s > 0 in quadratic mean. It is known that the rate of convergence of the mean squared error is of order N⁻⁽²ˢ⁺¹⁾. In the class of analytic processes Hp, p ≥ 1, we show that among all estimators, the linear ones are optimal. Moreover, using optimal coefficient estimators derived through...
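A linear estimator of an integral from N discrete samples is simply a weighted sum of the sampled values. The sketch below uses the generic trapezoidal weights on an equally spaced design over [0, 1]; these are not the optimal coefficients derived in the paper, just the simplest linear estimator of this type:

```python
import numpy as np

def integral_estimator(values):
    """Linear (trapezoidal) estimate of the integral over [0, 1] from N
    equally spaced samples: a weighted sum with trapezoid weights."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    w = np.full(n, 1.0 / (n - 1))          # interior weights
    w[0] = w[-1] = 0.5 / (n - 1)           # halved endpoint weights
    return float(np.dot(w, x))

# Deterministic sanity check on a smooth path: integral of sin(pi t) is 2/pi.
t = np.linspace(0.0, 1.0, 101)
est = integral_estimator(np.sin(np.pi * t))
```

For a process with Hölder regularity s in quadratic mean, such equally spaced linear designs achieve the N⁻⁽²ˢ⁺¹⁾ rate cited in the abstract.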

Model selection for estimating the non zero components of a Gaussian vector

Sylvie Huet (2006)

ESAIM: Probability and Statistics

We propose a method based on a penalised likelihood criterion for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.

Model selection for quantum homodyne tomography

Jonas Kahn (2009)

ESAIM: Probability and Statistics

This paper deals with a non-parametric problem coming from physics, namely quantum tomography, which consists in determining the quantum state of a mode of light through a homodyne measurement. We apply several model selection procedures: penalized projection estimators, where we may use pattern functions or wavelets, and penalized maximum likelihood estimators. In all these cases, we obtain oracle inequalities. For the former we also have a polynomial rate of convergence for the non-parametric problem....

Moderate deviations for the Durbin–Watson statistic related to the first-order autoregressive process

S. Valère Bitseki Penda, Hacène Djellout, Frédéric Proïa (2014)

ESAIM: Probability and Statistics

The purpose of this paper is to investigate moderate deviations for the Durbin–Watson statistic associated with the stable first-order autoregressive process where the driving noise is also given by a first-order autoregressive process. We first establish a moderate deviation principle both for the least squares estimator of the unknown parameter of the autoregressive process and for the serial correlation estimator associated with the driving noise. This enables us to provide a moderate deviation...
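The setting of this abstract is easy to reproduce numerically: an AR(1) process driven by AR(1) noise, fitted by least squares, with the Durbin–Watson statistic computed from the residuals. The sketch below uses illustrative parameter values; it only sets up the statistic, not the moderate deviation analysis of the paper:

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive differences of the
    residuals divided by their sum of squares.  Values near 2 indicate no
    first-order serial correlation; values well below 2 indicate positive
    serial correlation in the residuals."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Simulate a stable AR(1) process X_t = theta * X_{t-1} + eps_t whose driving
# noise eps_t is itself AR(1) (parameter rho), as in the paper's setting.
rng = np.random.default_rng(0)
n, theta, rho = 5000, 0.6, 0.5             # illustrative parameter choices
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + rng.normal()
x = np.zeros(n)
for t in range(1, n):
    x[t] = theta * x[t - 1] + eps[t]

# Least squares estimate of theta and the DW statistic of the residuals.
theta_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
dw = durbin_watson(x[1:] - theta_hat * x[:-1])
```

Note that with serially correlated noise the least squares estimator of theta is itself biased, which is precisely why the joint behaviour of the estimator and the Durbin–Watson statistic is of interest.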
