Semiparametric deconvolution with unknown noise variance

Catherine Matias

ESAIM: Probability and Statistics (2002)

  • Volume: 6, pages 271-292
  • ISSN: 1292-8100

Abstract

This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Nonparametric convolution models concern the case of an entirely known noise distribution and have been widely studied in the past decade. Their main property is the following: the more regular the noise distribution, the worse the rate of convergence for the estimation of the signal's density g [3]. Nevertheless, regularity assumptions on the signal density g improve those rates of convergence [15]. In this paper, we show that when the noise (assumed to be centered Gaussian) has an unknown variance σ², as is always the case in practical applications, the rates of convergence for the estimation of g deteriorate severely, whatever regularity is assumed for g. More precisely, the minimax risk for the pointwise estimation of g over a class of regular densities is lower bounded by a constant over log n. We construct two estimators of σ², in particular one that is consistent as soon as the signal has a finite first-order moment. As a consequence, we also note the deterioration of the rate of convergence for the estimation of the parameters in the nonlinear errors-in-variables model.
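The difficulty described in the abstract can be illustrated with a short simulation. The sketch below is illustrative only and is not the estimator constructed in the paper; the signal distribution (Laplace), the sample size, and the frequency t are arbitrary choices made here for demonstration. It uses the identity φ_Y(t) = φ_X(t)·exp(−σ²t²/2): since |φ_X(t)| ≤ 1, the quantity −2 log|φ̂_Y(t)|/t² over-estimates σ² at any fixed t, with a bias that depends on the unknown signal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Convolution model: observe Y = X + eps, eps ~ N(0, sigma^2), sigma unknown.
n = 200_000
sigma = 0.7
x = rng.laplace(scale=1.0, size=n)       # unobserved signal (finite first moment)
y = x + rng.normal(scale=sigma, size=n)  # observed sample

# phi_Y(t) = phi_X(t) * exp(-sigma^2 t^2 / 2), and |phi_X(t)| <= 1, so
# -2 log|phi_hat_Y(t)| / t^2 over-estimates sigma^2 at any fixed t.
# The bias term is -2 log|phi_X(t)| / t^2, which is unknown in practice.
t = 3.0
phi_hat = np.mean(np.exp(1j * t * y))            # empirical characteristic function
sigma2_hat = -2.0 * np.log(np.abs(phi_hat)) / t**2

print(sigma**2, sigma2_hat)  # sigma2_hat exceeds the true 0.49 at this fixed t
```

Letting t = t_n grow with n shrinks the signal-dependent bias, but only slowly, while the empirical characteristic function becomes exponentially noisy at high frequencies; this tension is what produces the logarithmic rates established in the paper.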

How to cite


Matias, Catherine. "Semiparametric deconvolution with unknown noise variance." ESAIM: Probability and Statistics 6 (2002): 271-292. <http://eudml.org/doc/245543>.

@article{Matias2002,
abstract = {This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Nonparametric convolution models concern the case of an entirely known noise distribution and have been widely studied in the past decade. Their main property is the following: the more regular the noise distribution, the worse the rate of convergence for the estimation of the signal's density $g$ [3]. Nevertheless, regularity assumptions on the signal density $g$ improve those rates of convergence [15]. In this paper, we show that when the noise (assumed to be centered Gaussian) has an unknown variance $\sigma^2$, as is always the case in practical applications, the rates of convergence for the estimation of $g$ deteriorate severely, whatever regularity is assumed for $g$. More precisely, the minimax risk for the pointwise estimation of $g$ over a class of regular densities is lower bounded by a constant over $\log n$. We construct two estimators of $\sigma^2$, in particular one that is consistent as soon as the signal has a finite first-order moment. As a consequence, we also note the deterioration of the rate of convergence for the estimation of the parameters in the nonlinear errors-in-variables model.},
author = {Matias, Catherine},
journal = {ESAIM: Probability and Statistics},
keywords = {convolution; deconvolution; density estimation; mixing distribution; normal mean mixture model; semiparametric mixture model; noise; variance estimation; minimax risk},
language = {eng},
pages = {271-292},
publisher = {EDP-Sciences},
title = {Semiparametric deconvolution with unknown noise variance},
url = {http://eudml.org/doc/245543},
volume = {6},
year = {2002},
}

TY - JOUR
AU - Matias, Catherine
TI - Semiparametric deconvolution with unknown noise variance
JO - ESAIM: Probability and Statistics
PY - 2002
PB - EDP-Sciences
VL - 6
SP - 271
EP - 292
AB - This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Nonparametric convolution models concern the case of an entirely known noise distribution and have been widely studied in the past decade. Their main property is the following: the more regular the noise distribution, the worse the rate of convergence for the estimation of the signal's density $g$ [3]. Nevertheless, regularity assumptions on the signal density $g$ improve those rates of convergence [15]. In this paper, we show that when the noise (assumed to be centered Gaussian) has an unknown variance $\sigma^2$, as is always the case in practical applications, the rates of convergence for the estimation of $g$ deteriorate severely, whatever regularity is assumed for $g$. More precisely, the minimax risk for the pointwise estimation of $g$ over a class of regular densities is lower bounded by a constant over $\log n$. We construct two estimators of $\sigma^2$, in particular one that is consistent as soon as the signal has a finite first-order moment. As a consequence, we also note the deterioration of the rate of convergence for the estimation of the parameters in the nonlinear errors-in-variables model.
LA - eng
KW - convolution; deconvolution; density estimation; mixing distribution; normal mean mixture model; semiparametric mixture model; noise; variance estimation; minimax risk
UR - http://eudml.org/doc/245543
ER -

References

  [1] R.J. Carroll and P. Hall, Optimal rates of convergence for deconvolving a density. J. Amer. Statist. Assoc. 83 (1988) 1184-1186. Zbl 0673.62033, MR 997599.
  [2] L. Devroye, Consistent deconvolution in density estimation. Canad. J. Statist. 17 (1989) 235-239. Zbl 0679.62029, MR 1033106.
  [3] J. Fan, Asymptotic normality for deconvolution kernel density estimators. Sankhya Ser. A 53 (1991) 97-110. Zbl 0729.62034, MR 1177770.
  [4] J. Fan, Global behavior of deconvolution kernel estimates. Statist. Sinica 1 (1991) 541-551. Zbl 0823.62032, MR 1130132.
  [5] J. Fan, On the optimal rates of convergence for nonparametric deconvolution problems. Ann. Statist. 19 (1991) 1257-1272. Zbl 0729.62033, MR 1126324.
  [6] J. Fan, Adaptively local one-dimensional subproblems with application to a deconvolution problem. Ann. Statist. 21 (1993) 600-610. Zbl 0785.62038, MR 1232507.
  [7] W. Feller, An introduction to probability theory and its applications, Vol. II. John Wiley & Sons Inc., New York (1971). Zbl 0219.60003, MR 270403.
  [8] R.D. Gill and B.Y. Levit, Applications of the Van Trees inequality: A Bayesian Cramér-Rao bound. Bernoulli 1 (1995) 59-79. Zbl 0830.62035, MR 1354456.
  [9] H. Ishwaran, Information in semiparametric mixtures of exponential families. Ann. Statist. 27 (1999) 159-177. Zbl 0932.62039, MR 1701106.
  [10] B.G. Lindsay, Exponential family mixture models (with least-squares estimators). Ann. Statist. 14 (1986) 124-137. Zbl 0587.62057, MR 829558.
  [11] M.C. Liu and R.L. Taylor, A consistent nonparametric density estimator for the deconvolution problem. Canad. J. Statist. 17 (1989) 427-438. Zbl 0694.62017, MR 1047309.
  [12] C. Matias and M.-L. Taupin, Minimax estimation of some linear functionals in the convolution model. Manuscript, Université Paris-Sud (2001). Zbl 1130.62323.
  [13] P. Medgyessy, Decomposition of superposition of density functions on discrete distributions. II. Magyar Tud. Akad. Mat. Fiz. Oszt. Közl. 21 (1973) 261-382. Zbl 0275.60023, MR 440660.
  [14] M.H. Neumann, On the effect of estimating the error density in nonparametric deconvolution. J. Nonparametr. Statist. 7 (1997) 307-330. Zbl 1003.62514, MR 1460203.
  [15] M. Pensky and B. Vidakovic, Adaptive wavelet estimator for nonparametric density deconvolution. Ann. Statist. 27 (1999) 2033-2053. Zbl 0962.62030, MR 1765627.
  [16] L. Stefanski and R.J. Carroll, Deconvoluting kernel density estimators. Statistics 21 (1990) 169-184. Zbl 0697.62035, MR 1054861.
  [17] L.A. Stefanski, Rates of convergence of some estimators in a class of deconvolution problems. Statist. Probab. Lett. 9 (1990) 229-235. Zbl 0686.62026, MR 1045189.
  [18] M.L. Taupin, Semi-parametric estimation in the non-linear errors-in-variables model. Ann. Statist. 29 (2001) 66-93. Zbl 1029.62039, MR 1833959.
  [19] A.W. van der Vaart, Asymptotic statistics. Cambridge University Press, Cambridge (1998). Zbl 0910.62001, MR 1652247.
  [20] A.W. van der Vaart and J.A. Wellner, Weak convergence and empirical processes: With applications to statistics. Springer-Verlag, New York (1996). Zbl 0862.60002, MR 1385671.
  [21] C.-H. Zhang, Fourier methods for estimating mixing densities and distributions. Ann. Statist. 18 (1990) 806-831. Zbl 0778.62037, MR 1056338.
