Semiparametric deconvolution with unknown noise variance

Catherine Matias

ESAIM: Probability and Statistics (2002)

  • Volume: 6, pages 271-292
  • ISSN: 1292-8100

Abstract

This paper deals with semiparametric convolution models, where the noise sequence has a centered Gaussian distribution with unknown variance. Nonparametric convolution models concern the case of an entirely known noise distribution and have been widely studied over the past decade. Their main property is the following: the more regular the noise distribution, the worse the rate of convergence for the estimation of the signal's density g [5]. Nevertheless, regularity assumptions on the signal density g improve these rates of convergence [15]. In this paper, we show that when the noise (assumed to be centered Gaussian) has an unknown variance σ² (as is always the case in practical applications), the rates of convergence for the estimation of g deteriorate severely, regardless of the regularity assumed for g. More precisely, the minimax risk for the pointwise estimation of g over a class of regular densities is bounded below by a constant over log n. We construct two estimators of σ², in particular an estimator that is consistent as soon as the signal has a finite first-order moment. As a consequence, we also note the deterioration of the rate of convergence for the estimation of the parameters in the nonlinear errors-in-variables model.
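The known-variance benchmark against which the abstract's results are stated can be made concrete. The sketch below (illustrative only, not taken from the paper) implements the classical deconvolution kernel density estimator of Stefanski and Carroll [16] with a sinc kernel: the empirical characteristic function of the noisy observations is divided by the Gaussian noise characteristic function and then Fourier-inverted. The function name, grid sizes, and bandwidth are assumptions chosen for demonstration.

```python
import numpy as np

def deconvolution_kde(x_grid, y, sigma, h):
    """Deconvolution kernel density estimate of the signal density g
    from noisy observations y_i = x_i + eps_i, eps_i ~ N(0, sigma^2),
    using a sinc kernel (characteristic function 1 on [-1/h, 1/h])."""
    # Frequency grid: the sinc kernel truncates the inversion to |t| <= 1/h.
    t = np.linspace(-1.0 / h, 1.0 / h, 801)
    dt = t[1] - t[0]
    # Empirical characteristic function of the observations Y.
    ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)
    # Divide out the Gaussian noise characteristic function exp(-sigma^2 t^2 / 2).
    integrand = ecf * np.exp(sigma**2 * t**2 / 2.0)
    # Fourier inversion on the evaluation grid (Riemann sum over t).
    kernel = np.exp(-1j * np.outer(x_grid, t))
    return np.real(kernel @ integrand) * dt / (2.0 * np.pi)
```

For instance, with observations drawn from an N(2, 1) signal plus N(0, 0.25) noise, the estimate near x = 2 approaches the true peak value 1/√(2π) ≈ 0.40 as the sample grows. The bandwidth h trades the truncation bias against the exponentially amplified stochastic error; it is precisely this amplification that, once σ² must also be estimated, produces the logarithmic lower bound discussed in the abstract.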

How to cite


Matias, Catherine. "Semiparametric deconvolution with unknown noise variance." ESAIM: Probability and Statistics 6 (2002): 271-292. <http://eudml.org/doc/104293>.

@article{Matias2002,
author = {Matias, Catherine},
journal = {ESAIM: Probability and Statistics},
keywords = {Convolution; deconvolution; density estimation; mixing distribution; normal mean mixture model; semiparametric mixture model; noise; variance estimation; minimax risk},
language = {eng},
month = {3},
pages = {271-292},
publisher = {EDP Sciences},
title = {Semiparametric deconvolution with unknown noise variance},
url = {http://eudml.org/doc/104293},
volume = {6},
year = {2002},
}

TY - JOUR
AU - Matias, Catherine
TI - Semiparametric deconvolution with unknown noise variance
JO - ESAIM: Probability and Statistics
DA - 2002/3//
PB - EDP Sciences
VL - 6
SP - 271
EP - 292
LA - eng
KW - Convolution; deconvolution; density estimation; mixing distribution; normal mean mixture model; semiparametric mixture model; noise; variance estimation; minimax risk
UR - http://eudml.org/doc/104293
ER -

References

  1. R.J. Carroll and P. Hall, Optimal rates of convergence for deconvolving a density. J. Amer. Statist. Assoc. 83 (1988) 1184-1186.
  2. L. Devroye, Consistent deconvolution in density estimation. Canad. J. Statist. 17 (1989) 235-239.
  3. J. Fan, Asymptotic normality for deconvolution kernel density estimators. Sankhya Ser. A 53 (1991) 97-110.
  4. J. Fan, Global behavior of deconvolution kernel estimates. Statist. Sinica 1 (1991) 541-551.
  5. J. Fan, On the optimal rates of convergence for nonparametric deconvolution problems. Ann. Statist. 19 (1991) 1257-1272.
  6. J. Fan, Adaptively local one-dimensional subproblems with application to a deconvolution problem. Ann. Statist. 21 (1993) 600-610.
  7. W. Feller, An introduction to probability theory and its applications, Vol. II. John Wiley & Sons Inc., New York (1971).
  8. R.D. Gill and B.Y. Levit, Applications of the Van Trees inequality: A Bayesian Cramér-Rao bound. Bernoulli 1 (1995) 59-79.
  9. H. Ishwaran, Information in semiparametric mixtures of exponential families. Ann. Statist. 27 (1999) 159-177.
  10. B.G. Lindsay, Exponential family mixture models (with least-squares estimators). Ann. Statist. 14 (1986) 124-137.
  11. M.C. Liu and R.L. Taylor, A consistent nonparametric density estimator for the deconvolution problem. Canad. J. Statist. 17 (1989) 427-438.
  12. C. Matias and M.-L. Taupin, Minimax estimation of some linear functionals in the convolution model. Manuscript, Université Paris-Sud (2001).
  13. P. Medgyessy, Decomposition of superposition of density functions on discrete distributions. II. Magyar Tud. Akad. Mat. Fiz. Oszt. Közl. 21 (1973) 261-382.
  14. M.H. Neumann, On the effect of estimating the error density in nonparametric deconvolution. J. Nonparametr. Statist. 7 (1997) 307-330.
  15. M. Pensky and B. Vidakovic, Adaptive wavelet estimator for nonparametric density deconvolution. Ann. Statist. 27 (1999) 2033-2053.
  16. L. Stefanski and R.J. Carroll, Deconvoluting kernel density estimators. Statistics 21 (1990) 169-184.
  17. L.A. Stefanski, Rates of convergence of some estimators in a class of deconvolution problems. Statist. Probab. Lett. 9 (1990) 229-235.
  18. M.L. Taupin, Semi-parametric estimation in the non-linear errors-in-variables model. Ann. Statist. 29 (2001) 66-93.
  19. A.W. van der Vaart, Asymptotic statistics. Cambridge University Press, Cambridge (1998).
  20. A.W. van der Vaart and J.A. Wellner, Weak convergence and empirical processes. With applications to statistics. Springer-Verlag, New York (1996).
  21. C.-H. Zhang, Fourier methods for estimating mixing densities and distributions. Ann. Statist. 18 (1990) 806-831.
