Model selection for estimating the non zero components of a Gaussian vector

Sylvie Huet

ESAIM: Probability and Statistics (2006)

  • Volume: 10, pages 164-183
  • ISSN: 1292-8100

Abstract

We propose a method based on a penalised likelihood criterion for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.
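The idea of a penalised likelihood criterion for this problem can be sketched as follows: keep the k largest observations in absolute value, set the rest to zero, and choose k to minimise the Gaussian log-likelihood plus a penalty term pen(k). The sketch below is illustrative only; the function name `select_nonzero` and the BIC-style default penalty are assumptions, not the Kullback-risk-calibrated penalty constructed in the paper.

```python
import numpy as np

def select_nonzero(x, sigma=1.0, pen=None):
    """Estimate how many components of the mean of x are non-zero,
    by minimising a penalised Gaussian likelihood criterion.

    NOTE: illustrative sketch -- the BIC-style default penalty is NOT
    the Kullback-risk-calibrated penalty proposed in the paper.
    """
    n = len(x)
    if pen is None:
        pen = lambda k: 2.0 * k * np.log(n)  # assumed BIC-like penalty
    # Keeping the k largest |x_i| and zeroing the rest leaves a residual
    # sum of squares equal to the sum of the n - k smallest x_i**2.
    x2 = np.sort(x ** 2)[::-1]
    crit = [x2[k:].sum() / sigma ** 2 + pen(k) for k in range(n + 1)]
    return int(np.argmin(crit))

x = np.array([10.0, -9.0, 8.0, 0.1, -0.2, 0.05, 0.0, 0.1, -0.05, 0.02])
print(select_nonzero(x))  # -> 3: the three clearly non-zero components
```

The trade-off is the usual one in model selection: each additional component always lowers the residual term, so the penalty must grow with k fast enough to avoid overfitting the pure-noise coordinates.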

How to cite


Huet, Sylvie. "Model selection for estimating the non zero components of a Gaussian vector." ESAIM: Probability and Statistics 10 (2006): 164-183. <http://eudml.org/doc/249747>.

@article{Huet2006,
abstract = { We propose a method based on a penalised likelihood criterion, for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk. },
author = {Huet, Sylvie},
journal = {ESAIM: Probability and Statistics},
keywords = {Kullback risk; model selection; penalised likelihood criteria},
language = {eng},
month = {3},
pages = {164-183},
publisher = {EDP Sciences},
title = {Model selection for estimating the non zero components of a Gaussian vector},
url = {http://eudml.org/doc/249747},
volume = {10},
year = {2006},
}

TY - JOUR
AU - Huet, Sylvie
TI - Model selection for estimating the non zero components of a Gaussian vector
JO - ESAIM: Probability and Statistics
DA - 2006/3//
PB - EDP Sciences
VL - 10
SP - 164
EP - 183
AB - We propose a method based on a penalised likelihood criterion, for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk.
LA - eng
KW - Kullback risk; model selection; penalised likelihood criteria
UR - http://eudml.org/doc/249747
ER -

References

  1. F. Abramovich, Y. Benjamini, D. Donoho and I.M. Johnstone, Adapting to unknown sparsity by controlling the false discovery rate. Technical Report 2000-19, Department of Statistics, Stanford University (2000).  
  2. H. Akaike, Information theory and an extension of the maximum likelihood principle, in 2nd International Symposium on Information Theory, B.N. Petrov and F. Csaki Eds., Budapest Akademia Kiado (1973) 267–281.  Zbl0283.62006
  3. H. Akaike, A Bayesian analysis of the minimum AIC procedure. Ann. Inst. Statist. Math. 30 (1978) 9–14.  Zbl0441.62007
  4. A. Antoniadis, I. Gijbels and G. Grégoire, Model selection using wavelet decomposition and applications. Biometrika 84 (1997) 751–763.  Zbl0892.62016
  5. Y. Baraud, S. Huet and B. Laurent, Adaptive tests of qualitative hypotheses. ESAIM: PS 7 (2003) 147–159.  Zbl1014.62052
  6. A. Barron, L. Birgé and P. Massart, Risk bounds for model selection via penalization. Probab. Theory Rel. Fields 113 (1999) 301–413.  Zbl0946.62036
  7. Y. Benjamini and Y. Hochberg, Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Statist. Soc. B 57 (1995) 289–300.  Zbl0809.62014
  8. L. Birgé and P. Massart, Gaussian model selection. J. Eur. Math. Soc. (JEMS) 3 (2001) 203–268.  Zbl1037.62001
  9. L. Birgé and P. Massart, A generalized Cp criterion for Gaussian model selection. Technical report, Univ. Paris 6, Paris 7, Paris (2001).  Zbl1037.62001
  10. B.S. Cirel'son, I.A. Ibragimov and V.N. Sudakov, Norms of Gaussian sample functions, in Proceedings of the 3rd Japan-USSR Symposium on Probability Theory, Berlin, Springer-Verlag. Springer Lect. Notes Math. 550 (1976) 20–41.  
  11. H.A. David, Order Statistics. Wiley Series in Probability and Mathematical Statistics. John Wiley and Sons, NY (1981).  
  12. G.E.P. Box and R.D. Meyer, An analysis for unreplicated fractional factorials. Technometrics 28 (1986) 11–18.  Zbl0586.62168
  13. D.P. Foster and R.A. Stine, Adaptive variable selection competes with Bayes expert. Technical report, The Wharton School of the University of Pennsylvania, Philadelphia (2002).  
  14. S. Huet, Comparison of methods for estimating the non zero components of a Gaussian vector. Technical report, INRA, MIA-Jouy, www.inra.fr/miaj/apps/cgi-bin/raptech.cgi (2005).  
  15. C.M. Hurvich and C.L. Tsai, Regression and time series model selection in small samples. Biometrika 76 (1989) 297–307.  Zbl0669.62085
  16. I.M. Johnstone and B. Silverman, Empirical Bayes selection of wavelet thresholds. Available from www.stats.ox.ac.uk/~silverma/papers.html (2003).  
  17. B. Laurent and P. Massart, Adaptive estimation of a quadratic functional by model selection. Ann. Statist. 28 (2000) 1302–1338.  Zbl1105.62328
  18. R. Nishii, Maximum likelihood principle and model selection when the true model is unspecified. J. Multivariate Anal. 27 (1988) 392–403.  Zbl0684.62026
  19. P.D. Haaland and M.A. O'Connell, Inference for effect-saturated fractional factorials. Technometrics 37 (1995) 82–93.  Zbl0825.62656
  20. J. Rissanen, Universal coding, information, prediction and estimation. IEEE Trans. Inform. Theory 30 (1984) 629–636.  Zbl0574.62003
  21. R.V. Lenth, Quick and easy analysis of unreplicated factorials. Technometrics 31(4) (1989) 469–473.  
  22. G. Schwarz, Estimating the dimension of a model. Ann. Statist. 6 (1978) 461–464.  Zbl0379.62005
