Data-driven penalty calibration: A case study for Gaussian mixture model selection
ESAIM: Probability and Statistics (2012)
- Volume: 15, pages 320-339
- ISSN: 1292-8100
Abstract
In the companion paper [C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: P&S 15 (2011) 41–68], a penalized likelihood criterion is proposed to select a Gaussian mixture model among a specific model collection. This criterion depends on unknown constants which have to be calibrated in practical situations. A “slope heuristics” method is described and tested to deal with this practical problem. In a model-based clustering context, the specific form of the considered Gaussian mixtures allows us to detect the noisy variables in order to improve the data clustering and its interpretation. The behavior of our data-driven criterion is illustrated on simulated datasets, a curve clustering example and a genomics application.
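To make the calibration idea concrete, here is a minimal sketch of the slope heuristics, not the authors' implementation: it assumes the simplest penalty shape, proportional to the model dimension D_m, whereas the criterion studied in the paper uses a more refined penalty shape. The function name `slope_heuristics`, the `frac` parameter and the numbers in the usage example are illustrative assumptions only. The maximised log-likelihoods of the most complex models are regressed on their dimensions; the fitted slope estimates the minimal penalty constant, and the selected penalty is twice that value.

```python
import numpy as np

def slope_heuristics(dims, loglik, frac=0.4):
    """Calibrate a penalty of the form pen(m) = kappa * D_m via the slope
    heuristics: for complex enough models the maximised log-likelihood is
    roughly linear in the dimension D_m with slope kappa_min, and the final
    penalty uses kappa = 2 * kappa_min."""
    dims = np.asarray(dims, dtype=float)
    loglik = np.asarray(loglik, dtype=float)
    # keep the fraction `frac` of models with the largest dimensions
    order = np.argsort(dims)
    keep = order[int((1.0 - frac) * len(order)):]
    # least-squares fit of log-likelihood against dimension on those models
    slope, _intercept = np.polyfit(dims[keep], loglik[keep], 1)
    kappa = 2.0 * max(slope, 0.0)
    # penalised criterion; the selected model minimises it
    crit = -loglik + kappa * dims
    return int(np.argmin(crit)), kappa

# usage on a hypothetical collection of (dimension, max log-likelihood) pairs
dims = [3, 7, 12, 18, 25, 33, 42, 52]
loglik = [-410.0, -395.0, -388.0, -384.0, -381.5, -379.8, -378.4, -377.1]
best_index, kappa = slope_heuristics(dims, loglik)
```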
How to cite
Maugis, Cathy, and Michel, Bertrand. "Data-driven penalty calibration: A case study for Gaussian mixture model selection." ESAIM: Probability and Statistics 15 (2012): 320-339. <http://eudml.org/doc/222487>.
@article{Maugis2012,
abstract = {
In the companion paper [C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: P&S 15 (2011) 41–68], a penalized likelihood criterion is proposed to select a Gaussian mixture model among a specific model collection. This criterion depends on unknown constants which have to be calibrated in practical situations. A “slope heuristics” method is described and tested to deal with this practical problem. In a model-based clustering context, the specific form of the considered Gaussian mixtures allows us to detect the noisy variables in order to improve the data clustering and its interpretation. The behavior of our data-driven criterion is illustrated on simulated datasets, a curve clustering example and a genomics application.
},
author = {Maugis, Cathy and Michel, Bertrand},
journal = {ESAIM: Probability and Statistics},
keywords = {slope heuristics; penalized likelihood criterion; model-based clustering; noisy variable detection},
language = {eng},
month = {1},
pages = {320-339},
publisher = {EDP Sciences},
title = {Data-driven penalty calibration: A case study for Gaussian mixture model selection},
url = {http://eudml.org/doc/222487},
volume = {15},
year = {2012},
}
TY - JOUR
AU - Maugis, Cathy
AU - Michel, Bertrand
TI - Data-driven penalty calibration: A case study for Gaussian mixture model selection
JO - ESAIM: Probability and Statistics
DA - 2012/1//
PB - EDP Sciences
VL - 15
SP - 320
EP - 339
AB - In the companion paper [C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: P&S 15 (2011) 41–68], a penalized likelihood criterion is proposed to select a Gaussian mixture model among a specific model collection. This criterion depends on unknown constants which have to be calibrated in practical situations. A “slope heuristics” method is described and tested to deal with this practical problem. In a model-based clustering context, the specific form of the considered Gaussian mixtures allows us to detect the noisy variables in order to improve the data clustering and its interpretation. The behavior of our data-driven criterion is illustrated on simulated datasets, a curve clustering example and a genomics application.
LA - eng
KW - slope heuristics; penalized likelihood criterion; model-based clustering; noisy variable detection
UR - http://eudml.org/doc/222487
ER -
References
- C. Abraham, P.A. Cornillon, E. Matzner-Løber and N. Molinari, Unsupervised curve clustering using B-splines. Scand. J. Stat. Th. Appl. 30 (2003) 581–595.
- H. Akaike, Information theory and an extension of the maximum likelihood principle, in Second International Symposium on Information Theory (Tsahkadsor, 1971). Akadémiai Kiadó, Budapest (1973) 267–281.
- H. Akaike, A new look at the statistical model identification. IEEE Trans. Automatic Control AC-19 (1974) 716–723.
- S. Arlot, Rééchantillonnage et sélection de modèles, Ph.D. thesis, Université Paris-Sud XI (2007).
- S. Arlot and P. Massart, Slope heuristics for heteroscedastic regression on a random design. Submitted to the Annals of Statistics (2008).
- D. Babusiaux, S. Barreau and P.-R. Bauquis, Oil and gas exploration and production, reserves, costs, contracts. Technip, Paris (2007).
- J.D. Banfield and A.E. Raftery, Model-based Gaussian and non-Gaussian clustering. Biometrics 49 (1993) 803–821.
- A. Barron, L. Birgé and P. Massart, Risk bounds for model selection via penalization. Prob. Th. Rel. Fields 113 (1999) 301–413.
- J.-P. Baudry, Clustering through model selection criteria. Poster session at One Day Statistical Workshop in Lisieux, June 2007. http://www.math.u-psud.fr/~baudry
- A. Berlinet, G. Biau and L. Rouvière, Functional classification with wavelets. To appear in Annales de l'ISUP (2008).
- C. Biernacki, G. Celeux and G. Govaert, Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans. Pattern Anal. Mach. Intell. 22 (2000) 719–725.
- C. Biernacki, G. Celeux, G. Govaert and F. Langrognet, Model-based cluster and discriminant analysis with the MIXMOD software. Comp. Stat. Data Anal. 51 (2006) 587–600.
- L. Birgé and P. Massart, Gaussian model selection. J. Eur. Math. Soc. (JEMS) 3 (2001) 203–268.
- L. Birgé and P. Massart, Minimal penalties for Gaussian model selection. Prob. Th. Rel. Fields 138 (2006) 33–73.
- K.-E. Blake and C. Merz, UCI repository of machine learning databases (1999). http://mlearn.ics.uci.edu/MLSummary.html
- L. Breiman, J.H. Friedman, R.A. Olshen and C.J. Stone, Classification and regression trees. Wadsworth Statistics/Probability Series. Wadsworth Advanced Books and Software, Belmont, CA (1984).
- G. Celeux and G. Govaert, Gaussian parsimonious clustering models. Patt. Recog. 28 (1995) 781–793.
- A.P. Dempster, N.M. Laird and D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B Methodol. 39 (1977) 1–38, with discussion.
- S. Gagnot, J.-P. Tamby, M.-L. Martin-Magniette, F. Bitton, L. Taconnat, S. Balzergue, S. Aubourg, J.-P. Renou, A. Lecharny and V. Brunaud, CATdb: a public access to Arabidopsis transcriptome data from the URGV-CATMA platform. Nucleic Acids Res. 36 (2008) 986–990.
- L.A. García-Escudero and A. Gordaliza, A proposal for robust curve clustering. J. Class. 22 (2005) 185–201.
- P.J. Huber, Robust Statistics. Wiley (1981).
- G.M. James and C.A. Sugar, Clustering for sparsely sampled functional data. J. Am. Stat. Assoc. 98 (2003) 397–408.
- D. Jiang, C. Tang and A. Zhang, Cluster analysis for gene expression data: A survey. IEEE Trans. Knowl. Data Eng. 16 (2004) 1370–1386.
- C. Keribin, Consistent estimation of the order of mixture models. Sankhyā Ser. A 62 (2000) 49–66.
- E. Lebarbier, Detecting multiple change-points in the mean of Gaussian process by model selection. Signal Proc. 85 (2005) 717–736.
- V. Lepez, Potentiel de réserves d'un bassin pétrolier: modélisation et estimation, Ph.D. thesis, Université Paris Sud (2002).
- C. Lurin, C. Andréas, S. Aubourg, M. Bellaoui, F. Bitton, C. Bruyère, M. Caboche, J. Debast, C. Gualberto, B. Hoffmann, M. Lecharny, A. Le Ret, M.-L. Martin-Magniette, H. Mireau, N. Peeters, J.-P. Renou, B. Szurek, L. Taconnat and I. Small, Genome-wide analysis of Arabidopsis pentatricopeptide repeat proteins reveals their essential role in organelle biogenesis. Plant Cell 16 (2004) 2089–2103.
- P. Ma, W. Castillo-Davis, C. Zhong and J.S. Liu, A data-driven clustering method for time course gene expression data. Nucleic Acids Res. 34 (2006) 1261–1269.
- C.L. Mallows, Some comments on Cp. Technometrics 37 (1973) 362–372.
- P. Massart, Concentration inequalities and model selection. Lecture Notes in Mathematics, Vol. 1896. Springer, Berlin (2007). Lectures from the 33rd Summer School on Probability Theory held in Saint-Flour, July 6–23, 2003.
- C. Maugis, G. Celeux and M.-L. Martin-Magniette, Variable selection for clustering with Gaussian mixture models. Biometrics 65 (2009) 701–709.
- C. Maugis, G. Celeux and M.-L. Martin-Magniette, Variable selection in model-based clustering: A general variable role modeling. Comput. Stat. Data Anal. 53 (2009) 3872–3882.
- C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: P&S 15 (2011) 41–68.
- B. Michel, Modélisation de la production d'hydrocarbures dans un bassin pétrolier, Ph.D. thesis, Université Paris-Sud 11 (2008).
- B.P. Percival and A.T. Walden, Wavelet methods for time series analysis. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, New York (2000).
- A.E. Raftery and N. Dean, Variable selection for model-based clustering. J. Am. Stat. Assoc. 101 (2006) 168–178.
- G. Schwarz, Estimating the dimension of a model. Ann. Stat. 6 (1978) 461–464.
- R. Sharan, R. Elkon and R. Shamir, Cluster analysis and its applications to gene expression data, in Ernst Schering Workshop on Bioinformatics and Genome Analysis. Springer-Verlag (2002).
- T. Tarpey and K.K.J. Kinateder, Clustering functional data. J. Class. 20 (2003) 93–114.
- F. Villers, Tests et sélection de modèles pour l'analyse de données protéomiques et transcriptomiques, Ph.D. thesis, Université Paris-Sud 11 (2007).