Displaying similar documents to “A non asymptotic penalized criterion for Gaussian mixture model selection”

A non asymptotic penalized criterion for Gaussian mixture model selection

Cathy Maugis, Bertrand Michel (2012)

ESAIM: Probability and Statistics

Similarity:

Specific Gaussian mixtures are considered to solve variable selection and clustering problems simultaneously. A non-asymptotic penalized criterion is proposed to choose the number of mixture components and the relevant variable subset. Because of the non-linearity of the associated Kullback-Leibler contrast on Gaussian mixtures, a general model selection theorem for maximum likelihood estimation proposed by [Massart, Concentration Inequalities and Model Selection. Springer, Berlin (2007). Lectures from the 33rd Summer School on...
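
To make the form of such a criterion concrete, here is a minimal sketch (not the authors' code): choose the number of components K by minimizing -log-likelihood + lam·D_K, where D_K is the model dimension. It assumes scikit-learn's GaussianMixture; the variable-selection part is omitted, and the constant lam is exactly the unknown that the companion paper below calibrates, so the BIC-like default log(n)/2 is purely illustrative.

```python
# Minimal sketch of penalized-likelihood selection of the number of
# mixture components K (illustrative only; the paper's calibrated
# penalty and variable-selection step are not reproduced here).
import numpy as np
from sklearn.mixture import GaussianMixture

def model_dimension(k, d):
    # free parameters of a k-component full-covariance Gaussian mixture:
    # (k - 1) weights + k*d means + k symmetric d x d covariances
    return (k - 1) + k * d + k * d * (d + 1) // 2

def select_n_components(X, k_max=10, lam=None):
    n, d = X.shape
    lam = 0.5 * np.log(n) if lam is None else lam  # BIC-like, illustrative
    crit = {}
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             n_init=5, random_state=0).fit(X)
        loglik = n * gm.score(X)  # score() is the mean log-likelihood
        crit[k] = -loglik + lam * model_dimension(k, d)
    return min(crit, key=crit.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
print("selected K:", select_n_components(X, k_max=6))  # typically 2
```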

Data-driven penalty calibration: A case study for Gaussian mixture model selection

Cathy Maugis, Bertrand Michel (2011)

ESAIM: Probability and Statistics

Similarity:

In the companion paper [C. Maugis and B. Michel, A non asymptotic penalized criterion for Gaussian mixture model selection. ESAIM: Probab. Stat. 15 (2011) 41–68], a penalized likelihood criterion is proposed to select a Gaussian mixture model among a specific model collection. This criterion depends on unknown constants which have to be calibrated in practical situations. A “slope heuristics” method is described and experimented with to deal with this practical problem. In a model-based clustering context,...
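
As a rough illustration of the slope heuristics idea (a sketch under simplifying assumptions, not the paper's experiments; practical implementations often prefer a dimension-jump variant): for the most complex models, the maximized log-likelihood grows roughly linearly in the model dimension D_m, and the fitted slope kappa yields the calibrated penalty pen(m) = 2*kappa*D_m.

```python
# Minimal sketch of slope-heuristics penalty calibration for choosing
# the number of Gaussian mixture components (illustrative assumptions:
# full covariances, slope fitted on the n_slope largest models).
import numpy as np
from sklearn.mixture import GaussianMixture

def slope_heuristics_select(X, k_max=12, n_slope=5):
    n, d = X.shape
    dims, logliks = [], []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             n_init=5, random_state=0).fit(X)
        dims.append((k - 1) + k * d + k * d * (d + 1) // 2)
        logliks.append(n * gm.score(X))
    dims, logliks = np.array(dims), np.array(logliks)
    # slope of log-likelihood vs. dimension over the most complex models
    kappa = np.polyfit(dims[-n_slope:], logliks[-n_slope:], 1)[0]
    crit = -logliks + 2.0 * kappa * dims  # calibrated penalty 2*kappa*D_m
    return int(np.argmin(crit)) + 1

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (150, 2)), rng.normal(3, 1, (150, 2))])
print("selected K:", slope_heuristics_select(X))  # typically 2
```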

Adaptive density estimation for clustering with Gaussian mixtures

C. Maugis-Rabusseau, B. Michel (2013)

ESAIM: Probability and Statistics

Similarity:

Gaussian mixture models are widely used to study clustering problems. These model-based clustering methods require an accurate estimation of the unknown data density by Gaussian mixtures. In Maugis and Michel (2009), a penalized maximum likelihood estimator is proposed for automatically selecting the number of mixture components. In the present paper, a collection of univariate densities whose logarithm is locally β-Hölder with moment and tail conditions is considered. We show that this...

Partition-based conditional density estimation

S. X. Cohen, E. Le Pennec (2013)

ESAIM: Probability and Statistics

Similarity:

We propose a general partition-based strategy to estimate conditional density with candidate densities that are piecewise constant with respect to the covariate. Capitalizing on a general penalized maximum likelihood model selection result, we prove, on two specific examples, that the penalty of each model can be chosen roughly proportional to its dimension. We first study a strategy in which the densities are chosen piecewise constant with respect to the covariate. We then consider Gaussian...
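
To illustrate the kind of estimator this strategy selects among, here is a minimal sketch (fixed partition, histogram densities per cell; all names are hypothetical): the paper's actual procedure selects the partition itself by penalized maximum likelihood, with a penalty roughly proportional to the model dimension.

```python
# Minimal sketch of a piecewise-constant conditional density estimate:
# partition the covariate range into cells and estimate the density of
# y by a histogram within each cell (the partition is fixed here; the
# paper selects it by penalized maximum likelihood).
import numpy as np

def piecewise_conditional_density(x, y, n_cells=4, n_bins=10):
    x_edges = np.linspace(x.min(), x.max(), n_cells + 1)
    y_edges = np.linspace(y.min(), y.max(), n_bins + 1)
    dens = np.zeros((n_cells, n_bins))
    cell = np.clip(np.digitize(x, x_edges) - 1, 0, n_cells - 1)
    for c in range(n_cells):
        yc = y[cell == c]
        if yc.size:
            dens[c], _ = np.histogram(yc, bins=y_edges, density=True)
    return x_edges, y_edges, dens  # dens[c, b]: f(y in bin b | x in cell c)

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 500)
y = rng.normal(np.where(x < 0.5, -2.0, 2.0), 1.0)  # law of y shifts with x
print(piecewise_conditional_density(x, y)[2].round(2))
```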

An ℓ1-oracle inequality for the Lasso in finite mixture Gaussian regression models

Caroline Meynet (2013)

ESAIM: Probability and Statistics

Similarity:

We consider a finite mixture of Gaussian regression models for high-dimensional heterogeneous data where the number of covariates may be much larger than the sample size. We propose to estimate the unknown conditional mixture density by an ℓ1-penalized maximum likelihood estimator. We shall provide an ℓ1-oracle inequality satisfied by this Lasso estimator with the Kullback–Leibler loss. In particular, we give a condition on the regularization parameter of...
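
For intuition about the estimator being analyzed, here is a minimal EM-style sketch of an ℓ1-penalized mixture of Gaussian regressions (an assumption-laden illustration, not the paper's algorithm or tuning; the paper's contribution is the oracle inequality and the condition on the regularization parameter). The M-step fits a weighted Lasso per component.

```python
# Minimal EM sketch for an l1-penalized finite mixture of Gaussian
# regressions (illustrative; fixed alpha, no intercepts, no restarts).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

def em_lasso_mixture(X, y, K=2, alpha=0.1, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi, beta, sigma = np.full(K, 1.0 / K), rng.normal(size=(K, p)), np.ones(K)
    for _ in range(n_iter):
        # E-step: responsibilities under each regression component
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k])
                         for k in range(K)], axis=1) + 1e-300
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted Lasso fit per component
        for k in range(K):
            w = resp[:, k]
            fit = Lasso(alpha=alpha, fit_intercept=False).fit(
                X, y, sample_weight=w)
            beta[k] = fit.coef_
            resid = y - X @ beta[k]
            sigma[k] = max(np.sqrt((w * resid**2).sum() / w.sum()), 1e-3)
            pi[k] = w.mean()
    return pi, beta, sigma

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 20))
z = rng.random(400) < 0.5
b = np.zeros((2, 20)); b[0, 0], b[1, 1] = 3.0, -3.0
y = np.where(z, X @ b[0], X @ b[1]) + 0.5 * rng.normal(size=400)
print(np.round(em_lasso_mixture(X, y, K=2, alpha=0.05)[1], 1))
```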