An ℓ1-oracle inequality for the Lasso in finite mixture Gaussian regression models
Published online by Cambridge University Press: 04 November 2013
Abstract
We consider a finite mixture of Gaussian regression models for high-dimensional heterogeneous data where the number of covariates may be much larger than the sample size. We propose to estimate the unknown conditional mixture density by an ℓ1-penalized maximum likelihood estimator. We shall provide an ℓ1-oracle inequality satisfied by this Lasso estimator with the Kullback–Leibler loss. In particular, we give a condition on the regularization parameter of the Lasso to obtain such an oracle inequality. Our aim is twofold: to extend the ℓ1-oracle inequality established by Massart and Meynet [12] in the homogeneous Gaussian linear regression case, and to present a complementary result to Städler et al. [18], by studying the Lasso for its ℓ1-regularization properties rather than considering it as a variable selection procedure. Our oracle inequality shall be deduced from a finite mixture Gaussian regression model selection theorem for ℓ1-penalized maximum likelihood conditional density estimation, which is inspired by Vapnik's method of structural risk minimization [23] and by the theory of model selection for maximum likelihood estimators developed by Massart [11].
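To fix ideas, the display below sketches the standard setup for ℓ1-penalized maximum likelihood estimation in a finite mixture of Gaussian regressions; the notation (K components, mixture proportions π_k, component regression vectors β_k, and variances σ_k²) is assumed here for illustration and is not taken verbatim from the paper.

```latex
% Conditional mixture density of Y given X = x, with K Gaussian components
% (notation assumed for illustration, not taken from the paper):
s_\theta(y \mid x) \;=\; \sum_{k=1}^{K} \pi_k \,
  \frac{1}{\sqrt{2\pi}\,\sigma_k}
  \exp\!\Big( -\frac{(y - \beta_k^{\top} x)^2}{2\sigma_k^2} \Big),
\qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1.

% The Lasso estimator penalizes the empirical negative log-likelihood by the
% l1-norm of the regression coefficients, with regularization parameter lambda:
\hat{s}^{\mathrm{Lasso}}(\lambda) \;\in\;
  \operatorname*{arg\,min}_{\theta}
  \Big\{ -\frac{1}{n} \sum_{i=1}^{n} \ln s_\theta(y_i \mid x_i)
  \;+\; \lambda \sum_{k=1}^{K} \lVert \beta_k \rVert_1 \Big\}.
```

An ℓ1-oracle inequality then bounds the Kullback–Leibler risk of the Lasso estimator in terms of a penalized approximation error, provided λ is chosen large enough; the condition on λ made explicit in the paper is exactly of this kind.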
- Type: Research Article
- Copyright: © EDP Sciences, SMAI, 2013