Penalization versus Goldenshluger-Lepski strategies in warped bases regression
Published online by Cambridge University Press: 17 May 2013
Abstract
This paper deals with the problem of estimating a regression function f in a random design framework. We build and study two adaptive estimators based on model selection, applied with warped bases. We start with a collection of finite-dimensional linear spaces, spanned by orthonormal bases. Instead of expanding the target function f directly on these bases, we rather consider the expansion of h = f ∘ G⁻¹, where G is the cumulative distribution function of the design, following Kerkyacharian and Picard [Bernoulli 10 (2004) 1053–1105]. The data-driven selection of the (best) space is done with two strategies: we use both a penalized version of a "warped contrast" and a model selection device in the spirit of Goldenshluger and Lepski [Ann. Stat. 39 (2011) 1608–1632]. By these methods, we propose two functions ĥ_l (l = 1, 2), which are easier to compute than least-squares estimators. We establish nonasymptotic mean-squared integrated risk bounds for the resulting estimators, f̂_l = ĥ_l ∘ G if G is known, or f̂_l = ĥ_l ∘ Ĝ (l = 1, 2) otherwise, where Ĝ is the empirical distribution function. We also study adaptive properties, in the case where the regression function belongs to a Besov or Sobolev space, and compare the theoretical and practical performances of the two selection rules.
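To make the warping idea concrete, here is a minimal sketch, in Python, of the penalization strategy under illustrative assumptions: a trigonometric basis of L²([0, 1]), a penalty of the form pen(m) = c·m/n with an uncalibrated constant, and the plug-in f̂ = ĥ ∘ Ĝ with Ĝ the empirical distribution function. The names fit_warped, trig_basis, max_dim and c_pen are hypothetical and not the paper's exact specification.

```python
import numpy as np

def trig_basis(j, u):
    """Orthonormal trigonometric basis of L2([0,1]): 1, sqrt(2)cos(2*pi*k*u), sqrt(2)sin(2*pi*k*u)."""
    u = np.asarray(u, dtype=float)
    if j == 0:
        return np.ones_like(u)
    k = (j + 1) // 2
    if j % 2 == 1:
        return np.sqrt(2.0) * np.cos(2.0 * np.pi * k * u)
    return np.sqrt(2.0) * np.sin(2.0 * np.pi * k * u)

def fit_warped(X, Y, max_dim=20, c_pen=2.0):
    """Warped-basis regression with penalized dimension selection (illustrative sketch)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = len(X)
    # Empirical CDF evaluated at the design points: G_hat(X_i) = rank(X_i)/n.
    u = (np.argsort(np.argsort(X)) + 1) / n
    # Coefficients of h = f o G^{-1}: a_j ~ (1/n) sum_i phi_j(G_hat(X_i)) Y_i,
    # a plain empirical mean rather than a least-squares solve.
    a = np.array([np.mean(trig_basis(j, u) * Y) for j in range(max_dim)])
    # Penalized warped contrast: the minimal contrast on the m-dimensional
    # space equals -sum_{j<m} a_j^2, so minimize -||h_hat_m||^2 + c*m/n over m.
    crits = [-np.sum(a[:m] ** 2) + c_pen * m / n for m in range(1, max_dim + 1)]
    m_star = 1 + int(np.argmin(crits))
    X_sorted = np.sort(X)

    def f_hat(x):
        # Plug-in estimator f_hat = h_hat o G_hat, with G_hat the empirical CDF.
        g = np.searchsorted(X_sorted, np.asarray(x, float), side="right") / n
        return sum(a[j] * trig_basis(j, g) for j in range(m_star))

    return f_hat, m_star

# Hypothetical usage on simulated data with a non-uniform design:
rng = np.random.default_rng(0)
X = rng.beta(2.0, 5.0, size=500)
Y = np.sin(4.0 * np.pi * X) + 0.3 * rng.standard_normal(500)
f_hat, m_star = fit_warped(X, Y)
```

Because the warped design points Ĝ(X_i) are (close to) uniformly distributed, the coefficient estimates reduce to empirical means, which is what makes ĥ_l cheaper to compute than a least-squares estimator; the Goldenshluger-Lepski alternative would replace the penalized criterion above by pairwise comparisons of the projection estimators across dimensions.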
- Type: Research Article
- Copyright: © EDP Sciences, SMAI, 2013