Full metadata record

DC Field: Value (Language)

dc.contributor.author: Horowitz, Joel (en_US)
dc.contributor.author: Huang, Jian (en_US)
dc.description.abstract: We consider estimation of a linear or nonparametric additive model in which a few coefficients or additive components are large and may be objects of substantive interest, whereas others are small but not necessarily zero. The number of small coefficients or additive components may exceed the sample size. It is not known which coefficients or components are large and which are small. The large coefficients or additive components can be estimated with a smaller mean-square error or integrated mean-square error if the small ones can be identified and the covariates associated with them dropped from the model. We give conditions under which several penalized least squares procedures distinguish correctly between large and small coefficients or additive components with probability approaching 1 as the sample size increases. The results of Monte Carlo experiments and an empirical example illustrate the benefits of our methods. (en_US)
dc.publisher: Centre for Microdata Methods and Practice (cemmap), London (en_US)
dc.relation.ispartofseries: cemmap working paper, CWP17/12 (en_US)
dc.subject.keyword: penalized regression (en_US)
dc.subject.keyword: high-dimensional data (en_US)
dc.subject.keyword: variable selection (en_US)
dc.title: Penalized estimation of high-dimensional models under a generalized sparsity condition (en_US)
dc.type: Working Paper (en_US)
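
The abstract above describes penalized least squares procedures that separate a few large coefficients from many small but nonzero ones. The paper's specific estimators, penalty choices, and regularity conditions are not reproduced here; the following is only a minimal illustrative sketch of the general idea, using scikit-learn's Lasso on simulated data. The dimensions, coefficient values, noise level, penalty level, and the post-selection refit step are all assumptions chosen for illustration, not the authors' procedure.

```python
# Minimal illustrative sketch (not the paper's estimator): Lasso-penalized least
# squares on simulated data with a few large coefficients and many small,
# nonzero ones. All numbers below (dimensions, coefficient sizes, penalty
# level) are assumptions chosen for illustration.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, p_large, p_small = 200, 3, 500          # sample size; counts of "large" and "small" coefficients
p = p_large + p_small                      # total dimension may exceed the sample size

beta = np.concatenate([
    np.array([3.0, -2.0, 1.5]),            # large coefficients of substantive interest
    rng.normal(scale=0.02, size=p_small),  # many small but nonzero coefficients
])

X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# Penalized least squares: the L1 penalty shrinks small coefficients to exactly
# zero while (ideally) retaining the large ones. The penalty level is an assumption.
fit = Lasso(alpha=0.1).fit(X, y)

selected = np.flatnonzero(fit.coef_ != 0)
print("indices selected as 'large':", selected)
print("true large indices:", np.arange(p_large))

# A second-stage ordinary least squares refit on the selected covariates
# (a common post-selection step, shown here only for illustration) typically
# reduces the mean-square error of the large-coefficient estimates.
if selected.size:
    beta_refit, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    print("refitted coefficients for selected covariates:",
          dict(zip(selected.tolist(), np.round(beta_refit, 2).tolist())))
```

With the assumed settings, the L1 penalty drives most of the small coefficients to zero, so the selected set is dominated by the three large coefficients; how reliably this happens in general is exactly the kind of question the paper's conditions address.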

Files in This Item: 452.34 kB
