Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/31115 
Year of Publication: 
2004
Series/Report no.: 
Discussion Paper No. 401
Publisher: 
Ludwig-Maximilians-Universität München, Sonderforschungsbereich 386 - Statistische Analyse diskreter Strukturen, München
Abstract: 
The use of generalized additive models in statistical data analysis suffers from the restriction to a small number of explanatory variables and from the difficulty of selecting smoothing parameters. Generalized additive model boosting circumvents these problems by stagewise fitting of weak learners. A fitting procedure is derived which works for all simple exponential family distributions, including binomial, Poisson and normal response variables. The procedure combines the selection of variables with the determination of the appropriate amount of smoothing. Penalized regression splines and the newly introduced penalized stumps are considered as weak learners. Estimates of standard deviations and stopping criteria, which are notorious problems in iterative procedures, are based on an approximate hat matrix. The method is shown to outperform common procedures for fitting generalized additive models; in particular, in high-dimensional settings it is the only method that works properly.
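The stagewise idea described in the abstract can be sketched in a few lines: at each boosting step, a penalized weak learner is fitted for every covariate to the current residuals, and only the best-fitting component is updated by a small step. The Python sketch below is illustrative only and assumes a Gaussian response with ridge-penalized linear-spline base learners; the function names (spline_basis, fit_ridge, componentwise_boost) and all tuning values are hypothetical and do not reproduce the paper's procedure, which works on the likelihood for general exponential-family responses and derives stopping criteria from an approximate hat matrix.

import numpy as np

def spline_basis(x, knots):
    # Truncated-power (linear) spline basis for one covariate.
    cols = [x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def fit_ridge(B, r, lam):
    # Ridge (penalized least-squares) fit of residuals r on basis B.
    P = B.T @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(P, B.T @ r)

def componentwise_boost(X, y, lam=10.0, nu=0.1, n_steps=200, n_knots=10):
    # Stagewise boosting: at each step, update only the covariate whose
    # penalized base learner best fits the current residuals.
    n, p = X.shape
    knots = [np.quantile(X[:, j], np.linspace(0.1, 0.9, n_knots)) for j in range(p)]
    bases = [spline_basis(X[:, j], knots[j]) for j in range(p)]
    coefs = [np.zeros(bases[j].shape[1]) for j in range(p)]
    intercept = y.mean()
    fit = np.full(n, intercept)
    for _ in range(n_steps):
        r = y - fit                       # current residuals (Gaussian case)
        best_j, best_rss, best_beta = None, np.inf, None
        for j in range(p):                # try each covariate's weak learner
            beta = fit_ridge(bases[j], r, lam)
            rss = np.sum((r - bases[j] @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss, best_beta = j, rss, beta
        coefs[best_j] += nu * best_beta   # small step along the best component
        fit += nu * (bases[best_j] @ best_beta)
    return intercept, coefs, fit

# Toy usage: sparse additive signal hidden among 10 covariates.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(300, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 300)
intercept, coefs, fit = componentwise_boost(X, y)
print("training MSE:", np.mean((y - fit) ** 2))

Because only one component is updated per step, covariates whose base learners never win a step keep zero coefficients, which is how this style of boosting performs variable selection as a by-product; the number of steps plays the role of the smoothing parameter.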
Subjects: 
Generalized additive models
boosting
selection of smoothing parameters
variable selection
Document Type: 
Working Paper

Files in This Item:
File size: 389.23 kB