Please use this link to cite or link to this publication: https://hdl.handle.net/10419/149793
Year of Publication:
2016
Series/Report No.:
cemmap working paper No. CWP47/16
Publisher:
Centre for Microdata Methods and Practice (cemmap), London
Abstract:
In this paper, we derive a rate of convergence of the Lasso estimator when the penalty parameter λ for the estimator is chosen using K-fold cross-validation; in particular, we show that in the model with Gaussian noise and under fairly general assumptions on the candidate set of values of λ, the prediction norm of the estimation error of the cross-validated Lasso estimator is with high probability bounded from above, up to a constant, by (s log p / n)^{1/2} · log^{7/8} n, as long as p log n / n = o(1) and some other mild regularity conditions are satisfied, where n is the sample size of available data, p is the number of covariates, and s is the number of non-zero coefficients in the model. Thus, the cross-validated Lasso estimator achieves the fastest possible rate of convergence up to the logarithmic factor log^{7/8} n. In addition, we derive a sparsity bound for the cross-validated Lasso estimator; in particular, we show that under the same conditions as above, the number of non-zero coefficients of the estimator is with high probability bounded from above, up to a constant, by s log^5 n. Finally, we show that our proof technique generates non-trivial bounds on the prediction norm of the estimation error of the cross-validated Lasso estimator even if p is much larger than n and the assumption of Gaussian noise fails; in particular, the prediction norm of the estimation error is with high probability bounded from above, up to a constant, by (s log^2(pn) / n)^{1/4} under mild regularity conditions.
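For readers unfamiliar with the procedure analyzed in the abstract, the following is a minimal sketch of K-fold cross-validation for choosing the Lasso penalty parameter λ. The simulated design, the choice of K = 5 folds, the geometric candidate grid, and the use of scikit-learn's Lasso are illustrative assumptions, not the authors' implementation.

# Minimal sketch of K-fold cross-validated Lasso (illustrative assumptions, not the paper's code).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                      # sample size, covariates, non-zero coefficients
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                             # sparse coefficient vector
y = X @ beta + rng.standard_normal(n)      # Gaussian noise, as in the paper's main setting

lambdas = np.geomspace(1e-3, 1.0, 30)      # candidate set of penalty values (assumption)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

cv_error = np.zeros(len(lambdas))
for i, lam in enumerate(lambdas):
    for train, test in kf.split(X):
        model = Lasso(alpha=lam, max_iter=10_000).fit(X[train], y[train])
        resid = y[test] - model.predict(X[test])
        cv_error[i] += np.mean(resid ** 2)  # accumulate out-of-fold squared error

lam_cv = lambdas[np.argmin(cv_error)]      # lambda minimizing the K-fold CV criterion
lasso_cv = Lasso(alpha=lam_cv, max_iter=10_000).fit(X, y)
print(f"chosen lambda: {lam_cv:.4f}, non-zero coefficients: {np.sum(lasso_cv.coef_ != 0)}")

The count of non-zero coefficients reported at the end is the quantity addressed by the paper's sparsity bound, and the fitted model's prediction error corresponds to the prediction norm studied in the convergence-rate results.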
Document Type:
Working Paper
