Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/19040 
Year of Publication: 2005
Series/Report no.: CESifo Working Paper No. 1576
Publisher: Center for Economic Studies and ifo Institute (CESifo), Munich
Abstract: 
We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement, and we show that there is a transformation of variables under which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness, and risk sensitivity.
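To fix ideas, the sketch below shows a generic constant-gain stochastic gradient update for a linear forecasting rule. It is only an illustration of the class of recursion the abstract refers to, not the paper's GSG specification (which generalizes the update, so details differ); the variable names (phi, z, gain) and the simulation setup are ours.

```python
# Hypothetical sketch: constant-gain stochastic gradient (SG) learning of a
# linear forecasting rule y_t ~ phi' z_{t-1}. Not the paper's exact algorithm.
import numpy as np

def sg_update(phi, z, y, gain=0.05):
    """One constant-gain SG step: adjust phi in the direction that reduces
    the squared one-step forecast error."""
    forecast_error = y - phi @ z
    return phi + gain * forecast_error * z

# Usage example: track a parameter vector from noisy observations.
rng = np.random.default_rng(0)
phi_true = np.array([0.5, -0.2])
phi_hat = np.zeros(2)
for t in range(1000):
    z = rng.normal(size=2)                  # regressors (e.g. lagged states)
    y = phi_true @ z + 0.1 * rng.normal()   # observed outcome
    phi_hat = sg_update(phi_hat, z, y)

print(phi_hat)  # estimate stays in a neighborhood of phi_true
```

Because the gain is constant rather than decreasing, the estimate does not converge exactly but keeps tracking, which is what makes this class of algorithm natural when parameters may drift.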
Subjects: adaptive learning, E-stability, recursive least squares, robust estimation
JEL: C65, C62, E17, E10, D83
Document Type: Working Paper