Please use this link to cite or refer to this publication: https://hdl.handle.net/10419/322765 
Year of publication: 
2008
Series/Report no.: 
Discussion Papers Series No. 08-19
Publisher: 
Utrecht University, Utrecht School of Economics, Tjalling C. Koopmans Research Institute, Utrecht
Abstract: 
In this paper we compare the policy function obtained by Beck and Wieland (2002) with the one obtained using adaptive control methods. Obtaining a policy function that provides the optimal control as a feedback function of the state of the system is an integral part of the optimal learning method used by Beck and Wieland. However, computing this function is not necessary when performing Monte Carlo experiments with adaptive control methods. We have therefore modified our software to obtain the policy function for comparison with the Beck-Wieland results.
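To illustrate what is meant by a policy function that gives the control as feedback on the state, here is a minimal sketch for a standard linear-quadratic problem solved by backward Riccati iteration. The system matrices and weights are placeholder values, not the Beck-Wieland (2002) model, and the sketch deliberately ignores learning about unknown parameters, which is the substance of the paper.

```python
# Minimal illustrative sketch: a feedback policy u = -K x for a
# linear-quadratic control problem, computed by Riccati iteration.
# All numbers below are hypothetical placeholders.
import numpy as np

def lqr_policy(A, B, Q, R, iters=500):
    """Return the gain K so that the policy function is u = -K @ x."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        # K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        # Riccati recursion: P = Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Scalar example: x_{t+1} = a*x_t + b*u_t with quadratic losses on x and u.
A = np.array([[1.0]]); B = np.array([[0.5]])
Q = np.array([[1.0]]); R = np.array([[0.1]])
K = lqr_policy(A, B, Q, R)
print("policy: u = -%.3f * x" % K[0, 0])
```

In the certainty-equivalence case the gain K is constant, so the policy is a linear function of the state; under active learning the policy function generally becomes nonlinear in the state and the parameter estimates, which is why the comparison in the paper is of interest.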
Keywords: 
Active learning
dual control
optimal experimentation
stochastic optimization
time-varying parameters
numerical experiments
Document type: 
Working Paper
