Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/322876 
Year of Publication: 
2012
Series/Report no.: 
Discussion Papers Series No. 12-09
Publisher: 
Utrecht University, Utrecht School of Economics, Tjalling C. Koopmans Research Institute, Utrecht
Abstract: 
In the economics literature there are two dominant approaches to solving models with optimal experimentation (also called active learning): the first is based on the value function, the second on an approximation method. In principle, the value function approach is the preferred method, but it suffers from the curse of dimensionality and is applicable only to small problems with a limited number of policy variables. The approximation method accommodates a computationally larger class of models, but may produce results that deviate from the optimal solution. Our simulations indicate that when the effects of learning are limited, the differences may be small; when there is sufficient scope for learning, however, the value function solution is more aggressive in its use of the policy variable.
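To make the contrast concrete, below is a minimal sketch of the value-function (dynamic-programming) approach for a stylized active-learning problem. The model, grids, and parameter values are illustrative assumptions, not taken from the paper: a scalar outcome y_t = beta*u_t + eps_t with an unknown slope beta, Gaussian beliefs (mean b, variance v) serving as the state, and quadratic loss around a target y_star.

```python
# A hedged sketch of value iteration for optimal experimentation.
# All model choices below (one policy variable, Gaussian beliefs,
# quadratic loss, grid sizes) are assumptions for illustration only.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sigma2 = 1.0     # variance of the shock eps_t (assumed)
y_star = 1.0     # target level for y_t (assumed)
delta = 0.95     # discount factor (assumed)

b_grid = np.linspace(-2.0, 3.0, 41)   # grid over belief mean of beta
v_grid = np.linspace(0.05, 2.0, 30)   # grid over belief variance of beta
u_grid = np.linspace(-3.0, 3.0, 61)   # candidate policy settings

# Gauss-Hermite nodes/weights for expectations over the predictive
# distribution y ~ N(b*u, u^2 * v + sigma2).
nodes, weights = np.polynomial.hermite.hermgauss(15)
weights = weights / np.sqrt(np.pi)

def bayes_update(b, v, u, y):
    """Posterior mean/variance of beta after observing y = beta*u + eps."""
    v_post = 1.0 / (1.0 / v + u**2 / sigma2)
    b_post = v_post * (b / v + u * y / sigma2)
    return b_post, v_post

V = np.zeros((b_grid.size, v_grid.size))
for it in range(200):
    interp = RegularGridInterpolator((b_grid, v_grid), V,
                                     bounds_error=False, fill_value=None)
    V_new = np.empty_like(V)
    for i, b in enumerate(b_grid):
        for j, v in enumerate(v_grid):
            best = np.inf
            for u in u_grid:
                # Predictive draws of y at the quadrature nodes.
                mu, s = b * u, np.sqrt(u**2 * v + sigma2)
                ys = mu + np.sqrt(2.0) * s * nodes
                loss = (ys - y_star) ** 2
                b_post, v_post = bayes_update(b, v, u, ys)
                cont = interp(np.column_stack(
                    [b_post, np.full_like(ys, v_post)]))
                best = min(best, np.dot(weights, loss + delta * cont))
            V_new[i, j] = best
    if np.max(np.abs(V_new - V)) < 1e-5:
        V = V_new
        break
    V = V_new
```

Even with a single policy variable, the belief state is already two-dimensional and the Bellman step loops over states, candidate controls, and quadrature nodes, which illustrates why the exact value-function approach is confined to small problems; approximation methods trade this cost for a possibly suboptimal policy.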
Subjects: 
design of fiscal policy
optimal experimentation
stochastic optimization
time-varying parameters
numerical experiments
Document Type: 
Working Paper
