Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/189140 
Year of Publication: 
1991
Series/Report no.: 
Queen's Economics Department Working Paper No. 816
Publisher: 
Queen's University, Department of Economics, Kingston (Ontario)
Abstract: 
This paper develops a new method for constructing approximate solutions to discrete time, infinite horizon, discounted stochastic dynamic programming problems with convex choice sets. The key idea is to restrict the decision rule to belong to a parametric class of functions. The agent then chooses the best decision rule from within this class. Monte Carlo simulations are used to calculate arbitrarily precise estimates of the optimal decision rule parameters. The solution method is used to solve a version of the Brock-Mirman (1972) stochastic optimal growth model. For this model, relatively simple rules of thumb provide very good approximations to optimal behavior.
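The abstract's approach can be illustrated with a minimal sketch (not the paper's code): parameterize the decision rule, simulate the discounted objective by Monte Carlo, and pick the best parameter in the class. The sketch below assumes the Brock-Mirman special case with log utility, Cobb-Douglas production, full depreciation, and an i.i.d. log-normal productivity shock; the rule-of-thumb class is a constant saving rate s, and all parameter values are illustrative assumptions. In this special case the optimal saving rate is alpha*beta, which the simulated search should roughly recover.

import numpy as np

alpha, beta = 0.36, 0.95      # technology and discount parameters (assumed values)
sigma = 0.1                   # std. dev. of log productivity shock (assumed)
T, N = 300, 2000              # truncation horizon and number of simulated paths
rng = np.random.default_rng(0)
shocks = np.exp(sigma * rng.standard_normal((N, T)))   # common random numbers across candidate rules

def simulated_value(s, k0=1.0):
    """Monte Carlo estimate of E[sum_t beta^t log(c_t)] under a constant saving rate s."""
    k = np.full(N, k0)
    value = np.zeros(N)
    for t in range(T):
        y = shocks[:, t] * k ** alpha      # output
        c = (1.0 - s) * y                  # rule of thumb: consume the share 1 - s of output
        value += beta ** t * np.log(c)     # accumulate discounted utility
        k = s * y                          # save the share s (full depreciation)
    return value.mean()

# Choose the best rule within the class by a simple grid search over s.
grid = np.linspace(0.05, 0.95, 91)
best = max(grid, key=simulated_value)
print(f"estimated saving rate: {best:.3f}  (alpha*beta = {alpha*beta:.3f})")

Grid search is used here only for transparency; any numerical optimizer over the rule parameters would serve, which is the sense in which the paper combines Monte Carlo simulation with numerical optimization.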
Subjects: 
rule of thumb
Monte Carlo simulation
numerical optimization
JEL: 
210
O23
Document Type: 
Working Paper
