Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/25512 
Year of Publication: 
2007
Series/Report no.: 
CFS Working Paper No. 2007/11
Publisher: 
Goethe University Frankfurt, Center for Financial Studies (CFS), Frankfurt a. M.
Abstract: 
We study the problem of a policymaker who seeks to set policy optimally in an economy where the true economic structure is unobserved and must be learned from observations of the economy. This is a classic problem of learning and control, variants of which have been studied in the past, but rarely with forward-looking variables, which are a key component of modern policy-relevant models. As in most Bayesian learning problems, the optimal policy typically includes an experimentation component reflecting the endogeneity of information. We develop algorithms to solve numerically for the Bayesian optimal policy (BOP). However, the BOP is feasible only in relatively small models, so we also consider a simpler specification, which we term the adaptive optimal policy (AOP), that allows policymakers to update their beliefs but shortcuts the experimentation motive. In our setting, the AOP is significantly easier to compute and in many cases provides a good approximation to the BOP. We provide a simple example to illustrate the role of learning and experimentation in a Markov jump-linear-quadratic (MJLQ) framework.
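
To make the AOP idea concrete, here is a minimal Python sketch. It is not from the paper: all parameter values, function names, and the one-period loss are illustrative assumptions, and the unobserved mode is held fixed rather than following a Markov chain as in the full MJLQ setup. The policymaker updates beliefs over two candidate models by Bayes' rule and, each period, sets the control to minimize the belief-weighted expected one-period loss, taking the information content of its actions as given; a Bayesian optimal policy would additionally account for how the choice of u sharpens future beliefs.

    import numpy as np

    # Two candidate modes for the economy (all numbers illustrative).
    A = np.array([0.9, 1.1])   # state persistence under each mode
    B = np.array([0.5, -0.5])  # policy impact under each mode (sign uncertainty)
    SIGMA = 0.2                # std. dev. of the additive shock
    LAM = 1.0                  # weight on the control in the loss x^2 + LAM*u^2

    def adaptive_policy(x, p):
        """One-period adaptive rule: minimize the belief-weighted expected loss
        E_p[(a_j*x + b_j*u)^2] + LAM*u^2, taking beliefs p as given
        (no experimentation motive)."""
        num = np.sum(p * A * B) * x
        den = np.sum(p * B ** 2) + LAM
        return -num / den

    def bayes_update(p, x, u, x_next):
        """Posterior mode probabilities after observing the state transition."""
        resid = x_next - (A * x + B * u)          # innovation under each mode
        lik = np.exp(-0.5 * (resid / SIGMA) ** 2) # Gaussian likelihoods (up to a constant)
        post = p * lik
        return post / post.sum()

    rng = np.random.default_rng(0)
    true_mode = 1
    x, p = 1.0, np.array([0.5, 0.5])  # start from a diffuse belief
    for t in range(20):
        u = adaptive_policy(x, p)
        x_next = A[true_mode] * x + B[true_mode] * u + SIGMA * rng.standard_normal()
        p = bayes_update(p, x, u, x_next)
        x = x_next
    print("posterior probability of the true mode:", round(float(p[true_mode]), 3))

Because B differs in sign across the two modes, an experimenting policymaker would sometimes perturb u to learn the mode faster; the adaptive rule above never does, which is exactly the computational shortcut distinguishing the AOP from the BOP in the abstract.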
Subjects: 
Optimal Monetary Policy
Learning
Recursive Saddlepoint Method
JEL: 
E42
E52
E58
Document Type: 
Working Paper

Files in This Item:
File, 784.38 kB