Please use this link to cite this publication or to reference it as an internet source: https://hdl.handle.net/10419/67780
Year of publication:
2009
Series/No.:
Queen's Economics Department Working Paper No. 1201
Publisher:
Queen's University, Department of Economics, Kingston (Ontario)
Abstract:
This paper provides a step-by-step guide to estimating discrete choice dynamic programming (DDP) models using the Bayesian Dynamic Programming algorithm developed in Imai, Jain and Ching (2008) (IJC). The IJC method combines the DDP solution algorithm with the Bayesian Markov Chain Monte Carlo algorithm into a single algorithm, which solves the DDP model and estimates its structural parameters simultaneously. The main computational advantage of this estimation algorithm is its efficient use of information obtained in past iterations. In the conventional Nested Fixed Point algorithm, most of the information obtained in past iterations remains unused in the current iteration. In contrast, the Bayesian Dynamic Programming algorithm extensively reuses the computational results of past iterations to help solve the DDP model at the current iterated parameter values. Consequently, it significantly alleviates the computational burden of estimating a DDP model. We carefully discuss how to implement the algorithm in practice, and use a simple dynamic store choice model to illustrate how to apply it to obtain parameter estimates.
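The reuse of past iterations that the abstract describes can be sketched as a kernel-weighted average of stored pseudo-value functions: at the current parameter draw, past draws that lie close to it in parameter space receive larger weights. The Python sketch below illustrates only this weighting step under our own assumptions; the function names, the Gaussian kernel choice, and the bandwidth are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(dist, bandwidth):
    """Unnormalized Gaussian kernel weight for a parameter-space distance."""
    return np.exp(-0.5 * (dist / bandwidth) ** 2)

def approx_emax(theta, past_thetas, past_values, bandwidth=0.1):
    """Approximate the expected value function at draw `theta` by a
    kernel-weighted average of stored pseudo-value functions.

    past_thetas : (H, K) array of past parameter draws
    past_values : (H, S) array of pseudo-value functions on an S-point state grid
    Returns an (S,) array: the weighted average over the H stored iterations.
    """
    dists = np.linalg.norm(past_thetas - theta, axis=1)  # distance to each past draw
    w = gaussian_kernel(dists, bandwidth)
    w /= w.sum()                                         # normalize the weights
    return w @ past_values

# Toy usage: 500 stored iterations, 2 parameters, a 10-point state grid.
rng = np.random.default_rng(0)
past_thetas = rng.normal(size=(500, 2))
past_values = rng.normal(size=(500, 10))
emax_hat = approx_emax(np.array([0.1, -0.2]), past_thetas, past_values)
print(emax_hat.shape)  # (10,)
```

In the full algorithm, each MCMC iteration then applies a single Bellman update at the current draw and adds the result to the store of pseudo-value functions, so the approximation of the DDP solution improves as the chain runs.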
Keywords:
Bayesian Dynamic Programming
Discrete Choice Dynamic Programming
Markov Chain Monte Carlo
JEL: 
C11
M03
Document type:
Working Paper

Publications in EconStor are protected by copyright.