Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/23200 
Year of Publication: 
2004
Series/Report no.: 
Working Paper No. 2004-23
Publisher: 
Rutgers University, Department of Economics, New Brunswick, NJ
Abstract: 
This paper outlines testing procedures for assessing the relative out-of-sample predictive accuracy of multiple conditional distribution models. The tests discussed are based either on the comparison of entire conditional distributions or on the comparison of predictive confidence intervals. We also briefly survey existing related methods in the area of predictive density evaluation, including methods based on the probability integral transform and the Kullback-Leibler Information Criterion. The procedures proposed in this paper are similar in many ways to Andrews' (1997) conditional Kolmogorov test and to White's (2000) reality check. In particular, a predictive density test is outlined that involves comparing squared (approximation) errors associated with models $i$, $i=1,\ldots,n$, by constructing weighted averages over $U$ of $E[(F_{i}(u|Z^{t},\theta_{i}^{\dagger})-F_{0}(u|Z^{t},\theta_{0}))^{2}]$, where $F_{0}(\cdot|\cdot)$ and $F_{i}(\cdot|\cdot)$ are the true and model-$i$ conditional distributions, $u \in U$, and $U$ is a possibly unbounded set on the real line. A conditional confidence interval version of this test is also outlined, and appropriate bootstrap procedures are developed for obtaining critical values when the predictions used in forming the test statistics are obtained via rolling and recursive estimation schemes. An empirical illustration comparing alternative predictive models for U.S. inflation is given for the predictive confidence interval test.
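A rough numerical sketch of the accuracy measure described in the abstract, assuming hypothetical names (weighted_sq_cdf_distance, u_grid) and Gaussian stand-ins for the candidate and benchmark distributions; it illustrates only the weighted squared-CDF-distance idea, not the authors' actual test statistic, which must also handle an unknown $F_{0}$ and parameter estimation error:

import numpy as np
from scipy.stats import norm

def weighted_sq_cdf_distance(model_cdf, benchmark_cdf, u_grid, weights=None):
    # Weighted average over u in U of (F_i(u) - F_0(u))^2: a fixed-
    # conditioning-set analogue of E[(F_i(u|Z^t, theta_i) - F_0(u|Z^t, theta_0))^2].
    if weights is None:
        # Uniform weights over the (finite) evaluation grid standing in for U.
        weights = np.full(len(u_grid), 1.0 / len(u_grid))
    diff = model_cdf(u_grid) - benchmark_cdf(u_grid)
    return float(np.sum(weights * diff ** 2))

# Hypothetical comparison of two candidate predictive distributions against
# a benchmark; the smaller distance indicates the more accurate model.
u_grid = np.linspace(-5.0, 5.0, 201)           # bounded grid standing in for U
benchmark_cdf = norm(0.0, 1.0).cdf             # stand-in for the true F_0
candidates = {"model 1": norm(0.2, 1.0).cdf,   # mis-centered candidate F_1
              "model 2": norm(0.0, 1.5).cdf}   # over-dispersed candidate F_2
for name, cdf in candidates.items():
    print(name, weighted_sq_cdf_distance(cdf, benchmark_cdf, u_grid))

In the paper itself, $F_{0}$ is unknown, the statistic is formed from rolling or recursive out-of-sample predictions, and critical values are obtained via block bootstrap procedures that account for parameter estimation error.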
Subjects: 
block bootstrap
recursive estimation scheme
reality check
nonlinear causality
parameter estimation error
JEL: 
C51
C22
Document Type: 
Working Paper

Files in This Item:
File (2.07 MB)

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.