Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/86069 
Year of Publication: 
2001
Series/Report no.: 
Tinbergen Institute Discussion Paper No. 01-055/4
Publisher: 
Tinbergen Institute, Amsterdam and Rotterdam
Abstract: 
The paper considers the K-statistic, Kleibergen's (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic, this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models, and yet it shares, asymptotically, the pivotal property of the AR statistic. That is, asymptotically it has a chi-square distribution whether or not the model is identified. This pivotal property is very relevant for size distortions in finite-sample tests. Whereas Kleibergen (2000) focuses especially on the asymptotic behavior of the statistic, the present paper concentrates on finite-sample properties in a Gaussian framework. In that case the AR statistic has an F-distribution. However, the K-statistic is not exactly pivotal. Its finite-sample distribution is affected by nuisance parameters. Here we consider the two extreme cases, which provide tight bounds for the exact distribution. The first case amounts to perfect identification (similar to the asymptotic case), where the statistic has an F-distribution. In the other extreme case there is total underidentification. For the latter case we show how to compute the exact distribution. Thus we provide tight bounds for exact confidence sets based on the efficient K-statistic. Asymptotically the two bounds converge, except when there is a large number of redundant instruments.
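For orientation, the sketch below computes the two statistics discussed in the abstract, using the standard textbook definitions of the AR statistic and of Kleibergen's K-statistic in the IV model y = Xβ + ε, X = ZΠ + V. This is illustrative code, not the paper's own implementation; the function name and the exact finite-sample scaling conventions are assumptions, and the paper's Gaussian finite-sample analysis may use slightly different normalizations.

```python
# Illustrative sketch (not from the paper): AR and K statistics for testing
# H0: beta = beta0 in the IV model y = X beta + eps, X = Z Pi + V.
# Standard definitions are used; the paper's exact conventions may differ.
import numpy as np


def ar_and_k_statistics(y, X, Z, beta0):
    """Return (AR, K) for H0: beta = beta0.

    y : (n,)   dependent variable
    X : (n, m) endogenous regressors
    Z : (n, k) instruments, k >= m
    """
    n, k = Z.shape
    e = y - X @ beta0                        # residual under the null

    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instrument space
    MZ = np.eye(n) - PZ

    # AR statistic: F(k, n - k) under Gaussian errors and H0
    AR = (e @ PZ @ e / k) / (e @ MZ @ e / (n - k))

    # K statistic: project the null residual on Z times the estimated
    # first-stage coefficients under the null; asymptotically chi2(m) under H0
    s_ee = e @ MZ @ e / (n - k)
    s_eV = e @ MZ @ X / (n - k)              # covariances of eps with first-stage errors
    X_tilde = X - np.outer(e, s_eV / s_ee)
    ZPi = PZ @ X_tilde                       # = Z @ Pi_tilde(beta0)
    P_ZPi = ZPi @ np.linalg.solve(ZPi.T @ ZPi, ZPi.T)
    K = e @ P_ZPi @ e / s_ee

    return AR, K
```

Inverting either test, i.e. collecting the values of beta0 that are not rejected at a given level, yields the confidence sets the abstract refers to; the AR set uses the exact F(k, n-k) reference distribution, while the K set relies on the chi-square(m) approximation whose finite-sample bounds the paper studies.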
Document Type: 
Working Paper

Files in This Item:
File (Size: 188.56 kB)