Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/149795 
Year of Publication: 
2016
Series/Report no.: 
cemmap working paper No. CWP49/16
Publisher: 
Centre for Microdata Methods and Practice (cemmap), London
Abstract: 
Most modern supervised statistical/machine learning (ML) methods are explicitly designed to solve prediction problems very well. Achieving this goal does not imply that these methods automatically deliver good estimators of causal parameters. Examples of such parameters include individual regression coefficients, average treatment effects, average lifts, and demand or supply elasticities. In fact, estimators of such causal parameters obtained by naively plugging ML estimators into estimating equations for such parameters can behave very poorly. For example, the resulting estimators may formally have inferior rates of convergence with respect to the sample size n because of regularization bias. Fortunately, this regularization bias can be removed by solving auxiliary prediction problems via ML tools. Specifically, we can form an efficient score for the target low-dimensional parameter by combining auxiliary and main ML predictions. The efficient score may then be used to build an efficient estimator of the target parameter which typically will converge at the fastest possible 1/√n rate and be approximately unbiased and normal, allowing simple construction of valid confidence intervals for parameters of interest. The resulting method could thus be called a "double ML" method because it relies on estimating primary and auxiliary predictive models. Such double ML estimators achieve the fastest rates of convergence and exhibit good behavior across a broader class of probability distributions than naive "single" ML estimators. To avoid overfitting, following [3], our construction also makes use of K-fold sample splitting, which we call cross-fitting. Sample splitting allows us to use a very broad set of ML predictive methods for the auxiliary and main prediction problems, such as random forests, lasso, ridge, deep neural nets, boosted trees, and various hybrids and aggregates of these methods (e.g., a hybrid of a random forest and lasso). We illustrate the general theory by applying it to the leading cases of estimation and inference on the main parameter in a partially linear regression model, and estimation and inference on average treatment effects and average treatment effects on the treated under conditional random assignment of the treatment. These applications cover randomized controlled trials as a special case. We then use the methods in an empirical application that estimates the effect of 401(k) eligibility on accumulated financial assets.
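
To make the abstract's recipe concrete, below is a minimal sketch of the cross-fitted double ML estimator for the partially linear model Y = D*theta + g(X) + u, D = m(X) + v, using a residual-on-residual (Neyman-orthogonal) score. It assumes Python with NumPy and scikit-learn; the random-forest learners, the function name double_ml_plr, and the simulated data at the end are illustrative choices for this sketch, not the paper's exact implementation.

# Sketch of cross-fitted double ML for the partially linear model
#   Y = D*theta + g(X) + u,   D = m(X) + v.
# Assumes scikit-learn; random forests are one choice among the many
# ML methods the abstract mentions (lasso, ridge, boosted trees, ...).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def double_ml_plr(Y, D, X, K=5, seed=0):
    n = len(Y)
    u_hat = np.empty(n)  # main residuals:      Y - predicted E[Y|X]
    v_hat = np.empty(n)  # auxiliary residuals: D - predicted E[D|X]
    for train, test in KFold(K, shuffle=True, random_state=seed).split(X):
        # Cross-fitting: fit both nuisance predictions on the training
        # folds only, then residualize on the held-out fold.
        ml_y = RandomForestRegressor(random_state=seed).fit(X[train], Y[train])
        ml_d = RandomForestRegressor(random_state=seed).fit(X[train], D[train])
        u_hat[test] = Y[test] - ml_y.predict(X[test])
        v_hat[test] = D[test] - ml_d.predict(X[test])
    # Neyman-orthogonal score: regress the main residuals on the
    # auxiliary residuals to estimate theta.
    theta = (v_hat @ u_hat) / (v_hat @ v_hat)
    # Standard error from the orthogonal score, supporting the
    # approximately normal, 1/sqrt(n)-rate confidence intervals.
    eps = u_hat - theta * v_hat
    se = np.sqrt(np.mean(v_hat**2 * eps**2)) / (np.mean(v_hat**2) * np.sqrt(n))
    return theta, se

# Illustrative usage on simulated data (true theta = 0.5):
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 10))
D = X[:, 0] + rng.normal(size=n)              # treatment depends on X
Y = 0.5 * D + np.sin(X[:, 1]) + rng.normal(size=n)
theta_hat, se = double_ml_plr(Y, D, X)
print(f"theta_hat = {theta_hat:.3f}, 95% CI half-width = {1.96 * se:.3f}")

Because each observation's residuals come from learners fitted on the other folds, overfitting bias from the ML steps does not contaminate the final estimating equation, which is the point of cross-fitting.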
Subjects: 
Neyman
Orthogonalization
cross-fit
double machine learning
debiased machine learning
orthogonal score
efficient score
post-machine-learning and post-regularization inference
random forest
lasso
deep learning
neural nets
boosted trees
efficiency
optimality
Document Type: 
Working Paper

Files in This Item:
File size: 537.28 kB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.