Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/250138 
Year of Publication: 2021
Series/Report no.: Discussion Papers No. 971
Publisher: Statistics Norway, Research Department, Oslo
Abstract: 
We evaluate how nonresponse affects conclusions drawn from survey data and consider how researchers can reliably test and correct for nonresponse bias. To do so, we examine a survey on labor market conditions during the COVID-19 pandemic that used randomly assigned financial incentives to encourage participation. We link the survey data to administrative data sources, allowing us to observe a ground truth for participants and nonparticipants. We find evidence of large nonresponse bias, even after correcting for observable differences between participants and nonparticipants. We apply a range of existing methods that account for nonresponse bias due to unobserved differences, including worst-case bounds, bounds that incorporate monotonicity assumptions, and approaches based on parametric and nonparametric selection models. These methods produce bounds (or point estimates) that are either too wide to be useful or far from the ground truth. We show how these shortcomings can be addressed by modeling how nonparticipation can be both active (declining to participate) and passive (not seeing the survey invitation). The model makes use of variation from the randomly assigned financial incentives, as well as the timing of reminder emails. Applying the model to our data produces bounds (or point estimates) that are narrower and closer to the ground truth than the other methods.
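The worst-case bounds mentioned in the abstract can be illustrated with a short simulation. The sketch below is not taken from the paper; it is a minimal, hypothetical example assuming a binary outcome (e.g. employment status) and a known response rate, and it shows why Manski-type worst-case bounds tend to be wide when many invitees do not respond.

```python
# Hypothetical illustration (not the paper's code): Manski-style worst-case
# bounds for the mean of a binary outcome under survey nonresponse.
# With outcome Y in [0, 1], response indicator R, and response rate p = P(R=1):
#   E[Y|R=1]*p + 0*(1-p)  <=  E[Y]  <=  E[Y|R=1]*p + 1*(1-p).

import numpy as np

rng = np.random.default_rng(seed=0)

# Simulated invitee population: a binary outcome and a response indicator
# that is correlated with the outcome, so respondents are unrepresentative.
n = 10_000
employed = rng.binomial(1, 0.7, size=n)              # true outcome for everyone
response_prob = np.where(employed == 1, 0.35, 0.20)  # employed respond more often
responded = rng.binomial(1, response_prob)

p_respond = responded.mean()                              # P(R = 1)
mean_among_respondents = employed[responded == 1].mean()  # E[Y | R = 1]

# Worst-case bounds: assume every nonrespondent has Y = 0 (lower bound)
# or Y = 1 (upper bound).
lower = mean_among_respondents * p_respond
upper = mean_among_respondents * p_respond + (1 - p_respond)

print(f"Response rate:             {p_respond:.3f}")
print(f"Respondent mean:           {mean_among_respondents:.3f}")
print(f"Worst-case bounds on E[Y]: [{lower:.3f}, {upper:.3f}]")
print(f"Ground-truth mean (known only because data are simulated): {employed.mean():.3f}")
```

With a response rate of roughly 25 percent in this simulated example, the bounds cover most of the unit interval, which mirrors the abstract's observation that worst-case bounds are often too wide to be useful without further assumptions.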
Subjects: survey; nonresponse; nonresponse bias
JEL: C01; C81; C83
Document Type: Working Paper

Files in This Item: 1.94 MB