Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/313871 
Year of Publication: 
2025
Series/Report no.: 
ILE Working Paper Series No. 83
Publisher: 
University of Hamburg, Institute of Law and Economics (ILE), Hamburg
Abstract: 
We examine the trade-off between functionality and data privacy inherent in many AI products by conducting a randomized survey experiment with 1,734 participants from the US and several European countries. Participants' willingness to adopt a hypothetical, AI-enhanced app is measured under three sets of treatments: (i) installation defaults (opt-in vs. opt-out), (ii) salience of data privacy risks, and (iii) regulatory regimes with different levels of data protection. In addition, we study how the willingness to adopt depends on individual attitudes and preferences. We find no effect of defaults or salience, while a regulatory regime with stricter privacy protection increases the likelihood that the app is adopted. Finally, greater data privacy concerns, greater risk aversion, lower levels of trust, and greater skepticism toward AI are associated with a significantly lower willingness to adopt the app.
Subjects: 
Artificial intelligence
privacy concerns
randomized survey experiment
smart products
technology adoption
JEL: 
D80
D90
K24
L86
Z10
Document Type: 
Working Paper
