Abstract:
We examine the trade-off between functionality and data privacy inherent in many AI products by conducting a randomized survey experiment with 1,734 participants from the US and several European countries. Participants' willingness to adopt a hypothetical AI-enhanced app is measured under three sets of treatments: (i) installation defaults (opt-in vs. opt-out), (ii) salience of data privacy risks, and (iii) regulatory regimes with different levels of data protection. In addition, we study how the willingness to adopt depends on individual attitudes and preferences. We find no effect of installation defaults or risk salience, whereas a regulatory regime with stricter privacy protection increases the likelihood that the app is adopted. Finally, greater data privacy concerns, greater risk aversion, lower levels of trust, and greater skepticism toward AI are associated with a significantly lower willingness to adopt the app.