Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/309300 
Year of Publication: 
2025
Series/Report no.: 
GLO Discussion Paper No. 1556
Publisher: 
Global Labor Organization (GLO), Essen
Abstract: 
Can people develop trust in Artificial Intelligence (AI) by learning about its developments? We conducted a survey experiment in a nationally representative panel survey in the United States (N = 1,491) to study whether exposure to news about AI influences trust differently than learning about non-AI scientific advancements. The results show that people trust AI advancements less than non-AI scientific developments, with significant variation across domains. Mistrust of AI is smallest in medicine, a high-stakes domain, and largest in personal relationships. The key mediators are context-specific: fear is the most critical mediator for linguistics, excitement for medicine, and societal benefit for dating. Personality traits do not affect trust differences in the linguistics domain. In medicine, mistrust of AI is higher among respondents with high agreeableness and neuroticism scores. In personal relationships, mistrust of AI is strongest among individuals with high openness, conscientiousness, and agreeableness. Furthermore, mistrust of AI advancements is higher among women than men, as well as among older, White, and US-born individuals. Our results have implications for tailored communication strategies about AI advancements in the Fourth Industrial Revolution.
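Note: the snippet below is a minimal illustrative sketch of the kind of comparison a randomized survey experiment like this produces (a difference in mean trust between respondents shown AI news and those shown non-AI science news). The data are simulated and the variable names (treated, trust) are hypothetical; it does not reproduce the paper's actual specification or mediation analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1491                               # sample size reported in the abstract
    treated = rng.integers(0, 2, size=n)   # 1 = shown AI news, 0 = shown non-AI science news
    # Hypothetical trust scores on a 0-10 scale; the treated group trusts less on average.
    trust = 6.0 - 0.8 * treated + rng.normal(0.0, 2.0, size=n)

    # Average treatment effect: difference in mean trust across the two arms.
    ate = trust[treated == 1].mean() - trust[treated == 0].mean()
    se = np.sqrt(trust[treated == 1].var(ddof=1) / (treated == 1).sum()
                 + trust[treated == 0].var(ddof=1) / (treated == 0).sum())
    print(f"ATE = {ate:.2f} (SE {se:.2f})")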
Subjects: 
Randomized Controlled Trial (RCT)
survey experiment
Artificial Intelligence (AI)
Trust
United States
JEL: 
C91
D83
O33
Z10
Document Type: 
Working Paper
