Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/190390 
Year of Publication: 
2018
Series/Report no.: 
22nd Biennial Conference of the International Telecommunications Society (ITS): "Beyond the Boundaries: Challenges for Business, Policy and Society", Seoul, Korea, 24th-27th June, 2018
Publisher: 
International Telecommunications Society (ITS), Calgary
Abstract: 
Since the mid-2010s, the rapid advancement of artificial intelligence (AI) has raised both people's expectations and their anxieties. Technology-centered optimism is widespread, with many hoping that AI will benefit human life and society by maximizing productivity and efficiency. However, serious concerns, such as job substitution, deepening polarization, and human alienation, reinforce society's skepticism of AI (Hurlburt, 2017). For the diffusion of AI to be welcomed and sustainable, building human trust in the technology is a critical task. Some studies have stressed the role and importance of trust in the successful deployment and diffusion of AI-based applications (Choi and Ji, 2015; Hengstler et al., 2016; McKnight et al., 2011). However, to the best of our knowledge, little or no attention has been paid to the antecedents and consequences of trust formation in AI. Therefore, in the Korean context, we aim to investigate the personal and technical factors that influence trust formation, which in turn affects individuals' value perceptions of AI. We address this problem with three research questions. RQ1: What perceived technological factors affect the formation of trust in AI? RQ2: What personal characteristics affect the formation of trust in AI? RQ3: Does trust in AI affect individuals' value perceptions?
Document Type: 
Conference Paper
