Please use this link to cite or link to this publication: https://hdl.handle.net/10419/311817
Year of publication:
2022
Citation:
[Journal:] Electronic Markets [ISSN:] 1422-8890 [Volume:] 32 [Issue:] 4 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2022 [Pages:] 2079-2102
Publisher:
Springer, Berlin, Heidelberg
Abstract:
Contemporary decision support systems increasingly rely on artificial intelligence technology, such as machine learning algorithms, to form intelligent systems. For selected applications, these systems have human-like decision capacity based on a decision rationale that cannot be conveniently inspected and thus constitutes a black box. As a consequence, acceptance by end-users remains hesitant. While a lack of transparency is said to hinder trust and foster aversion towards these systems, studies that connect transparency to user trust and, in turn, to acceptance are scarce. In response, our research develops a theoretical model that explains end-user acceptance of intelligent systems. We draw on the unified theory of acceptance and use of technology (UTAUT) as well as explanation theory and related theories on initial trust and user trust in information systems. The proposed model is tested in an industrial maintenance workplace scenario, with maintenance experts as participants representing the user group. Results show that acceptance is performance-driven at first sight. However, transparency plays an important indirect role in regulating trust and the perception of performance.
Keywords:
User acceptance
Intelligent system
Artificial intelligence
Trust
System transparency
JEL: 
C6
C8
M15
Persistent identifier of the first edition:
Creative Commons License:
cc-by
Document type:
Article
Document version:
Published Version

Publications in EconStor are protected by copyright.