Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/311920 
Year of Publication: 
2022
Citation: 
[Journal:] Electronic Markets [ISSN:] 1422-8890 [Volume:] 32 [Issue:] 4 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2022 [Pages:] 2139-2158
Publisher: 
Springer, Berlin, Heidelberg
Abstract: 
The black-box nature of Artificial Intelligence (AI) models and their associated explainability limitations create a major adoption barrier. Explainable Artificial Intelligence (XAI) aims to make AI models more transparent to address this challenge. Researchers and practitioners apply XAI services to explore relationships in data, improve AI methods, justify AI decisions, and control AI technologies, with the goals of improving knowledge about AI and addressing user needs. The market volume of XAI services has grown significantly. As a result, trustworthiness, reliability, transferability, fairness, and accessibility are required capabilities of XAI for a range of relevant stakeholders, including managers, regulators, users of XAI models, developers, and consumers. We contribute to theory and practice by deducing XAI archetypes and developing a user-centric decision support framework to identify the XAI services most suitable for the requirements of relevant stakeholders. Our decision tree is founded on a literature-based morphological box and a classification of real-world XAI services. Finally, we discuss archetypal business models of XAI services and exemplary use cases.
Subjects: 
Artificial intelligence
Explainability
Morphological analysis
Business models
Archetypes
Decision tree
JEL: 
M150
M210
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version
