Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/307951 
Year of Publication: 
2022
Citation: 
[Journal:] Information Systems Frontiers [ISSN:] 1572-9419 [Volume:] 25 [Issue:] 2 [Publisher:] Springer US [Place:] New York, NY [Year:] 2022 [Pages:] 743-773
Publisher: 
Springer US, New York, NY
Abstract: 
Hate speech in social media is a growing problem that can negatively affect individuals and society as a whole. Moderators on social media platforms need technological support to detect problematic content and react accordingly. In this article, we develop and discuss design principles that are best suited for creating efficient user interfaces for decision support systems that use artificial intelligence (AI) to assist human moderators. We qualitatively and quantitatively evaluated various design options over three design cycles with a total of 641 participants. Besides measuring perceived ease of use, perceived usefulness, and intention to use, we also conducted an experiment to demonstrate the significant influence of AI explainability on end users' perceived cognitive effort, perceived informativeness, mental model, and perceived trustworthiness of the AI. Finally, we tested the acquired design knowledge with software developers, who rated the reusability of the proposed design principles as high.
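The abstract refers to local explanations of an AI classifier's output as the explainability mechanism shown to moderators. As a rough illustration of what such a local explanation can look like, the sketch below pairs a toy scikit-learn text classifier with the LIME library; the model, the invented training examples, and the choice of LIME are assumptions for the demo and are not taken from the article itself.

# Hedged sketch: a LIME local explanation for a toy hate-speech classifier.
# Model, data, and library choice are illustrative assumptions, not the
# article's actual method.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = problematic, 0 = benign.
texts = [
    "I hate those people, they should disappear",
    "Those people are vermin and deserve nothing",
    "What a lovely day at the park",
    "Great game last night, congrats to the team",
]
labels = [1, 1, 0, 0]

# Simple bag-of-words classifier standing in for the moderators' AI model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Local explanation: which words pushed this single post toward "hate"?
explainer = LimeTextExplainer(class_names=["benign", "hate"])
post = "I hate those people"
explanation = explainer.explain_instance(post, model.predict_proba, num_features=4)
for word, weight in explanation.as_list():
    print(f"{word:>10s}  {weight:+.3f}")  # positive weight -> "hate" class

A moderator-facing interface of the kind the article studies would render such per-word weights (e.g., as highlighted tokens) next to the flagged post, rather than printing them to a console.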
Subjects: 
Design science research
Design principles
Hate speech detection
Explainable artificial intelligence
Local explanations
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version
