Please use this link to cite or link to this publication: https://hdl.handle.net/10419/307951
Year of publication:
2022
Citation:
[Journal:] Information Systems Frontiers [ISSN:] 1572-9419 [Volume:] 25 [Issue:] 2 [Publisher:] Springer US [Place:] New York, NY [Year:] 2022 [Pages:] 743-773
Publisher:
Springer US, New York, NY
Abstract:
Hate speech in social media is a growing problem that can negatively affect individuals and society as a whole. Moderators on social media platforms need technological support to detect problematic content and react accordingly. In this article, we develop and discuss design principles that are best suited for creating efficient user interfaces for decision support systems that use artificial intelligence (AI) to assist human moderators. We qualitatively and quantitatively evaluated various design options over three design cycles with a total of 641 participants. Besides measuring perceived ease of use, perceived usefulness, and intention to use, we also conducted an experiment demonstrating the significant influence of AI explainability on end users' perceived cognitive effort, perceived informativeness, mental models, and trust in AI. Finally, we tested the acquired design knowledge with software developers, who rated the reusability of the proposed design principles as high.
Keywords:
Design science research
Design principles
Hate speech detection
Explainable artificial intelligence
Local explanations
Persistent identifier of the first edition:
Creative Commons License:
cc-by
Document type:
Article
Document version:
Published Version

Publications in EconStor are protected by copyright.