Abstract:
The pervasive threat of online disinformation challenges the integrity of the digital public sphere and the resilience of liberal democracies. This study conceptualizes and evaluates an explainable artificial intelligence (XAI) artifact specifically designed for disinformation detection, integrating confidence scores, visual explanations, and detailed insights into potentially misleading content. Based on a systematic empirical literature review, we establish theoretically informed design principles to guide responsible XAI development. Using a mixed-method approach, including qualitative user testing and a large-scale online study (n = 344), we reveal nuanced findings: while explainability features did not inherently enhance trust or usability, they sometimes introduced uncertainty and reduced classification agreement. Demographic insights highlight the pivotal role of age and trust propensity, with older users facing greater challenges in comprehension and usability. Users expressed a preference for simplified and visually intuitive features. These insights underscore the critical importance of iterative, user-centered design in aligning XAI systems with diverse user needs and ethical imperatives. By offering actionable guidelines and advancing the theoretical understanding of explainability, this study contributes to the development of transparent, adaptive, and effective solutions for disinformation detection in digital ecosystems.