Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/278797 
Authors: 
Year of Publication: 
2023
Citation: 
[Journal:] Internet Policy Review [ISSN:] 2197-6775 [Volume:] 12 [Issue:] 2 [Year:] 2023 [Pages:] 1-30
Publisher: 
Alexander von Humboldt Institute for Internet and Society, Berlin
Abstract: 
Public discourse has become sensitive to the ethical challenges of big data and artificial intelligence, as scandals about privacy invasion, algorithmic discrimination, and manipulation on digital platforms repeatedly make news headlines. However, it remains largely unexplored how exactly these complex issues are presented to lay audiences and to what extent news reporting, as a window onto tech debates, can instil critical data literacy. The present study addresses this research gap and introduces the concept of "data risks". Its main goal is to critically investigate how the societal and individual harms of data-driven technology find their way into the public sphere and how they are discussed there. The empirical part applies a mixed-methods design that combines qualitative and automated content analyses to chart data risks in news reporting sampled from prominent English-language media outlets of global reach. The resulting inventory of data risks includes privacy invasion/surveillance, data bias/algorithmic discrimination, cybersecurity, and information disorder. The study posits data risks as communication challenges, highlights shortcomings in public discussion of the issue, and provides stimuli for (practical) interventions aimed at elucidating how datafication and automation can have harmful effects on citizens.
Subjects: 
Big data
Artificial intelligence
Data literacy
News media
Critical Data Studies
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article

Files in This Item:
File size: 584.29 kB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.