Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/315589 
Year of Publication: 
2025
Citation: 
[Journal:] Internet Policy Review [ISSN:] 2197-6775 [Volume:] 14 [Issue:] 1 [Year:] 2025 [Pages:] 1-26
Publisher: 
Alexander von Humboldt Institute for Internet and Society, Berlin
Abstract: 
Users of secure messaging tools, especially in communities attuned to the risks of state-based and other forms of censorship, increasingly hesitate to delegate their data to centralised platforms endowed with substantial power to filter content and block user profiles. This article analyses the role that informational architectures and infrastructures in federated social media platforms play in content moderation processes. Alongside privacy by design, the article asks, is it possible to speak of online "safe(r) spaces by design"? And what is the specific role that human moderators play in federated environments? The article argues that federation can pave the way for novel practices in content moderation governance, merging community organising, information distribution and alternative techno-social instruments to deal with online harassment, hate speech or disinformation; however, this alternative also presents a number of pitfalls and difficulties, which we examine to provide a complete picture of the potential of federated models.
Subjects: 
Federation
Content moderation
Censorship
Governance
Secure messaging
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
