<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>EconStor Collection:</title>
    <link>https://hdl.handle.net/10419/213956</link>
    <description />
    <pubDate>Mon, 04 May 2026 15:58:46 GMT</pubDate>
    <dc:date>2026-05-04T15:58:46Z</dc:date>
    <item>
      <title>Perceived personal and societal data harms shape users' data control preferences</title>
      <link>https://hdl.handle.net/10419/336200</link>
      <description>Title: Perceived personal and societal data harms shape users' data control preferences
Authors: Gagrčin, Emilija; Toth, Roland; Schaetz, Nadja; Naab, Teresa; Emmer, Martin
Abstract: Platformisation and the growing adoption of AI-driven systems have intensified pervasive data extraction and appropriation that bring distinct harms for both individuals and societies at large. Yet, little is known about how distinct harm perceptions shape citizens' preferences for different control mechanisms. Based on survey data from six EU countries (N=2,889), we examine differences in perceptions of personal vs. societal harm and their implications for individual control preferences and support for regulation. We find a surprising inverse relationship between perceived personal harm and desire for individual control: when citizens perceive greater personal harm, they become less inclined to seek individual data control, suggesting privacy resignation. Conversely, perceived societal harm positively relates to both individual and regulatory control preferences, underscoring citizens' view of these mechanisms as complementary, particularly when they perceive harms to democracy. For policymakers, the findings suggest that regulators should treat both dimensions as related but distinct inputs when designing interventions and address the conditions that generate both individual and collective harms. Specifically, regulatory frameworks with an overreliance on individual control mechanisms (like consent requirements) may be insufficient or even counterproductive when citizens already perceive data harms.</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/336200</guid>
      <dc:date>2026-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Social media and mental harms under the Digital Services Act</title>
      <link>https://hdl.handle.net/10419/336201</link>
      <description>Title: Social media and mental harms under the Digital Services Act
Authors: Pałka, Przemysław; Ilczuk, Ewa
Abstract: Numerous empirical studies indicate that social media use is correlated with, and sometimes might be causing, mental harms like addiction, anxiety and depression, or lowering of cognitive abilities. In 2023, the European Parliament called on the European Commission to introduce new rules to combat these problems. However, it might take years before such new laws are adopted and become applicable. In this article, we demonstrate how a law already in effect - the Digital Services Act - offers the Commission tools necessary to combat certain mental harms stemming from social media's design and functioning within the ad-based business model. We show that the risk assessment and mitigation obligations addressed to the providers of Very Large Online Platforms include three "mental goods:" the mental well-being of individuals, mental health (as a component of public health), and the fundamental right to mental integrity. This article offers elaboration and theorisation of these concepts to enable more effective application of the DSA's requirements, both by providers engaging in risk assessment and the Commission serving as the enforcer.</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/336201</guid>
      <dc:date>2026-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Open data meets data justice</title>
      <link>https://hdl.handle.net/10419/336202</link>
      <description>Title: Open data meets data justice
Authors: Santoro, Caterina; Chandrasekhar, Ramya; Milan, Stefania
Abstract: Public administrations continue to adopt open data initiatives. These initiatives involve creating, releasing, and re-using data sets on political, social, and economic aspects, which are published in machine-readable, interoperable formats and under open licenses. Yet, many open data initiatives adhering to these 'techno-legal' characteristics do not live up to their promises of enabling 'vision' (i.e., ensuring transparency) and 'voice' (i.e., enabling participation) for citizens, especially when algorithms and AI tools are integrated into the workings of public administrations. The conceptual framework of 'data justice' might help correct the direction. It addresses issues of 'vision' and 'voice,' focusing on who decides what data is generated, for what purposes, and for whose benefit. In this paper, we extend this framework to public administrations, given that public administrations already incorporate an orientation toward justice in practice, commonly referred to as social equity. Building on research from critical data studies and public administration, we present a conceptual framework called 'open data justice', and illustrate how this framework can be translated into practice by governments to promote justice in their open data initiatives. This contribution is intended to benefit researchers and practitioners seeking to operationalise justice in open data governance, thus reframing the study and practice of open data in public administration.</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/336202</guid>
      <dc:date>2026-01-01T00:00:00Z</dc:date>
    </item>
    <item>
      <title>Science-on-chain: How can we trust science again?</title>
      <link>https://hdl.handle.net/10419/336203</link>
      <description>Title: Science-on-chain: How can we trust science again?
Authors: Heurich, Benjamin; Lukács, Bence; Weidener, Lukas
Abstract: Over the last few decades, society has lost significant trust in the work of the scientific community and has debated the trustworthiness of scientific findings and their (policy) implications. In response, an open science framework has been proposed based on accessibility and transparency of data and increased collaboration and participation among the scientific community and society at large. We argue that this can restore trust within society and within science itself. Following the proposed framework, innovative scientists discovered the affordances of blockchain technology (e.g. inherent transparency, immutability, data security). However, since this issue describes a fundamental trend in society as a whole, it is worthwhile to conduct a sociological analysis through Niklas Luhmann's Systems Theory that focuses on both the functional areas and the purpose of trust in modern societies and the overall approach to disruptive technologies. This paper focuses on two key texts, the Bitcoin and Bloxberg white papers, used here as case studies to examine the theoretical underpinnings of trust in blockchain-based systems. We argue that while blockchain offers potential solutions, the term 'trust' is often misused in these discourses, overshadowing the need for a robust sociological framework. By critically analysing these technologies, we highlight their potential to reshape scientific practices and restore trust through a decentralised, transparent infrastructure.</description>
      <pubDate>Thu, 01 Jan 2026 00:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://hdl.handle.net/10419/336203</guid>
      <dc:date>2026-01-01T00:00:00Z</dc:date>
    </item>
  </channel>
</rss>