Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/277971 
Year of Publication: 
2023
Series/Report no.: 
32nd European Conference of the International Telecommunications Society (ITS): "Realising the digital decade in the European Union – Easier said than done?", Madrid, Spain, 19th - 20th June 2023
Publisher: 
International Telecommunications Society (ITS), Calgary
Abstract: 
Artificial intelligence (AI) techniques for natural language processing have made dramatic advances in the past few years (Lin 2023). Thunström & Steingrimsson (2022) demonstrated that present-generation AI text engines are even able to write low-level scientific pieces about themselves, with relatively minimal prompting, while Goyal et al. (2022) show how good general-purpose AI language engines are at summarizing news articles. There is, however, a downside to all of this progress. Bontridder & Poullet (2021) point out how inexpensive it has become to generate deepfake videos and synthetic voice recordings. Kreps et al. (2022) examine AI-generated text and find that "individuals are largely incapable of distinguishing between AI- and human-generated text". Illia et al. (2023) point to three ethical challenges raised by automated text generation that is difficult to distinguish from human writing: (1) the facilitation of mass manipulation and disinformation; (2) a lowest-common-denominator problem, in which a mass of low-quality but extremely cheap text crowds out higher-quality discourse; and (3) the suppression of direct communication between stakeholders and an attendant drop in levels of trust. Our focus is mainly on (2), and we examine the institutional consequences that may arise in two sectors already facing challenges from AI-generated text: scientific journals and social media platforms. Drawing on the institutional economics literature on responses to uncertainty about the veracity of information, the paper also proposes some elementary remedies that may prove helpful in navigating the anticipated challenges. Distinguishing genuinely human-authored content from machine-generated text will likely require a credible signal of the authenticity of the content creator. This is a variation of Akerlof's (1970) famous "market for lemons" problem. The paper uses an inductive approach to examine the sections of the content industry that are likely to be most exposed to "market for lemons" substitution, with reference to the framework of Giannakas & Fulton (2020).
Document Type: 
Conference Paper
