Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/318594 
Year of Publication: 
2025
Citation: 
[Journal:] Amfiteatru Economic [ISSN:] 2247-9104 [Volume:] 27 [Issue:] 68 [Year:] 2025 [Pages:] 253-268
Publisher: 
The Bucharest University of Economic Studies, Bucharest
Abstract: 
This study is based on exploratory research testing the ability of artificial intelligence (AI) to shape human behaviour, using a recent geopolitical event, the ongoing Russian war in Ukraine, as the test case. Text, images and video were generated with artificial intelligence; users' perceptions of the AI-generated fake narratives were tracked (in the case of text), and their ability to distinguish between synthetically generated material and real material was tested (in the case of image and video). Methodologically, three generative text models were used, namely ChatGPT, Bing AI and Google Bard, and human perception was tested through a questionnaire. The results confirmed the ability of artificial intelligence (generative text models) to provide information in the domain of disinformation. Additionally, on average one in ten respondents failed to identify automatically generated disinformation, about half failed to correctly identify an AI-generated image, and more than half had difficulty distinguishing an AI-generated video from the real one.
Subjects: 
generative artificial intelligence
fake-news
deep-fake
disinformation
war in Ukraine
JEL: 
O36
Q55
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
