Please use this link to cite or link to this item: https://hdl.handle.net/10419/318594
Year of publication:
2025
Citation:
[Journal:] Amfiteatru Economic [ISSN:] 2247-9104 [Volume:] 27 [Issue:] 68 [Year:] 2025 [Pages:] 253-268
Publisher:
The Bucharest University of Economic Studies, Bucharest
Abstract:
This study is based on exploratory research testing the ability of artificial intelligence (AI) to shape human behaviour, using a recent geopolitical event, the ongoing Russian war in Ukraine, as a test case. Text, images and video were generated with AI, tracking users' perceptions of AI-generated fake narratives (in the case of text) and testing their ability to distinguish synthetically generated material from real material (in the case of images and video). Methodologically, three generative text models were used, namely ChatGPT, Bing AI and Google Bard, and human perception was tested through a questionnaire. The results confirmed the ability of generative text models to produce disinformation. Additionally, on average one in ten respondents failed to identify automatically generated disinformation, about half failed to correctly identify an AI-generated image, and more than half had difficulty distinguishing an AI-generated video from the real one.
Keywords:
generative artificial intelligence
fake-news
deep-fake
disinformation
war in Ukraine
JEL: 
O36
Q55
Persistent Identifier of the first edition:
Creative Commons License:
cc-by
Document Type:
Article

File(s):
Size: 785.51 kB

Publications in EconStor are protected by copyright.