Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/314556
Year of publication:
2025
Series/Report no.:
IZA Discussion Papers No. 17659
Publisher:
Institute of Labor Economics (IZA), Bonn
Abstract:
We investigate whether artificial intelligence can address the peer review crisis in economics by analyzing 27,090 evaluations of 9,030 unique submissions using a large language model (LLM). The experiment systematically varies author characteristics (e.g., affiliation, reputation, gender) and publication quality (e.g., top-tier, mid-tier, low-tier, AI-generated papers). The results indicate that LLMs effectively distinguish paper quality but exhibit biases favoring prominent institutions, male authors, and renowned economists. Additionally, LLMs struggle to differentiate high-quality AI-generated papers from genuine top-tier submissions. While LLMs offer efficiency gains, their susceptibility to bias necessitates cautious integration and hybrid peer review models to balance equity and accuracy.
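The experimental design summarized in the abstract amounts to a factorial manipulation of author characteristics crossed with submissions of different quality tiers. The following Python sketch is purely illustrative and is not taken from the paper: the factor levels, the `evaluate_submission` function, and the `score_with_llm` helper are assumptions standing in for whatever prompts and LLM calls the authors actually used.

```python
# Illustrative sketch of a factorial LLM-review experiment: the same manuscript is
# scored under systematically varied author characteristics so that score differences
# across otherwise identical submissions can be attributed to reviewer bias.
from itertools import product


def score_with_llm(manuscript: str, profile: dict[str, str]) -> float:
    """Hypothetical placeholder: prompt an LLM with the manuscript text and an
    author profile, then parse a numeric review score from its response."""
    raise NotImplementedError("Replace with a real LLM API call.")


# Illustrative factor levels only; the paper's exact categories may differ.
AFFILIATIONS = ["top-ranked university", "lesser-known institution"]
GENDERS = ["male", "female"]
REPUTATIONS = ["renowned economist", "early-career researcher"]


def evaluate_submission(manuscript: str, quality_tier: str) -> list[dict]:
    """Score one manuscript under every combination of author characteristics,
    keeping the treatment labels alongside the LLM's score for later analysis."""
    results = []
    for affiliation, gender, reputation in product(AFFILIATIONS, GENDERS, REPUTATIONS):
        profile = {
            "affiliation": affiliation,
            "gender": gender,
            "reputation": reputation,
        }
        results.append(
            {
                # e.g. top-tier, mid-tier, low-tier, or AI-generated
                "quality_tier": quality_tier,
                **profile,
                "score": score_with_llm(manuscript, profile),
            }
        )
    return results
```

Comparing average scores across cells that differ only in one attribute (for example, gender) while holding the manuscript fixed is what allows the bias estimates described in the abstract.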
Keywords:
Artificial Intelligence
peer review
large language model (LLM)
bias in academia
economics publishing
equity-efficiency trade-off
JEL: 
A11
C63
O33
I23
Document type:
Working Paper

File(s):
Size: 1.55 MB

Publications in EconStor are protected by copyright.