Please use this link to cite this publication or to reference it as an internet source: https://hdl.handle.net/10419/196589
Year of publication: 
2019
Series/Report no.: 
IWH Discussion Papers No. 9/2019
Publisher: 
Leibniz-Institut für Wirtschaftsforschung Halle (IWH), Halle (Saale)
Abstract: 
This paper illustrates how audio-visual data from pre-play face-to-face communication can be used to identify groups that contain free-riders in a public goods experiment. It focuses on two channels through which face-to-face communication influences contributions to a public good. Firstly, the content of the face-to-face communication is investigated by categorising specific strategic information and using simple meta-data. Secondly, a machine-learning approach is implemented to analyse the subjects' facial expressions during communication. These approaches are the first of their kind to analyse content and facial expressions in face-to-face communication with the aim of predicting subjects' behaviour in a public goods game. The analysis shows that verbal commitments to contribute fully to the public good until the very end and communication through facial cues reduce the commonly observed end-game behaviour. The length of the face-to-face communication, measured in number of words, is also a good predictor of cooperation behaviour towards the end of the game. The findings provide first insights into how a priori available information can be utilised to predict free-riding behaviour in public goods games.
Keywords: 
automatic facial expressions recognition
content analysis
public goods experiment
face-to-face communication
JEL: 
C80
C92
D91
Document type: 
Working Paper

File(s):
Size: 1.64 MB

Publications in EconStor are protected by copyright.