Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/196589 
Year of Publication: 
2019
Series/Report no.: 
IWH Discussion Papers No. 9/2019
Publisher: 
Leibniz-Institut für Wirtschaftsforschung Halle (IWH), Halle (Saale)
Abstract: 
This paper illustrates how audio-visual data from pre-play face-to-face communication can be used to identify groups that contain free-riders in a public goods experiment. It focuses on two channels through which face-to-face communication influences contributions to a public good. First, the content of the face-to-face communication is investigated by categorising specific strategic information and using simple meta-data. Second, a machine-learning approach is implemented to analyse the subjects' facial expressions during their communication. These approaches are the first of their kind to analyse content and facial expressions in face-to-face communication with the aim of predicting subjects' behaviour in a public goods game. The analysis shows that verbally committing to contribute fully to the public good until the very end, as well as communicating through facial cues, reduces the commonly observed end-game behaviour. The length of the face-to-face communication, quantified as the number of words, is also a good predictor of cooperation behaviour towards the end of the game. These findings provide first insights into how a priori available information can be used to predict free-riding behaviour in public goods games.
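To illustrate the kind of prediction exercise the abstract describes, the following minimal Python sketch combines the features mentioned there (the number of words in the communication, a flag for a verbal commitment to contribute fully until the end, and an aggregate facial-expression score) in a simple classifier. The feature construction, the example data, and the choice of logistic regression are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch: predict whether a group contains a free-rider
    # from simple pre-play communication features. All data and feature
    # choices are assumptions for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def extract_features(transcript: str, commits_until_end: bool,
                         smile_intensity: float) -> list[float]:
        """Build a per-group feature vector from pre-play communication."""
        word_count = len(transcript.split())  # length of communication in words
        return [word_count, float(commits_until_end), smile_intensity]

    # Hypothetical training data: one row per group, label 1 = free-rider present.
    X = np.array([
        extract_features("we should all give everything every round", True, 0.8),
        extract_features("let's see how it goes", False, 0.2),
        extract_features("full contribution until the last period, agreed", True, 0.6),
        extract_features("I might keep some for myself", False, 0.1),
    ])
    y = np.array([0, 1, 0, 1])

    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba(X)[:, 1])  # predicted probability of free-riding

In a real analysis the facial-expression score would come from an automatic facial expression recognition system applied to the video recordings, and the content flag from a manual or rule-based coding of the transcripts.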
Subjects: 
automatic facial expression recognition
content analysis
public goods experiment
face-to-face communication
JEL: 
C80
C92
D91
Persistent Identifier of the first edition: 
Document Type: 
Working Paper
