Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/66844 
Year of Publication: 
2011
Citation: 
[Journal:] Journal of Choice Modelling [ISSN:] 1755-5345 [Volume:] 4 [Issue:] 2 [Publisher:] University of Leeds, Institute for Transport Studies [Place:] Leeds [Year:] 2011 [Pages:] 95-148
Publisher: 
University of Leeds, Institute for Transport Studies, Leeds
Abstract: 
A new generation of models has been proposed to handle complex human behaviors. These models account for data ambiguity and therefore extend the field of application of discrete choice modeling. Facial expression recognition (FER) is highly relevant in this context. We develop a dynamic facial expression recognition (DFER) framework based on discrete choice models (DCM). The DFER framework consists of modeling the choice of a person who has to label a video sequence representing a facial expression. The originality lies in the analysis of videos with discrete choice models as well as in the explicit modeling of causal effects between the facial features and the recognition of the expression. Five models are proposed. The first assumes that only the last frame of the video triggers the choice of the expression. The second model has two components: the first captures the perception of the facial expression within each frame of the sequence, while the second determines which frame triggers the choice. The third model extends the second and assumes that the choice of the expression results from the average of perceptions within a group of frames. The fourth and fifth models integrate the panel effect inherent in the estimation data and extend the first and second models, respectively. The models are estimated using videos from the Facial Expressions and Emotions Database (FEED). Labeling data on the videos have been obtained through an internet survey available at http://transp-or2.ep.ch/videosurvey/. The prediction capability of the models is assessed by cross-validation on the estimation data in order to check their validity.
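To illustrate the structure described for the second model, one possible sketch (with all notation assumed here for illustration only, not taken from the paper) treats the triggering frame as a latent class and the within-frame perception as a standard logit:

P_n(i) = \sum_{t=1}^{T} \pi_t \, P_n(i \mid t), \qquad P_n(i \mid t) = \frac{\exp(V_{int})}{\sum_{j \in \mathcal{C}} \exp(V_{jnt})},

where n indexes the respondent, i the expression label, t the frame, \pi_t the assumed probability that frame t of the sequence triggers the choice, and V_{int} a utility specified as a function of the facial features (e.g. FACS-based measurements) extracted from frame t for expression i. Under this reading, the third model would replace P_n(i \mid t) by an average of the perceptions over a group of frames around t.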
Subjects: 
video analysis
dynamic facial expression analysis
latent class models
modeling of ambiguity
collection of facial expression data
FACS
Creative Commons License: 
cc-by-nc
Document Type: 
Article