Title: Non-acted multi-view audio-visual dyadic interactions. Project master thesis: multitask learning for facial attributes analysis
Author: Masdeu Ninot, Andreu
Director/Tutor: Escalera Guerrero, Sergio
Palmero, Cristina
Jacques Junior, Julio C. S.
Keywords: Machine learning
Master's theses
Facial expression
Issue Date: 2-Sep-2019
Abstract: [en] In this thesis we explore the use of multitask learning to improve performance on facial attribute tasks such as gender, age and ethnicity prediction. These tasks, along with emotion recognition, will be part of a new dyadic interaction dataset that was recorded during the development of this thesis. This work includes the implementation of two state-of-the-art multitask deep learning models and a discussion of the results obtained with these methods on a preliminary dataset, as well as a first evaluation on a sample of the dyadic interaction dataset. This will serve as a baseline for a future application of multitask learning methods to the fully annotated dyadic interaction dataset.
Note: Final projects of the Master's in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Year: 2019. Tutors: Sergio Escalera Guerrero, Cristina Palmero and Julio C. S. Jacques Junior
Appears in Collections: Programari - Treballs de l'alumnat
Màster Oficial - Fonaments de la Ciència de Dades

Files in This Item:
File          Description   Size     Format
161200.pdf    Report        8.27 MB  Adobe PDF
codi_font.zip Source code   1.99 MB  ZIP

This item is licensed under a Creative Commons License.