Please use this identifier to cite or link to this item:
https://hdl.handle.net/2445/161200
Title: | Non-acted multi-view audio-visual dyadic interactions. Project master thesis: multitask learning for facial attributes analysis |
Author: | Masdeu Ninot, Andreu |
Director/Tutor: | Escalera Guerrero, Sergio; Palmero Cantariño, Cristina; Jacques Junior, Julio C. S. |
Keywords: | Machine learning; Emotions; Master's theses; Facial expression |
Issue Date: | 2-Sep-2019 |
Abstract: | [en] In this thesis we explore the use of Multitask Learning to improve performance on facial attribute tasks such as gender, age, and ethnicity prediction. These tasks, along with emotion recognition, will be part of a new dyadic interaction dataset that was recorded during the development of this thesis. This work includes the implementation of two state-of-the-art multitask deep learning models, a discussion of the results obtained with these methods on a preliminary dataset, and a first evaluation on a sample of the dyadic interaction dataset. This will serve as a baseline for a future application of Multitask Learning methods to the fully annotated dyadic interaction dataset. |
Note: | Final theses of the Master's in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Year: 2019. Tutors: Sergio Escalera Guerrero, Cristina Palmero, and Julio C. S. Jacques Junior |
URI: | https://hdl.handle.net/2445/161200 |
Appears in Collections: | Software - Student works; Official Master's - Fundamentals of Data Science |
Files in This Item:
File | Description | Size | Format
---|---|---|---
161200.pdf | Thesis report | 8.27 MB | Adobe PDF
codi_font.zip | Source code | 1.99 MB | ZIP
This item is licensed under a Creative Commons License.