
Document type

Master's thesis

Publication date

2019

Publication license

cc-by-sa (c) Andreu Masdeu Ninot, 2019
Please always use this identifier to cite or link to this document: https://hdl.handle.net/2445/161200

Non-acted multi-view audio-visual dyadic interactions. Project master thesis: multitask learning for facial attributes analysis

Abstract

In this thesis we explore the use of Multitask Learning to improve performance on facial attribute tasks such as gender, age and ethnicity prediction. These tasks, along with emotion recognition, will be part of a new dyadic interaction dataset recorded during the development of this thesis. This work includes the implementation of two state-of-the-art multitask deep learning models and a discussion of the results these methods obtain on a preliminary dataset, as well as a first evaluation on a sample of the dyadic interaction dataset. This will serve as a baseline for a future application of Multitask Learning methods to the fully annotated dyadic interaction dataset.
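As a minimal sketch of the multitask setup the abstract describes, the following PyTorch model shares one convolutional backbone across the gender, age and ethnicity tasks and trains all three jointly with a summed loss. The ResNet-18 backbone, the head sizes, the class counts and the unweighted loss sum are illustrative assumptions, not the two models implemented in the thesis.

    import torch
    import torch.nn as nn
    from torchvision import models

    class MultitaskFaceNet(nn.Module):
        # Hard parameter sharing: one shared backbone, one small head per task.
        # Backbone choice and class counts are assumptions for illustration only.
        def __init__(self, n_ethnicities=5):
            super().__init__()
            backbone = models.resnet18(weights=None)  # assumed backbone
            feat_dim = backbone.fc.in_features        # 512 for ResNet-18
            backbone.fc = nn.Identity()               # expose pooled features
            self.backbone = backbone
            self.gender_head = nn.Linear(feat_dim, 2)             # classification
            self.age_head = nn.Linear(feat_dim, 1)                # regression
            self.ethnicity_head = nn.Linear(feat_dim, n_ethnicities)

        def forward(self, x):
            f = self.backbone(x)
            return self.gender_head(f), self.age_head(f), self.ethnicity_head(f)

    # One joint training step on a dummy batch of face crops.
    model = MultitaskFaceNet()
    x = torch.randn(4, 3, 224, 224)
    gender = torch.randint(0, 2, (4,))
    age = torch.rand(4, 1) * 80
    ethnicity = torch.randint(0, 5, (4,))
    g_logits, a_pred, e_logits = model(x)
    loss = (nn.functional.cross_entropy(g_logits, gender)
            + nn.functional.mse_loss(a_pred, age)
            + nn.functional.cross_entropy(e_logits, ethnicity))
    loss.backward()

Because the backbone parameters receive gradients from all three losses, features learned for one attribute can help the others, which is the premise of the multitask approach evaluated in the thesis.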

Description

Final projects of the Màster de Fonaments de Ciència de Dades (Master in Fundamentals of Data Science), Facultat de Matemàtiques, Universitat de Barcelona. Year: 2019. Advisors: Sergio Escalera Guerrero, Cristina Palmero and Julio C. S. Jacques Junior

Citation

MASDEU NINOT, Andreu. Non-acted multi-view audio-visual dyadic interactions. Project master thesis: multitask learning for facial attributes analysis. [accessed: 7 February 2026]. [Available at: https://hdl.handle.net/2445/161200]
