Document type

Bachelor's thesis

Publication date

Publication license

Report: cc-nc-nd (c) Joaquim Yuste Ramos, 2021
Please always use this identifier to cite or link to this document: https://hdl.handle.net/2445/182804

Using deep learning for fine-grained action segmentation

Abstract

[en] This project focuses on the video action segmentation task, which aims to temporally segment and classify fine-grained actions in untrimmed videos. Developing and refining this capability is an important yet challenging problem, with potential impact in areas such as robotics, e-Health assistive technologies, surveillance, and beyond. On the one hand, we review the current state of the art, as well as the metrics commonly used to evaluate architectures on this kind of problem. On the other hand, we introduce two different attention-based modules capable of extracting frame-to-frame relationships, and we analyse their behaviour by evaluating them on the Georgia Tech Egocentric Activity (GTEA) dataset, a standard benchmark for this task. The dataset consists of daily cooking activity videos with fine-grained labels, recorded from an egocentric point of view. Finally, we compare the obtained results against the current state-of-the-art scores in order to discuss the effectiveness of each module.
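
The abstract describes attention-based modules that capture frame-to-frame relationships for per-frame classification. The thesis's actual module designs are not reproduced here, so the following is only a minimal sketch under assumed choices: single-head dot-product self-attention over pre-extracted per-frame features, a placeholder feature dimension of 2048, and 11 output classes (the class count commonly used for GTEA). The name FrameAttention and all dimensions are illustrative assumptions, not the thesis's architecture.

# Minimal sketch (assumptions, not the thesis code): single-head self-attention
# relating every frame to every other frame, followed by a per-frame classifier.
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    def __init__(self, dim: int = 2048, num_classes: int = 11):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, dim) pre-extracted per-frame features
        q, k, v = self.query(x), self.key(x), self.value(x)
        # attention weights: (batch, T, T), each frame attends to all frames
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        context = attn @ v               # (batch, T, dim) attention-mixed features
        return self.classifier(context)  # per-frame class logits: (batch, T, num_classes)

# Usage: 64 frames of 2048-d per-frame features -> per-frame logits
feats = torch.randn(1, 64, 2048)
logits = FrameAttention()(feats)
print(logits.shape)  # torch.Size([1, 64, 11])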

Description

Bachelor's thesis in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2021, Advisors: Albert Clapés and Sergio Escalera Guerrero

Citation

YUSTE RAMOS, Joaquim. Using deep learning for fine-grained action segmentation. [Accessed: 31 January 2026]. [Available at: https://hdl.handle.net/2445/182804]
