Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/186181
Title: Learning contextual information via deep learning
Author: Bardají Serra, Sara
Director/Tutor: Seguí Mesquida, Santi
Gilabert Roca, Pere
Keywords: Aprenentatge automàtic
Xarxes neuronals convolucionals
Programari
Treballs de fi de grau
Xarxes neuronals (Informàtica)
Imatges mèdiques
Machine learning
Convolutional neural networks
Computer software
Neural networks (Computer science)
Imaging systems in medicine
Bachelor's theses
Issue Date: 22-Jan-2022
Abstract: [en] During the last few years, deep learning has become one of the most attractive fields of artificial intelligence, with artificial neural networks at its core. In this project we propose several neural network architectures for the context-learning methodology. The main goal of this project is to verify whether these methodologies can work on medical images by first testing them on simpler datasets. We propose two different approaches, one consisting of a convolutional architecture and the other being a recurrent neural network. Whilst the first approach provided great results on the first datasets we used, it proved insufficient as the complexity of the dataset increased. The recurrent architecture provided successful results when working with more complex datasets. This thesis provides a general overview of neural networks and explains the different steps taken to reach the proposed models.
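The abstract contrasts two architecture families for learning context: convolutional layers, which mix a fixed local window of the input, and recurrent layers, which carry a hidden state across the whole sequence. As a hypothetical toy illustration of that contrast (not code from the thesis), a minimal numpy sketch:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution: each output mixes a fixed local window of x."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def rnn_step(h, x_t, W_h, W_x):
    """One recurrent update: the hidden state h accumulates context over steps."""
    return np.tanh(W_h * h + W_x * x_t)

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0])  # toy 1-D "context" signal

# Convolutional view: each feature sees only a local receptive field.
feat = conv1d(x, np.array([0.5, 0.0, -0.5]))

# Recurrent view: a single state sequentially summarises the whole history.
h = 0.0
for x_t in x:
    h = rnn_step(h, x_t, W_h=0.9, W_x=0.5)

print(feat.shape)  # (3,) — valid convolution shrinks the sequence
print(-1.0 < h < 1.0)  # True — tanh keeps the recurrent state bounded
```

The weights and signal here are invented for illustration; the point is structural: the convolutional output length is tied to the kernel's local window, while the recurrent state is a single summary that can, in principle, reflect arbitrarily long context.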
Note: Final Degree Project (TFG) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2022, Advisors: Santi Seguí Mesquida and Pere Gilabert Roca
URI: http://hdl.handle.net/2445/186181
Appears in Collections:Programari - Treballs de l'alumnat
Treballs Finals de Grau (TFG) - Administració i Direcció d’Empreses i Matemàtiques (Doble Grau)
Treballs Finals de Grau (TFG) - Enginyeria Informàtica

Files in This Item:
File                        Description    Size      Format
codi.zip                    Source code    48.18 kB  zip
tfg_bardaji_serra_sara.pdf  Thesis report  4.93 MB   Adobe PDF


This item is licensed under a Creative Commons License.