Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/171101
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: Igual Muñoz, Laura
dc.contributor.author: Fady, Christopher
dc.contributor.author: Castela Ibáñez, Susana
dc.date.accessioned: 2020-10-08T10:39:40Z
dc.date.available: 2020-10-08T10:39:40Z
dc.date.issued: 2020-01
dc.identifier.uri: http://hdl.handle.net/2445/171101
dc.description [ca]: Final projects of the Màster de Fonaments de Ciència de Dades (Master in Fundamentals of Data Science), Facultat de Matemàtiques, Universitat de Barcelona. Year: 2020. Advisor: Laura Igual Muñoz
dc.description.abstract [ca]: [en] Medical imaging is arguably one of the clearest use cases for Deep Neural Networks: automatic detection of illnesses, used as an additional guidance tool, could massively help doctors in their everyday work. However, the nature of the field makes errors extremely costly, so understanding the reasons behind the decisions of any Deep Learning model is crucial to its deployment and use. Previous work has demonstrated the effectiveness of Deep Learning methods for detecting atherosclerotic plaques in carotid arteries; such plaques are a known risk factor in cardiovascular disease and can be identified by measuring the Intima-Media Thickness (IMT). To the best of our knowledge, these effective models, such as CNNs, VGG and Tiramisu (a U-Net-type architecture), have not been studied through the lens of interpretability in this context. Our goal is to study the classification and segmentation decisions of these models in order to determine how they are made and whether they are consistent with medical knowledge. For this purpose, we use two previously studied interpretability techniques adapted to Deep Learning models: Grid Saliency and the well-documented SHAP values (a brief code sketch follows this record). Both methods explain local decisions of a model, and both are model-agnostic and visual in their output, which makes them easy to understand with minimal explanation; this is a clear advantage when one of the goals is to help medical personnel, as well as data scientists, make better and faster decisions. The study is applied to REGICOR, a dataset of ultrasound images. This work is framed within a larger research project at the UB that has already produced several related works, which makes the dataset well suited for interpreting the results of our models.
dc.format.extent: 68 p.
dc.format.mimetype: application/pdf
dc.language.iso [ca]: eng
dc.rights: cc-by-nc-nd (c) Christopher Fady i Susana Castela, 2020
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.source: Màster Oficial - Fonaments de la Ciència de Dades
dc.subject.classification: Imatges mèdiques
dc.subject.classification: Xarxes neuronals (Informàtica)
dc.subject.classification: Treballs de fi de màster
dc.subject.classification: Diagnòstic per la imatge
dc.subject.classification: Aprenentatge automàtic
dc.subject.classification [ca]: Artèries caròtides
dc.subject.other: Imaging systems in medicine
dc.subject.other: Neural networks (Computer science)
dc.subject.other: Master's theses
dc.subject.other: Diagnostic imaging
dc.subject.other: Machine learning
dc.subject.other [en]: Carotid artery
dc.title [ca]: Interpretability of deep learning methods in carotid artery image classification and semantic segmentation
dc.type [ca]: info:eu-repo/semantics/masterThesis
dc.rights.accessRights [ca]: info:eu-repo/semantics/openAccess
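
The abstract names SHAP values as one of the two interpretability techniques studied. The following is a minimal Python sketch, not taken from the thesis, of computing pixel-level SHAP attributions for an image classifier with the open-source shap library; the model file carotid_cnn.h5, the data file regicor_batch.npy, and the sample sizes are hypothetical placeholders.

# Minimal sketch (not from the thesis): SHAP attributions for an image
# classifier. File names and sample sizes are hypothetical placeholders.
import numpy as np
import shap
import tensorflow as tf

# Assumed: a trained Keras CNN and a batch of ultrasound images with shape
# (n, height, width, channels), scaled to [0, 1].
model = tf.keras.models.load_model("carotid_cnn.h5")
images = np.load("regicor_batch.npy")

# GradientExplainer estimates SHAP values from expected gradients over a
# background sample drawn from the data distribution.
background = images[:50]
explainer = shap.GradientExplainer(model, background)

# Per-pixel attributions for a few held-out images: positive values push the
# prediction toward a class, negative values push it away from it.
shap_values = explainer.shap_values(images[50:54])

# Overlay the attributions on the input images for visual inspection.
shap.image_plot(shap_values, images[50:54])

GradientExplainer is used here over DeepExplainer only because it tends to be more robust across TensorFlow versions; the thesis itself may use a different explainer or configuration.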
Appears in Collections: Màster Oficial - Fonaments de la Ciència de Dades

Files in This Item:
File | Description | Size | Format
Memoria.pdf | Memòria (Report) | 13.16 MB | Adobe PDF


This item is licensed under a Creative Commons License.