Please use this identifier to cite or link to this item: https://hdl.handle.net/2445/192075
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Clapés i Sintes, Albert | -
dc.contributor.advisor | Escalera Guerrero, Sergio | -
dc.contributor.author | Nieto Juscafresa, Aleix | -
dc.date.accessioned | 2023-01-12T11:08:02Z | -
dc.date.available | 2023-01-12T11:08:02Z | -
dc.date.issued | 2022-06 | -
dc.identifier.uri | https://hdl.handle.net/2445/192075 | -
dc.description | Treballs Finals de Grau de Matemàtiques, Facultat de Matemàtiques, Universitat de Barcelona, Any: 2022, Director: Albert Clapés i Sintes i Sergio Escalera Guerrero | ca
dc.description.abstract | [en] Artificial intelligence (AI) and, more specifically, machine learning (ML) have shown their potential by approaching or even exceeding human levels of accuracy on a variety of real-world problems. However, the highest accuracy on large modern datasets is often achieved by complex models that even experts struggle to interpret, creating a trade-off between accuracy and interpretability. Such models are known as opaque "black boxes", which is especially problematic in industries like healthcare. Understanding the reasons behind predictions is therefore crucial for establishing trust, which is fundamental if one plans to act on a prediction or is deciding whether to deploy a new model. This is where explainable artificial intelligence (XAI) comes in: it helps humans comprehend and trust the results produced by a machine learning model. This project is organised in three chapters with the aim of introducing the reader to the field of explainable artificial intelligence. The first chapter introduces machine learning and some related concepts. The second chapter covers the theory of the random forest model in detail. Finally, the third chapter formalises the theory behind two contemporary and influential XAI methods, LIME and SHAP. Additionally, a public diabetes tabular dataset is used to illustrate an application of these two methods in the medical sector (an illustrative code sketch of such a workflow follows this record). The project concludes with a discussion of possible future work. | ca
dc.format.extent | 70 p. | -
dc.format.mimetype | application/pdf | -
dc.language.iso | eng | ca
dc.rights | cc-by-nc-nd (c) Aleix Nieto Juscafresa, 2022 | -
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ | *
dc.source | Treballs Finals de Grau (TFG) - Matemàtiques | -
dc.subject.classification | Intel·ligència artificial | ca
dc.subject.classification | Treballs de fi de grau | -
dc.subject.classification | Intel·ligència artificial en medicina | ca
dc.subject.classification | Aprenentatge automàtic | ca
dc.subject.other | Artificial intelligence | en
dc.subject.other | Bachelor's theses | -
dc.subject.other | Medical artificial intelligence | en
dc.subject.other | Machine learning | en
dc.title | An introduction to explainable artificial intelligence with LIME and SHAP | ca
dc.type | info:eu-repo/semantics/bachelorThesis | ca
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca
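The abstract mentions training a random forest on a public diabetes tabular dataset and explaining its predictions with LIME and SHAP. The following is a minimal, hypothetical Python sketch of that kind of workflow, not the code from the thesis: the synthetic dataset, the feature names, and the model settings are placeholders, since the record does not identify the exact dataset or implementation used.

    # Illustrative sketch only (not the thesis code): a random forest on a synthetic
    # tabular binary-classification dataset standing in for the public diabetes data,
    # explained locally with LIME and with SHAP feature attributions.
    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Placeholder data with generic feature names (the real dataset is not identified here).
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "black box" model whose predictions we want to explain.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # LIME: fit an interpretable local surrogate around a single prediction.
    lime_explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["negative", "positive"],
        mode="classification",
    )
    lime_exp = lime_explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=5
    )
    print(lime_exp.as_list())  # per-feature contributions for this one instance

    # SHAP: Shapley-value feature attributions via the tree-specific explainer.
    shap_explainer = shap.TreeExplainer(model)
    shap_values = shap_explainer.shap_values(X_test)
    # Depending on the shap version this is a list with one array per class or a
    # single array with a class axis; either way it holds per-feature attributions.
    print(np.shape(shap_values))

LIME builds an interpretable local surrogate model around one prediction, while SHAP assigns each feature a Shapley-value contribution to the model output; the two print statements merely confirm that both explainers return per-feature attributions.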
Appears in Collections: Treballs Finals de Grau (TFG) - Matemàtiques

Files in This Item:
File | Description | Size | Format
tfg_nieto_juscafresa_aleix.pdf | Memòria | 5.84 MB | Adobe PDF


This item is licensed under a Creative Commons License.