Please use this identifier to cite or link to this item: https://hdl.handle.net/2445/215166
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Vitrià i Marca, Jordi | -
dc.contributor.author | Fredes Cáceres, Arturo | -
dc.date.accessioned | 2024-09-16T07:16:10Z | -
dc.date.available | 2024-09-16T07:16:10Z | -
dc.date.issued | 2024-06-30 | -
dc.identifier.uri | https://hdl.handle.net/2445/215166 | -
dc.description | Final project of the Master's in Fundamentals of Data Science (Màster de Fonaments de Ciència de Dades), Faculty of Mathematics, Universitat de Barcelona. Academic year: 2023-2024. Advisor: Jordi Vitrià i Marca | ca
dc.description.abstract | [en] Counterfactual examples have proven to be a promising method for explaining a machine learning model's decisions: they provide the user with variants of their own data, shifted just enough to flip the outcome. When a user is presented with a single counterfactual, extracting conclusions from it is straightforward; yet one example may not reflect the full range of actions the user could take, and it may be infeasible. Conversely, as the number of counterfactuals grows, drawing conclusions from them becomes difficult for people not trained in analytical data thinking. The objective of this work is to evaluate the use of LLMs to produce clear, plain-language explanations of these counterfactual examples for the end user. We propose a method that decomposes the explanation generation problem into smaller, more manageable tasks to guide the LLM, drawing inspiration from studies on how humans create and communicate explanations. We carry out several experiments on a public dataset and propose a closed-loop evaluation method to assess both the coherence of the final explanation with the counterfactuals and the quality of its content (a code sketch of this pipeline appears after the metadata record below). In addition, a user study is under way to evaluate users' understanding and satisfaction. This work has been submitted for review to the Human-Interpretable Artificial Intelligence (HI-AI) Workshop, held in conjunction with KDD 2024. The submission aims to contribute to the field by presenting findings that enhance the interpretability and understanding of ML systems; the review process is expected to provide feedback that will further refine the methodologies and conclusions discussed in this thesis. | ca
dc.format.extent | 41 p. | -
dc.format.mimetype | application/pdf | -
dc.language.iso | eng | ca
dc.rights | cc-by-nc-nd (c) Arturo Fredes Cáceres, 2024 | -
dc.rights | code: GPL (c) Arturo Fredes Cáceres, 2024 | -
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ | *
dc.rights.uri | http://www.gnu.org/licenses/gpl-3.0.ca.html | *
dc.source | Màster Oficial - Fonaments de la Ciència de Dades | -
dc.subject.classification | Aprenentatge automàtic [Machine learning] | -
dc.subject.classification | Tractament del llenguatge natural (Informàtica) [Natural language processing (Computer science)] | -
dc.subject.classification | Algorismes computacionals [Computer algorithms] | -
dc.subject.classification | Treballs de fi de màster [Master's theses] | -
dc.subject.other | Machine learning | -
dc.subject.other | Natural language processing (Computer science) | -
dc.subject.other | Computer algorithms | -
dc.subject.other | Master's thesis | -
dc.title | LLMs for explaining sets of counterfactual examples to final users | ca
dc.type | info:eu-repo/semantics/masterThesis | ca
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca
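
The abstract above outlines a two-part pipeline: decomposing explanation generation into smaller LLM subtasks, and a closed-loop evaluation that checks the explanation against the counterfactuals. The sketch below illustrates one plausible shape of that pipeline; it is not the thesis's actual code (which is in the attached LMM-4-CFs-Explanation-main.zip). It assumes an OpenAI-style chat API and counterfactuals computed beforehand (for example with a generator such as DiCE); all function names and prompt wordings here are hypothetical.

# A minimal sketch, not the thesis implementation. Assumptions: an
# OpenAI-style chat API, counterfactuals computed beforehand (e.g. with a
# generator such as DiCE), and illustrative prompt wording. All function
# names and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask(prompt: str) -> str:
    """Send one single-turn prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def explain_counterfactuals(original: dict, counterfactuals: list[dict]) -> str:
    """Guide the LLM through smaller subtasks rather than asking for the
    final explanation in a single shot, as the abstract proposes."""
    # Subtask 1: summarise the feature changes across the counterfactual set.
    changes = ask(
        f"Original instance:\n{original}\nCounterfactuals:\n{counterfactuals}\n"
        "List the feature changes that flip the model's outcome."
    )
    # Subtask 2: turn that structured summary into plain language for the user.
    return ask(
        f"Feature changes that flip the outcome:\n{changes}\n"
        "Write a short, plain-language explanation of what the user could "
        "change to obtain a different decision."
    )


def closed_loop_check(explanation: str, counterfactuals: list[dict]) -> str:
    """Closed-loop evaluation: recover the recommended changes from the
    explanation alone, then compare them with the actual counterfactuals."""
    recovered = ask(
        "From this explanation alone, list the concrete feature changes it "
        f"recommends:\n{explanation}"
    )
    return ask(
        f"Recommended changes:\n{recovered}\n"
        f"Actual counterfactuals:\n{counterfactuals}\n"
        "Do they match? Give a brief coherence assessment."
    )

The closed loop captures the abstract's coherence criterion: an explanation is coherent if the counterfactual changes can be recovered from the explanation alone, so the trip from counterfactuals to explanation and back should close.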
Appears in Collections:
Màster Oficial - Fonaments de la Ciència de Dades
Programari - Treballs de l'alumnat (Software - Student works)

Files in This Item:
File | Description | Size | Format
LMM-4-CFs-Explanation-main.zip | Source code | 20.31 MB | zip
tfm_fredes_caceres_arturo.pdf | Thesis report | 633.23 kB | Adobe PDF


This item is licensed under a Creative Commons License.