Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/215166
Title: LLMs for explaining sets of counterfactual examples to final users
Author: Fredes Cáceres, Arturo
Director/Tutor: Vitrià i Marca, Jordi
Keywords: Aprenentatge automàtic
Tractament del llenguatge natural (Informàtica)
Algorismes computacionals
Treballs de fi de màster
Machine learning
Natural language processing (Computer science)
Computer algorithms
Master's thesis
Issue Date: 30-Jun-2024
Abstract: [en] Counterfactual examples have been shown to be a promising method for explaining a machine learning model's decisions: they present the user with variants of their own data, with small shifts that flip the outcome. When a user is presented with a single counterfactual, extracting conclusions from it is straightforward. Yet a single example may not reflect the full scope of actions available to the user, and it may also be infeasible. Conversely, as the number of counterfactuals grows, drawing conclusions from them becomes difficult for people who are not trained in data-analytic thinking. The objective of this work is to evaluate the use of LLMs to produce clear, plain-language explanations of these counterfactual examples for the end user. We propose a method that decomposes the explanation-generation problem into smaller, more manageable tasks to guide the LLM, drawing inspiration from studies on how humans create and communicate explanations. We carry out experiments on a public dataset and propose a closed-loop evaluation method to assess both the coherence of the final explanation with the counterfactuals and the quality of its content. Furthermore, a user study is currently underway to evaluate users' understanding and satisfaction. This work has been submitted for review to the Human-Interpretable Artificial Intelligence (HI-AI) Workshop, held in conjunction with KDD 2024. The submission aims to contribute to the field by presenting findings that improve the interpretability and understanding of ML systems; the review process is expected to provide feedback that will further refine the methodologies and conclusions discussed in this thesis.
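To make the pipeline in the abstract concrete, the following is a minimal Python sketch, assuming tabular data represented as feature dictionaries. The llm helper, the prompts, and the subset-based coherence check are hypothetical stand-ins introduced for illustration; the thesis's actual implementation is in the source archive listed below (LMM-4-CFs-Explanation-main.zip).

def llm(prompt: str) -> str:
    """Hypothetical wrapper around any LLM chat-completion API."""
    raise NotImplementedError("plug in an LLM client here")

def summarize_changes(instance: dict, counterfactuals: list[dict]) -> list[str]:
    """Subtask 1: reduce each counterfactual to the features it changes."""
    summaries = []
    for cf in counterfactuals:
        diffs = [f"{k}: {instance[k]} -> {v}"
                 for k, v in cf.items() if instance.get(k) != v]
        summaries.append("; ".join(diffs))
    return summaries

def explain(instance: dict, counterfactuals: list[dict]) -> str:
    """Subtask 2: turn the change summaries into a plain-language explanation."""
    changes = "\n".join(summarize_changes(instance, counterfactuals))
    return llm(
        "Each line below lists the feature changes in one counterfactual "
        "that would flip a model's decision for a user:\n"
        f"{changes}\n"
        "Explain in plain language, for a non-technical user, what actions "
        "could change the outcome."
    )

def closed_loop_check(explanation: str, counterfactuals: list[dict]) -> bool:
    """Closed loop: recover the changed features from the explanation alone
    and verify none of them is absent from the real counterfactuals."""
    recovered = llm(
        "List only the feature names that this explanation says should "
        f"change, one per line:\n{explanation}"
    )
    mentioned = {line.strip().lower() for line in recovered.splitlines() if line.strip()}
    actual = {k.lower() for cf in counterfactuals for k in cf}
    return mentioned <= actual

The subset check is one simple way to operationalize coherence: an explanation passes only if it does not attribute changes to features the counterfactuals never touch. The thesis's closed-loop evaluation may score coherence differently.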
Note: Final projects of the Master's in Fundamentals of Data Science, Facultat de Matemàtiques, Universitat de Barcelona. Academic year: 2023-2024. Tutor: Jordi Vitrià i Marca
URI: http://hdl.handle.net/2445/215166
Appears in Collections: Màster Oficial - Fonaments de la Ciència de Dades
Programari - Treballs de l'alumnat

Files in This Item:
File                            Description    Size       Format
LMM-4-CFs-Explanation-main.zip  Source code    20.31 MB   zip
tfm_fredes_caceres_arturo.pdf   Thesis report  633.23 kB  Adobe PDF


This item is licensed under a Creative Commons License.