Please use this identifier to cite or link to this item: https://hdl.handle.net/2445/215277
Title: Large language models and causal analysis: zero-shot counterfactuals in hate speech perception
Author: Hernández Jiménez, Sergio
Director/Tutor: Pros Rius, Roger
Vitrià i Marca, Jordi
Keywords: Hate speech
Online social networks
Mathematical statistics
Master's theses
Natural language processing (Computer science)
Issue Date: 30-Jun-2024
Abstract: [en] Detecting hate speech is crucial for maintaining the integrity of social media platforms, as it involves identifying content that denigrates individuals or groups based on their characteristics. However, hate can be expressed differently across demographics and platforms, making its detection a complex task. A significant factor in hate speech is the presence of offense, which alters the perception of hate without altering the core meaning of the text. This study examines how offense affects the perception of hate speech in social media comments. To that end, we employ two distinct causal inference methods to measure the impact of offensive language on the detection of hate speech. The first uses the traditional backdoor criterion, modeling the nodes of the causal graph as features in a machine learning model that predicts hate. This method is demanding from a modeling standpoint, as it requires training a specific model for each node in the causal graph. The second leverages the ability of Large Language Models (LLMs) to generate textual counterfactuals in a zero-shot manner, i.e., without any training or fine-tuning; these counterfactuals are then used to estimate causal effects. Our findings reveal that the estimated causal effect of offense on hate is higher with the LLM-generated counterfactuals than with the backdoor-criterion methodology. Additionally, we train a machine learning model to predict the causal effect directly from a comment.
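
As a rough illustration of the second approach described in the abstract (not the thesis code), the sketch below estimates the average effect of offense on a hate score by comparing each comment with a counterfactual version in which offensive wording is removed. The functions generate_counterfactual and hate_score are hypothetical placeholders: in the actual workflow they would correspond to a zero-shot LLM rewrite and a hate-speech classifier, respectively; here they are replaced by toy lexicon-based stand-ins so the example runs on its own.

# Minimal sketch of counterfactual-based effect estimation (assumed workflow,
# not the author's implementation). generate_counterfactual and hate_score are
# hypothetical stand-ins for a zero-shot LLM rewrite and a hate classifier.

from statistics import mean


def generate_counterfactual(comment: str) -> str:
    """Placeholder: in practice, prompt an LLM (zero-shot) to rewrite the
    comment without offensive language while preserving its core meaning."""
    offensive_terms = {"idiot", "stupid", "trash"}  # toy lexicon
    return " ".join(w for w in comment.split() if w.lower() not in offensive_terms)


def hate_score(comment: str) -> float:
    """Placeholder: in practice, the probability output by a trained
    hate-speech classifier."""
    hateful_terms = {"hate", "idiot", "stupid", "trash"}  # toy lexicon
    words = comment.lower().split()
    return sum(w in hateful_terms for w in words) / max(len(words), 1)


def average_effect_of_offense(comments: list[str]) -> float:
    """Mean difference in hate score between each original comment (offense
    present) and its counterfactual (offense removed)."""
    effects = [hate_score(c) - hate_score(generate_counterfactual(c)) for c in comments]
    return mean(effects)


if __name__ == "__main__":
    sample = [
        "You are an idiot and I hate your opinion",
        "I disagree with this take, it seems wrong to me",
    ]
    print(f"Estimated effect of offense on hate score: {average_effect_of_offense(sample):.3f}")

Averaging the per-comment score differences mirrors how a treatment effect is typically estimated once counterfactual pairs are available; the thesis additionally trains a model to predict this effect directly from a comment.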
Note: Final projects of the Master's in Fundamentals of Data Science (Màster de Fonaments de Ciència de Dades), Faculty of Mathematics, Universitat de Barcelona. Academic year: 2023-2024. Tutors: Roger Pros Rius and Jordi Vitrià i Marca
URI: https://hdl.handle.net/2445/215277
Appears in Collections:Màster Oficial - Fonaments de la Ciència de Dades
Programari - Treballs de l'alumnat

Files in This Item:
File                               Description   Size       Format
tfm_hernandez_jimenez_sergio.pdf   Report        844.47 kB  Adobe PDF
Causal_NLP_TFM-main.zip            Source code   4.96 MB    ZIP


This item is licensed under a Creative Commons License.