Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/182589
Full metadata record
DC metadata record (field: value [language]):
dc.contributor.advisor: Salamó Llorente, Maria
dc.contributor.author: Sánchez Lladó, Ferran
dc.date.accessioned: 2022-01-26T10:19:31Z
dc.date.available: 2022-01-26T10:19:31Z
dc.date.issued: 2021-07-20
dc.identifier.uri: http://hdl.handle.net/2445/182589
dc.description: Final degree project (Treballs Finals de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2021, Advisor: Maria Salamó Llorente [ca]
dc.description.abstract: [en] The presence of social networks in our daily lives has increased, and they have become platforms for sharing information. However, they can also be used to send hate messages or to propagate false news, and users can take advantage of their anonymity to engage in these toxic interactions. Furthermore, some groups of people (minorities) are disproportionately targeted. This raises the problem of how to detect whether a message contains hate speech. A possible solution is to use machine learning models to make this decision; they could also handle the enormous number of texts exchanged daily. There are many approaches to the problem, divided mainly into two groups: the first uses classical algorithms to extract features from the text, while the second uses deep learning models that can capture some of the context, allowing for better predictions. The main objectives of the project are the exploration and comparison of different types of models and techniques. The models are trained on three distinct toxicity datasets from two natural language processing competitions. In general, the best performing model is BERT or SBERT, both based on the deep learning approach, with metric scores much higher than any model based on traditional methods. The results show the vast potential of natural language processing for the detection of hate speech. Although the best models did not have a very high perplexity, a more reliable model could be trained with more training data or new architectures. Even in their current state, the models could be used as an external source to help humans in the decision-making process; moreover, they could filter out the most confident predictions while leaving the rest to the reviewer team. [ca]
dc.format.extent: 67 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng [ca]
dc.rights: memòria (report): cc-nc-nd (c) Ferran Sánchez Lladó, 2021
dc.rights.uri: http://creativecommons.org/licenses/by-sa/3.0/es/
dc.source: Treballs Finals de Grau (TFG) - Enginyeria Informàtica
dc.subject.classification: Xarxes socials [ca]
dc.subject.classification: Discurs de l'odi [ca]
dc.subject.classification: Programari [ca]
dc.subject.classification: Treballs de fi de grau [ca]
dc.subject.classification: Aprenentatge automàtic [ca]
dc.subject.classification: Algorismes computacionals [ca]
dc.subject.classification: Tractament del llenguatge natural (Informàtica) [ca]
dc.subject.other: Social networks [en]
dc.subject.other: Hate speech [en]
dc.subject.other: Computer software [en]
dc.subject.other: Machine learning [en]
dc.subject.other: Computer algorithms [en]
dc.subject.other: Bachelor's theses [en]
dc.subject.other: Natural language processing (Computer science) [en]
dc.title: Analysis of hate speech detection in social media [ca]
dc.type: info:eu-repo/semantics/bachelorThesis [ca]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [ca]
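
The abstract above describes two technical ideas: a classical feature-extraction baseline for toxicity detection, and confidence-based filtering that auto-handles only the most confident predictions while routing the rest to a human reviewer team. The following minimal sketch illustrates both; it is not the thesis code, and the toy corpus and the 0.9 threshold are hypothetical stand-ins for the competition datasets and tuned models the abstract refers to.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus standing in for the three toxicity datasets (hypothetical).
train_texts = [
    "have a great day everyone",
    "this community is full of wonderful people",
    "thanks for sharing, really interesting read",
    "I hate them, they should all disappear",
    "those people are worthless and disgusting",
    "get out of this country, nobody wants you here",
]
train_labels = [0, 0, 0, 1, 1, 1]  # 0 = non-toxic, 1 = toxic

# Classical approach: TF-IDF features fed to a linear classifier.
# (The deep learning approach would replace these features with
# contextual BERT or SBERT sentence embeddings.)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Confidence-based filtering: act on confident predictions automatically,
# route the uncertain ones to the human reviewer team.
THRESHOLD = 0.9  # hypothetical operating point
for text in ["you are all amazing", "I despise every one of them"]:
    probs = model.predict_proba([text])[0]
    label, confidence = int(probs.argmax()), float(probs.max())
    if confidence >= THRESHOLD:
        verdict = "toxic" if label == 1 else "non-toxic"
        print(f"auto-decision: {verdict} ({confidence:.2f}) -> {text!r}")
    else:
        print(f"needs human review ({confidence:.2f}) -> {text!r}")

In a setup like this, the threshold trades automation against reviewer workload: raising it sends more borderline messages to the human team, which matches the abstract's suggestion of filtering out only the most confident predictions.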
Appears in Collections: Treballs Finals de Grau (TFG) - Enginyeria Informàtica

Files in This Item:
File: tfg_ferran_sanchez_llado.pdf
Description: Memòria (thesis report)
Size: 2.19 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.