Word alignment and attention mechanisms in neural machine translation systems

dc.contributor.advisor: Ortiz Martínez, Daniel
dc.contributor.author: Safont Gascón, Pol
dc.date.accessioned: 2022-07-18T08:34:21Z
dc.date.available: 2022-07-18T08:34:21Z
dc.date.issued: 2022-01-24
dc.description: Bachelor's thesis (Treballs Finals de Grau d'Enginyeria Informàtica), Facultat de Matemàtiques, Universitat de Barcelona. Year: 2022. Advisor: Daniel Ortiz Martínez
dc.description.abstract: [en] Deep neural networks have become the state of the art in many complex computational tasks. While they achieve great improvements on several benchmark tasks year after year, they operate as black boxes, making it hard for both data scientists and end users to assess their inner decision mechanisms and trust their results. Although statistical and interpretable methods are widely used to analyze them, these methods do not fully capture the networks' internal mechanisms and are prone to misleading results, creating a need for better tools. As a result, self-explaining methods embedded in the architecture of the neural networks have become a possible alternative, with attention mechanisms among the main new techniques. The project's main focus is the word alignment task: finding the most relevant translation relationships between source and target words in a pair of parallel sentences in different languages. This is a complex task in the natural language processing and machine translation fields, and we analyze the use of attention mechanisms embedded in different encoder-decoder neural networks to extract word-to-word alignments between source and target translations as a byproduct of the translation task. In the first part, we review the background of the machine translation field: the main traditional statistical methods, the neural machine translation approach to the sequence-to-sequence problem, and finally the word alignment task and the attention mechanism. In the second part, we implement a deep neural machine translation model: a recurrent neural network with an encoder-decoder architecture with attention. We then propose an alignment generation mechanism that uses the attention layer to extract and predict source-to-target word-to-word alignments.
Finally, we train the neural networks on an English-French bilingual parallel sentence corpus, analyze the experimental results of the model on the translation and word alignment tasks using a variety of metrics, and suggest improvements and alternatives.
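The abstract describes extracting word-to-word alignments as a byproduct of the attention layer. A minimal sketch of that idea, with hypothetical names (the thesis's actual implementation is in the attached codi.zip): align each target word to the source word receiving the highest attention weight at that decoding step.

```python
def alignments_from_attention(attention, src_tokens, tgt_tokens):
    """Align each target token to the source token with the highest
    attention weight (argmax over the source positions of each row)."""
    pairs = []
    for t, row in enumerate(attention):  # one row of weights per target position
        s = max(range(len(row)), key=row.__getitem__)  # argmax over source
        pairs.append((src_tokens[s], tgt_tokens[t]))
    return pairs

# Toy attention matrix: rows are target words "le chat noir",
# columns are source words "the black cat".
att = [[0.8, 0.1, 0.1],
       [0.1, 0.2, 0.7],
       [0.1, 0.8, 0.1]]
print(alignments_from_attention(att, ["the", "black", "cat"],
                                ["le", "chat", "noir"]))
# [('the', 'le'), ('cat', 'chat'), ('black', 'noir')]
```

Note how the argmax recovers the English-French reordering ("black cat" vs. "chat noir") directly from the weights, with no separate alignment model.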
dc.format.extent: 66 p.
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/2445/187820
dc.language.iso: cat
dc.rights: report: cc-nc-nd (c) Pol Safont Gascón, 2022
dc.rights: code: GPL (c) Pol Safont Gascón, 2022
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.rights.uri: http://www.gnu.org/licenses/gpl-3.0.ca.html
dc.source: Treballs Finals de Grau (TFG) - Enginyeria Informàtica
dc.subject.classification: Neural networks (Computer science)
dc.subject.classification: Machine translation
dc.subject.classification: Computer software
dc.subject.classification: Bachelor's theses
dc.subject.classification: Natural language processing (Computer science)
dc.subject.classification: Machine learning
dc.subject.other: Neural networks (Computer science)
dc.subject.other: Machine translating
dc.subject.other: Computer software
dc.subject.other: Natural language processing (Computer science)
dc.subject.other: Machine learning
dc.subject.other: Bachelor's theses
dc.title: Word alignment and attention mechanisms in neural machine translation systems
dc.type: info:eu-repo/semantics/bachelorThesis

Files

Original bundle

Name: codi.zip
Size: 377.21 KB
Format: ZIP file
Description: Source code

Name: tfg_safont_gascon_pol.pdf
Size: 2.56 MB
Format: Adobe Portable Document Format
Description: Thesis report