Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/187820
Title: Alineació de paraules i mecanismes d'atenció en sistemes de traducció automàtica neuronal [Word alignment and attention mechanisms in neural machine translation systems]
Author: Safont Gascón, Pol
Director/Tutor: Ortiz Martínez, Daniel
Keywords: Neural networks (Computer science)
Machine translating
Computer software
Natural language processing (Computer science)
Machine learning
Bachelor's theses
Issue Date: 24-Jan-2022
Abstract: [en] Deep neural networks have become the state of the art in many complex computational tasks. While they achieve improvements on benchmark tasks year after year, they tend to operate as black boxes, making it hard for both data scientists and end users to assess their internal decision mechanisms and to trust their results. Although statistical and interpretable methods are widely used to analyze them, such methods do not fully capture the networks' internal mechanisms and are prone to misleading results, so better tools are needed. Self-explaining methods embedded in the architecture of the neural network itself have therefore emerged as an alternative, with attention mechanisms among the main new techniques. The project's main focus is the word alignment task: finding the most relevant translation relationships between source and target words in a pair of parallel sentences in different languages. This is a complex task in natural language processing and machine translation, and we analyze how the attention mechanisms embedded in different encoder-decoder neural networks can be used to extract word-to-word alignments between source and target translations as a byproduct of the translation task. In the first part we review the background of the machine translation field: the main traditional statistical methods, the neural machine translation approach to the sequence-to-sequence problem, and finally the word alignment task and the attention mechanism. In the second part we implement a deep neural machine translation model, a recurrent neural network with an encoder-decoder architecture and attention, and we propose an alignment generation mechanism that uses the attention layer to extract and predict source-to-target word-to-word alignments. Finally, we train the networks on an English-French bilingual parallel sentence corpus, analyze the experimental results of the model on the translation and word alignment tasks using a variety of metrics, and suggest improvements and alternatives.
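As a rough illustration of the alignment-as-byproduct idea described in the abstract, the following is a minimal sketch, not the thesis implementation: it assumes an attention weight matrix whose rows correspond to target words and columns to source words, and links each target word to the source position with the highest attention weight. The function name, token lists, and matrix values are hypothetical.

    # Minimal sketch (illustrative assumptions only): derive word-to-word links
    # from an encoder-decoder attention weight matrix by picking, for each
    # target word, the source word that received the most attention.
    import numpy as np

    def alignments_from_attention(attn, source_tokens, target_tokens):
        # attn[i, j]: attention weight on source word j while producing target word i
        links = []
        for i, row in enumerate(attn):
            j = int(np.argmax(row))  # most-attended source position for target word i
            links.append((source_tokens[j], target_tokens[i]))
        return links

    # Toy example: French source "la maison bleue", English target "the blue house"
    attn = np.array([[0.8, 0.1, 0.1],   # "the"   attends mostly to "la"
                     [0.1, 0.2, 0.7],   # "blue"  attends mostly to "bleue"
                     [0.1, 0.7, 0.2]])  # "house" attends mostly to "maison"
    print(alignments_from_attention(attn,
                                    ["la", "maison", "bleue"],
                                    ["the", "blue", "house"]))
    # [('la', 'the'), ('bleue', 'blue'), ('maison', 'house')]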
Note: Bachelor's degree final project (Treballs Finals de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2022, Advisor: Daniel Ortiz Martínez
URI: http://hdl.handle.net/2445/187820
Appears in Collections:Treballs Finals de Grau (TFG) - Enginyeria Informàtica
Programari - Treballs de l'alumnat

Files in This Item:
File                       Description    Size       Format
codi.zip                   Source code    377.21 kB  zip
tfg_safont_gascon_pol.pdf  Thesis report  2.62 MB    Adobe PDF


This item is licensed under a Creative Commons License.