Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/185653
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Sans Gispert, Eloi | -
dc.contributor.author | Sánchez Albaladejo, Raül | -
dc.date.accessioned | 2022-05-17T08:13:59Z | -
dc.date.available | 2022-05-17T08:13:59Z | -
dc.date.issued | 2021-06-18 | -
dc.identifier.uri | http://hdl.handle.net/2445/185653 | -
dc.description | Bachelor's degree final project in Mathematics (Treballs Finals de Grau de Matemàtiques), Facultat de Matemàtiques, Universitat de Barcelona. Year: 2021. Advisor: Eloi Sans Gispert | ca
dc.description.abstract | [en] In this work we describe what feedforward neural networks are and how they are used. We explain the elements that make them up: layers, depth, weights, biases, learning rate and activation function. We then show that feedforward neural networks are universal approximators of functions under certain conditions, and we study two different ways to prove this. On the one hand, the Kolmogorov-Sprecher pathway tells us that feedforward neural networks with three layers, with $n$ nodes in the first layer, $2n+1$ nodes in the second layer and $m$ nodes in the last layer, are universal approximators of continuous functions from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$, as long as the activation function is monotonically increasing and of class $\operatorname{Lip}\left[\frac{\ln 2}{\ln(2N+2)}\right]$. On the other hand, the second pathway shows that feedforward neural networks are universal approximators of any measurable function as long as the activation function of the network is a squashing function. Finally, we explain how to determine the different elements that configure a feedforward neural network. We define the cost function and explain that, by minimizing it with the stochastic gradient descent method and a suitable learning rate, we can compute the weights and biases. We end by studying different activation functions, seeing how they affect neural networks, and explaining the vanishing gradient problem. | ca
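Two reading aids follow; neither is part of the original record. First, the Kolmogorov-Sprecher pathway mentioned in the abstract rests on the classical Kolmogorov superposition formula, in which the $2n+1$ outer terms correspond to the $2n+1$ nodes of the hidden layer (the exact function classes and constants used in the thesis may differ):

\[
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
\]

Second, a minimal, self-contained Python sketch of the ingredients the abstract lists: a three-layer feedforward network with a sigmoid squashing activation, a quadratic cost, and stochastic gradient descent updates of the weights and biases. All names, sizes and hyperparameters are illustrative assumptions, not code from the thesis (for $n = 1$ the Kolmogorov-Sprecher count $2n+1$ would give 3 hidden nodes; a few more are used here purely to ease optimization):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # A squashing function: monotonically increasing, with range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Bounded by 1/4, which is one source of the vanishing gradient
    # problem when many sigmoid layers are stacked.
    s = sigmoid(z)
    return s * (1.0 - s)

# Three layers: 1 input node -> 10 hidden nodes -> 1 (linear) output node.
W1 = rng.normal(size=(10, 1)); b1 = np.zeros((10, 1))
W2 = rng.normal(size=(1, 10)); b2 = np.zeros((1, 1))

# Toy target to approximate: f(x) = sin(pi * x) on [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(1, 200))
Y = np.sin(np.pi * X)

eta = 0.1  # learning rate
for epoch in range(1000):
    for i in rng.permutation(X.shape[1]):  # stochastic: one sample at a time
        x, y = X[:, [i]], Y[:, [i]]
        # Forward pass.
        z1 = W1 @ x + b1
        a1 = sigmoid(z1)
        z2 = W2 @ a1 + b2  # linear output for regression
        # Backward pass for the quadratic cost C = (z2 - y)^2 / 2.
        delta2 = z2 - y
        delta1 = (W2.T @ delta2) * sigmoid_prime(z1)
        # Stochastic gradient descent updates of weights and biases.
        W2 -= eta * delta2 @ a1.T; b2 -= eta * delta2
        W1 -= eta * delta1 @ x.T;  b1 -= eta * delta1

pred = W2 @ sigmoid(W1 @ X + b1) + b2
print("mean squared error:", float(np.mean((pred - Y) ** 2)))
```

Running the sketch prints a small mean squared error, illustrating the approximation behaviour that the universal approximation results guarantee given enough hidden nodes.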
dc.format.extent | 53 p. | -
dc.format.mimetype | application/pdf | -
dc.language.iso | cat | ca
dc.rights | cc-by-nc-nd (c) Raül Sánchez Albaladejo, 2021 | -
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ | *
dc.source | Treballs Finals de Grau (TFG) - Matemàtiques | -
dc.subject.classification | Xarxes neuronals (Informàtica) | ca
dc.subject.classification | Treballs de fi de grau | -
dc.subject.classification | Intel·ligència artificial | ca
dc.subject.classification | Aprenentatge automàtic | ca
dc.subject.classification | Anàlisi numèrica | ca
dc.subject.other | Neural networks (Computer science) | en
dc.subject.other | Bachelor's theses | -
dc.subject.other | Artificial intelligence | en
dc.subject.other | Machine learning | en
dc.subject.other | Numerical analysis | en
dc.title | Les xarxes neuronals de propagació cap endavant. Una aproximació matemàtica [Feedforward neural networks. A mathematical approach] | ca
dc.type | info:eu-repo/semantics/bachelorThesis | ca
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca
Appears in Collections: Treballs Finals de Grau (TFG) - Matemàtiques

Files in This Item:
File | Description | Size | Format
tfgsanchez_albaladejo_raul.pdf | Memòria (report) | 806.84 kB | Adobe PDF


This item is licensed under a Creative Commons License.