Authors: Fayos i Pérez, Victor (author); Arpírez Vega, Julio César; Ortiz Martínez, Daniel (tutor)
Date issued: 2024-06-30
Date available: 2024-09-16
Handle: https://hdl.handle.net/2445/215162
Description: Final project of the Master's in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Academic year: 2023-2024. Tutor: Daniel Ortiz Martínez.

Abstract [en]: This study investigates the potential of using smaller, locally hosted large language models (LLMs) to perform specific tasks traditionally handled by large commercial LLMs, such as OpenAI's ChatGPT 3.5. With the growing integration of LLMs in corporate environments, concerns over cost, data privacy, and security have become prominent. Focusing on question answering and text summarization tasks, we compare the performance of several smaller models, including Flan T5 XXL, Phi 3 Mini, and Yi 1.5, against ChatGPT 3.5. The two experiments, one on question answering and one on text summarization, show that the tested models can perform these tasks at the same level as the state-of-the-art ChatGPT 3.5. We conclude that, depending on the intended use of the LLM, different models may be the best fit, since response structure and verbosity vary considerably across models.

Extent: 35 p.
Format: application/pdf
Language: eng
Rights (text): cc-by-nc-nd (c) Victor Fayos i Pérez, 2024 — http://creativecommons.org/licenses/by-nc-nd/3.0/es/
Rights (code): GPL (c) Victor Fayos i Pérez, 2024 — http://www.gnu.org/licenses/gpl-3.0.ca.html
Subjects [ca]: Tractament del llenguatge natural (Informàtica); Sistemes informàtics interactius; Bots (Programes d'ordinador); Treballs de fi de màster
Subjects [en]: Natural language processing (Computer science); Interactive computer systems; Internet bots (Computer software); Master's thesis
Title: Comparative analysis of open source large language models
Type: info:eu-repo/semantics/masterThesis
Access: info:eu-repo/semantics/openAccess