Title: Evaluating Large Language Models as computer programming teaching assistants
Author: Pol Pujadas, Maria Magdalena
Tutors: Ortiz Martínez, Daniel; Puertas i Prats, Eloi
Date issued: 2024-06-30
Date available: 2024-09-26
URI: https://hdl.handle.net/2445/215400
Description: Final project of the Master's Degree in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Academic year: 2023-2024. Tutors: Daniel Ortiz Martínez and Eloi Puertas i Prats.
Abstract [en]: The principal aim of this project is to analyse how different Large Language Models (LLMs) perform in diverse contexts and situations in the field of education. In particular, we assess the suitability of LLMs for specific tasks in algorithmic subjects within computer science studies. The tasks under analysis are designed to assist both students and teachers. With regard to students, we assess the models' capacity to implement a specified piece of code. With regard to teachers, we evaluate the models' ability to identify the objective of the code submitted by students and to detect potential errors in it, enabling students to become more self-reliant and to seek assistance from teachers only when necessary. To evaluate these tasks, we have considered eight models. Two closed-source models were evaluated: GPT-3.5 and GPT-4. Six open-source models were also considered: Llama2, Codellama instruct, Llama3, Platypus2, Deepseek Coder and Qwen-1.5.
Extent: 80 p.
Format: application/pdf
Language: eng
Rights: cc-by-nc-nd (c) Maria Magdalena Pol Pujadas, 2024; code: GPL (c) Maria Magdalena Pol Pujadas, 2024
License URIs: http://creativecommons.org/licenses/by-nc-nd/3.0/es/ ; http://www.gnu.org/licenses/gpl-3.0.ca.html
Subjects: Natural language processing (Computer science); Interactive computer systems; Computer programming; Master's thesis
Document type: info:eu-repo/semantics/masterThesis
Access rights: info:eu-repo/semantics/openAccess