Document type

Master's thesis

Publication date

Publication license

cc-by-nc-nd (c) Maria Magdalena Pol Pujadas, 2024
Please always use this identifier to cite or link this document: https://hdl.handle.net/2445/215400

Evaluating Large Language Models as computer programming teaching assistants

Abstract

[en] The principal aim of this project is to analyse how different Large Language Models (LLMs) operate in diverse contexts and situations in the field of education. In particular, we aim to assess the suitability of LLMs for specific tasks within algorithmic subjects in computer science studies. The tasks under analysis are designed to assist both students and teachers. With regard to students, we will assess the capacity of the models to implement a specified piece of code. With regard to teachers, we will evaluate the models' ability to identify the purpose of the submitted code and the potential errors introduced by students in their code, enabling students to become more self-directed and to seek assistance from teachers when necessary. To evaluate these tasks, we have considered eight models. Two closed-source models were evaluated: GPT-3.5 and GPT-4. Six open-source models were also considered: Llama2, Codellama instruct, Llama3, Platypus2, Deepseek Coder and Qwen-1.5.
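The three assistant tasks described in the abstract (implementing code for students, and identifying a submission's purpose and errors for teachers) can be sketched as prompt templates. The templates below are purely illustrative assumptions; the actual prompts, task wording, and evaluation protocol used in the thesis are not reproduced here.

```python
# Hypothetical prompt templates for the three teaching-assistant tasks
# evaluated in the thesis. These are illustrative sketches, not the
# prompts actually used in the work.

def implement_prompt(spec: str) -> str:
    """Student-facing task: ask the model to implement a specification."""
    return f"Implement the following specification in Python:\n{spec}"

def identify_purpose_prompt(code: str) -> str:
    """Teacher-facing task: ask the model what the submitted code is meant to do."""
    return f"Describe the goal of this student-submitted code:\n{code}"

def find_errors_prompt(code: str) -> str:
    """Teacher-facing task: ask the model to locate errors in student code."""
    return f"List any errors in this student-submitted code:\n{code}"

# Example: building the error-finding prompt for a buggy submission.
buggy = "def add(a, b):\n    return a - b"
prompt = find_errors_prompt(buggy)
print(prompt.splitlines()[0])
```

Each template would then be sent to one of the eight models under comparison through whatever inference interface that model exposes (a chat API for the closed-source models, local inference for the open-source ones).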

Description

Final projects of the Master's in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Academic year: 2023-2024. Supervisors: Daniel Ortiz Martínez and Eloi Puertas i Prats

Citation

POL PUJADAS, Maria Magdalena. Evaluating Large Language Models as computer programming teaching assistants. [Accessed: 6 December 2025]. [Available at: https://hdl.handle.net/2445/215400]
