Advisor: Igual Muñoz, Laura
Author: Eguzkitza Zalakain, Jokin
Date issued: 2025-06-30
Date available: 2025-09-16
URI: https://hdl.handle.net/2445/223176
Description: Final project of the Master's in Fundamentals of Data Science, Faculty of Mathematics, Universitat de Barcelona. Year: 2025. Tutors: Laura Igual Muñoz and Pablo Álvarez.

Abstract: This thesis studies how to evaluate ReAct agents that use external tools. ReAct agents are AI agents that combine reasoning with tool use (function calls), allowing large language models to perform tasks that require access to external sources of information. These agents are becoming more common in real applications, but evaluating their behaviour remains a challenge. Using LangGraph and LangChain, three AI agents are built on locally deployed LLMs served with Ollama. The agents use open-source tools such as Wikipedia, Wikidata, Yahoo Finance and PDF readers. To evaluate them, the project combines rule-based checks with RAGAS metrics to measure tool use, answer quality, factual correctness and context use. The results show that prompt design is crucial for guiding an agent's behaviour, and that standard question-answering metrics are not always sufficient to measure how well an agent works. This work offers a simple, practical way to test LLM agents. All corresponding code notebooks can be found in the following repository: https://github.com/Jokinn9/Evaluating-Tool-Augmented-ReAct-Language-Agents

Extent: 37 p.
Format: application/pdf
Language: English
Rights: CC BY-NC-ND (c) Jokin Eguzkitza Zalakain, 2025 (http://creativecommons.org/licenses/by-nc-nd/3.0/es/); code: GPL (c) Jokin Eguzkitza Zalakain, 2025 (http://www.gnu.org/licenses/gpl-3.0.ca.html)
Subjects: Natural language processing (Computer science); Artificial intelligence; Intelligent agents (Computer software); Master's thesis
Title: Evaluating Tool-Augmented ReAct Language Agents
Type: info:eu-repo/semantics/masterThesis
Access: info:eu-repo/semantics/openAccess
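As an illustration of the rule-based checks the abstract mentions, the sketch below verifies that an agent's run trace actually invoked the expected tools. The trace format and function names are assumptions for illustration only; the thesis's actual checks are in the linked repository.

```python
# Hypothetical sketch of a rule-based tool-use check (not the thesis's actual
# code): given a recorded agent trace, confirm every expected tool was called.

def expected_tools_called(trace, expected):
    """Return True if every tool name in `expected` appears as a
    tool_call step somewhere in the agent's trace."""
    called = {step["tool"] for step in trace if step.get("type") == "tool_call"}
    return set(expected) <= called

# Example trace a ReAct-style agent run might produce (assumed shape)
trace = [
    {"type": "reasoning", "text": "I should look this up first."},
    {"type": "tool_call", "tool": "wikipedia", "input": "ReAct agents"},
    {"type": "tool_call", "tool": "yahoo_finance", "input": "AAPL"},
]

print(expected_tools_called(trace, ["wikipedia"]))         # True
print(expected_tools_called(trace, ["wikipedia", "pdf"]))  # False: pdf never called
```

Checks like this complement answer-quality metrics such as RAGAS, which score the final response but do not by themselves verify that the right tools were used along the way.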