
Document type

Bachelor's thesis (Treball de fi de grau)

Publication licence

Report: cc-nc-nd (c) José Javier Iglesias Murrieta, 2025
Please always use this identifier to cite or link to this document: https://hdl.handle.net/2445/223822

Exploring a multimodal foundation model on breast cancer visual question answering


Abstract

Cancer remains a leading cause of mortality worldwide, with breast cancer being the most frequently diagnosed. Early and accurate detection is critical to improving patient outcomes, and recent advances in artificial intelligence (AI) have demonstrated significant potential in supporting this goal. Machine learning (ML) and deep learning (DL) techniques have been widely applied to medical imaging tasks, enhancing diagnostic accuracy across modalities such as mammography, ultrasound, and magnetic resonance imaging (MRI). However, most models require task-specific training and large annotated datasets, limiting their scalability and generalizability. In response to these limitations, foundation models (FMs) have emerged as a promising shift in AI research. These large-scale models are pre-trained on diverse data and can be adapted to a wide range of downstream tasks, including multimodal medical applications. Their capacity for zero-shot and few-shot learning presents opportunities for improving diagnostic support in data-constrained settings. This research explores the application of FMs in breast cancer analysis, specifically assessing their ability to perform visual question answering (VQA) on the BCDR-F01 and BreakHis breast imaging datasets. The study involves selecting a suitable vision-language FM and evaluating zero-shot and fine-tuning strategies on breast imaging data. Results demonstrate that while FMs show promising zero-shot performance and flexibility, their effectiveness depends heavily on model scale, fine-tuning approach, and task formulation, especially in complex multimodal tasks such as VQA. Instruction tuning and multimodal alignment emerged as critical factors for improving clinical relevance. This research highlights the potential of FMs to serve as integrative tools for breast cancer analysis, leveraging multimodal data with minimal retraining.
Nonetheless, challenges remain in optimizing performance for clinical deployment, particularly around interpretability, domain-specific adaptation, and computational cost.
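The abstract mentions evaluating zero-shot and fine-tuned VQA on breast imaging datasets. As a minimal sketch of what scoring such a run could look like, the snippet below computes exact-match accuracy over closed-ended answers (e.g. "benign"/"malignant"); the function names and the normalisation step are illustrative assumptions, not taken from the thesis.

```python
def normalize(answer: str) -> str:
    """Lowercase an answer and strip surrounding whitespace and
    trailing punctuation, so 'Malignant.' matches 'malignant'."""
    return answer.strip().lower().rstrip(".?!")

def vqa_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of model predictions that exactly match the
    reference answer after normalisation."""
    if not predictions:
        return 0.0
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(predictions)

# Hypothetical zero-shot outputs on three image/question pairs.
preds = ["Malignant.", "benign", "malignant"]
golds = ["malignant", "benign", "benign"]
print(vqa_accuracy(preds, golds))
```

Exact match is a deliberately strict choice; open-ended medical VQA is often also scored with softer metrics (e.g. token overlap), which this sketch does not attempt.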

Description

Bachelor's theses in Computer Engineering (Treballs Finals de Grau d'Enginyeria Informàtica), Facultat de Matemàtiques, Universitat de Barcelona. Year: 2025. Supervisor: Oliver Díaz

Citation

IGLESIAS MURRIETA, José Javier. Exploring a multimodal foundation model on breast cancer visual question answering. [Accessed: 25 November 2025]. [Available at: https://hdl.handle.net/2445/223822]
