Please use this identifier to cite or link to this item:
https://hdl.handle.net/2445/223822
Title: | Exploring a multimodal foundation model on breast cancer visual question answering
Author: | Iglesias Murrieta, José Javier |
Director/Tutor: | Díaz, Oliver |
Keywords: | Breast cancer; Diagnostic imaging; Machine learning; Medical imaging; Imaging systems in medicine; Computer software; Bachelor's theses
Issue Date: | 10-Jun-2025 |
Abstract: | Cancer remains a leading cause of mortality worldwide, with breast cancer being the most frequently diagnosed. Early and accurate detection is critical to improving patient outcomes, and recent advances in artificial intelligence (AI) have demonstrated significant potential in supporting this goal. Machine learning (ML) and deep learning (DL) techniques have been widely applied to medical imaging tasks, enhancing diagnostic accuracy across modalities such as mammography, ultrasound, and magnetic resonance imaging (MRI). However, most models require task-specific training and large annotated datasets, limiting their scalability and generalizability. In response to these limitations, foundation models (FMs) have emerged as a promising shift in AI research. These large-scale models are pre-trained on diverse data and can be adapted to a wide range of downstream tasks, including multimodal medical applications. Their capacity for zero-shot and few-shot learning presents opportunities for improving diagnostic support in data-constrained settings. This research explores the application of FMs to breast cancer analysis, specifically assessing their ability to perform visual question answering (VQA) on the BCDR-F01 and BreakHis breast imaging datasets. The study involves selecting a suitable vision-language FM and evaluating zero-shot and fine-tuning strategies on breast imaging data. Results demonstrate that while FMs show promising zero-shot performance and flexibility, their effectiveness depends heavily on model scale, fine-tuning approach, and task formulation, especially in complex multimodal tasks such as VQA. Instruction tuning and multimodal alignment emerged as critical factors for improving clinical relevance. This research highlights the potential of FMs to serve as integrative tools for breast cancer analysis, leveraging multimodal data with minimal retraining. Nonetheless, challenges remain in optimizing performance for clinical deployment, particularly around interpretability, domain-specific adaptation, and computational cost.
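The zero-shot VQA setup described in the abstract can be illustrated with a minimal sketch. The example below is an assumption for illustration only: it uses a general-purpose vision-language model (BLIP VQA via the Hugging Face `transformers` library) and a hypothetical image path and question; the record does not specify which model, prompts, or preprocessing the thesis actually used.

```python
# Minimal zero-shot VQA sketch (illustrative; model choice, file path, and
# question are assumptions, not the configuration used in the thesis).
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Load a publicly available vision-language VQA model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Hypothetical input: one breast image and a clinical-style question.
image = Image.open("example_mammogram.png").convert("RGB")
question = "Is the lesion in this image benign or malignant?"

# Encode the image-question pair and generate a short free-text answer.
inputs = processor(images=image, text=question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

Fine-tuning or instruction tuning on question-answer pairs derived from BCDR-F01 or BreakHis, as the abstract describes, would replace this frozen zero-shot inference with task-specific adaptation.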
Note: | Bachelor's Degree Final Project (Treball Final de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2025, Director: Oliver Díaz
URI: | https://hdl.handle.net/2445/223822 |
Appears in Collections: | Treballs Finals de Grau (TFG) - Enginyeria Informàtica; Programari - Treballs de l'alumnat
Files in This Item:
File | Description | Size | Format
---|---|---|---
TFG_Iglesias_Murrieta_José_Javier.pdf | Thesis report | 13.24 MB | Adobe PDF
codi.zip | Source code | 1.2 MB | ZIP
This item is licensed under a Creative Commons License.