Please use this identifier to cite or link to this item: https://hdl.handle.net/2445/223419
Full metadata record
DC Field: Value
dc.contributor.advisor: Díaz, Oliver
dc.contributor.author: Blandón Tórrez, David
dc.date.accessioned: 2025-09-29T10:06:38Z
dc.date.available: 2025-09-29T10:06:38Z
dc.date.issued: 2025-06-10
dc.identifier.uri: https://hdl.handle.net/2445/223419
dc.description: Final Degree Project (Treballs Finals de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona. Year: 2025. Advisor: Oliver Díaz
dc.description.abstract: Breast cancer remains the most prevalent malignancy and a leading cause of mortality in women worldwide. Early and accurate molecular characterization is critical for prognosis and treatment selection. Molecular subtyping, traditionally guided by invasive tissue biopsies and immunohistochemical analysis, enables personalized therapies but is costly, time-consuming, and not universally feasible. Non-invasive alternatives leveraging medical imaging, particularly mammography, have gained research interest for molecular classification. This study evaluates the potential of Transformer-based deep learning (DL) models to classify molecular subtypes of invasive ductal carcinoma using mammographic images exclusively from the public Chinese Mammography Database (CMMD). A systematic analysis was conducted to compare three state-of-the-art Transformer architectures, the Vision Transformer (ViT), the Swin Transformer (Swin), and the Multi-Axis Vision Transformer (MaxViT), against a traditional CNN model, ResNet-101. The experimental methodology addresses key challenges such as class imbalance through weighted loss functions, oversampling, data augmentation, and robust cross-validation strategies. Results demonstrate that transformer-based models consistently outperform the CNN baseline. ViT achieved the highest average AUC ($0.635 \pm 0.016$) and balanced accuracy ($0.385 \pm 0.042$) on the test sets, compared to ResNet-101 (AUC: $0.563 \pm 0.03$; balanced accuracy: $0.322 \pm 0.062$). Statistical analysis confirmed significant performance differences ($p < 0.05$), supporting the hypothesis that transformer self-attention mechanisms better model global spatial relationships in mammograms. Despite these advances, overall performance remains below clinically acceptable thresholds, highlighting the inherent difficulty of non-invasive molecular subtyping based solely on imaging and the need for larger datasets or multimodal integration. Nevertheless, this work demonstrates the potential of transformer-based approaches for accessible, non-invasive breast cancer characterization, establishing a robust foundation for future AI-driven advancements in medical imaging.
dc.format.extent: 95 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.rights: memòria (thesis report): cc-nc-nd (c) David Blandón Tórrez, 2025
dc.rights: codi (source code): GPL (c) David Blandón Tórrez, 2025
dc.rights.uri: http://www.gnu.org/licenses/gpl-3.0.ca.html
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.source: Treballs Finals de Grau (TFG) - Enginyeria Informàtica
dc.subject.classification: Càncer de mama
dc.subject.classification: Aprenentatge profund
dc.subject.classification: Mamografia
dc.subject.classification: Sistemes classificadors (Intel·ligència artificial)
dc.subject.classification: Programari
dc.subject.classification: Treballs de fi de grau
dc.subject.other: Breast cancer
dc.subject.other: Deep learning (Machine learning)
dc.subject.other: Mammography
dc.subject.other: Learning classifier systems
dc.subject.other: Computer software
dc.subject.other: Bachelor's theses
dc.title: Evaluation of Transformer-Based Models for Molecular Subtype Classification of Invasive Ductal Breast Carcinoma Using Mammography
dc.type: info:eu-repo/semantics/bachelorThesis
dc.rights.accessRights: info:eu-repo/semantics/openAccess
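
The abstract describes fine-tuning pretrained transformer backbones with class-weighted loss functions to counter molecular-subtype imbalance. The following is a minimal illustrative sketch of that recipe, assuming PyTorch/torchvision and hypothetical class counts; it is an assumption-laden outline, not the thesis code distributed in codi.zip:

    # Minimal sketch (assumes PyTorch + torchvision; NOT the code in codi.zip):
    # a pretrained ViT fine-tuned with class-weighted cross-entropy to counter
    # molecular-subtype imbalance, as the abstract describes.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_SUBTYPES = 4  # hypothetical label set, e.g. Luminal A/B, HER2, triple-negative

    # ImageNet-pretrained ViT-B/16; replace the classification head for the task.
    model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
    model.heads.head = nn.Linear(model.heads.head.in_features, NUM_SUBTYPES)

    # Inverse-frequency class weights: rarer subtypes contribute more to the loss.
    class_counts = torch.tensor([260.0, 610.0, 180.0, 120.0])  # hypothetical counts
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    criterion = nn.CrossEntropyLoss(weight=weights)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One optimization step on a batch of mammogram crops (N, 3, 224, 224)."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Per-fold AUC and balanced accuracy, the metrics reported in the abstract, could then be computed on held-out data, for example with scikit-learn's roc_auc_score and balanced_accuracy_score.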
Appears in Collections:Treballs Finals de Grau (TFG) - Enginyeria Informàtica
Programari - Treballs de l'alumnat

Files in This Item:
File                           Description                Size       Format
TFG_Blandón_Tórrez_David.pdf   Memòria (thesis report)    19.98 MB   Adobe PDF
codi.zip                       Codi font (source code)    1.72 MB    zip


This item is licensed under a Creative Commons License.