Please use this identifier to cite or link to this item: https://hdl.handle.net/2445/220790
Full metadata record
DC Field | Value | Language
dc.contributor.author | Guzman Requena, Alejandro | -
dc.contributor.author | Márquez Vara, Noah | -
dc.contributor.author | Díaz, Oliver | -
dc.date.accessioned | 2025-05-05T08:14:36Z | -
dc.date.available | 2025-05-05T08:14:36Z | -
dc.date.issued | 2025-04-10 | -
dc.identifier.citation | Alejandro Guzman, Noah Márquez, Oliver Díaz, "From hand-crafted radiomics to deep learning: evaluating breast cancer classification methods in mammograms," Proc. SPIE 13411, Medical Imaging 2025: Imaging Informatics, 134110U (10 April 2025); https://doi.org/10.1117/12.3046672 | ca
dc.identifier.uri | https://hdl.handle.net/2445/220790 | -
dc.description.abstract | This study evaluates the performance of several machine learning (ML) and deep learning (DL) models for breast cancer tumor classification in mammography (MG) images, training them on the BCDR dataset. It compares the use of radiomics-based features in ML models, including Random Forest, Support Vector Machines, and XGBoost, with two deep learning approaches using Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). Radiomics features were extracted from segmented regions of interest (ROIs) and used to train the ML models, with hyperparameter tuning and cross-validation to optimize the results. The CNN and ViT models were trained on the mammograms using the lesion locations given by the ROI segmentations, to explore the impact of tumor region localization assistance on classification performance. Performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC) and the training execution time of all experiments (performed on the same device). The results indicate that, while all methods achieve good performance on the training dataset (mean AUC-ROC scores around 0.9), they exhibit substantial performance drops when tested on external data. Among the evaluated models, the ViT achieves the highest overall AUC-ROC in both internal (0.93) and external (0.68) validation, surpassing the CNNs and radiomics-based ML models. However, the ViT also incurs the highest computational cost, highlighting a trade-off between accuracy and training time. These findings underscore the need for multicenter, multi-vendor data to improve model generalization and reliability, and for continued refinement of advanced architectures, such as transformers, to optimize breast cancer lesion classification in clinical settings. | ca
dc.format.extent | 10 p. | -
dc.format.mimetype | application/pdf | -
dc.language.iso | eng | ca
dc.publisher | SPIE | ca
dc.relation.isformatof | Postprint version of the paper published at: https://doi.org/10.1117/12.3046672 | -
dc.relation.ispartof | Conference paper in: Proc. SPIE 13411, Medical Imaging 2025: Imaging Informatics; 134110U (10 April 2025) | -
dc.relation.ispartofseries | Proceedings SPIE | ca
dc.relation.ispartofseries | 13411 | ca
dc.relation.uri | https://doi.org/10.1117/12.3046672 | -
dc.rights | (c) SPIE, 2025 | -
dc.source | Comunicacions a congressos (Matemàtiques i Informàtica) | -
dc.subject.classification | Aprenentatge automàtic | -
dc.subject.classification | Mamografia | -
dc.subject.classification | Càncer de mama | ca
dc.subject.classification | Xarxes neuronals convolucionals | ca
dc.subject.other | Machine learning | -
dc.subject.other | Mammography | -
dc.subject.other | Breast cancer | en
dc.subject.other | Convolutional neural networks | en
dc.title | From hand-crafted radiomics to deep learning: evaluating breast cancer classification methods in mammograms | en
dc.type | info:eu-repo/semantics/conferenceObject | en
dc.type | info:eu-repo/semantics/acceptedVersion | -
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca
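
As a concrete illustration of the radiomics-plus-ML branch of the comparison described in the abstract above, the following is a minimal Python sketch. It assumes a hypothetical CSV file ("radiomics_features.csv") of per-lesion radiomics features with a binary "malignant" label, and uses scikit-learn's Random Forest with a small grid search, stratified cross-validation, and AUC-ROC scoring; it does not reproduce the authors' actual BCDR preprocessing, feature set, or hyperparameter ranges.

# Minimal sketch: train a Random Forest on hand-crafted radiomics features with
# cross-validated hyperparameter tuning scored by AUC-ROC.
# "radiomics_features.csv" and the "malignant" label column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Radiomics features extracted from segmented ROIs, one row per lesion.
df = pd.read_csv("radiomics_features.csv")
X = df.drop(columns=["malignant"])
y = df["malignant"]

# Grid search over a small hyperparameter grid, scored by AUC-ROC on stratified folds.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="roc_auc",
    cv=cv,
)
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Mean cross-validated AUC-ROC:", round(search.best_score_, 3))

Swapping RandomForestClassifier for an SVM or XGBoost classifier, or scoring the fitted model on a held-out external dataset with sklearn.metrics.roc_auc_score, follows the same pattern.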
Appears in Collections: Comunicacions a congressos (Matemàtiques i Informàtica)

Files in This Item:
File | Description | Size | Format
2025 Guzman SPIE.pdf | Guzman SPIE | 3.21 MB | Adobe PDF

