Please use this identifier to cite or link to this item:
http://hdl.handle.net/2445/213461
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Gkontra, Polyxeni | - |
dc.contributor.advisor | Lekadir, Karim, 1977- | - |
dc.contributor.author | Herron Mulet, Claudia | - |
dc.date.accessioned | 2024-06-20T08:03:09Z | - |
dc.date.available | 2024-06-20T08:03:09Z | - |
dc.date.issued | 2023-06-30 | - |
dc.identifier.uri | http://hdl.handle.net/2445/213461 | - |
dc.description | Final projects of the Master in Foundations of Data Science, Facultat de Matemàtiques, Universitat de Barcelona. Academic year: 2022-2023. Tutors: Polyxeni Gkontra and Karim Lekadir | ca |
dc.description.abstract | Over the past few years, there has been a rise in the use of information and communication technologies (ICTs) and electronic health records (EHRs) within the healthcare system. This increase has led to a substantial accumulation of medical data, opening up promising prospects for personalized medicine. One notable application is the creation of disease risk assessment tools designed to precisely estimate an individual's predisposition to developing certain illnesses. These tools empower healthcare professionals to conduct more targeted trials, closely monitor high-risk subjects, and implement timely interventions. However, as these systems start to be tested in real-world scenarios, recent studies reveal that they may worsen the situation of historically underprivileged groups in our society. These discriminatory biases may have many causes: unequal access to healthcare, false beliefs about biological differences, non-diverse datasets, machine learning (ML) models optimizing for the majority while disregarding underrepresented communities, etc. As a result, it becomes crucial to design and implement metrics and techniques to quantify and mitigate discriminatory biases. In this work, we propose a comprehensive methodology that encompasses data wrangling, model evaluation, and the monitoring of both model performance and potential disparities. Building upon existing research on fairness in machine learning, we adapt the fairness framework specifically for disease prediction, considering that some of the protected features also contribute to increased disease risk. Furthermore, we apply both in-processing and post-processing mitigation techniques to a classifier trained on a large-scale dataset. By experimenting with two diseases of increasing prevalence, Primary Hypertension and Parkinson's Disease, we assess the effectiveness of these techniques in reducing discriminatory biases and ensuring equitable outcomes. | ca |
dc.format.extent | 57 p. | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | eng | ca |
dc.rights | cc-by-nc-nd (c) Claudia Herron Mulet, 2023 | - |
dc.rights | code: MIT (c) Claudia Herron Mulet, 2023 | - |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/es/ | * |
dc.rights.uri | https://opensource.org/license/mit | * |
dc.source | Màster Oficial - Fonaments de la Ciència de Dades | - |
dc.subject.classification | Aprenentatge automàtic | - |
dc.subject.classification | Intel·ligència artificial en medicina | - |
dc.subject.classification | Pronòstic mèdic | - |
dc.subject.classification | Treballs de fi de màster | - |
dc.subject.other | Machine learning | - |
dc.subject.other | Medical artificial intelligence | - |
dc.subject.other | Prognosis | - |
dc.subject.other | Master's thesis | - |
dc.title | Towards fair machine learning in healthcare: ensuring non-discrimination for disease prediction | ca |
dc.type | info:eu-repo/semantics/masterThesis | ca |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca |
Appears in Collections: | Programari - Treballs de l'alumnat; Màster Oficial - Fonaments de la Ciència de Dades |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
codi_font_herron_mulet_claudia.zip | Source code | 155.04 kB | zip | View/Open |
tfm_herron_mulet_claudia.pdf | Thesis report | 4.11 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License