Please use this identifier to cite or link to this item:
https://hdl.handle.net/2445/223909

| Title: | Differentially Private Machine Learning: Implementation and Analysis of Gradient and Dataset Perturbation Techniques |
| Author: | Mantilla Carreño, Juan Pablo |
| Director/Tutor: | Statuto, Nahuel |
| Keywords: | Machine learning; Data protection; Big data; Computer software; Bachelor's theses; Gaussian processes |
| Issue Date: | 10-Jun-2025 |
| Abstract: | The increasing use of machine learning poses significant privacy risks, especially when sensitive data is involved, and conventional anonymization methods have proven insufficient. Differential privacy is a rigorous framework for data privacy that provides strong mathematical guarantees, and applying it to machine learning offers a principled way to mitigate these risks. We present the theoretical foundations of these concepts and then empirically investigate, implement, and analyse two techniques for integrating differential privacy into machine learning pipelines. The first technique, dataset perturbation, adds calibrated Gaussian noise directly to the training data, after which any standard machine learning pipeline can be used. The second, gradient perturbation, centres on differentially private stochastic gradient descent (DP-SGD), which injects noise into the gradients during training (minimal illustrative sketches of both mechanisms follow this record). For the comparative study, we developed a multi-class classification architecture using a real-world, sensitive medical dataset derived from the MIMIC-IV database. Model performance was evaluated against a non-private baseline, using metrics appropriate to the class imbalance, such as Macro F1-score and Macro one-vs-one (OVO) AUC. The results confirm the privacy-utility trade-off in the models developed: stronger privacy guarantees consistently reduce model utility. For the specific context of this study, gradient perturbation yielded a slightly better overall balance of utility and privacy. Ultimately, the thesis provides strong evidence for the feasibility of training useful and formally private machine learning models on real-world medical data, demonstrating that a practical "sweet spot" between privacy and performance can be found. |
| Note: | Bachelor's thesis (Treballs Finals de Grau d'Enginyeria Informàtica), Facultat de Matemàtiques, Universitat de Barcelona, Year: 2025, Director: Nahuel Statuto |
| URI: | https://hdl.handle.net/2445/223909 |
| Appears in Collections: | Treballs Finals de Grau (TFG) - Enginyeria Informàtica; Treballs Finals de Grau (TFG) - Matemàtiques; Programari - Treballs de l'alumnat |
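The abstract's first technique, dataset perturbation, can be illustrated with a short sketch: each record's L2 norm is clipped to bound sensitivity, then i.i.d. Gaussian noise calibrated by the classical Gaussian-mechanism bound σ ≥ √(2 ln(1.25/δ)) · Δ₂ / ε is added before any standard pipeline is trained. The following is a minimal NumPy sketch of that idea, not the thesis's actual code; the clipping step, function names, and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_sigma(epsilon: float, delta: float, l2_sensitivity: float) -> float:
    """Classical Gaussian-mechanism calibration (valid for epsilon < 1):
    sigma >= sqrt(2 * ln(1.25 / delta)) * Delta_2 / epsilon."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon

def perturb_dataset(X: np.ndarray, epsilon: float, delta: float,
                    clip_norm: float, seed: int = 0) -> np.ndarray:
    """Clip each row's L2 norm to clip_norm (bounding per-record sensitivity),
    then add i.i.d. Gaussian noise; the result feeds any standard pipeline."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = gaussian_sigma(epsilon, delta, l2_sensitivity=clip_norm)
    rng = np.random.default_rng(seed)
    return X_clipped + rng.normal(0.0, sigma, size=X.shape)

# Privatize the features once, then train any non-private model on X_private.
X_train = np.random.default_rng(1).random((1000, 20))  # stand-in feature matrix
X_private = perturb_dataset(X_train, epsilon=1.0, delta=1e-5, clip_norm=1.0)
```

The exact sensitivity analysis, and whether labels are also perturbed, depend on the thesis's setup, which this record does not detail.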
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| tfg_Mantilla_Carreño_Juan_Pablo.pdf | Thesis report | 1.37 MB | Adobe PDF |
| codi.zip | Source code | 38.56 kB | ZIP |
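The second technique, gradient perturbation via DP-SGD (Abadi et al., 2016), clips each per-example gradient to an L2 bound C and adds Gaussian noise with standard deviation σ·C to the summed gradient before the averaged update. Below is a minimal sketch of a single update step, assuming per-example gradients are already available as rows of a matrix; in practice a library such as Opacus computes these and handles the privacy accounting, and the thesis's own implementation may differ.

```python
import numpy as np

def dpsgd_step(weights: np.ndarray, per_example_grads: np.ndarray,
               lr: float, clip_norm: float, noise_multiplier: float,
               rng: np.random.Generator) -> np.ndarray:
    """One DP-SGD update: clip each per-example gradient to L2 norm <= clip_norm,
    sum, add Gaussian noise with std = noise_multiplier * clip_norm,
    then average over the batch and take a descent step."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=weights.shape)
    return weights - lr * noisy_sum / per_example_grads.shape[0]

# Toy usage: 32 per-example gradients over 10 parameters.
rng = np.random.default_rng(0)
w = np.zeros(10)
grads = rng.normal(size=(32, 10))
w = dpsgd_step(w, grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

The final (ε, δ) guarantee comes from accounting the noise multiplier, sampling rate, and number of steps (e.g., via the moments accountant); that bookkeeping is omitted here.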
This item is licensed under a Creative Commons License.
