Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/168552
Full metadata record
dc.contributor.author: Brando Guillaumes, Axel
dc.contributor.author: Torres, Damià
dc.contributor.author: Rodriguez-Serrano, José A.
dc.contributor.author: Vitrià i Marca, Jordi
dc.date.accessioned: 2020-07-14T08:42:01Z
dc.date.available: 2020-07-14T08:42:01Z
dc.date.issued: 2020-07-02
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/2445/168552
dc.description.abstract: With the commoditization of machine learning, more and more off-the-shelf models are available as part of code libraries or cloud services. Typically, data scientists and other users apply these models as "black boxes" within larger projects. In the case of regressing a scalar quantity, such APIs typically offer a predict() function, which outputs the estimated target variable (often referred to as ŷ or, in code, y_hat). However, many real-world problems may require some sort of deviation interval or uncertainty score rather than a single point-wise estimate. In other words, a mechanism is needed with which to answer the question "How confident is the system about that prediction?" Motivated by the lack of this characteristic in most predictive APIs designed for regression purposes, we propose a method that adds an uncertainty score to every black-box prediction. Since the underlying model is not accessible, and therefore standard Bayesian approaches are not applicable, we adopt an empirical approach and fit an uncertainty model using a labelled dataset (x, y) and the outputs ŷ of the black box. In order to be able to use any predictive system as a black box and adapt to its complex behaviours, we propose three variants of an uncertainty model based on deep networks. The first adds a heteroscedastic noise component to the black-box output, the second predicts the residuals of the black box, and the third performs quantile regression using deep networks. Experiments using real financial data that contain an in-production black-box system and two public datasets (energy forecasting and biology responses) illustrate and quantify how uncertainty scores can be added to black-box outputs.
dc.format.extent: 13 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isformatof: Reproduction of the document published at: https://doi.org/10.1109/ACCESS.2020.3006711
dc.relation.ispartof: IEEE Access, 2020, vol. 8, p. 121344-121356
dc.relation.uri: https://doi.org/10.1109/ACCESS.2020.3006711
dc.rights: cc-by (c) Brando, Axel et al., 2020
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es
dc.source: Articles published in journals (Matemàtiques i Informàtica)
dc.subject.classification: Aprenentatge automàtic [Machine learning]
dc.subject.classification: Xarxes neuronals (Informàtica) [Neural networks (Computer science)]
dc.subject.classification: Intel·ligència artificial [Artificial intelligence]
dc.subject.other: Machine learning
dc.subject.other: Neural networks (Computer science)
dc.subject.other: Artificial intelligence
dc.title: Building uncertainty models on top of black-box predictive APIs
dc.type: info:eu-repo/semantics/article
dc.type: info:eu-repo/semantics/publishedVersion
dc.identifier.idgrec: 702678
dc.date.updated: 2020-07-14T08:42:01Z
dc.rights.accessRights: info:eu-repo/semantics/openAccess
Appears in Collections: Articles published in journals (Matemàtiques i Informàtica)

Files in This Item:
File: 702678.pdf (7.05 MB, Adobe PDF)


This item is licensed under a Creative Commons License.
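The abstract's third variant, quantile regression on top of a black-box predict() output, can be sketched in a few lines. The paper trains deep networks; the sketch below substitutes a plain linear model trained with the same pinball (quantile) loss, purely for brevity. All names, the synthetic data, and the stand-in black box are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    # Pinball loss: under-predictions cost tau, over-predictions cost (1 - tau).
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

def fit_quantile(X, y, tau, lr=0.05, steps=2000):
    # Linear quantile regression by subgradient descent on the pinball loss.
    X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        diff = y - X1 @ w
        grad = -X1.T @ np.where(diff > 0, tau, tau - 1) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(500, 1))
y_true = 2 * x[:, 0]
y = y_true + rng.normal(0, 0.1 + 0.5 * x[:, 0])  # heteroscedastic noise
y_hat = y_true  # stand-in for the black box's predict() output

# The uncertainty model only sees the inputs x, the black-box outputs y_hat,
# and the labels y -- never the black box's internals.
feats = np.hstack([x, y_hat[:, None]])
w_lo = fit_quantile(feats, y, tau=0.1)
w_hi = fit_quantile(feats, y, tau=0.9)

X1 = np.hstack([feats, np.ones((len(feats), 1))])
lo, hi = X1 @ w_lo, X1 @ w_hi
coverage = np.mean((y >= lo) & (y <= hi))  # nominal band is 80%
```

The interval [lo, hi] then accompanies each point prediction ŷ as its deviation interval; because the noise here grows with x, the fitted band widens with x, which a single point estimate cannot express.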