Please use this identifier to cite or link to this item: http://hdl.handle.net/2445/192920
Title: Instilling moral value alignment by means of multi-objective reinforcement learning
Author: Rodriguez Soto, Manel
Serramia, Marc
López Sánchez, Maite
Rodríguez-Aguilar, Juan A. (Juan Antonio)
Keywords: Artificial intelligence
Reinforcement learning
Ethics
Moral aspects
Issue Date: 24-Jan-2022
Publisher: Springer
Abstract: AI research faces the challenge of ensuring that autonomous agents learn to behave ethically, that is, in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists in formalising moral values and value-aligned behaviour on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, to ease the handling of an agent's individual and ethical objectives. The second step consists in designing an environment wherein the agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates this two-step approach. In cases where value-aligned behaviour is possible, the algorithm produces a learning environment in which the agent learns a value-aligned behaviour.
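To make the abstract's two-step idea concrete, the following is a minimal, hypothetical sketch of how an agent's individual reward and an ethical reward can be combined into a single learning signal for a standard reinforcement learning agent. The toy chain environment, the weight W_ETHICAL, and all other names are illustrative assumptions introduced here for exposition; they are not the authors' algorithm or implementation.

import numpy as np

# Illustrative sketch (not the paper's algorithm): scalarise an individual
# reward and an ethical reward into one signal, weighting the ethical
# objective heavily enough that value-aligned behaviour dominates.

N_STATES, N_ACTIONS = 5, 2   # tiny chain world: action 0 = "unethical shortcut", 1 = "ethical step"
W_ETHICAL = 10.0             # assumed scalarisation weight, chosen so ethics dominates

def step(state, action):
    """Toy transition: both actions move right; the shortcut pays more
    individually but incurs an ethical penalty."""
    next_state = min(state + 1, N_STATES - 1)
    r_individual = 2.0 if action == 0 else 1.0   # shortcut is individually tempting
    r_ethical = -1.0 if action == 0 else 0.0     # ...but violates the moral norm
    done = next_state == N_STATES - 1
    return next_state, r_individual, r_ethical, done

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    rng = np.random.default_rng(0)
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            action = int(rng.integers(N_ACTIONS)) if rng.random() < epsilon else int(q[state].argmax())
            next_state, r_ind, r_eth, done = step(state, action)
            # Single-objective signal: individual reward plus weighted ethical reward.
            r = r_ind + W_ETHICAL * r_eth
            q[state, action] += alpha * (r + gamma * q[next_state].max() - q[state, action])
            state = next_state
    return q

if __name__ == "__main__":
    q = train()
    # With a sufficiently large W_ETHICAL, the greedy policy picks the
    # ethical action (1) in every non-terminal state despite its lower
    # individual payoff.
    print("Greedy actions per state:", q.argmax(axis=1))

Under this kind of scalarisation, a sufficiently large ethical weight makes the greedy policy prefer the ethical action in every state, loosely mirroring the abstract's claim that, when value-aligned behaviour is possible, the learning environment can be designed so that the agent learns it.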
Note: Reproduction of the document published at: https://doi.org/10.1007/s10676-022-09635-0
It is part of: Ethics And Information Technology, 2022, vol. 24
URI: http://hdl.handle.net/2445/192920
Related resource: https://doi.org/10.1007/s10676-022-09635-0
ISSN: 1388-1957
Appears in Collections: Articles published in journals (Mathematics and Computer Science)

Files in This Item:
File: 715848.pdf
Size: 1.86 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.