Document type

Article

Version

Published version

Publication date

2022

Publication license

CC BY (c) Manel Rodríguez Soto et al., 2022
Please always use this identifier to cite or link to this document: https://hdl.handle.net/2445/192920

Instilling moral value alignment by means of multi-objective reinforcement learning

Journal title

Ethics and Information Technology

Journal ISSN

1388-1957

Abstract

AI research is being challenged with ensuring that autonomous agents learn to behave ethically, namely in alignment with moral values. Here, we propose a novel way of tackling the value alignment problem as a two-step process. The first step consists of formalising moral values and value-aligned behaviour based on philosophical foundations. Our formalisation is compatible with the framework of (Multi-Objective) Reinforcement Learning, easing the handling of an agent's individual and ethical objectives. The second step consists of designing an environment wherein an agent learns to behave ethically while pursuing its individual objective. We leverage our theoretical results to introduce an algorithm that automates this two-step approach. Whenever value-aligned behaviour is possible, our algorithm produces a learning environment in which the agent learns a value-aligned behaviour.
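The paper's own formalisation and environment-design algorithm are not reproduced here. As a point of reference for the two-objective framing the abstract describes, the minimal sketch below shows the standard linear-scalarisation device from multi-objective reinforcement learning: an individual reward R_0 and an ethical reward R_E are combined as R = R_0 + w * R_E and a plain Q-learning agent is trained on the scalarised signal. The toy chain environment, the reward values, and the weight W_ETHICAL are all hypothetical illustrations, not the paper's construction.

```python
import numpy as np

# Illustrative sketch only: a tiny two-objective setting where an agent
# receives an individual reward r_ind and an ethical reward r_eth,
# combined by linear scalarisation. Environment, rewards, and weight
# are hypothetical; the paper's ethical-embedding algorithm differs.

N_STATES, N_ACTIONS = 5, 2   # toy chain environment
W_ETHICAL = 2.0              # assumed scalarisation weight (hypothetical)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    """Toy dynamics: action 1 moves right and earns the individual
    reward; action 0 stays put and earns the ethical reward."""
    if a == 1:
        s_next = min(s + 1, N_STATES - 1)
        r_ind, r_eth = 1.0, 0.0   # individual objective
    else:
        s_next = s
        r_ind, r_eth = 0.0, 0.5   # ethical objective
    return s_next, r_ind + W_ETHICAL * r_eth  # scalarised reward

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[s].argmax())
        s_next, r = step(s, a)
        # standard Q-learning update on the scalarised reward
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)
```

With a large enough W_ETHICAL, the greedy policy prefers the ethical action; this is exactly the kind of trade-off between individual and ethical objectives that the paper's environment-design step resolves in a principled way.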

Citation

RODRÍGUEZ SOTO, Manel, SERRAMIA, Marc, LÓPEZ SÁNCHEZ, Maite, RODRÍGUEZ-AGUILAR, Juan A. (Juan Antonio). Instilling moral value alignment by means of multi-objective reinforcement learning. _Ethics and Information Technology_. 2022. Vol. 24. [accessed: 9 January 2026]. ISSN: 1388-1957. Available at: https://hdl.handle.net/2445/192920
