Authors: De Chiara, Alessandro; Manna, Ester; Singh, Shubhranshu
Date accessioned: 2025-12-09
Date available: 2025-12-09
Date issued: 2025
URI: https://hdl.handle.net/2445/224749
Abstract: We theoretically investigate whether AI developers or AI operators should be liable for the harm that AI systems may cause when they hallucinate. We find that the optimal liability framework may vary over time with the evolution of AI technology, and that making AI operators liable can be desirable only if it induces monitoring of the AI systems. We also highlight non-trivial relationships between welfare and reputational concerns, human supervision ability, and the accuracy of the technology. Our results have implications for regulatory design and business strategies.
Extent: 44 p.
Format: application/pdf
Language: eng
Rights: cc-by-nc-nd, (c) De Chiara et al., 2025
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects (ca): Intel·ligència artificial; Teoria d'operadors; Disseny de sistemes
Subjects (en): Artificial intelligence; Operator theory; System design
Title: Mitigating Generative AI Hallucinations
Type: info:eu-repo/semantics/workingPaper
Access rights: info:eu-repo/semantics/openAccess