Peter Railton on Moral Learning and Metaethics in AI Systems
Future of Life Institute Podcast - A podcast by Future of Life Institute
From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning enables us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics.

Topics discussed in this episode include:
- Moral epistemology
- The potential relevance of metaethics to AI alignment
- The importance of moral learning in AI systems
- Peter Railton's, Derek Parfit's, and Peter Singer's metaethical views

You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/

Timestamps:
0:00 Intro
3:05 Does metaethics matter for AI alignment?
22:49 Long-reflection considerations
26:05 Moral learning in humans
35:07 The need for moral learning in artificial intelligence
53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit
1:38:50 The need for engagement between philosophers and the AI alignment community
1:40:37 Where to find Peter's work

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.