Connor Leahy on AI Safety and Why the World is Fragile

Future of Life Institute Podcast - A podcast by Future of Life Institute

Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
