“Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes” by 80000_Hours

EA Forum Podcast (All audio) - A podcast by EA Forum Team


By Robert Wiblin | Watch on YouTube | Listen on Spotify | Read transcript

Episode summary

"Whatever your skills are, whatever your interests are, we're out of the world where you have to be a conceptual self-starter, theorist mathematician, or a policy person — we're into the world where whatever your skills are, there is probably a way to use them in a way that is helping make maybe humanity's most important event ever go better." — Holden Karnofsky

For years, working on AI safety usually meant theorising about the 'alignment problem' or trying to convince other people to give a damn. Even for those who found a way to help, the work was frustrating and offered little feedback. According to Anthropic's Holden Karnofsky, this situation has now reversed completely. There is now a large amount of useful, concrete, shovel-ready work with clear goals and deliverables. Holden thinks people haven't appreciated the scale of the shift, and wants everyone to see the wide range of 'well-scoped object-level work' they could personally help with, in both technical and non-technical areas.

In today's interview, Holden — previously cofounder and CEO of Open Philanthropy — lists 39 projects he's excited to see happening, including: [...]

---

Outline:
(00:19) Episode summary
(04:17) The interview in a nutshell
(04:41) 1. The AI race is NOT a coordination problem -- too many actors genuinely want to race
(05:30) 2. Anthropic demonstrates it's possible to be both competitive and safety focused
(06:56) 3. Success is possible even with terrible execution -- but we shouldn't count on it
(07:54) 4. Concrete work in AI safety is now tractable and measurable
(09:21) Highlights
(09:24) The world is handling AGI about as badly as possible
(12:33) Is rogue AI takeover easy or hard?
(15:34) The AGI race isn't a coordination failure
(21:27) Lessons from farm animal welfare we can use in AI
(25:40) The case for working at Anthropic
(30:31) Overrated AI risk: Persuasion
(35:06) Holden thinks AI companions are bad news

---

First published: October 31st, 2025
Source: https://forum.effectivealtruism.org/posts/Ee43ztGBsks3X5byb/holden-karnofsky-on-dozens-of-amazing-opportunities-to-make

---

Narrated by TYPE III AUDIO.
