35 Episodes

  1. Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)

    Published: 12/09/2025
  2. Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)

    Published: 05/09/2025
  3. Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)

    Published: 29/08/2025
  4. Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)

    Published: 22/08/2025
  5. Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)

    Published: 15/08/2025
  6. Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)

    Published: 12/08/2025
  7. Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)

    Published: 01/08/2025
  8. Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)

    Published: 18/07/2025
  9. Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI - Episode 2)

    Published: 04/07/2025
  10. Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]

    Published: 20/06/2025
  11. David Duvenaud - What are Humans Even Good For in Five Years? [Early Experience of AGI - Episode 1]

    Published: 06/06/2025
  12. Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]

    Published: 23/05/2025
  13. Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]

    Published: 09/05/2025
  14. Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]

    Published: 25/04/2025
  15. Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]

    Published: 11/04/2025
  16. Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]

    Published: 28/03/2025
  17. Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]

    Published: 14/03/2025
  18. Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

    Published: 24/01/2025
  19. Connor Leahy - Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]

    Published: 10/01/2025
  20. Andrea Miotti - A Human-First AI Future [AGI Governance, Episode 4]

    Published: 27/12/2024

What should be the trajectory of intelligence beyond humanity? The Trajectory podcast covers realpolitik on artificial general intelligence and the posthuman transition - asking tech, policy, and AI research leaders the hard questions about what comes after humanity, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.