AI Safety Fundamentals: Alignment

A podcast by BlueDot Impact

83 Episodes

  1. Constitutional AI: Harmlessness from AI Feedback

    Published: 19/07/2024
  2. Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

    Published: 19/07/2024
  3. Illustrating Reinforcement Learning from Human Feedback (RLHF)

    Published: 19/07/2024
  4. Chinchilla’s Wild Implications

    Published: 17/06/2024
  5. Deep Double Descent

    Published: 17/06/2024
  6. Intro to Brain-Like-AGI Safety

    Published: 17/06/2024
  7. Eliciting Latent Knowledge

    Published: 17/06/2024
  8. Toy Models of Superposition

    Published: 17/06/2024
  9. Least-To-Most Prompting Enables Complex Reasoning in Large Language Models

    Published: 17/06/2024
  10. Discovering Latent Knowledge in Language Models Without Supervision

    Published: 17/06/2024
  11. ABS: Scanning Neural Networks for Back-Doors by Artificial Brain Stimulation

    Published: 17/06/2024
  12. Two-Turn Debate Doesn’t Help Humans Answer Hard Reading Comprehension Questions

    Published: 17/06/2024
  13. Imitative Generalisation (AKA ‘Learning the Prior’)

    Published: 17/06/2024
  14. An Investigation of Model-Free Planning

    Published: 17/06/2024
  15. Low-Stakes Alignment

    Published: 17/06/2024
  16. Gradient Hacking: Definitions and Examples

    Published: 17/06/2024
  17. Empirical Findings Generalize Surprisingly Far

    Published: 17/06/2024
  18. Compute Trends Across Three Eras of Machine Learning

    Published: 13/06/2024
  19. Worst-Case Thinking in AI Alignment

    Published: 29/05/2024
  20. Public by Default: How We Manage Information Visibility at Get on Board

    Published: 12/05/2024

Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment