Best AI papers explained
A podcast by Enoch H. Kang
506 Episodes
The Art of Scaling Reinforcement Learning Compute for LLMs
Published: 16/10/2025
A small number of samples can poison LLMs of any size
Published: 16/10/2025
Dual Goal Representations
Published: 14/10/2025
Welcome to the Era of Experience
Published: 14/10/2025
Value Flows: Flow-Based Distributional Reinforcement Learning
Published: 14/10/2025
Self-Adapting Language Models
Published: 12/10/2025
The Markovian Thinker
Published: 12/10/2025
Moloch’s Bargain: emergent misalignment when LLMs compete for audiences
Published: 12/10/2025
Transformer Predictor Dynamics and Task Diversity
Published: 11/10/2025
Base models know how to reason, thinking models learn when
Published: 11/10/2025
Spectrum tuning: Post-training for distributional coverage and in-context steerability
Published: 11/10/2025
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Published: 11/10/2025
MLPs Learn In-Context on Regression and Classification tasks
Published: 11/10/2025
Is Pre-Training Truly Better than Meta-Learning?
Published: 11/10/2025
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Published: 11/10/2025
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs
Published: 09/10/2025
Learning dynamics of LLM finetuning
Published: 09/10/2025
Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF
Published: 09/10/2025
OpenAI Agent Builder and n8n: Orchestrating Reasoning Versus Automating Process
Published: 08/10/2025
Training Agents Inside of Scalable World Models
Published: 08/10/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
