Best AI papers explained
A podcast by Enoch H. Kang
550 Episodes
Bayesian Concept Bottlenecks with LLM Priors
Published: 15/05/2025
In-Context Parametric Inference: Point or Distribution Estimators?
Published: 15/05/2025
Enough Coin Flips Can Make LLMs Act Bayesian
Published: 15/05/2025
Bayesian Scaling Laws for In-Context Learning
Published: 15/05/2025
Posterior Mean Matching Generative Modeling
Published: 15/05/2025
Can Generative AI Solve Your In-Context Learning Problem? A Martingale Perspective
Published: 15/05/2025
Dynamic Search for Inference-Time Alignment in Diffusion Models
Published: 15/05/2025
Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective
Published: 12/05/2025
Leaked Claude Sonnet 3.7 System Instruction Tuning
Published: 12/05/2025
Converging Predictions with Shared Information
Published: 11/05/2025
Test-Time Alignment Via Hypothesis Reweighting
Published: 11/05/2025
Rethinking Diverse Human Preference Learning through Principal Component Analysis
Published: 11/05/2025
Active Statistical Inference
Published: 10/05/2025
Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework
Published: 10/05/2025
AI-Powered Bayesian Inference
Published: 10/05/2025
Can Unconfident LLM Annotations Be Used for Confident Conclusions?
Published: 09/05/2025
Predictions as Surrogates: Revisiting Surrogate Outcomes in the Age of AI
Published: 09/05/2025
Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
Published: 09/05/2025
How to Evaluate Reward Models for RLHF
Published: 09/05/2025
LLMs as Judges: Survey of Evaluation Methods
Published: 09/05/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
