EA - Probabilities, Prioritization, and 'Bayesian Mindset' by Violet Hour

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probabilities, Prioritization, and 'Bayesian Mindset', published by Violet Hour on April 4, 2023 on The Effective Altruism Forum.

Sometimes we use explicit probabilities as an input into our decision-making, and take such probabilities to offer something like a literal representation of our uncertainty. This practice of assigning explicit probabilities to claims is sociologically unusual, and is clearly inspired by Bayesianism as a theory of how ideally rational agents should behave. But we are not such agents, and our theories of ideal rationality don't straightforwardly entail that we should be assigning explicit probabilities to claims. This leaves the following question open: when, in practice, should the use of explicit probabilities inform our actions?

Holden sketches one proto-approach to this question, under the name Bayesian Mindset. 'Bayesian Mindset' describes a longtermist EA (LEA)-ish approach to quantifying uncertainty, and to using the resulting quantification to make decisions in a way that's close to Expected Value Maximization. Holden gestures at an 'EA cluster' of principles related to thought and action, and discusses its costs and benefits. His post contains much that I agree with:

- We both agree that Bayesian Mindset is undervalued by the rest of society, and shows promise as a way to clarify important disagreements.
- We both agree that there's "a large gulf" between the theoretical underpinnings of Bayesian epistemology and the practices prescribed by Bayesian Mindset.
- We both agree on a holistic conception of what Bayesian Mindset is: "an interesting experiment in gaining certain benefits [rather than] the correct way to make decisions."

However, my sentiments part with Holden's on certain issues, and so I use his post as a springboard for this essay.
Here's the roadmap:

- In (§1), I introduce my question, and outline two cases under which it appears (to varying degrees) helpful to assign explicit probabilities to guide decision-making.
- I discuss complications with evaluating how best to approximate the theoretical Bayesian ideal in practice (§2).
- With the earlier sections in mind, I discuss two potential implications for cause prioritization (§3).
- I elaborate on one potential downside of a community culture that emphasizes the use of explicit subjective probabilities (§4).
- I conclude in (§5).

1. Philosophy and Practice

First, I want to look at the relationship between longtermist theory and practical longtermist prioritization. Some terminology: I'll sometimes speak of 'longtermist grantmaking' to refer to grants directed towards areas like (for example) biorisk and AI risk. This terminology is imperfect, but it nevertheless gestures at the sociological cluster with which I'm concerned.

Very few of us are explicitly calculating the expected value of our career decisions, donations, and grants. That said, our decisions are clearly informed by a background sense of 'our' explicit probabilities and explicit utilities. In The Case for Strong Longtermism, Greaves and MacAskill defend deontic (action-guiding) strong longtermism as a theory which applies to "the most important decision situations facing agents today", and they support this thesis with reference to an estimate of the expected lives that can be saved via longtermist interventions.
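As a toy illustration of the Expected Value Maximization that Bayesian Mindset approximates, here is a minimal sketch. The action names, probabilities, and utilities below are hypothetical placeholders, not figures from the post or from Greaves and MacAskill:

```python
# Minimal sketch of Expected Value Maximization: assign explicit
# probabilities and utilities to each action's possible outcomes,
# then pick the action whose probability-weighted utility is highest.
# All numbers here are illustrative placeholders.

def expected_value(action):
    """Sum of probability-weighted utilities over possible outcomes."""
    return sum(p * u for p, u in action["outcomes"])

actions = {
    # (probability, utility) pairs for each outcome
    "intervention_a": {"outcomes": [(0.9, 10), (0.1, 0)]},     # EV = 9.0
    "intervention_b": {"outcomes": [(0.01, 1000), (0.99, 0)]},  # EV = 10.0
}

best = max(actions, key=lambda name: expected_value(actions[name]))
print(best)  # intervention_b: a low-probability, high-payoff option wins
```

The sketch also illustrates why the procedure is contentious in practice: a tiny probability attached to a large enough payoff can dominate the choice, which is exactly the kind of result that makes the literal use of explicit probabilities worth scrutinizing.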
Greaves and MacAskill note that their analysis takes for granted a "subjective decision theory", which assumes, for the purposes of an agent deciding which actions are best, that the agent "is in a position to grasp the states, acts and consequences that are involved in modeling her decision", and who then decides what to do, in large part, based on their explicit understanding of "the states, acts and consequences".

Of course, this is a philosophy paper, rather than a document on how to do grantmaking or policy in pra...
