EA is about maximization, and maximization is perilous by Holden Karnofsky

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is about maximization, and maximization is perilous, published by Holden Karnofsky on September 2, 2022 on The Effective Altruism Forum.

This is not a contest submission; I don’t think it’d be appropriate for me to enter this contest given my position as a CEA funder. This also wasn’t really inspired by the contest - I’ve been thinking about writing something like this for a little while - but I thought the contest provided a nice time to put it out.

This piece generally errs on the concise side, gesturing at intuitions rather than trying to nail down my case thoroughly. As a result, there’s probably some nuance I’m failing to capture, and hence more ways than usual in which I would revise my statements upon further discussion.

For most of the past few years, I’ve had the following view on EA criticism: Most EA criticism is - and should be - about the community as it exists today, rather than about the “core ideas.” The core ideas are just solid. Do the most good possible - should we really be arguing about that?

Recently, though, I’ve been thinking more about this and realized I’ve changed my mind. I think “do the most good possible” is an intriguing idea, a powerful idea, and an important idea - but it’s also a perilous idea if taken too far. My basic case for this is that:

1. If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X - a terrible idea unless you’re really sure you have the right X. (This idea vaguely echoes some concerns about AI alignment, e.g., powerfully maximizing not-exactly-the-right-thing is something of a worst-case event.)

2. EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.

The upshot is that I think the core ideas of EA present constant temptations to create problems. Fortunately, I think EA mostly resists these temptations - but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls. As EA grows, this could be a fragile situation. I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have - a constant current that we need to swim against.

How things would go if we were maximally “hard-core”

The general conceptual points behind my critique - “maximization is perilous unless you’re sure you have the right maximand” and “EA is centrally about maximizing something that we can’t define or measure and have massive disagreements about” - are hopefully reasonably clear and sufficiently explained above. To make this more concrete, I’ll list just some examples of things I think would be major problems if being “EA” meant embracing the core ideas of EA without limits or reservations.

We’d have a bitterly divided community, with clusters having diametrically opposed goals.
For example: Many EAs think that “do the most good possible” ends up roughly meaning “Focus on the implications of your actions for the long-run future.” Within this set, some EAs essentially endorse: “The more persons there are in the long-run future, the better it is,” while others endorse something close to the opposite: “The more persons there are in the long-run future, the worse it is.”[1] In practice, it seems that people in these two camps try hard to find comm...
