EA - Against EA-Community-Received-Wisdom on Practical Sociological Questions by Michael Cohen

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum.

In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good.

In my view, this rot comes from incorrect answers to certain practical sociological questions, like:

- How important for success is having experience or having been apprenticed to someone experienced?
- Is the EA Forum a good tool for collaborative truth-seeking?
- How helpful is peer review for collaborative truth-seeking?

Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions?
Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right?

I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe.) I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community.) I've read the Sequences, and I thought they were mostly good. (I'm not "inferentially distant".) I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term.)

Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere, and Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "If there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer." That is my claim; now let me defend it, not just by pointing at Amazon and claiming that they agree with me.

High-Level Claims

Claim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you are.

There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try.
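To make the statistical intuition behind Claim 1 concrete, here is a minimal sketch (not from the original post) of the wisdom-of-crowds effect: when individual estimates are independent and unbiased, the error of the crowd's average shrinks roughly as 1/sqrt(n). The true value, noise level, and crowd sizes below are illustrative assumptions.

```python
# Minimal sketch: error of a crowd's average estimate vs. crowd size,
# assuming independent, unbiased individual estimates (hypothetical numbers).
import random
import statistics

TRUE_VALUE = 100.0   # the quantity the crowd is trying to estimate
NOISE_SD = 20.0      # spread of each individual's error
TRIALS = 2000        # simulated crowds per crowd size

def mean_crowd_error(n: int) -> float:
    """Average absolute error of the crowd's mean estimate, over many simulated crowds."""
    errors = []
    for _ in range(TRIALS):
        estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n)]
        errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))
    return statistics.mean(errors)

for n in (1, 10, 100, 1000):
    print(f"crowd size {n:4d}: mean error ~ {mean_crowd_error(n):.2f}")
```

The sketch assumes errors are independent; correlated errors (for example, a shared blind spot) would weaken the aggregation benefit, which is one reason it matters whether people are actually trying to get the answer right.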
If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it.

Let's now turn to Meta-2 from above.

Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....
