EA - Two directions for research on forecasting and decision making by Paal Fredrik Skjørten Kvarberg
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two directions for research on forecasting and decision making, published by Paal Fredrik Skjørten Kvarberg on March 11, 2023 on The Effective Altruism Forum.

An assessment of methods to improve individual and institutional decision-making, and some ideas for further research.

Forecasting tournaments have shown that a set of methods for good judgement can be used by organisations to reliably improve the accuracy of individual and group forecasts on a range of questions in several domains. However, such methods are not widely used by individuals, teams, or institutions in practical decision-making.

In what follows, I review findings from forecasting tournaments and some other relevant studies. In light of this research, I identify a set of methods that can be used to improve the accuracy of individuals, teams, or organisations. I then note some limitations of our knowledge of methods for good judgement and identify two obstacles to the wide adoption of these methods in practical decision-making. The two obstacles are:

Costs. Methods for good judgement can be time-consuming and complicated to use in practical decision-making, and it is unclear how much so. Decision-makers don't know whether the gains in accuracy from adopting particular methods outweigh the costs, because they don't know the costs.

Relevance. Rigorous forecasting questions are not always relevant to the decisions at hand, and it is not always clear to decision-makers if and when they can connect rigorous forecasting questions to important decisions.

I look at projects and initiatives to overcome these obstacles, and note two directions for research on forecasting and decision-making that seem particularly promising to me. They are:

Expected value assessments. Research into the costs of applying specific epistemic methods in decision-making, and assessments of the expected value of applying those practices in various decision-making contexts and domains (including values other than accuracy). Also, development of practices and tools to reduce costs.

Quantitative models of relevance and reasoning. Research into ways of quantitatively modelling the relevance of rigorous forecasting questions to the truth of decision-relevant propositions through formal Bayesian networks.

After introducing these areas of research, I describe how I think new knowledge on these topics can lead to improvements in the decision-making of individuals and groups. This line of reasoning is inherent in a lot of research that is going on right now, but I still think that research on these topics is neglected. I hope that this text can help to clarify some important research questions and make it easier for others to orient themselves on forecasting and decision-making. I have added detailed footnotes with references to further literature on most of the ideas I touch on below.

In the future I intend to use the framework developed here to make a series of precise claims about the costs and effects of specific epistemic methods. Most of the claims below are not rigorous enough to be true or false, although some of them might be. Please let me know if any of these claims are incorrect or misleading, or if there is research that I have missed.

Forecasting tournaments

In a range of domains, such as law, finance, philanthropy, and geopolitical forecasting, the judgments of experts vary a lot, i.e. they are noisy, even in similar or identical cases. In a study on geopolitical forecasting by the renowned decision psychologist Philip Tetlock, seasoned political experts had trouble outperforming "dart-tossing chimpanzees" (random guesses) when it came to predicting global events. Non-experts, e.g. "attentive readers of the New York Times" who were curious and open-minded, outperformed the experts, who tended to be overconfident.

In a series of...
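To make the second research direction above concrete, here is a minimal sketch, in plain Python, of the simplest possible "quantitative model of relevance": a two-node Bayesian network linking one rigorous forecasting question Q to one decision-relevant proposition D. The propositions and every probability in it are invented for illustration; real applications would involve many questions, elicited conditional probabilities, and proper network tooling.

```python
# Minimal sketch of relevance modelling via Bayes' rule.
# All propositions and probabilities are hypothetical illustrations.

def update(prior_d: float, p_q_given_d: float, p_q_given_not_d: float) -> float:
    """Return P(D | Q resolves yes), the updated credence in the
    decision-relevant proposition D after the forecasting question
    Q resolves positively."""
    p_q = p_q_given_d * prior_d + p_q_given_not_d * (1.0 - prior_d)
    return p_q_given_d * prior_d / p_q

# D: "Our programme will meet its five-year goal" (hypothetical)
# Q: "Will metric M exceed threshold T this year?" (hypothetical)
prior_d = 0.50          # prior credence in D
p_q_given_d = 0.20      # P(Q yes | D true)
p_q_given_not_d = 0.60  # P(Q yes | D false)

posterior = update(prior_d, p_q_given_d, p_q_given_not_d)
print(f"P(D | Q=yes) = {posterior:.2f}")  # prints 0.25
```

The gap between prior and posterior is one natural measure of how relevant Q is to D: if P(Q|D) and P(Q|not-D) were equal, resolving Q would not move the credence in D at all, and the question, however rigorous, would be irrelevant to the decision.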
