EA - Announcing the Future Fund's AI Worldview Prize by Nick Beckstead

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Future Fund's AI Worldview Prize, published by Nick Beckstead on September 23, 2022 on The Effective Altruism Forum.

Today we are announcing a competition with prizes ranging from $15k to $1.5M for work that informs the Future Fund's fundamental assumptions about the future of AI, or is informative to a panel of superforecaster judges. These prizes will be open for three months—until Dec 23—after which we may change or discontinue them at our discretion.

We have two reasons for launching these prizes.

First, we hope to expose our assumptions about the future of AI to intense external scrutiny and improve them. We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century, and it is consequently one of our top funding priorities. Yet our philanthropic interest in AI is fundamentally dependent on a number of very difficult judgment calls, which we think have been inadequately scrutinized by others. As a result, we think it's really possible that:

- all of this AI stuff is a misguided sideshow,
- we should be even more focused on AI, or
- a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem.

If any of those three options is right—and we strongly suspect at least one of them is—we want to learn about it as quickly as possible, because it would change how we allocate hundreds of millions of dollars (or more) and help us better serve our mission of improving humanity's longterm prospects.

Second, we are aiming to do bold and decisive tests of prize-based philanthropy, as part of our more general aim of testing highly scalable approaches to funding. We think these prizes contribute to that work. If these prizes work, it will be a large update in favor of this approach being capable of surfacing valuable knowledge that could affect our prioritization. If they don't work, that could be an update against this approach surfacing such knowledge (depending on how it plays out).

The rest of this post will:

- Explain the beliefs that, if altered, would dramatically affect our approach to grantmaking
- Describe the conditions under which our prizes will pay out
- Describe in basic terms how we arrived at our beliefs and cover other clarifications

Prize conditions

On our areas of interest page, we introduce our core concerns about AI as follows:

We think artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century. AI is already posing serious challenges: transparency, interpretability, algorithmic bias, and robustness, to name just a few. Before too long, advanced AI could automate the process of scientific and technological discovery, leading to economic growth rates well over 10% per year (see Aghion et al 2017, this post, and Davidson 2021). As a result, our world could soon look radically different. With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease. But two formidable new problems for humanity could also arise:

Loss of control to AI systems: Advanced AI systems might acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future.

Concentration of power: Actors with an edge in advanced AI technology could acquire massive power and influence; if they misuse this technology, they could inflict lasting damage on humanity's long-term future.

For more on these problems, we recommend Holden Karnofsky's "Most Important Century," Nick Bostrom's Superintelligence, and Joseph Carlsmith's "Is power-seeking AI an existential risk?". Here is a table identifying various questions about these scenarios that we believe are central, our curren...
