EA - Shahar Avin on How to Strategically Regulate Advanced AI Systems by Michaël Trazzi
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shahar Avin on How to Strategically Regulate Advanced AI Systems, published by Michaël Trazzi on September 23, 2022 on The Effective Altruism Forum.

Shahar Avin is a senior researcher at the Centre for the Study of Existential Risk in Cambridge. In a past life he was a Google engineer, though right now he spends most of his time thinking about how to prevent the risks that could arise if companies like Google end up deploying powerful AI systems, partly by organizing AI Governance role-playing workshops.

In this episode, we talk about a broad variety of topics, including how we could apply what Shahar learned running AI Governance workshops to governing transformative AI, as well as AI Strategy, AI Governance, and Trustworthy AI Development, before ending with some Twitter questions.

Below are some highlighted quotes from our conversation (available on YouTube, Spotify, Google Podcasts, Apple Podcasts). For the full context of each of these quotes, see the accompanying transcript.

We Are Only Seeing The Tip Of The Iceberg

The Most Cutting-Edge AI Research Is Probably Private

“I don’t know how much of the most cutting-edge research today is public. I would not be confident that it is. It is very easy to look at all of the stuff that is public, see a lot of it, and infer from the fact that you’re seeing a lot of public research that all research must therefore be public. I don’t think that is a correct inference to make.”

AI companies may not be showing all of their cards

“My guess would be that they’re not always showing all of the cards. It’s not always a calculated decision, but there is a calculated decision to be made: if I have a result, do I publish or not? What goes into the calculation is whether there is a benefit from publishing. It increases your brand, it attracts more talent, it shows that you are at the cutting edge, and it allows others to build on your result, so you get to benefit from building on top of their results. And you have the cost that, as long as you keep it to yourself, no one else knows it, and you can keep on doing the research.”

Aligning Complex Systems Is Hard

Narrow AI Does Not Guarantee Alignment

“One failure mode is that there is an overall emergent direction that is bad for us. Another is that there is no emergent direction, but the systems are in fact conflicting with each other, undermining each other. So one system is optimizing for one proxy. It generates an externality that is not fully captured by its designers, which gets picked up by another system that has a bad proxy for it, and then tries to do something about it.”

Security failures are unavoidable for large, complex systems

“In particular, if you’re building very large, complex, opaque systems, from a system-engineering or system-security perspective, you’re just significantly increasing the ways things can go wrong, because you haven’t engineered every little part of the thing to be 100% safe and provably and verifiably secure. And even provably and verifiably secure stuff could fail because you’ve made some bad assumptions about the hardware.”

Why Regulating AI Makes Sense

Our World Is A Very Regulated World

“Our world is a very regulated world. We tend to see the failures, but we forget that none of this digital technology would exist around us without standards and interoperability. We wouldn’t be able to move around if transport was not regulated and controlled and mandated in some way. If you don’t have rules, standards, norms, treaties, laws, you just get chaos.”

Compliance Is Part Of The Cost Of Doing Business

“Compliance is part of the cost of doing business in a risky domain. If you have a medical AI startup, you get people inspecting your stuff all the time, because you have to pass through a bunch of regulations and you could get fined or go to jail if you don’t do that. Th...
