A “surgical pause” won’t work because politics doesn’t work that way and we don’t know when to pause

4/25/2024
A “surgical pause” won’t work because:

1) Politics doesn’t work that way
2) We don’t know when to pause

1) Politics doesn’t work that way

For the politics argument, people are acting as if we could just go up to Sam or Dario and say, “It’s too dangerous now. Please press pause,” and the CEO would tell the organization to pause and it would magically work.

That’s not what would happen.

There will be a ton of disagreement about when it’s too dangerous. You might not be able to convince them. You might not even be able to talk to them! Most people, including people inside those orgs, can’t just get a meeting with the CEO.

Then, even if the CEO did tell the org to pause, there might be rebellion in the ranks. Employees might pull a Sam Altman and threaten to move to a different company that isn’t pausing.

And if just one company pauses, citing dangerous capabilities, you can bet that at least one AI company will defect (my money’s on Meta at the moment) and rush to build it themselves.

The only way for a pause to avoid the tragedy of the commons is to have an external party who can keep us from collapsing into a defecting mess. That is usually achieved via the government, and government takes a long time. Even in the best-case scenarios it would take many months, and more likely years. Therefore, we need to be working on this years before we think the pause needs to happen.

2) We don’t know when the right time to pause is

We don’t know when AI will become dangerous. There’s some possibility of a fast take-off. There’s some possibility of threshold effects, where one day it’s fine and the next day it’s not. There’s some possibility that we won’t see how it’s becoming dangerous until it’s too late.

We just don’t know when AI goes from being a disruptive technology to a potentially world-ending one. It might be able to destroy humanity before it’s superhuman at any one of our arbitrarily chosen intelligence tests.

It’s just a really complicated problem, and if you put 100 AI devs together and asked them when would be a good point to pause development, you’d get 100 different answers. Well, you’d actually get 80 different answers and 20 saying “nEvEr! 100% oF tEchNoLoGy is gOod!!!” and other such unfortunate foolishness. But set the vocal minority aside and the point stands: there will never be a moment where it is clear that “AI is safe now, and dangerous after this point.”

We are risking the lives of every sentient being in the known universe under conditions of deep uncertainty, with very little control over our movements. The response to that isn’t to rush ahead and then pause once we know it’s dangerous. We can’t pause with that level of precision.

We won’t know when we need to pause because there will be no stop signs. There will just be warning signs, many of which we’ve already flown by. Like AIs scoring better than the median human on most tests of skill, including IQ. Like AIs being generally intelligent across a broad swathe of skills.

We just need to stop as soon as we can. Then we can figure out how to proceed actually safely.