“The difference between nuclear arms treaties and AI treaties is that it’s so easy to copy AIs, so regulation is hopeless.”

This is only true for existing models. Inventing new, state-of-the-art models is incredibly difficult and expensive. It requires immense amounts of talent, infrastructure, money, compute, and innovations that nobody yet knows how to make.

Almost all of the human extinction risk from AI comes from not-yet-invented superintelligent models. North Korea or a terrorist group cannot simply defect from an AI treaty and build superintelligent AI. And it’s relatively straightforward to monitor and prevent the accumulation of compute necessary to build one (e.g. by monitoring electrical grids, specialized GPUs, satellite imagery, etc.).

Once superintelligent AI is already invented, then yes, people could easily steal it. But if we stop sometime *before* we have it, it will be very hard for any group to defect.

Besides, by the time we have superintelligent AI, it’s probably already too late: what happens next will be up to the superintelligence, not humans.
Kat Woods: I'm an effective altruist who co-founded Nonlinear, Charity Entrepreneurship, and Charity Science Health.

January 2025