AI could solve it all.
AI could kill us all (or worse). You can believe both at the same time, and most AI safety folks do. AI risk deniers try to paint us as "doomers" who don't appreciate what aligned AI could do, and that's just so off base. I can't wait until we get an aligned superintelligence. If we succeed at that, it will be the best thing that's ever happened. And that's why I work on safety. I want us to get there, instead of the much less good option, which is currently scarily probable. Let's do this right instead of rushing off a cliff, hoping to build a plane on the way down.