Once upon a time, a scientist was driving fast in a car full of weaponized superebola. It was raining heavily, so he couldn't see clearly where he was going.

His passenger said calmly, "Quick question: what the fuck?"

"Don't worry," said the scientist. "Since I can't see clearly, we can't know that we're going to hit anything and accidentally release a virus that kills all humans."

As he said this, they hit a tree, released the virus, and everybody died slow, horrible deaths.

The End

The moral of the story is that if there's more uncertainty, you should go slower and more cautiously.

Sometimes people say that we can't know whether creating a digital species (AI) is going to harm us. Predicting the future is hard, therefore we should go as fast as possible.

And I agree: there is a ton of uncertainty around what will happen. It could be one of the best inventions we ever make. It could also be the worst, and make nuclear weapons look like benign little trinkets.

And because it's hard to predict, we should move more slowly and carefully. Anybody who's confident it will go well or go poorly is overconfident. Things are too uncertain to go full speed ahead.

Don't move fast and break things if the "things" in question could be all life on earth.
The AIs Will Only Do Good Fallacy. You cannot think that:
PSA: California's AI safety bill does not require kill switches for open source models. People who are saying it does are either being misled or are the ones doing the misleading.

AIs under the control of the developer need a kill switch. Open source AIs are not under the control of the developers, so they do not need a kill switch.

Many of the people spreading the idea that the bill will kill open source know this and are spreading it anyways, because they know that "open source" is an applause light for so many devs.

Check the bill yourself. It's short and written in plain language. Or ask an AI to summarize it for you.

The current AIs aren't covered models and don't have the capacity to cause mass casualties, so they're fine and won't be affected by this legislation.

Gavin Newsom, please don't listen to corporate lobbyists who aren't even attacking the real bill, but an imagined boogeyman. Please don't veto a bill that's supported by the majority of Californians.

The essential problem with AI safety: there will always be some people who are willing to roll the dice.

We need to figure out a way to convince people who have a reality distortion field around themselves to really get that superintelligent AI is not like the rest of reality. You can't just be high-agency and gritty and resourceful. In the same way that no matter how virtuous and intelligent a cow gets, it can never beat the humans.

We need to convince them to either change their minds, or we have to use the law and governments to protect the many from the reality distortion fields of the few.

And I say this as an entrepreneurial person who has more self-efficacy than might be good for me. But I use that self-efficacy to work on getting us more time to figure AI safety out. Even I don't have the arrogance to think that something vastly smarter and more powerful than me will care about what I want by default.

Once upon a time in 2026, an idiot teenager used the AI, LLAMA 5.2, to create superebola. As a joke, you see. The problem was, the joke worked. And because he had at his fingertips the IQ of an advanced AI but the wisdom of an idiot teenager, the superebola got loose. Over a billion people died slow, horrific deaths.

And Meta, the creator of the AI, the creator who said that you should treat AI the same way you treat Google Docs, just shrugged and said, "Wasn't our fault. We couldn't possibly have known that if we shared advanced AI with the entire world with no guardrails, this could have happened."

The Meta folks responsible who were not killed by superebola were killed by angry mobs. And then all future AIs were heavily regulated, safety standards were taken fucking seriously, and everybody lived happily ever after.

The End

Dear Gavin Newsom, please don't let AI corporations self-police. That'd be a disaster. Their CEOs themselves have repeatedly said that their technology could cause mass casualties or literal extinction, and they have repeatedly silenced whistleblowers. 77% of Californians support this bill. These corporations must be held accountable.

AI corporations complained, got most of what they wanted, but they're still shrieking about SB 1047 just as loudly as before.
Their objections aren't the real objections. They just don't want any government oversight. Hopefully Gavin Newsom can see through the obvious rationalizations they're making.

Hypothesis: the people who say that LLMs cannot reason are stochastic parrots.
Hear me out. They say variants of the same thing again and again that you can find online. They appear to have no original thoughts. If you give them new information, they do not update.

Why didn't anybody tell me that Marc Andreessen, billionaire lobbyist, sounds like a comic book villain?
Just read his “techno-optimist manifesto” and look at what he said:
No wonder he's fighting against the AI safety bill. I mean, I can see why it appeals to some. It reads like an Atlas Shrugged monologue. He even lists John Galt as one of the patron saints of techno-optimism. But honestly, I consider this to be a very good sign for the bill. If the biggest enemy of the bill is somebody who reads like a comic book villain, you're probably doing alright.