I love your clear writing and reasoning here. Your arguments make sense. Your habit of using small words and clear language, while linking out to the more complicated jargon and phrases, is a great way to communicate. The “there is no good plan” section is spectacular.
I like your framing that every AI intervention is unlikely to help, but that peaceful protest makes the most sense amidst a bunch of not-great options.
Looking from a distance, I’ve been shocked how often this sentiment of yours has played out to be true over the last few years. I strongly agree with this:
“AI safety people/groups have a history of looking like they will prioritize x-risk, and then instead doing things that are unrelated or even predictably increase risk.[5] So I have a high bar for which orgs I trust, and I don’t want to donate to an org if it looks wishy-washy on x-risk, or if it looks suspiciously power-seeking (a la “superintelligent AI will only be safe if I’m the one who builds it”). I feel much better about giving to orgs that credibly and loudly signal that AI misalignment risk is their priority.”
I think that power and money corrupt decent folks far more readily than most EA types think or admit. And I haven’t seen clear evidence over the last 5 years that the “inside game” within AI companies has done more good than harm, although maybe (I hope) at some critical stage we’ll see the worth of the EA influence poured into OpenAI and Anthropic...
I also love your idea of a “stable preference bonus” for MIRI. I’ve been horrified watching orgs like Anthropic shift their AI preferences over even just a year or two; this makes me trust them much less too.
And if you think a p(doom) of 50 percent isn’t “overwhelmingly high,” then God save us all...