I’ve been a fool trying to influence people who are on the AI industry’s money and glory payroll. I’m going to take my own advice now, write you off, and focus on the moral majority who wants to protect the world.
You should all be ashamed of your complicity in bringing about potentially world-ending technology.
I am literally donating to PauseAI. I don’t think you are being fair. I fully agree that some EAs are directly increasing x-risk by working on AI development, and they should stop doing that. I don’t think it’s fair to paint all of us with that brush.
Right—only 5% of EA Forum users surveyed want to accelerate AI:
“13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab.”
This post is not (mainly) calling out EA and EAs for wanting to accelerate AI.
It’s calling out those of us who do think that the AGI labs are developing a technology that will, with double-digit probability, literally kill us and destroy everything we love, but who are still friendly with the labs and the people who work at them.
And it’s calling out those people who think the above, and take a salary from the AGI labs anyway.
I read this post as saying something like,
“If you’re serious about what you believe and have even a basic level of courage, you would never go to a party with someone who works at Anthropic and not directly tell them that what they’re doing is bad and they should stop.
Yes, that’s awkward. Yes, that’s confrontational.
But if you go to a party with people building a machine that you think will kill everyone, and you just politely talk with them about other stuff, or politely ignore them, then you are a coward and an enabler and a hypocrite.
Putting your interest in being friendly with people in your social sphere above vocally opposing the creation of a doom-machine is immoral and a disgrace to the values you claim to hold.
I (Holly) am drawing the line here. Don’t expect me to give polite respect to what I consider the ludicrous view that it’s reasonable to, e.g., work for Anthropic.”
I don’t overall agree with this take, at this time. But I’m not very confident in my disagreement. I think Holly might basically be right here, and on further reflection I might come to agree with her.
I definitely agree that the major reason why there’s not more vocal opposition to working at an AGI lab is social conformity and fear of social risk. (Plus most of us are not well equipped to evaluate whether it possibly makes sense to try to “make things better from the inside”, and so we defer to others who are broadly in favor of some version of that plan.)
I’ve donated $30,000 to PauseAI. Some of your past posts played a role in that, such as “The Case for AI Safety Advocacy to the Public” and “Pausing AI is the only safe approach to digital sentience”. I don’t think writing off people like me is a good idea.