Right, only 5% of EA Forum users surveyed want to accelerate AI:

"13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab."
This post is not (mainly) calling out EA and EAs for wanting to accelerate AI.
It's calling out those of us who do think that the AGI labs are developing a technology that will literally kill us and destroy everything we love with double-digit probability, but are still friendly with the labs and people who work at the labs.
And it's calling out those people who think the above, and take a salary from the AGI labs anyway.
I read this post as saying something like,
"If you're serious about what you believe, and you had even basic levels of courage, you would never go to a party with someone working at Anthropic and not directly tell them that what they're doing is bad and that they should stop.
But if you go to a party with people building a machine that you think will kill everyone, and you just politely talk with them about other things, or politely ignore them, then you are a coward, an enabler, and a hypocrite.
Your interest in being friendly with the people in your social sphere, over and above vocally opposing the creation of a doom-machine, is immoral and a disgrace to the values you claim to hold.
I (Holly) am drawing the line here. Don't expect me to give polite respect to what I consider the ludicrous view that it's reasonable to, e.g., work for Anthropic."
I don't overall agree with this take, at this time. But I'm not very confident in my disagreement. I think Holly might basically be right here, and on further reflection I might come to agree with her.
I definitely agree that the major reason there's not more vocal opposition to working at an AGI lab is social conformity and fear of social risk. (Plus, most of us are not well equipped to evaluate whether it might make sense to try to "make things better from the inside", and so we defer to others who are broadly in favor of some version of that plan.)