Though Carl said that a unilateral pause would be riskier, I'm pretty sure he does not support a universal pause now. He said: "To the extent you have a willingness to do a pause, it's going to be much more impactful later on. And even worse, it's possible that a pause, especially a voluntary pause, then is disproportionately giving up the opportunity to do pauses at that later stage when things are more important. … Now, I might have a different view if we were talking about a binding international agreement that all the great powers were behind. That seems much more suitable. And I'm enthusiastic about measures like the recent US executive order, which requires reporting of information about the training of new powerful models to the government, and provides the opportunity to see what's happening and then intervene with regulation as evidence of more imminent dangers appear. Those seem like things that are not giving up the pace of AI progress in a significant way, or compromising the ability to do things later, including a later pause. … Why didn't I sign the pause AI letter for a six-month pause around now?

But in terms of expending political capital or what asks would I have of policymakers, indeed, this is going to be quite far down the list, because its political costs and downsides are relatively large for the amount of benefit – or harm. At the object level, when I think it's probably bad on the merits, it doesn't arise. But if it were beneficial, I think that the benefit would be smaller than other moves that are possible – like intense work on alignment, like getting the ability of governments to supervise and at least limit disastrous corner-cutting in a race between private companies: that's something that is much more clearly in the interest of governments that want to be able to steer where this thing is going. And yeah, the space of overlap of things that help to avoid risks of things like AI coups, AI misinformation, or use in bioterrorism – there are just any number of things that we are not currently doing that are helpful on multiple perspectives, and that are, I think, more helpful to pursue at the margin than an early pause."
So he says he might be supportive of a universal pause, but it sounds like he would rather have it later than now.
They aren't the ones terrified of not having the vision to go for the Singularity, of being seen as "Luddites" for opposing a dangerous and recklessly pursued technology. Frankly, they aren't the influential ones.
I see where you are coming from, but I think it would be more accurate to say that you are disappointed in (or potentially even betrayed by[1]) the minority of EAs who are accelerationists, rather than characterizing it as being betrayed by the community as a whole (which is not accelerationist).
Though I think this is too harsh, as early thinking in AI Safety included Bostrom's differential technological development and MIRI's seed (safe) AI – the former of which is similar to people trying to shape Anthropic's work, and the latter of which could be characterized as accelerationist.
To get a pause at any time, you have to start asking now. It's totally academic to ask about when exactly to pause, and it's not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.
But honestly, all I hear are excuses. You wouldn't want to help me if Carl said it was the right thing to do, or you'd have already realized what I said yourself. You wouldn't be waiting for Carl's permission or anyone else's. What you're looking for is permission to stay on this corrupt be-the-problem strategy, and it shows.
> To get a pause at any time, you have to start asking now. It's totally academic to ask about when exactly to pause, and it's not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.
As Carl says, society may only get one shot at a pause. So if we got it now, rather than when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing when some threshold or trigger is hit, and not now. It's also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
> What you're looking for is permission to stay on this corrupt be-the-problem strategy, and it shows.
I personally still have significant uncertainty as to the best course of action, and I understand you are under a lot of stress, but I don't think this characterization is an effective way of shifting people towards your point of view.
> As Carl says, society may only get one shot at a pause. So if we got it now, rather than when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing when some threshold or trigger is hit, and not now. It's also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
This is so out of touch with the realities of opinion change. It sounds smart, and it lets EAs and rationalists keep doing what they're doing, which is why people repeat it. This claim that we would only get one shot at a pause is asinine – a pause would become more popular as an option the more people were familiar with it. It's only the AI industry and EA that don't like the idea of pausing and pretend like they're gonna withdraw support – support we never actually had – if we do something they don't like.

The main thing we can do as a movement is gain popular support by talking about the message. There is no reliable way to "time" asks. None of that makes any sense. Honestly, most people who give this argument are industry apologists who just want you to feel out of your league if you do anything against their interests. Hardware overhang was the same shit.
> I personally still have significant uncertainty as to the best course of action, and I understand you are under a lot of stress, but I don't think this characterization is an effective way of shifting people towards your point of view.
Ah, the EA "fuck you".

I see, so other people might not agree with me bc of how I say it, and rather than it being their responsibility to seek truth, that's my fault for not behaving the way you – oops, I mean "they" – like. It's just ineffective, so nobody listening has any responsibilities and should probably go back to working at an AI company.
Hey Holly, it sounds like you're frustrated by how people in EA are engaging with the idea of a pause. I'm sure that's really hard, and I'm sure I don't know even a fraction of what you've gone through. I know you're doing this advocacy work because you care a lot, and I really appreciate that. You know that I personally support your work.
However, I'm worried that this thread is becoming unproductive, and risks making the Forum feel like less of a safe space[1].
In particular, my concern is that you are criticizing @Denkenberger🔸 directly in a way that appears to come from nowhere. @Denkenberger🔸 doesn't say anything about their personal views on a pause before you respond with:
> But honestly, all I hear are excuses. You wouldn't want to help me if Carl said it was the right thing to do, or you'd have already realized what I said yourself. You wouldn't be waiting for Carl's permission or anyone else's. What you're looking for is permission to stay on this corrupt be-the-problem strategy, and it shows.
In my opinion, this sort of accusation without evidence erodes the Forum's ability to be a safe community space for important discussions. If it's the case that I'm missing some context and you have personal beef with @Denkenberger🔸, then I don't think the Forum is the appropriate place to hash that out.
To be clear, I think the following are broadly fine and often good (assuming adherence to our Forum norms):
- Criticizing ideas or actions that you disagree with, or see flaws in
- Accusing a person or org of doing harm when you have some evidence to back that up
- Sharing your feelings, including feelings of hurt and frustration
- Holding those in power accountable
- Flagging issues about the EA community, especially those that you think others are too afraid to flag
- Flagging when someone is potentially not being truth-seeking, or is otherwise not following Forum norms (ideally in a kind way, because our norms are unusual compared to the rest of the internet)
So I'd like to suggest that you take a step back from this thread. If you find yourself getting frustrated at others elsewhere on the Forum, I'd suggest taking a break from those threads as well.
You've written some of the best posts on this Forum, and the Forum Team greatly values your contributions. At the same time, I also think it's important that the Forum continues to be a productive discussion space. A relevant quote from our "Moderation principles" post:
> The Forum should be a great and safe space: we're working on hard problems, and the internet can be rough! We don't want arbitrary barriers to keep people from joining discussions on the Forum, we don't want people to be miserable on the Forum, and we want to promote excellent discussions and content.
>
> Civility and charitable discussion – the Forum should feel like a breath of fresh air and a refuge of sanity. When you join a discussion on the Forum, you should be able to reasonably expect that the other people won't twist your words, won't call you names, etc.
>
> Safety – if users feel unsafe on the Forum – they're being threatened, or they're worried that if they post, they'll have to fend off trolls on their own, etc. – that's our problem. We need to prevent this.
I think you hit the nail on the head – this forum is not a safe space for me. Like you said, I'm an all-time top poster, and yet I get snobby discouragement on everything I write since I started working on Pause, with the general theme that advocacy is not smart enough for EAs (and a secondary theme of wanting to work for AI companies).

This is a serious problem given what the EA Forum was supposed to be. It's not a problem with following your rules for polite posts, but it goes against something more important – the purpose of the Forum and of the EA community.

But I've clearly reached the end of my rope, and since I'd like to keep my account and be able to post new stuff here, I'll just stop commenting.
I see that you are getting some downvotes, and while I disagree with this comment in particular, I'm glad that you are there in the arena, strongly pushing your vision of the good :)