That’s not what Carl Shulman said, and the fact that people want to take it that way is telling. He messaged me recently to clarify that he meant unilateral pauses would be bad, something I still kind of disagree with but which isn’t something PauseAI advocates, and he said it back at the very beginning of the Pause discussion. EAs just don’t want to arrest the tech momentum because they see themselves as a technocratic elite, not as humble grassroots organizers. They are disappointed at the chance we have to rally the public and want to find some way not to have to contribute to it.
People who identify as EAs in other countries might not be supportive of the AI companies, but they aren’t the ones on the ground in the Bay Area and DC who are letting me down so much. They aren’t the ones working for Anthropic or sitting on its fake board, listening to Dario’s claptrap that justifies what they want to do and believe anyway. They aren’t the ones denying my grant applications because protests aren’t to their tastes. They aren’t the ones terrified of not having the vision to go for the Singularity, of being seen as “Luddites” for opposing a dangerous and recklessly pursued technology. Frankly they aren’t the influential ones.
Though Carl said that a unilateral pause would be riskier, I’m pretty sure he does not support a universal pause now. He said “To the extent you have a willingness to do a pause, it’s going to be much more impactful later on. And even worse, it’s possible that a pause, especially a voluntary pause, then is disproportionately giving up the opportunity to do pauses at that later stage when things are more important....Now, I might have a different view if we were talking about a binding international agreement that all the great powers were behind. That seems much more suitable. And I’m enthusiastic about measures like the recent US executive order, which requires reporting of information about the training of new powerful models to the government, and provides the opportunity to see what’s happening and then intervene with regulation as evidence of more imminent dangers appear. Those seem like things that are not giving up the pace of AI progress in a significant way, or compromising the ability to do things later, including a later pause...Why didn’t I sign the pause AI letter for a six-month pause around now?
But in terms of expending political capital or what asks would I have of policymakers, indeed, this is going to be quite far down the list, because its political costs and downsides are relatively large for the amount of benefit — or harm. At the object level, when I think it’s probably bad on the merits, it doesn’t arise. But if it were beneficial, I think that the benefit would be smaller than other moves that are possible — like intense work on alignment, like getting the ability of governments to supervise and at least limit disastrous corner-cutting in a race between private companies: that’s something that is much more clearly in the interest of governments that want to be able to steer where this thing is going. And yeah, the space of overlap of things that help to avoid risks of things like AI coups, AI misinformation, or use in bioterrorism, there are just any number of things that we are not currently doing that are helpful on multiple perspectives — and that are, I think, more helpful to pursue at the margin than an early pause.”
So he says he might be supportive of a universal pause, but it sounds like he would rather have it later than now.
They aren’t the ones terrified of not having the vision to go for the Singularity, of being seen as “Luddites” for opposing a dangerous and recklessly pursued technology. Frankly they aren’t the influential ones.
I see where you are coming from, but I think it would be more accurate to say that you are disappointed in (or potentially even betrayed by[1]) the minority of EAs who are accelerationists, rather than characterizing it as being betrayed by the community as a whole (which is not accelerationist).
Though I think this is too harsh, as early thinking in AI Safety included Bostrom’s differential technological development and MIRI’s seed (safe) AI, the former of which is similar to people trying to shape Anthropic’s work and the latter of which could be characterized as accelerationist.
To get a pause at any time you have to start asking now. It’s totally academic to ask about when exactly to pause and it’s not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.
But honestly, all I hear are excuses. If you would help once Carl said it was the right thing to do, then you’d have already realized what I said yourself; you wouldn’t be waiting for Carl’s permission or anyone else’s. What you’re looking for is permission to stay on this corrupt be-the-problem strategy and it shows.
To get a pause at any time you have to start asking now. It’s totally academic to ask about when exactly to pause and it’s not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.
As Carl says, society may only get one shot at a pause. So if we got it now, and not when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it’s also possible to advocate for pausing when some threshold or trigger is hit, and not now. It’s also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
What you’re looking for is permission to stay on this corrupt be-the-problem strategy and it shows.
I personally still have significant uncertainty as to the best course of action, and I understand you are under a lot of stress, but I don’t think this characterization is an effective way of shifting people towards your point of view.
As Carl says, society may only get one shot at a pause. So if we got it now, and not when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it’s also possible to advocate for pausing when some threshold or trigger is hit, and not now. It’s also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
This is so out of touch with the realities of opinion change. It sounds smart and it lets EAs and rationalists keep doing what they’re doing, which is why people repeat it. The claim that we would only get one shot at a pause is asinine; a pause would become more popular as an option the more people were familiar with it. It’s only the AI industry and EA that dislike the idea of pausing, and they pretend they’re going to withdraw support we actually never had if we do something they don’t like.
The main thing we can do as a movement is gain popular support by talking about the message. There is no reliable way to “time” asks. None of that makes any sense. Honestly, most people who give this argument are industry apologists who just want you to feel out of your league if you do anything against their interests. Hardware overhang was the same shit.
I personally still have significant uncertainty as to the best course of action, and I understand you are under a lot of stress, but I don’t think this characterization is an effective way of shifting people towards your point of view.
Ah, the EA “fuck you”.
I see, so other people (😉) might not agree with me because of how I say it, and rather than it being their responsibility to seek truth, that’s my fault for not behaving the way you (oops, I mean “they”) like. It’s just ineffective, so nobody listening has any responsibilities and should probably go back to working at an AI company.
Hey Holly, it sounds like you’re frustrated by how people in EA are engaging with the idea of a pause. I’m sure that’s really hard, and I’m sure I don’t know even a fraction of what you’ve gone through. I know you’re doing this advocacy work because you care a lot, and I really appreciate that. You know that I personally support your work.
However, I’m worried that this thread is becoming unproductive, and risks making the Forum feel like less of a safe space[1].
In particular, my concern is that you are criticizing @Denkenberger🔸 directly in a way that appears to come from nowhere. @Denkenberger🔸 doesn’t say anything about their personal views on a pause before you respond with:
But honestly, all I hear are excuses. If you would help once Carl said it was the right thing to do, then you’d have already realized what I said yourself; you wouldn’t be waiting for Carl’s permission or anyone else’s. What you’re looking for is permission to stay on this corrupt be-the-problem strategy and it shows.
In my opinion, this sort of accusation without evidence erodes the Forum’s ability to be a safe community space for important discussions. If it’s the case that I’m missing some context and you have personal beef with @Denkenberger🔸, then I don’t think the Forum is the appropriate place to hash that out.
To be clear, I think the following are broadly fine and often good (assuming adherence to our Forum norms):
Criticizing ideas or actions that you disagree with, or see flaws in
Accusing a person or org of doing harm when you have some evidence to back that up
Sharing your feelings, including feelings of hurt and frustration
Holding those in power accountable
Flagging issues about the EA community, especially those that you think others are too afraid to flag
Flagging when someone is potentially not being truth-seeking, or is otherwise not following Forum norms (ideally in a kind way because our norms are unusual compared to the rest of the internet)
So I’d like to suggest that you take a step back from this thread. If you find yourself getting frustrated at others on the Forum elsewhere, I’d suggest taking a break from those as well.
You’ve written some of the best posts on this Forum, and the Forum Team greatly values your contributions. At the same time, I also think it’s important that the Forum continues to be a productive discussion space.
A relevant quote from our “Moderation principles” post:
The Forum should be a great and safe space: we’re working on hard problems, and the internet can be rough! We don’t want arbitrary barriers to keep people from joining discussions on the Forum, we don’t want people to be miserable on the Forum, and we want to promote excellent discussions and content.
Civility and charitable discussion—the Forum should feel like a breath of fresh air and a refuge of sanity. When you join a discussion on the Forum, you should be able to reasonably expect that the other people won’t twist your words, won’t call you names, etc.
Safety—if users feel unsafe on the Forum — they’re being threatened, or they’re worried that if they post, they’ll have to fend off trolls on their own, etc. — that’s our problem. We need to prevent this.
I think you hit the nail on the head: this forum is not a safe space for me. Like you said, I’m an all-time top poster, and yet I get snobby discouragement on everything I write since I started working on Pause, with the general theme that advocacy is not smart enough for EAs (and a secondary theme of wanting to work for AI companies).
This is a serious problem given what the EA Forum was supposed to be. It’s not a problem of breaking your rules for polite posts; it goes against something more important: the purpose of the Forum and of the EA community.
But I’ve clearly reached the end of my rope, and since I’d like to keep my account and be able to post new stuff here, I’ll just stop commenting.
I see that you are getting some downvotes, and while I disagree with this comment in particular, I’m glad that you are there in the arena strongly pushing your vision of the good :)