I think PauseAI US is less competent than some hypothetical alternative protest org that wouldn’t have made this mistake, but I also think it’s more competent than most protest orgs that could exist (or than protest orgs in other cause areas).
Yes. In a short-timelines, high-p(doom) world, we absolutely cannot let the perfect be the enemy of the good. Acting like typical hyper-critical EAs might have lethal consequences[1]. We need many more people in advocacy if we are going to move the needle, so we shouldn’t be so discouraging of the people who are actually doing things. We should just accept that they won’t get everything right all the time.
In a short-timelines world, where inaction means very high p(doom), the bar for being counterfactually net-negative[2] is actually pretty high. PauseAI is very far from reaching it.
[1] Or maybe I should say, “might actually be net-negative in and of itself”(!)
[2] This term is overused in EA/LW spaces, to the point where I think people often don’t fully think through what they are actually saying when they use it. Is the thing actually net-negative, integrating over all expected future consequences in worlds where it both does and doesn’t happen? Or is it just negative?
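To make that footnote’s distinction concrete, here is a minimal sketch in expected-value terms (the symbols $A$, $U$, and $\Delta$ are mine, not from the thread): an action $A$ is counterfactually net-negative only if

$$\Delta(A) \;=\; \mathbb{E}[U \mid A] \,-\, \mathbb{E}[U \mid \neg A] \;<\; 0,$$

where $U$ is total value over all future consequences and the expectations integrate over the worlds where $A$ does and doesn’t happen. Calling something “just negative” usually only asserts that some of its direct effects are bad, which is entirely compatible with $\Delta(A) > 0$.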
I would not be advocating for inaction. I do advocate for high-integrity actions and comms, though.
I occasionally see these people publicly saying that the rationalists’ standards of honesty are impossible to meet, and that they communicate in ways rationalists would consider potentially manipulative.
It would be great if people who are actually doing things tried to avoid manipulation and dishonesty.
In a short-timelines world, being manipulative is the kind of thing that backfires and gets everyone killed.
Until a year ago, I hoped EA had learned some lessons from what happened with SBF, but unfortunately we don’t seem to have.
If you lie to try to increase the chance of a global AI pause, our world looks less like a surviving world, not more.