Thanks for the comment! Disagreeing with my proposed donations is the most productive sort of disagreement. I also appreciate hearing your beliefs about a variety of orgs.
A few weeks ago, I read your back-and-forth with Holly Elmore about the “working with the Pentagon” issue. This is what I thought at the time (IIRC):
I agree that it’s not good to put misleading messages in your protests.
I think this particular instance of misleadingness isn't that egregious; it does decrease my expectation of the value of PauseAI US's future protests, but not by a huge margin. If this were a recurring pattern, I'd be more concerned.
Upon my first reading, it was unclear to me what your actual objection was, so I’m not surprised that Holly also (apparently) misunderstood it. I had to read through twice to understand.
Being intentionally deceptive is close to a dealbreaker for me, but it doesn’t look to me like Holly was being intentionally deceptive.
I thought you both could've handled the exchange better. Holly included misleading messaging in the protest and didn't seem to understand the problem, and you did not communicate clearly and then continued to believe that you had communicated well in spite of contrary evidence. Reading the exchange weakly decreased my evaluation of both your work and PauseAI US's, but not by enough to change my org ranking. You both made the sorts of mistakes that I don't think anyone can avoid 100% of the time. (I have certainly made similar mistakes.) Making a mistake once is evidence that you'll make it again, but not very strong evidence.
I re-read your post and its comments just now and I didn’t have any new thoughts. I feel like I still don’t have great clarity on the implications of the situation, which troubles me, but by my reading, it’s just not as big a deal as you think it is.
General comments:
I think PauseAI US is less competent than some hypothetical alternative protest org that wouldn’t have made this mistake, but I also think it’s more competent than most protest orgs that could exist (or protest orgs in other cause areas).
I reviewed PauseAI’s other materials, although not deeply or comprehensively, and they seemed good to me. I listened to a podcast with Holly and my impression was that she had an unusually clear picture of the concerns around misaligned AI.
> I think PauseAI US is less competent than some hypothetical alternative protest org that wouldn’t have made this mistake, but I also think it’s more competent than most protest orgs that could exist (or protest orgs in other cause areas).
Yes. In a short-timelines, high-p(doom) world, we absolutely cannot let the perfect be the enemy of the good. Typical EA hyper-criticality might have lethal consequences[1]. We need many more people in advocacy if we are going to move the needle, so we shouldn't be so discouraging toward the people who are actually doing things. We should just accept that they won't get everything right all the time.
In a short-timelines world, where inaction means very high p(doom), the bar for being counterfactually net-negative[2] is actually pretty high. PauseAI is very far from reaching it.
[1] Or maybe I should say, “might actually be net-negative in and of itself”(!)
[2] This term is overused in EA/LW spaces, to the point where I think people often don't fully think through what they are actually saying by using it. Is it actually net negative, integrating over all expected future consequences in worlds where it both does and doesn't happen? Or is it just negative?