I’m frustrated by your claim that I make strong claims and assumptions, since the conclusions you disagree with seem to come from skimming the post rather than engaging with it, and from reading it extremely uncharitably.
First, yes, cooperation is hard, and EAs do it “partially.” I admit that fact, and it’s certainly not the point of this post, so I don’t think we disagree. Second, you’re smuggling the entire argument into “correctly assessed counterfactual impact,” and again, sure, I agree that if the assessment is correct, it’s not hyperopic; but correctness requires a game-theoretic approach, which we don’t generally use in practice.
Third, I don’t think we should just use Shapley values, which you seem to claim I believe. I said in the conclusion, “I’m unsure if there is a simple solution to this,” and I agreed that it’s relevant only where we have goals that are amenable to cooperation. Unfortunately, as I pointed out, in exactly those potentially cooperative scenarios, it seems that EA organizations are the ones attempting to eke out marginal attributable impact instead of cooperating to maximize total good done.

I’ve responded to the comment about Toby’s claims, and again note that those comments assume either that we’re not in a potentially cooperative scenario, or that we can pretend to ignore the way others respond to our decisions over time. And finally, I don’t know where your attack on economists is coming from, but it seems completely unrelated to the post. Yes, we need more practical work on this, but more than that, we need to admit there is a problem, and stop using poorly reasoned counterfactuals about other groups’ behavior, something you seem to agree with in your comment.
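To make the attribution problem concrete, here is a toy sketch of my own (the scenario and numbers are invented for illustration, and the post does not endorse Shapley values as the fix): two organizations, neither of which accomplishes anything alone, jointly produce 100 units of good. Naive “what would have happened without us” counterfactual credit lets each org claim all 100, double-counting the total, while a Shapley-style split at least sums to the actual good done.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Shapley value: each player's marginal contribution to the coalition,
    averaged over all orders in which players could join."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
    n = factorial(len(players))
    return {p: t / n for p, t in totals.items()}

# Hypothetical scenario: orgs "A" and "B" achieve nothing alone,
# but produce 100 units of good together.
def v(coalition):
    return 100 if coalition == frozenset({"A", "B"}) else 0

shapley = shapley_values(["A", "B"], v)
# Shapley credit: 50 each, summing to the 100 units actually produced.

# Naive counterfactual credit: each org asks "what if only we had dropped out?"
grand = frozenset({"A", "B"})
naive = {p: v(grand) - v(grand - {p}) for p in ["A", "B"]}
# Each org claims 100, so total claimed impact (200) is double the good done.
```

The point of the example is not that Shapley values are the right allocation, only that marginal attributable impact, assessed org-by-org, systematically over-counts in exactly the cooperative scenarios the post is about.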