I’m torn on this post: while I agree with its overall spirit (that EAs can do better at cooperation and counterfactuals, and be more prosocial), I think it makes some strong claims and assumptions that I disagree with, and I find it problematic that these assumptions are stated as if they were facts.
First, EA may be better at “internal” cooperation than other groups, but cooperation is hard and internal EA cooperation is far from perfect.
Second, the idea that correctly assessed counterfactual impact is hyperopic. Nope, hyperopic assessments are just a sign of not getting your counterfactual right.
Third, the idea that Shapley values are the solution. I like Shapley values, but only within the narrow constraints for which they are well specified: environments where cooperation should inherently be possible, i.e. where all agents agree on the value being created. In general you need an approach that can handle both cooperative and adversarial environments and everything in between; I’d call that general approach counterfactual impact. I see another commenter has noted Toby’s old comments about this, and I’ll second that.
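For concreteness, here is a minimal sketch (toy numbers, purely illustrative) of what Shapley values compute in exactly that well-specified setting: a single characteristic function v(S) that every agent agrees on.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    # Average each player's marginal contribution v(S with i) - v(S)
    # over every order in which the coalition could have assembled.
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# Toy game: funders A and B must both act for a project worth 100
# to happen; either alone achieves nothing.
v = lambda S: 100 if set(S) == {"A", "B"} else 0

print(shapley_values(["A", "B"], v))   # {'A': 50.0, 'B': 50.0}
print(v({"A", "B"}) - v({"B"}))        # naive counterfactual for A alone: 100
```

Note that each funder’s naive counterfactual is the full 100 (the project fails without either of them), so summing counterfactuals double-counts to 200, while the Shapley split sums to the actual v = 100. But the whole construction presupposes a v that all agents accept; once that agreement is gone, there is nothing for it to operate on, which is why I’d reserve it for genuinely cooperative environments.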
Finally, economists may do more counterfactual reasoning than other groups, but that doesn’t mean they have it all figured out. Ask your average economist to quickly model a counterfactual and it could easily end up myopic or hyperopic too. The real solution is to get all analysts better trained in heuristics for reasoning about counterfactuals in a prosocial way. To me, that is where you end up if you try to implement philosophies like Toby’s global consequentialism. But we need more practical work on things like this, not repetitive claims about Shapley values.
I’m writing quickly and hope this comes across in the right spirit. I do find the strong claims in this post frustrating to see, but I welcome that you raised the topic.
I’m frustrated by your claim that I make strong claims and assumptions, since the points you disagree with seem to be conclusions you’d reach from skimming rather than engaging, and from reading extremely uncharitably.
First, yes, cooperation is hard, and EAs do it “partially.” I admit as much, and it’s certainly not the point of this post, so I don’t think we disagree. Second, you’re smuggling the entire argument into “correctly assessed counterfactual impact”; again, sure, I agree that if the assessment is correct, it’s not hyperopic, but getting it correct requires a game-theoretic approach, which we don’t generally use in practice.
Third, I don’t think we should just use Shapley values, which you seem to claim I believe. I said in the conclusion, “I’m unsure if there is a simple solution to this,” and I agreed that it’s relevant only where we have goals that are amenable to cooperation. Unfortunately, as I pointed out, it is exactly in those potentially cooperative scenarios that EA organizations seem to be trying to eke out marginal attributable impact instead of cooperating to maximize the total good done. I’ve responded to the comment about Toby’s claims, and again note that those comments assume either that we’re not in a potentially cooperative scenario, or that we can pretend to ignore how others respond to our decisions over time. And finally, I don’t know where your attack on economists is coming from; it seems unrelated to the post. Yes, we need more practical work on this, but more than that, we need to admit there is a problem and stop using poorly reasoned counterfactuals about other groups’ behavior, something you seem to agree with in your comment.