This is a good question. If we assume everything else is equal (neither got the money by causing harm, and both were influenced by roughly the same number of actors to become able and willing to donate their money), then I think I agree that the altruistic impact of the first is 100x that of the second.
I am not entirely sure what that implies for my own thinking on the topic. On the face of it, it clearly contradicts the conclusion of my “Empirical problem” section, but it does so without, as far as I can tell, addressing the subpoints I mention there. Does that mean those subpoints are irrelevant to the empirical claim I make? They still seem relevant to me, and that seems clear in examples other than the one you presented. I’m confused, and I imagine I’ll need at least a few more days to figure out how your example changes my thinking.
Update: I am currently working on a Dialogue post with JWS to discuss their responses to the essay above and my reflections since publishing it. I hope this will help clarify my thinking on some of the issues raised in the comments (as well as some of the uncertainties I had while writing the essay). For that reason, I’ll hold off on responding to comments here and on updating the original essay until work on the Dialogue post has progressed a bit further, hoping to come back in a few days (max 1-1.5 weeks?) with a clearer take on whether and how comments such as this one by Jeff shift my thinking. Thanks again to all critical (and supportive) commenters for kicking off these further reflections!
I’m from a middle-income country, and when I first seriously engaged with EA, the fact that my earnings were an order of magnitude lower than those of people in high-income countries, proportionately reducing my giving impact, made me feel really sad and left out.
It’s also why the original title of your post (the post itself is fantastic; I resonate with a lot of the points you bring up) didn’t quite land with me, so I appreciate the title change and your consideration in thinking through Jeff’s example.
New Update (as of 2024-03-27): This comment, with its very clear example that gets to the bottom of our disagreement, has been extremely helpful in pushing me to reconsider some of the claims I make in the post. I have somewhat updated my views over the last few days (see the section on “the empirical problem” in the Appendix I added today), and this comment has been influential in that. Gave it a Delta for that reason; thanks Jeff!
While I now more explicitly acknowledge and agree that, when measured in terms of counterfactual impact, some actions can have hundreds of times more impact than others, I retain a sense of unease when adopting this framing:
When impact is evaluated differently (e.g. through Shapley-value-like attribution of “shares of impact”, or through a collective rationality mindset; see comments here and here for what I mean by that), it seems less clear that the larger donor is 100x more impactful than the smaller one. One way of reasoning about this: the person donating $100,000 probably (necessarily?) needed more preceding actions to arrive at a situation where she is able and willing to donate that much money, and more subsequent actions will probably (necessarily?) be needed to make the money count, i.e. to ensure it has positive consequences. There are then many more actors and actions among which the impact of the $100,000 donation has to be apportioned, and it is not clear that the larger donor still appears vastly more impactful under this alternative perspective/measurement strategy.
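To make that intuition concrete, here is a minimal toy sketch in Python. It assumes a stylised “unanimity” game in which a donation only produces impact once every necessary contributor has acted, and the contributor lists and numbers (five contributors behind the $100k gift, two behind the $1k gift, impact proportional to dollars) are invented purely for illustration:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    shares = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            shares[p] += value(coalition) - before
    n_orders = factorial(len(players))
    return {p: s / n_orders for p, s in shares.items()}

# Invented example: the $100k gift needs five contributors to "count",
# the $1k gift needs two; impact only materialises once all are present.
large = ["big_donor", "fundraiser", "charity_ops", "field_staff", "evaluator"]
small = ["small_donor", "charity_ops"]

large_shares = shapley_values(large, lambda c: 100.0 if len(c) == len(large) else 0.0)
small_shares = shapley_values(small, lambda c: 1.0 if len(c) == len(small) else 0.0)

print(large_shares["big_donor"])    # 20.0 (1/5 of 100 impact units)
print(small_shares["small_donor"])  # 0.5  (1/2 of 1 impact unit)
```

Under these made-up assumptions, the large donor’s attributed share is 20 units against the small donor’s 0.5, a 40x rather than 100x ratio. The point is not these particular numbers, only that the attributed ratio depends on how many contributors the credit is split across.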
You can shake your head and claim (rightly, I believe) that this is irrelevant for deciding whether donating $100,000 or donating $1,000 is better. Yes, for my decision as an individual, estimating the possible impact of my actions by assessing the likely counterfactual consequences resulting directly from the action will sometimes be the most sensible thing to do, and I’m glad I’ve come to realise that explicitly in response to your comment.
But I believe that recognising and taking seriously this other view, on which my choice to donate $100,000 does not make me individually responsible for 100x more impact than the donor of $1,000, can be relevant for decisions in two ways:
1) It prevents me from discounting and devaluing all the other actors that contribute vital inputs (even if they are “easily replaceable” as individuals)
2) It encourages me to take actions that may facilitate, enable, or support large counterfactual impact by other people. This perspective also encourages me to consider actions that may have a large counterfactual impact themselves, but in more indirect and harder-to-observe ways (even if I appear easily replaceable in theory, it’s unclear whether I will be replaced in practice, so the counterfactual impact seems extremely hard to determine; what is very clear is that by performing a relevant supportive action, I will be contributing something vital to the eventual impact).
If you find the time to come back to this so many days after the initial post, I’d be curious to hear what you think about these (still somewhat confused?) considerations :)