I think it is uncontroversial that, at least on the negative side of the scale, some actions are vastly worse than others, e.g. a mass murder or a military coup against a democratically elected leader, compared to more ‘everyday’ bads like being a grumpy boss.
Agreed! I share the belief that there are huge differences in how bad an action can be and that there’s some relevance in distinguishing between very bad and just slightly bad ones. I didn’t think this was important to mention in my post, but if it came across as suggesting that we basically should only think in terms of three buckets, I clearly communicated poorly—I agree that this would be too crude.
It feels pretty hard to know which actions are neutral, for many of the reasons you give: the world is complex and there are lots of flow-through effects and interactions.
Strongly agreed! I strongly share the worry that identifying neutral actions would be extremely hard in practice—it took me a while to settle on “bullshit jobs” as a representative example in the original post, and I’m still unsure whether it’s a solid case of “neutral actions”. But for me, this uncertainty reinforces the case for more research/thinking to identify actions with significantly positive outcomes vs actions that are basically neutral. I find myself believing that dividing actions into “significantly positive” vs “everything else” is epistemologically more tractable than dividing them into “the very best” vs “everything else”. (I think I’d agree that there is a complementary quest—identifying very bad actions and roughly scoring them on how bad they would be—which is worth pursuing alongside either of the two options mentioned in the last sentence; maybe I should’ve mentioned this in the post?)
Identifying which positive actions are significantly so versus insignificantly so feels like it just loses a lot of information compared to a finer-grained scale.
I think I disagree, mostly for epistemological reasons—I don’t think we have much access to that information at a finer-grained scale; given that, giving up on finding such information wouldn’t be a great loss, because there isn’t much to lose in the first place.
I think I might also disagree from a conceptual or strategic standpoint: my thinking on this—especially when it comes to catastrophic risks, maybe a bit less for global health & development / poverty—tends to be more about “what bundle of actions and organisations and people do we need for the world to improve towards a state that is more sustainable and exhibits higher wellbeing (/less suffering)?” For that question, knowing and contributing to significantly good actions seems to be of primary importance, since I believe that we’ll need many of these good actions—not just the very best ones—for eventual success anyway. Since publishing this essay and receiving a few comments defending (or taking for granted) the counterfactual perspective on impact analysis, I’ve come to reconsider whether I should base my thinking on that perspective more often than I currently do. I remain uncertain and undecided on that point for now, but feel relatively confident that I won’t end up concluding that I should pivot to only or primarily using the counterfactual perspective (vs. the “collective rationality / how do I contribute to success at all” perspective)… Curious to hear if all that makes some sense to you (though you might continue to disagree)?
Yes, I think that makes sense. For me, the area where I am most sympathetic to your collective rationality approach is voting, where, as you noted elsewhere, the 80K narrow consequentialist approach is pretty convoluted. Conversely, the Categorical Imperative / universalisability perspective is very clear that voting is good, and thinking in terms of larger groups and being part of something is perhaps helpful here. So yes, while I still generally prefer the counterfactual perspective, I am probably not fully settled there.
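(For concreteness, here is a toy version of the narrow consequentialist voting calculation I have in mind—a sketch only, with made-up numbers rather than 80K’s actual figures; the “convoluted” part is that both inputs are very hard to estimate:)

```python
# Toy expected-value calculation for a single vote, in the narrow
# consequentialist style. Both numbers below are hypothetical and
# notoriously hard to pin down, which is what makes the approach feel
# convoluted in practice.

p_decisive = 1e-7        # assumed probability that one vote flips the outcome
value_difference = 1e10  # assumed societal value gap between outcomes ($)

expected_value = p_decisive * value_difference
print(f"Expected value of one vote: ${expected_value:,.0f}")  # $1,000
```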
I suppose in theory being part of a loose collective like EA focused on impact could mean that individual donation choices matter less: if my $X goes to org Y, someone else will notice Y is better funded and give to a similarly impressive org Z. I think in practice there is enough heterogeneity in cause prioritization that this may not be that large an effect? Perhaps within e.g. global health it could work, though, where donating directly to any GiveWell top charity is similar to any other, as GiveWell might make up the difference.
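(To make that funging mechanism concrete, here is a toy sketch—hypothetical numbers and a hypothetical “top-up to target” rule, not a claim about how GiveWell actually allocates:)

```python
# Toy model of donation "funging": if a coordinating funder tops charities
# up to its own target allocation, my gift to charity Y displaces the
# funder's money rather than changing Y's total. All numbers hypothetical.

target = {"Y": 1_000_000, "Z": 1_000_000}  # funder's intended allocation
my_gift = {"Y": 10_000}                    # my donation goes entirely to Y

# The funder fills each charity up to its target, so final totals match it:
funder_fills = {org: max(0, target[org] - my_gift.get(org, 0)) for org in target}
totals = {org: funder_fills[org] + my_gift.get(org, 0) for org in target}

print(totals)  # {'Y': 1000000, 'Z': 1000000}: final allocation unchanged
# My counterfactual effect is freeing up $10,000 of the funder's budget,
# which lands wherever its marginal dollar goes, not at Y specifically.
```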