I think there are several different activities that people call "impact attribution", and they differ in important ways that can lead to problems like the ones outlined in this post. For example:
1. If I take action A instead of action B, then the world will be X better off.
2. I morally "deserve credit" in the amount of X for the fact that I took action A instead of B.
I think the fact that any action relies enormously on context, on other people's previous actions, and so on, is a strong challenge to the second point, but I'd argue it's the first point that should actually influence my decision-making. If other people have already done a lot of work towards a goal, but I have the opportunity to take an action that changes whether their work succeeds or fails, then for sure I shouldn't get moral credit for the entire project. But when asking questions like "should I take this action or some other?" or "what kinds of costs should I be willing to bear to ensure this happens?", I should be using the full difference between success and failure as my benchmark. (That said, if "failure" means "someone else has to take this action instead" rather than "it's as if none of the work was done", the benchmark should be the comparison with that scenario instead: you need to compare the most realistic alternative scenarios you can.)
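As a toy sketch of the benchmark point above (all numbers hypothetical): the value of taking an action is the difference between the most realistic scenario in which you take it and the most realistic one in which you don't, which is not always the same as "success vs. total failure".

```python
# Hypothetical values for one project outcome under three scenarios.
value_if_i_act = 100             # the project succeeds
value_if_no_one_acts = 0         # the project fails entirely
value_if_replacement_acts = 90   # someone else steps in, slightly worse

# Benchmark when no one else would take the action: success vs. failure.
benchmark_vs_failure = value_if_i_act - value_if_no_one_acts

# Benchmark when "failure" really means "a replacement acts instead".
benchmark_vs_replacement = value_if_i_act - value_if_replacement_acts

print(benchmark_vs_failure, benchmark_vs_replacement)  # 100 vs. 10
```

The same action is worth 100 against total failure but only 10 against a realistic replacement, which is why choosing the right comparison scenario matters so much.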
I mostly disagree with this on pragmatic grounds. I agree that that's the right approach to take on the first point if/when you have full information about what's going on. But in practice you essentially never have proper information about what everyone else's counterfactuals would look like under the different actions you could take.
If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different. Doing this properly might mean impact markets (where the "market" part works as a mechanism for distributing cognition, so that each market participant is responsible for thinking through their own alternative options, and feeding that information into the system via their willingness to do work for different amounts of pay). But I think you can get a rough approximation of the benefits of impact markets without actual markets, by having people do the things they would have done with markets; in this context, that means paying attention to the share of credit different parties would get.
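To make the coordination point concrete, here is a toy sketch with hypothetical numbers: a project worth 90 that succeeds only if all three people join, where each person has an outside option worth 40. Naive counterfactual reasoning keeps everyone on the project, while a Shapley-style credit share (one standard way of splitting credit, used here as an illustration rather than anything the comment commits to) correctly signals that each person's outside option is better.

```python
from itertools import permutations

people = ["a", "b", "c"]
outside_option = 40  # what each person could achieve elsewhere

def project_value(coalition):
    # The project is all-or-nothing: worth 90 only with all three people.
    return 90 if len(coalition) == 3 else 0

# Counterfactual impact: each person compares "project with me" (90)
# against "project without me" (0), sees 90 > 40, and stays -- yet the
# group produces 90 where three outside options would have produced 120.
counterfactual = {
    p: project_value(people) - project_value([q for q in people if q != p])
    for p in people
}

def shapley_share(p):
    # Average marginal contribution of p over all 6 orderings of 3 people.
    total = 0
    for order in permutations(people):
        before = list(order[: order.index(p)])
        total += project_value(before + [p]) - project_value(before)
    return total / 6

shares = {p: shapley_share(p) for p in people}
# Each credit share is 30 < 40, so shares (unlike raw counterfactuals)
# point everyone towards their outside option.
print(counterfactual, shares)
```

The shares also sum to the project's total value of 90, whereas the individual counterfactual impacts sum to 270, which is the over-counting that makes "everyone acts on marginal impact" fail as a coordination rule.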
Is it at least fair to say that in situations where the other main actors aren't explicitly coordinating with you and aren't aware of your efforts (and, to an approximation, weren't expecting your efforts and won't react to them), you should be thinking more like I suggested?
I think maybe yes? But I'm a bit worried that "won't react to them" is actually doing a lot of work.
We could chat about a more concrete example that you think fits this description, if you like.
Thank you for writing this piece, Sarah! I think the distinction stated above is important, between (A) the counterfactual impact of an action or a person, and (B) moral praiseworthiness.
You might say that individual actions or lives have large differences in impact, but remain sceptical of the idea of (intrinsic) moral desert/merit, because individuals' actions are conditioned by prior causes. Your post reminded me a lot of Michael Sandel's book, The Tyranny of Merit. Sandel takes issue with the attitude of "winners" within contemporary meritocracy who see themselves as deserving of their success. This seems similar to your concerns about hubris amongst "high-impact individuals".