2024 edit: This is being shared as “you shouldn’t donate blood”. I think many people should donate blood; it largely depends on your situation, partially for these reasons.
Original comment:
Thanks for doing a quantitative estimate; I think it’s a very useful exercise and something we should do more often.
I disagree with your main conclusion for two main reasons:
As a heuristic, in the UK the NHS values a life at roughly £1M. If an extra blood donation saved 1/30th of a life, they would be willing to pay about £33k per donation, which seems implausibly high.
You mention that only a fraction of donated blood is used in emergencies. I would expect that fraction to rise in case of a blood shortage, so the counterfactual impact of donating blood is much lower. I think in this case counterfactuals are more useful than Shapley values (Shapley values give the same merit to all actors, while counterfactuals consider the difference in value between acting and not acting).
Something that would make me change my mind is an estimate of how many people are dying because of blood shortages. (I would guess close to 0 in rich countries).
As for donating blood in general, I think the costs and benefits are very location specific. In Italy, you get free blood tests, an extra vacation day, and a nice sandwich, so it’s potentially negative cost.
If all need for blood is met regardless of your donation, your counterfactual impact should be 0. Would Shapley values agree? If they give the same value to everyone, that would mean everyone gets 0 impact.
See this other comment
It depends on how you model it. If you model other donors as part of the environment (like e.g. the needle for drawing the blood), everyone gets 0. If you model each donor as an actor, they all get (total impact) / (number of actors). Nuno made http://shapleyvalue.com/ to play around with it.
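A minimal sketch of the symmetric case (a toy model, not the linked calculator’s code): with three interchangeable donors and a value function that pays out one life as soon as any one of them donates, an exact Shapley computation splits the total equally, while each donor’s counterfactual impact is 0.

```python
from itertools import permutations

def shapley(n_players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    totals = [0.0] * n_players
    orders = list(permutations(range(n_players)))
    for order in orders:
        coalition = set()
        for player in order:
            before = v(coalition)
            coalition = coalition | {player}
            totals[player] += v(coalition) - before
    return [t / len(orders) for t in totals]

# Toy model: one unit of blood is needed; any single donor meets
# the need (1 life saved), and extra donations add nothing.
v = lambda coalition: 1.0 if len(coalition) >= 1 else 0.0

donors = {0, 1, 2}
print(shapley(3, v))                                  # each donor gets 1/3
print([v(donors) - v(donors - {i}) for i in donors])  # counterfactuals: all 0
```

Modeling the other donors as part of the environment instead collapses this to a one-player game, whose Shapley value equals the (zero) counterfactual.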
I think in this case counterfactuals are more useful than Shapley values
Could you say why? Or link somewhere that explains the advantages of counterfactuals? I had never heard of Shapley values, but opened the post and read some and immediately thought “oh my god yes!”.
Let’s say you had $5k to donate and you could donate it to:
An intervention that saves 3 lives, but has a 99.9% chance of being funded by another donor no matter what you do, and only has room for an extra $5k (if it gets $10k it still only saves 3 lives).
An intervention that saves 1 life, and would otherwise not be funded.
To maximize a naively calculated Shapley value you would choose the first one, even though the second one leads to better outcomes (more lives saved in expected value).
I’m a bit unsure about both the general principle and the details; there’s probably a way to compute Shapley values that would maximize expected value in this case as well, but I think the same issue would apply to the blood donor case.
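The two-option comparison above can be sketched numerically (hypothetical numbers from the example; the “naive Shapley” figure treats the other donor as a second player in a game where the intervention pays out 3 lives as soon as either of you funds it):

```python
# Option A: saves 3 lives, but another donor funds it with
# probability 0.999 no matter what you do.
# Option B: saves 1 life and is otherwise unfunded.
p_other_funds_a = 0.999

# Counterfactual impact: expected lives saved by acting vs. not acting.
counterfactual_a = (1 - p_other_funds_a) * 3  # ~0.003 lives
counterfactual_b = 1.0

# Naive Shapley for A: two players (you, the other donor), with
# v(S) = 3 for any non-empty coalition S. Each of the two orderings
# credits the full 3 to whoever funds first, so each player gets 3/2.
# Option B is a one-player game, so you get the full 1.
shapley_a = 3 / 2
shapley_b = 1.0

print(counterfactual_b > counterfactual_a)  # True: counterfactuals pick B
print(shapley_a > shapley_b)                # True: naive Shapley picks A
```

The naive Shapley calculation here ignores the 99.9% probability entirely, which is exactly why it ranks the options the other way around.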
Very low confidence in this, but I think Shapley values are useful for coordinating strategic agents that act in response to each other, and for avoiding double counting, but not when the other agents’ actions are not influenced by yours.
This comment on the Shapley values post explains it better
This post might be interesting for some details and proposes some solutions
Thanks. I think I need to dive deeper into the mathematical definition to understand this. It seems to me that counterfactual value is not as well defined.
the second one leads to better outcomes (more lives saved in expected value)
Short objection: it is not necessarily true that higher expected value = better. For example, in this scenario, with a low enough risk tolerance the first option would be better.