Re: “The donor lottery evaluation seems to miss that $100K would have been donated otherwise”: I don’t think it does. In the “total project impact” section, I clarify that “Note that in order to not double count impact, the impact has to be divided between the funding providers and the grantee (and possibly with the new hires as well).”
Thank you, Nuno!
Am I understanding correctly that the Shapley value multiplier (0.3 to 0.5) is responsible for preventing double-counting?
If so, why don’t you apply it to the Positive status effects? That effect was also partially enabled by the funding providers (maybe less so).
Huh! I am surprised that your Shapley value calculation is not explicit, though it does seem reasonable.
Let’s limit ourselves to two players (= funding providers, who are only capable of shallow evaluations, and grantmakers, who are capable of in-depth evaluation but don’t have their own funds). You get
Shapley mult. = [V(lottery, funding in-depth evaluated projects) − V(default, funding shallowly evaluated projects)] / (2 · V(lottery)).
Your estimate of “0.3 to 0.5” implies that shallowly evaluated giving is as impactful as “0 to 0.4” of in-depth evaluated giving.
This ×2.5..∞ multiplier is reasonable, but it doesn’t feel quite right to put 10% of the probability mass above ∞ :)
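To spell out the arithmetic, here is a quick check of the implied range, assuming the multiplier is m = [V(lottery) − V(default)] / (2 · V(lottery)), so that V(default)/V(lottery) = 1 − 2m:

```python
# Ratio of shallowly evaluated to in-depth evaluated giving implied by
# a Shapley multiplier m = (V_lottery - V_default) / (2 * V_lottery):
# solving for V_default / V_lottery gives 1 - 2*m.
def implied_ratio(m: float) -> float:
    return 1 - 2 * m

for m in (0.3, 0.5):
    print(f"m = {m}: V_default/V_lottery = {implied_ratio(m):.1f}")
# m = 0.3 gives a ratio of 0.4 (in-depth giving ~2.5x as impactful);
# m = 0.5 gives a ratio of 0.0 (in-depth giving "infinitely" more
# impactful), matching the "0 to 0.4" range above.
```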
This makes me further confused about the gap between the donor lottery and the alignment review.
You are understanding correctly that the Shapley value multiplier is responsible for preventing double-counting, but you’re making a mistake when you say that it ‘implies that shallowly evaluated giving is as impactful as “0 to 0.4” of in-depth evaluated giving’; the latter doesn’t follow.
In the two-player game, you have Value({}), Value({1}), Value({2}), and Value({1,2}). The Shapley value of player 1 (the funders) is ([Value({1}) − Value({})] + [Value({1,2}) − Value({2})])/2, and the Shapley value of player 2 (the donor lottery winner) is ([Value({2}) − Value({})] + [Value({1,2}) − Value({1})])/2.
In this case, I’m taking [Value({2}) − Value({})] to be ~0 for simplicity, so the value of player 2 is [Value({1,2}) − Value({1})]/2. Note that this is just the counterfactual value divided by a fraction.
If there were more players, it would be a little bit more complicated, but you’d end up with something similar to [Value({1,2,3}) − Value({1,3})]/3. Note again that this is just the counterfactual value divided by a fraction.
But now, I don’t know how many players there are, so I just consider [Value({The World}) − Value({The World without player 2})]/(some estimate of how many players there are).
And the Shapley value multiplier would be 1/(some estimate of how many players there are).
At no point am I assuming that “shallowly evaluated giving is as impactful as 0 to 0.4 of in-depth evaluated giving”; the thing that I’m doing is just allocating value so that the sum of the value of each player is equal to the total value.
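The two-player calculation above can be sketched numerically; the coalition values below are made-up placeholders, chosen so that Value({2}) − Value({}) ≈ 0, as in the simplification above:

```python
from itertools import permutations
from math import factorial

# Player 1 = the funders, player 2 = the donor lottery winner.
# Illustrative placeholder values for each coalition.
value = {
    frozenset(): 0.0,
    frozenset({1}): 60.0,      # funders alone: shallowly evaluated giving
    frozenset({2}): 0.0,       # winner alone: no funds, ~no impact
    frozenset({1, 2}): 100.0,  # lottery: in-depth evaluated giving
}

def shapley(players, value):
    """Shapley values via the average-over-orderings definition."""
    sv = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            sv[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: s / factorial(len(players)) for p, s in sv.items()}

sv = shapley([1, 2], value)
print(sv)  # player 2 gets [Value({1,2}) - Value({1})] / 2 = 40 / 2 = 20
```

With these placeholder numbers, player 2’s Shapley multiplier comes out to 20/100 = 0.2, the same as plugging V(default) = 60 and V(lottery) = 100 into the multiplier formula from the earlier comment.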
Thank you for engaging!
First, “note that this [misha: the Shapley value of the evaluator] is just the counterfactual value divided by a fraction [misha: by two].” Right, this is exactly the same as in my comment; I further divide by the total impact to calculate the Shapley multiplier.
Do you think we disagree?
Why doesn’t my conclusion follow?
Second, you conclude, “And the Shapley value multiplier would be 1/(some estimate of how many players there are)”, while your estimate is “0.3 to 0.5”. There were around 30 participants over the two lotteries that year, so you should have ended up with something an order of magnitude smaller, like “3% to 10%”.
Am I missing something?
Third, for the model with more than two players, it’s unclear to me who the players are. If they are the funders + N evaluators, you will indeed end up with (1/N) · (1 − V(funders)/V(lottery)), because
Shapley multipliers should add up to 1, and
Shapley value of the funders is easy to calculate (any coalition without them lacks any impact).
Please note that V(funders) is V(default, …) from the comment above.
(Note that this model ignores that the beneficiary might win the lottery and no donations will be made.)
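With made-up placeholder numbers, that bookkeeping can be sketched as follows (V_funders plays the role of V(default, …) and V_lottery the role of V(lottery, …); the funders’ multiplier being exactly V(funders)/V(lottery) is the simplification from the second point above):

```python
# Placeholder values for the funders + N evaluators model.
V_funders = 60.0   # impact of shallowly evaluated giving (funders alone)
V_lottery = 100.0  # impact of in-depth evaluated giving (funders + evaluators)
N = 3              # assumed number of (interchangeable) evaluators

# Funders' multiplier, under the simplification that their Shapley value
# equals V_funders; the N evaluators split the remainder evenly.
mult_funders = V_funders / V_lottery
mult_evaluator = (1 / N) * (1 - mult_funders)

print(mult_evaluator)                     # (1/3) * 0.4
print(mult_funders + N * mult_evaluator)  # the multipliers sum to ~1
```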
In the end,
I think that it is necessary to estimate X in “shallowly evaluated giving is as impactful as X times in-depth evaluated giving”, because if X ≈ 1, the impact of the evaluator is close to nil.
I might not understand how you model impact here; please be more specific about the modeling setup and assumptions.
I don’t think that you should split the evaluators, basically because you want to disentangle the impact of evaluation from that of funding provision, not to calculate Adam’s personal impact.
Like, take it to the extreme: it would be pretty absurd to say that an overwhelmingly successful donor lottery (e.g., one seeding a new ACE Top Charity in a yet-unknown but highly tractable area of animal welfare, or discovering an AI alignment prodigy) had less impact than an average comment just because too many people (100K of them) contributed a dollar each to participate in it.
Yes, we agree
No, we don’t agree. I think that Adam did better than other potential donor lottery winners, so his counterfactual value is higher, and thus his Shapley value is also higher. If all the other donors had been clones of Adam, I agree that you’d just divide by n. Thus, the claim “In every example here, this will be equivalent to calculating counterfactual value, and dividing by the number of necessary stakeholders” is in fact wrong, and I was implicitly doing both of the following in one step: 1. calculating Shapley values with “evaluators” as one agent, and 2. thinking of Adam’s impact as a high proportion of the Shapley value of the evaluators.
The rest of our disagreements hinge on 2., and I agree that judging the evaluator step alone would make more sense.
Good point re: value of information