First, “note that this [misha: Shapley value of evaluator] is just the counterfactual value divided by a fraction [misha: by two].” Right, this is exactly the same as in my comment; I further divide by total impact to calculate the Shapley multiplier.
Do you think we disagree?
Why doesn’t my conclusion follow?
Second, you conclude “And the Shapley value multiplier would be 1/(some estimate of how many players there are)”, while your estimate is “0.3 to 0.5”. There were about 30 participants over the two lotteries that year, so you should have ended up with something an order of magnitude smaller, like “3% to 10%”.
Am I missing something?
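A quick sketch of the two-player case may make the “divided by two” point concrete. The dollar figures below are made-up assumptions for illustration, not numbers from the thread:

```python
from itertools import permutations

# Hypothetical characteristic function for a two-player game:
# funders alone achieve shallowly evaluated giving; funders plus
# the evaluator achieve the in-depth evaluated lottery grant.
V_DEFAULT = 60.0   # assumed value of shallowly evaluated giving
V_LOTTERY = 100.0  # assumed value of the in-depth evaluated grant

def v(coalition):
    c = frozenset(coalition)
    if "funders" not in c:
        return 0.0  # any coalition without the funders has no impact
    return V_LOTTERY if "evaluator" in c else V_DEFAULT

def shapley(player, players):
    """Exact Shapley value: average marginal contribution over all orderings."""
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        i = order.index(player)
        total += v(order[: i + 1]) - v(order[:i])
    return total / len(orders)

players = ["funders", "evaluator"]
sv_eval = shapley("evaluator", players)
counterfactual = v(players) - v(["funders"])  # evaluator's counterfactual value

assert sv_eval == counterfactual / 2          # "divided by two"
multiplier = sv_eval / v(players)             # Shapley multiplier
print(sv_eval, multiplier)                    # 20.0 0.2
```

With two players, the evaluator’s marginal contribution is the full counterfactual value in one of the two orderings and zero in the other, hence the division by two.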
Third, for the model with more than two players, it’s unclear to me who the players are. If these are the funders + N evaluators, you will indeed end up with 1/N · (1 − V(funders)/V(lottery)) because
- Shapley multipliers should add up to 1, and
- the Shapley value of the funders is easy to calculate (any coalition without them lacks any impact).
Please note that V(funders) is V(default, …) from the comment above.
(Note that this model ignores that the beneficiary might win the lottery and no donations will be made.)
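The first reason, that multipliers sum to 1, is the efficiency property of Shapley values, and it can be checked by brute force. The characteristic function below (a single evaluator already unlocks the full lottery value) and the dollar figures are my own illustrative assumptions, not the exact model under discussion, so it verifies the efficiency and symmetry properties rather than the 1/N formula itself:

```python
from itertools import permutations

# Toy game: funders + N symmetric evaluators.
N = 3
V_DEFAULT, V_LOTTERY = 60.0, 100.0  # made-up impact figures
PLAYERS = ["funders"] + [f"evaluator_{i}" for i in range(N)]

def v(coalition):
    c = set(coalition)
    if "funders" not in c:
        return 0.0  # any coalition without the funders lacks any impact
    has_eval = any(p.startswith("evaluator") for p in c)
    return V_LOTTERY if has_eval else V_DEFAULT

def shapley(player):
    """Average marginal contribution of `player` over all join orders."""
    orders = list(permutations(PLAYERS))
    total = 0.0
    for order in orders:
        i = order.index(player)
        total += v(order[: i + 1]) - v(order[:i])
    return total / len(orders)

multipliers = {p: shapley(p) / v(PLAYERS) for p in PLAYERS}
# Efficiency: Shapley values sum to v(grand coalition), so multipliers sum to 1.
assert abs(sum(multipliers.values()) - 1.0) < 1e-9
# Symmetry: interchangeable evaluators get equal multipliers.
assert len({round(m, 12) for p, m in multipliers.items() if p != "funders"}) == 1
```

Whatever characteristic function one settles on, these two properties pin down the evaluators’ total share as 1 minus the funders’ multiplier, split equally among symmetric evaluators.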
In the end:
I think that it is necessary to estimate X in “shallowly evaluated giving is X times as impactful as in-depth evaluated giving”, because if X ≈ 1, the impact of the evaluator is close to nil.
I might not understand how you model impact here; please be more specific about the modeling setup and assumptions.
I don’t think that you should split evaluators, basically because you want to disentangle the impact of evaluation and funding provision, not to calculate Adam’s personal impact.
Take it to the extreme: it would be pretty absurd to say that an overwhelmingly successful donor lottery (e.g., one seeding a new ACE Top Charity in a yet-unknown but highly tractable area of animal welfare, or discovering an AI alignment prodigy) had less impact than an average comment just because too many people (100K) contributed a dollar to participate in it.
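The role of X in the first point above can be made concrete with a one-line sanity check; the values of X and V_LOTTERY here are made up for illustration:

```python
# X = impact ratio of shallowly evaluated giving to in-depth evaluated giving.
# V_LOTTERY is a made-up stand-in for the lottery's total impact.
V_LOTTERY = 100.0

def evaluator_counterfactual(X):
    # What in-depth evaluation adds over the shallow default.
    return (1 - X) * V_LOTTERY

assert evaluator_counterfactual(0.5) == 50.0
assert abs(evaluator_counterfactual(0.99) - 1.0) < 1e-9  # X near 1 -> near nil
```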
1. Yes, we agree.
2. No, we don’t agree. I think that Adam did better than other potential donor lottery winners would have, so his counterfactual value is higher, and thus his Shapley value is also higher. If all the other donors had been clones of Adam, I agree that you’d just divide by n. Thus, the claim “In every example here, this will be equivalent to calculating counterfactual value, and dividing by the number of necessary stakeholders” is in fact wrong, and I was implicitly doing both of the following in one step: (a) calculating Shapley values with “evaluators” as one agent, and (b) thinking of Adam’s impact as a high proportion of the Shapley value of the evaluator round.
The rest of our disagreements hinge on 2., and I agree that judging the evaluator step alone would make more sense.
Thank you for engaging!