This comment starts out nitpicky but hopefully gets better
“If each team has an uncorrelated 50% chance of succeeding, and with three teams, the probability of being the sole group with a vaccine is therefore 12.5% - justifying the investment. But that means the combined value of all three was only 37.5% of the value of success, and if we invested on that basis, we would underestimate the value for each”
There’s something wrong with this calculation. I don’t think it makes sense to sum marginal values in this way. It’s like saying that if the last input has zero marginal value, then the average input also has zero marginal value. But if we lowered inputs on that basis, the marginal value would go up, and we would have a contradiction.
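To make the mismatch concrete, here is a minimal sketch of the numbers in the quote: each team’s “sole success” probability is 12.5%, and summing those gives 37.5%, but the actual probability that at least one team produces a vaccine is much higher.

```python
# Three teams, each with an independent 50% chance of success.
p = 0.5
n = 3

# "Sole success" value per team, as in the quoted calculation:
# this team succeeds while both others fail.
sole = p * (1 - p) ** (n - 1)   # 0.125

# Summing these per-team marginal values, as the quote does:
summed = n * sole               # 0.375

# But the probability that at least one team succeeds is:
at_least_one = 1 - (1 - p) ** n  # 0.875

print(sole, summed, at_least_one)
```

So the summed marginal values (37.5%) fall well short of the value actually produced (87.5%), which is why summing counterfactual margins in this way gives inconsistent answers.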
You want the expected value of trying to develop each vaccine, which requires an assumption about how the other teams decide whether to try. Then it won’t be obvious that all three try, and in the cases where some don’t, the marginal value of those that do is higher.
I think you’re right that we should be careful with counterfactual thinking, but cooperative game theory won’t solve this problem. The Shapley value is based on the marginal contribution of each player, so it’s also counterfactual.
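By definition, the Shapley value averages each player’s marginal (i.e. counterfactual) contribution over all orderings in which a coalition could form. A minimal sketch for the three-team vaccine example, where a coalition’s value is the probability that at least one of its members succeeds:

```python
from itertools import permutations

p = 0.5
players = [0, 1, 2]

def v(S):
    # Value of coalition S: probability at least one member succeeds.
    return 1 - (1 - p) ** len(S)

# Shapley value: average marginal contribution over all arrival orders.
perms = list(permutations(players))
shapley = {i: 0.0 for i in players}
for order in perms:
    S = []
    for i in order:
        shapley[i] += v(S + [i]) - v(S)  # marginal contribution of i
        S.append(i)
for i in players:
    shapley[i] /= len(perms)

print(shapley)
```

Every term being averaged is a counterfactual difference, v(S with i) minus v(S without i), which is the point: the Shapley value redistributes counterfactual contributions rather than avoiding them.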
Instead, I think the natural solution to the problem you outline is to think more clearly about our utility function. We sometimes talk as if it equals the sum of the utilities of all present and future sentient beings, and sometimes as if it equals the marginal contribution attributable to a specific action by EA. Neither is good: the first is too broad and the latter too narrow. If we think EA is about doing good better, we should care about converting people and resources to our way of doing good, and also about being transparent, credible, and rigorous about doing good. So if cooperating with others promotes these goals, great; otherwise, we should compete with them for resources.
Thanks David, this made me think quite a bit.