In the first case, I examined a world in which a Random Altruist charity is already working on welfare asks. This charity chooses randomly from the set of welfare asks; this could be similar to (although probably worse than) choosing asks based on salience and emotional impact. In this case, I modeled a sample of 30 random welfare ask ranges and selections. I found that entering the space with the welfare increase method leads to more optimal outcomes ~17% of the time. The counterfactual speed-up approach leads to more optimal outcomes ~60% of the time. The two models were equally good ~23% of the time. Therefore, if we do not have perfect information and so cannot select the asks which result in the highest utility, then maximizing counterfactual speed-up is a superior strategy. No surprise there.
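For readers who want to play with this kind of comparison, here is a minimal sketch of one way such a simulation could look. All modeling choices here are my own illustrative assumptions, not necessarily the setup described above: each ask delivers its (randomly drawn) annual welfare value every year after it is won, the Random Altruist works through a fixed random order at one ask per year, and the entrant completes one extra ask per year using one of the two greedy strategies.

```python
import random

def simulate(values, base_order, entrant):
    """Total welfare when the entrant uses the given strategy alongside
    a Random Altruist charity that follows base_order."""
    n = len(values)
    base_year = {a: y for y, a in enumerate(base_order)}
    done, completion, year = set(), {}, 0
    queue = list(base_order)
    while len(done) < n:
        # Entrant picks one ask per year.
        remaining = [a for a in range(n) if a not in done]
        pick = entrant(remaining, values, base_year, year)
        done.add(pick)
        completion[pick] = year
        # Random Altruist completes its next not-yet-done ask.
        while queue and queue[0] in done:
            queue.pop(0)
        if queue:
            a = queue.pop(0)
            done.add(a)
            completion[a] = year
        year += 1
    # Welfare accrues from completion until an (arbitrary) horizon of n years.
    return sum(values[a] * (n - t) for a, t in completion.items())

def welfare_increase(remaining, values, base_year, year):
    # Greedy: pick the biggest remaining ask.
    return max(remaining, key=lambda a: values[a])

def counterfactual_speedup(remaining, values, base_year, year):
    # Greedy: pick the ask with the largest value x years-advanced,
    # measured against the Random Altruist's original schedule.
    return max(remaining, key=lambda a: values[a] * max(base_year[a] - year, 0))

wins = {"welfare increase": 0, "speed-up": 0, "tie": 0}
for _ in range(1000):
    values = [random.randint(1, 10) for _ in range(30)]
    order = list(range(30))
    random.shuffle(order)
    w = simulate(values, order, welfare_increase)
    s = simulate(values, order, counterfactual_speedup)
    if w > s:
        wins["welfare increase"] += 1
    elif s > w:
        wins["speed-up"] += 1
    else:
        wins["tie"] += 1
print(wins)
```

The resulting win rates are quite sensitive to the value distribution and the welfare-accrual model, which is part of why the distributional question below seems important.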
I think the ranking of the two approaches could depend substantially on the distribution of magnitudes of welfare asks and the number of asks.
For example, consider a distribution which is constant in magnitude except for a few rare, very large outliers. Suppose specifically that it's always positive, and constant except for exactly one large positive outlier. In this case, the optimal solution is to ensure the outlier comes as early as possible, so you choose the outlier first and then choose any other asks after that. The welfare increase method does this, so it will always be optimal (though it might tie with counterfactual speed-up). On the other hand, if the number of asks is high enough, the counterfactual speed-up approach will often choose the last ask the Random Altruist charity would have chosen, so as to speed it up the most, which would be suboptimal.
To illustrate, consider the following sequence of asks (and their present value) that Random Altruist charity would have chosen:
1, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
That’s one 1, one 10 and then ten 1s.
Choosing the 10 has a value of 10 according to counterfactual speed-up, since it advances it by one year.
Choosing the very last 1 in the sequence has a value of 11 according to counterfactual speed-up, since it advances it by 11 years, but it wouldn’t have made a real difference if you had chosen the 2nd 1 instead (the two sequences would be indistinguishable by welfare), and even choosing the very 1st 1 would have been better, since it would make the 10 come one year earlier.
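For concreteness, the speed-up values in this example can be computed directly: picking the ask at 0-indexed position i first moves it from year i to year 0, so its counterfactual speed-up value is v_i × i.

```python
# Counterfactual speed-up value of picking each ask first: the ask at
# (0-indexed) position i moves from year i to year 0, so its value is v_i * i.
seq = [1, 10, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
speedup_value = [v * i for i, v in enumerate(seq)]
print(speedup_value)  # → [0, 10, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

The greedy speed-up choice is the final 1 (value 11), even though the 10 (value only 10 by this metric) is what actually matters for the long-run welfare trajectory.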
What this might suggest in general is that if most asks aren't very impactful or have similar impact, but there are some much more impactful outliers, we should use the welfare increase approach. This seems fairly intuitive if you think most animal welfare charities aren't focused on farmed animals at all, or not on the most numerous and worst-treated ones (I'm not sure this is actually the case; most animal charity goes to shelters according to ACE, but I don't know if that counts as animal welfare asks). (EDIT: I suppose if there's still quite a lot of spread among the outliers, then the counterfactual speed-up approach could be better.) Of course, we could just ignore those charities, but once we do, we might be in a situation similar to the one you described as:
However, I then examined how varying the strategy of the existing charity would change the outcome. If the existing charity follows the simple welfare increase approach, the results change. Now following the simple welfare increase strategy is the optimal outcome 100% of the time! This would be the same if the actor in the space is trying to maximize counterfactual speed-up, as the best way to do this when alone is to choose the biggest welfare asks.
Also, did you happen to estimate (via Monte Carlo) the expected value of each approach?