In terms of the underlying worldview differences, I think the key questions are something like:
(i) How confident should we be in our explicit expected value estimates? How strongly should we discount highly speculative endeavors, relative to “commonsense” do-gooding?
(ii) How does the total (intrinsic + instrumental) value of improving human lives & capacities compare to the total (intrinsic) value of pure suffering reduction?
[Aside: I think it’s much more reasonable to be uncertain about these (largely empirical) questions than about the (largely moral) questions that underpin the orthodox breakdown of EA worldviews.]
Shouldn’t you just set a probability distribution[1] over those stances[2] and allow the obvious comparisons between them? We’re estimating the impact of an intervention, and different stances on how to estimate impact (e.g. how much to discount, which priors to use) can disagree. But as long as impact is measured in the same terms, say the same utility function over the welfares of moral patients and/or other things, we can treat uncertainty across stances just like uncertainty about the effects of our interventions. For expected utility maximizers, that means taking the expected utility under each stance and then the credence-weighted average across stances, as in the sketch below.
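To make that concrete, here’s a minimal sketch in Python (the stance names, credences and per-stance estimates are all hypothetical): each stance gets a credence and its own expected utility estimate in the same units, and the overall figure is just the credence-weighted average.

```python
# Hypothetical credences over two stances on how to estimate impact
# (e.g. trusting explicit estimates vs. applying a skeptical prior).
stance_credences = {"inside_view": 0.4, "skeptical_prior": 0.6}  # sums to 1

# Each stance's expected utility estimate for the same intervention,
# measured in the same units (the same utility function over welfares).
expected_utility_by_stance = {"inside_view": 120.0, "skeptical_prior": 5.0}

# Treat stance uncertainty like empirical uncertainty: take the
# credence-weighted average of the per-stance expected utilities.
overall = sum(
    credence * expected_utility_by_stance[stance]
    for stance, credence in stance_credences.items()
)
print(overall)  # 0.4 * 120 + 0.6 * 5 = 51.0
```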
If you can’t put a probability distribution over those stances, then I don’t know how you could ground any split of resources across them (although you could still pick portfolios that aren’t robustly dominated by any other; see the sketch below). Approaches for handling normative uncertainty generally require such distributions.
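On “not robustly dominated”, one natural reading (my gloss, with hypothetical numbers): portfolio A robustly dominates portfolio B if A is at least as good under every stance and strictly better under at least one, so B can be dropped without needing credences over the stances.

```python
def robustly_dominates(a: dict, b: dict) -> bool:
    """a and b map each stance to a portfolio's expected utility under it.
    a robustly dominates b if a is at least as good under every stance
    and strictly better under at least one."""
    return all(a[s] >= b[s] for s in a) and any(a[s] > b[s] for s in a)

# Hypothetical per-stance values for two resource splits:
portfolio_a = {"inside_view": 100.0, "skeptical_prior": 10.0}
portfolio_b = {"inside_view": 90.0, "skeptical_prior": 10.0}
print(robustly_dominates(portfolio_a, portfolio_b))  # True: b can be dropped
```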
I would say a better place to break intertheoretic comparisons (and to form buckets, if going for a bucket approach) would be between different attitudes towards risk and ambiguity, i.e. different degrees and versions of risk aversion, difference-making risk aversion and ambiguity aversion, and of course between moral stances, like variants of utilitarianism, alternatives to utilitarianism, the normative parts of moral weights, etc.
GHD looks most plausible as a priority with at least modest difference-making risk aversion or ambiguity aversion, or with certain person-affecting or non-aggregative views. Otherwise, some x-risk work will likely end up beating it; the sketch below illustrates how even modest difference-making risk aversion can flip that comparison.
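To illustrate that last claim with made-up numbers: plain expected value favors a tiny chance of a huge difference, but applying a concave function to the difference an intervention makes in each outcome (a stylized form of difference-making risk aversion; sqrt is an arbitrary choice here, and all payoffs are hypothetical) flips the ranking toward the near-certain intervention.

```python
import math

def dmra_value(outcomes, concave=math.sqrt):
    """outcomes: list of (probability, difference_made) pairs.
    Applies a concave transform to each difference before averaging,
    a stylized difference-making risk-averse evaluation."""
    return sum(p * concave(d) for p, d in outcomes)

# Hypothetical payoffs in the same utility units:
ghd = [(1.0, 100.0)]                           # near-certain, moderate difference
xrisk = [(0.001, 1_000_000.0), (0.999, 0.0)]   # tiny chance of a huge difference

# Risk-neutral expected differences: x-risk wins.
print(sum(p * d for p, d in ghd))    # 100.0
print(sum(p * d for p, d in xrisk))  # 1000.0

# With sqrt difference-making risk aversion: GHD wins.
print(dmra_value(ghd))    # 10.0
print(dmra_value(xrisk))  # 1.0
```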
[1] Or multiple, with imprecise probabilities.
[2] Especially for (i), maybe less so for (ii), given moral uncertainty about the moral weights of nonhuman animals.