Hi Michael, thank you for your thoughtful reply. This all makes a lot of sense to me.
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically. (Though I appreciate that maybe you didn’t write that post specifically for this Forum, and that maybe it just isn’t worth the effort to do so.) Waiting and checking if other people flag similar concerns seems like a very sensible response to me.
One quick reply:
> Holding one’s views on population ethics or the badness of death fixed, if one has a different view of what value is, or how it should be measured (or how it should be aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
I agree I didn’t make clear why I found this confusing. I think my thought was roughly:
(i) Contingently, we can have an outsized impact on the expected size of the total future population (e.g. by reducing specific extinction risks).
(ii) If you endorse totalism in population ethics (or a sufficiently similar aggregative and non-person-affecting view), then whatever your theory of well-being, because of (i) you should think that we can have an outsized impact on total future well-being by affecting the expected size of the total future population.
Here, I take “outsized” to mean something like “plausibly larger than through any other type of intervention, and in particular larger than through any intervention that optimized for any measure of near-term well-being”. Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being, or how to measure well-being—that is, my guess is that reducing existential risk would be (contingently!) a convergent priority (at least on the axiological, even though not necessarily normative level) of all bundles of ethical views that include totalism, in particular irrespective of their theory of well-being. [Of course, taken literally this claim would probably be falsified by some freak theory of well-being or other ethical view optimized for making it false, I’m just gesturing at a suitably qualified version I might actually be willing to defend.]
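To make the “screening off” intuition more concrete, here is a minimal toy formalisation (my own sketch; the symbols, the independence assumption, and the positivity assumption are purely illustrative, not something either of us has argued for here). Under totalism, the value of an outcome with population $N$ and individual well-being levels $w_1, \dots, w_N$ is
$$V = \sum_{i=1}^{N} w_i = N\,\bar{w},$$
where $\bar{w}$ is average well-being on whatever theory of well-being one favours (hedonic, preference-based, self-reported SWB scores). Suppose a near-term intervention raises average well-being, $\bar{w} \mapsto (1+\delta)\,\bar{w}$, for some modest $\delta$, while an existential-risk intervention raises expected population size, $\mathbb{E}[N] \mapsto (1+\Delta)\,\mathbb{E}[N]$. Treating $N$ and $\bar{w}$ as independent and assuming $\bar{w} > 0$, the ratio of expected values is
$$\frac{\mathbb{E}[V_{\text{x-risk}}]}{\mathbb{E}[V_{\text{near-term}}]} = \frac{1+\Delta}{1+\delta},$$
so the x-risk intervention dominates whenever $\Delta > \delta$, whichever theory defines $\bar{w}$. Premise (i) is the claim that $\Delta$ can be very large; the flow-through-effects point I mention below complicates this by correlating the two levers, i.e. by denying the independence of $N$ and $\bar{w}$.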
However, I agree that there is nothing conceptually confusing about the assumption that a different theory of well-being would imply different career priorities. I also concede that my case isn’t decisive—for example, one might disagree with the empirical premise (i), and I can also think of other at least plausible defeaters such as claims that improving near-term happiness correlates with improving long-term happiness (in fact, some past GiveWell blog posts on flow-through effects seem to endorse such a view).
> Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being
Yes, this seems a sensible conclusion to me. I think we’re basically in agreement: varying one’s account of the good could lead to a new approach to prioritisation, but probably won’t make a practical difference given totalism and some further plausible empirical assumptions.
That said, I suspect research into how to improve the quality of lives over the long term would be valuable and potentially worth funding (even from a totalist viewpoint, assuming you think we have hit, or will eventually hit, diminishing returns to X-risk research).
> FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically.
Oh, I’m glad you agree; I don’t really want to tangle with all this on the HLI website. I thought about giving more details on the EA Forum than were on the website itself, but that struck me as looking sneaky, which was a reason against doing so.