Hello Max,

Thanks for this thoughtful and observant comment. Let me say a few things in reply. You raised quite a few points, and my replies aren't in any particular order.
I'm sympathetic to person-affecting views (on which creating people has no value), but I'm still a bit unsure about this (I'm also unsure what the correct response to moral uncertainty is, and hence uncertain how to respond to this uncertainty). However, this view isn't shared by all of HLI's supporters and contributors, so it isn't true to say there is an 'HLI view'. I don't plan to insist on one either.
And perhaps an organisation such as HLI is more useful as a broad tent that unites 'near-term happiness maximisers' irrespective of their reasons for focusing on the near term.
I expect HLI's primary audience to be those who have decided they want to focus on near-term human happiness maximisation. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer term, as well as of non-humans in the nearer and longer term. If you're wondering why this might be of interest, note that one might hold a wide person-affecting view on which it's good to increase the well-being of future lives that will exist, whichever lives those are (just as one might care about the well-being of one's future child, whichever child that turns out to be, i.e. de dicto rather than de re). Or one could hold that creating lives can be good but still think it's worth working on the quality of future lives, rather than just the quantity (reducing extinction risks being a clear way to increase the quantity of lives). Some of these issues are discussed in section 6 of the mental health cause profile.
However, I’m struck by what seems to me a complete absence of such explicit population ethical reasoning in your launch post
Internally, we did discuss whether to make this explicit. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making happy people. In the end, we decided not to mention this. One reason is that, as noted above, it's not (yet) totally clear what HLI will focus on, hence we don't know which colours to nail to the mast, so to speak.
Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we are making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project, and outside EA we doubt many people have these alternatives to making people happier in mind. Working on the principle that you shouldn't raise objections to your argument that your opponent wouldn't consider, it seemed questionably useful to bring up the topic. To illustrate, if I explain what HLI is working on to a stranger I meet in the pub, I say 'we're focused on finding the best ways to make people happier' rather than 'we're focused on near-term human happiness maximisation', even though the latter is more accurate, because it causes less confusion.
More generally, it’s unclear how much work HLI should put into defending a stance in population ethics vs assuming one and then seeing what follows if one applies new metrics for well-being. I lean towards the latter. Saliently, I don’t recall GiveWell taking a stance on population ethics so much as assuming its donors already care about global health and development and want to give to the best things in that category.
Much of the above equally applies to discussing the value of saving lives. I'm sympathetic to (although, again, not certain about) Epicureanism, on which living longer has no value, but I'm not sure anyone else in HLI shares that view (I haven't asked around, actually). In section 5 of the mental health cause profile, I do a cost-effectiveness comparison of saving lives to improving lives using the 'standard' view of the badness of death, deprivationism (the badness of your death is the amount of well-being you would have had if you had lived, hence saving 2-year-olds is better than saving 20-year-olds, all other things being equal). I imagine we'll set out how different views about the value of saving lives give you different priorities without committing, as an organisation, to a view, and leave readers to make up their own minds.
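To make the deprivationist comparison concrete, here is a toy calculation; the life-expectancy and well-being figures are hypothetical placeholders chosen only to show the structure of the view, not estimates from the cause profile:

```python
# Toy sketch of deprivationism: the badness of a death is the well-being
# the person would have enjoyed had they lived. All numbers below are
# hypothetical illustrations, not estimates from the cause profile.

def badness_of_death(age_at_death, life_expectancy, annual_wellbeing):
    """Well-being lost to death = remaining years * annual well-being."""
    remaining_years = max(0, life_expectancy - age_at_death)
    return remaining_years * annual_wellbeing

LIFE_EXPECTANCY = 70    # hypothetical
ANNUAL_WELLBEING = 1.0  # arbitrary units of well-being per life-year

loss_at_2 = badness_of_death(2, LIFE_EXPECTANCY, ANNUAL_WELLBEING)
loss_at_20 = badness_of_death(20, LIFE_EXPECTANCY, ANNUAL_WELLBEING)

# On deprivationism the earlier death deprives the person of more
# well-being, so saving the 2-year-old averts the larger loss.
print(loss_at_2, loss_at_20)  # 68.0 50.0
```

An Epicurean would instead set the value of extending a life to zero, which is why the two views can rank the same interventions very differently.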
(Whereas, without such an explanation, I would be confused why someone would start their own organization “[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.”)
I don't see why this is confusing. Holding one's views on population ethics or the badness of death fixed, if one has a different view of what value is, how it should be measured, or how it should be aggregated, that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
Thanks for your comments and engaging on this topic. If quite a few people flag similar concerns over time we may need to make a more explicit statement about such matters.
Hi Michael, thank you for your thoughtful reply. This all makes a lot of sense to me.
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically. (Though I appreciate that maybe you didn’t write that post specifically for this Forum, and that maybe it just isn’t worth the effort to do so.) Waiting and checking if other people flag similar concerns seems like a very sensible response to me.
One quick reply:
Holding one's views on population ethics or the badness of death fixed, if one has a different view of what value is, how it should be measured, or how it should be aggregated, that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
I agree I didn't make it clear why this would be confusing to me. I think my thought was roughly:
(i) Contingently, we can have an outsized impact on the expected size of the total future population (e.g. by reducing specific extinction risks).
(ii) If you endorse totalism in population ethics (or a sufficiently similar aggregative and non-person-affecting view), then whatever your theory of well-being, because of (i) you should think that we can have an outsized impact on total future well-being by affecting the expected size of the total future population.
Here, I take “outsized” to mean something like “plausibly larger than through any other type of intervention, and in particular larger than through any intervention that optimized for any measure of near-term well-being”. Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being, or how to measure well-being—that is, my guess is that reducing existential risk would be (contingently!) a convergent priority (at least on the axiological, even though not necessarily normative level) of all bundles of ethical views that include totalism, in particular irrespective of their theory of well-being. [Of course, taken literally this claim would probably be falsified by some freak theory of well-being or other ethical view optimized for making it false, I’m just gesturing at a suitably qualified version I might actually be willing to defend.]
However, I agree that there is nothing conceptually confusing about the assumption that a different theory of well-being would imply different career priorities. I also concede that my case isn’t decisive—for example, one might disagree with the empirical premise (i), and I can also think of other at least plausible defeaters such as claims that improving near-term happiness correlates with improving long-term happiness (in fact, some past GiveWell blog posts on flow-through effects seem to endorse such a view).
Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being
Yes, this seems a sensible conclusion to me. I think we’re basically in agreement: varying one’s account of the good could lead to a new approach to prioritisation, but probably won’t make a practical difference given totalism and some further plausible empirical assumptions.
That said, I suspect doing research into how to improve the quality of lives long-term would be valuable and is potentially worth funding (even from a totalist viewpoint, assuming you think we have or will hit diminishing returns to X-risk research eventually).
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically.
Oh, I'm glad you agree. I don't really want to tangle with all this on the HLI website. I thought about giving more details on the EA Forum than on the website itself, but that struck me as looking sneaky, which was a reason against doing so.