Congratulations on launching HLI. From my outside perspective, it looks like you have quite a bit of momentum, and I’m glad to see more diverse approaches being pursued within EA. (Even though I don’t anticipate supporting yours in particular.)
One thing I’m curious about is to what extent HLI’s strategy or approach depends on views in population ethics (as opposed to other normative questions, including the theory of well-being), and to what extent you think the question of whether maximizing consequentialism would recommend supporting HLI hinges on population ethics.
I’m partly asking because I vaguely remember you having written elsewhere that, regarding population ethics, you think that (i) death is not in itself bad for any individual’s well-being, and (ii) creating additional people is never good for the world. My impression is that (i) and (ii) have major implications for how to do ‘cause prioritization’, and for how to approach the question of “how to do the most good we can” more broadly. It would thus make sense to me that someone endorsing (i) and (ii) thought they needed to research and provide their own career advice, say, as it would likely differ from the advice provided by 80K and from popular views in EA more generally. (Whereas, without such an explanation, I would be confused why someone would start their own organization “[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.”) More broadly, it would make sense to me that people endorsing (i) and (ii) would embark on their own research programme and practical projects.
However, I’m struck by what seems to me a complete absence of such explicit population-ethical reasoning in your launch post. It seems to me that everything you say is consistent with (i) and (ii), and that e.g. in your vision you almost suggest a view that is neutral about ‘making happy people’. But on the face of it, ‘increasing the expected number of [happy] individuals living in the future, for example by reducing the risk of human extinction’ seems a reasonable candidate answer to your guiding question, i.e., “What are the most cost-effective ways to increase self-reported subjective well-being?”
Put differently, I’d expect that your post raises questions such as ‘How is this different from what other EA orgs are doing?’ or ‘How will your career advice differ from 80K’s?’ for many people. I appreciate there are many other reasons why one might focus on, as you put it, “welfare-maximization in the nearer-term”—most notably empirical beliefs. For example, someone might think that the risk of human extinction this century is extremely small, or that reducing that risk is extremely intractable. And perhaps an organization such as HLI is more useful as a broad tent that unites ‘near-term happiness maximizers’ irrespective of their reasons for why they focus on the near term. You do mention some of the differences, but it doesn’t seem to me that you provide sufficient reasons for why you’re taking this different approach. Instead, you stress that you take value to consist exclusively of happiness (and suffering), how you operationalize happiness, etc.; but unless I’m mistaken, these points, which belong to the theory of well-being, don’t actually answer the question that seems to me a bit like the unacknowledged elephant in the room: ‘So why are you not trying to reduce existential risk?’ Indeed, if you were to ask me why I’m not doing roughly the same things as you with my EA resources, I’d say, to a first approximation, ‘because we disagree about population ethics’ rather than ‘because we disagree about the theory of well-being’ or ‘I don’t care as much about happiness as you do’, and my guess is this is similar for many EAs in the ‘longtermist mainstream’.
To be clear, this is just something I was genuinely surprised by, and am curious to understand. The launch post currently does seem slightly misleading to me, but not more so than I’d expect posts in this reference class to generally be, and not so much that I clearly wish you’d change anything. I do think some people in your target audience will be similarly confused, and so perhaps it would make sense for you to at least mention this issue and possibly link to a page with a more in-depth explanation for readers who are interested in the details.

In any case, all the best for HLI!
Hello Max,

Thanks for this thoughtful and observant comment. Let me say a few things in reply. You raised quite a few points and my replies aren’t in a particular order.
I’m sympathetic to person-affecting views (on which creating people has no value), but I’m still a bit unsure about this (I’m also unsure what the correct response to moral uncertainty is, and hence unsure how to respond to this uncertainty). However, this view isn’t shared by all of HLI’s supporters and contributors, so it isn’t true to say there is an ‘HLI view’. I don’t plan to insist on one either.
And perhaps an organization such as HLI is more useful as a broad tent that unites ‘near-term happiness maximizers’ irrespective of their reasons for why they focus on the near term.
I expect HLI’s primary audience to be those who have decided that they want to focus on near-term human happiness maximization. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer term, as well as of non-humans in the nearer and longer term. If you’re wondering why this might be of interest, note that one might hold a wide person-affecting view on which it’s good to increase the well-being of future lives that will exist, whichever those lives are (just as one might care about the well-being of one’s future child, whichever child that turns out to be, i.e. de dicto rather than de re). Or one could hold that creating lives can be good but still think it’s worth working on the quality of future lives, rather than just the quantity (reducing extinction risks being a clear way to increase the quantity of lives). Some of these issues are discussed in section 6 of the mental health cause profile.
However, I’m struck by what seems to me a complete absence of such explicit population-ethical reasoning in your launch post
Internally, we did discuss whether we should make this explicit or not. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making happy people. In the end, we decided not to mention this. One reason is that, as noted above, it’s not (yet) totally clear what HLI will focus on, hence we don’t yet know which colours we’d be nailing to the mast, so to speak.
Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we focus on making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project, and outside EA we doubt many people will have these alternatives to making people happier in mind. Working on the principle that you shouldn’t raise objections to your argument that your opponent wouldn’t consider, it seemed of questionable use to bring up the topic. To illustrate: if I’m explaining what HLI is working on to a stranger I meet in the pub, I would say ‘we’re focused on finding the best ways to make people happier’ rather than ‘we’re focused on near-term human happiness maximisation’, even though the latter is more accurate, because the former causes less confusion.
More generally, it’s unclear how much work HLI should put into defending a stance in population ethics versus assuming one and then seeing what follows when one applies new metrics for well-being. I lean towards the latter. Notably, I don’t recall GiveWell taking a stance on population ethics so much as assuming its donors already care about global health and development and want to give to the best things in that category.
Much of the above applies equally to discussing the value of saving lives. I’m sympathetic to (although, again, not certain about) Epicureanism, on which living longer has no value, but I’m not sure anyone else in HLI shares that view (I haven’t asked around, actually). In section 5 of the mental health cause profile, I do a cost-effectiveness comparison of saving lives against improving lives using the ‘standard’ view of the badness of death, deprivationism (the badness of your death is the amount of well-being you would have had if you had lived; hence saving 2-year-olds is better than saving 20-year-olds, all else equal). I imagine we’ll set out how different views about the value of saving lives give you different priorities without committing, as an organisation, to a view, and leave readers to make up their own minds.
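To give a rough feel for how that kind of comparison works, here is a toy back-of-the-envelope sketch in the deprivationist spirit. Every number in it (life expectancy, well-being levels, the depression example) is a made-up placeholder for illustration only, not a figure from the cause profile:

```python
# Toy deprivationist comparison. All numbers are illustrative placeholders,
# not figures from the mental health cause profile.

LIFE_EXPECTANCY = 70   # assumed age the person would reach if their life is saved
AVG_WELLBEING = 6.5    # assumed average life satisfaction on a 0-10 scale

def value_of_saving_life(age_at_death: float) -> float:
    """Deprivationism: the badness of a death is the well-being the person would have had."""
    remaining_years = max(LIFE_EXPECTANCY - age_at_death, 0)
    return remaining_years * AVG_WELLBEING

def value_of_improving_life(wellbeing_gain: float, years: float) -> float:
    """Well-being gained by raising someone's life satisfaction for some years."""
    return wellbeing_gain * years

print(value_of_saving_life(2))           # 442.0 -> saving a 2-year-old
print(value_of_saving_life(20))          # 325.0 -> saving a 20-year-old
print(value_of_improving_life(1.5, 10))  # 15.0  -> e.g. a sustained mood improvement
```

On a deprivationist accounting like this, saving the younger person is better simply because more well-being-years are at stake; an Epicurean would instead set the value of averting the death to zero and count only the improvements.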
(Whereas, without such an explanation, I would be confused why someone would start their own organization “[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.”)
I don’t see why this is confusing. Holding one’s views on population ethics or the badness of death fixed, if one has a different view of what value is, or of how it should be measured (or aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
Thanks for your comments and for engaging on this topic. If quite a few people flag similar concerns over time, we may need to make a more explicit statement about such matters.
Hi Michael, thank you for your thoughtful reply. This all makes a lot of sense to me.
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically. (Though I appreciate that maybe you didn’t write that post specifically for this Forum, and that maybe it just isn’t worth the effort to do so.) Waiting and checking if other people flag similar concerns seems like a very sensible response to me.
One quick reply:
Holding one’s views on population ethics or the badness of death fixed, if one has a different view of what value is, or of how it should be measured (or aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
I agree I didn’t make it intelligible why this would be confusing to me. I think my thought was roughly:
(i) Contingently, we can have an outsized impact on the expected size of the total future population (e.g. by reducing specific extinction risks).
(ii) If you endorse totalism in population ethics (or a sufficiently similar aggregative and non-person-affecting view), then whatever your theory of well-being, because of (i) you should think that we can have an outsized impact on total future well-being by affecting the expected size of the total future population.
Here, I take “outsized” to mean something like “plausibly larger than through any other type of intervention, and in particular larger than through any intervention that optimized for any measure of near-term well-being”. Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being, or how to measure well-being—that is, my guess is that reducing existential risk would be (contingently!) a convergent priority (at least on the axiological, even though not necessarily the normative, level) of all bundles of ethical views that include totalism, in particular irrespective of their theory of well-being. [Of course, taken literally this claim would probably be falsified by some freak theory of well-being or other ethical view optimized for making it false; I’m just gesturing at a suitably qualified version I might actually be willing to defend.]
However, I agree that there is nothing conceptually confusing about the assumption that a different theory of well-being would imply different career priorities. I also concede that my case isn’t decisive—for example, one might disagree with the empirical premise (i), and I can also think of other at least plausible defeaters such as claims that improving near-term happiness correlates with improving long-term happiness (in fact, some past GiveWell blog posts on flow-through effects seem to endorse such a view).
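To make the structure of (i) and (ii) concrete, here is a toy expected-value sketch; every number in it is a pure placeholder I’m inventing for illustration, not an estimate anyone has defended:

```python
# Toy illustration of the totalist argument. All numbers are made-up
# placeholders; only the structure of the comparison matters.

FUTURE_POPULATION = 1e15  # assumed number of future people if extinction is avoided
AVG_WELLBEING = 7.0       # assumed average well-being per future person

def ev_gain_from_risk_reduction(delta_p: float) -> float:
    """Expected well-being gained by lowering extinction probability by delta_p."""
    return delta_p * FUTURE_POPULATION * AVG_WELLBEING

def ev_gain_from_near_term_work(people_helped: float, gain_per_person: float) -> float:
    """Expected well-being gained by improving the lives of existing people."""
    return people_helped * gain_per_person

# On these (illustrative) totalist numbers, even a one-in-a-million reduction in
# extinction risk outweighs raising a billion existing people's well-being by 1 point.
print(ev_gain_from_risk_reduction(1e-6))      # 7e9 expected well-being units
print(ev_gain_from_near_term_work(1e9, 1.0))  # 1e9
```

Whether this dominance actually holds depends entirely on the empirical premise (i) and on accepting totalism; a person-affecting view would simply not count the first term.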
Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being
Yes, this seems a sensible conclusion to me. I think we’re basically in agreement: varying one’s account of the good could lead to a new approach to prioritisation, but probably won’t make a practical difference given totalism and some further plausible empirical assumptions.
That said, I suspect doing research into how to improve the quality of lives in the long term would be valuable and potentially worth funding (even from a totalist viewpoint, assuming you think we have hit, or will eventually hit, diminishing returns to X-risk research).
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically.
Oh, I’m glad you agree—I don’t really want to tangle with all this on the HLI website. I thought about giving more details on the EA Forum than were on the website itself, but that struck me as having the downside of looking sneaky, which was a reason against doing so.