To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but we make earlier progress on moral circle expansion in one, so the amount of suffering in our sphere of influence is reduced by 1 at each step in that sequence compared to the other.
Just to say I really liked this point, which I think applies equally to focusing on the correct account of value (as opposed to who the value-bearers are, which is this point).
Is putting some non-trivial budget into cash prizes for arguments against what you do the only way to show you’re self-critical? Your statement suggests you believe something like that, but it doesn’t seem to be. I can’t think of any other organisation that has ever done that, so if it is the only way to show you’re self-critical, that suggests no organisation (I’ve heard of) is self-critical, which seems false. I wonder if you’re holding CEA to a peculiarly high standard; would you expect MIRI, 80k, the Gates Foundation, Google, etc. to do the same?
Despite your reservations, I think it would actually be very useful for you to input your best-guess inputs (and it’s likely to be more useful for you to do it than an average EA, given you’ve thought about this more). My thinking is this. I’m not sure I entirely followed the argument, but I took it that the thrust of what you’re saying is “we should do uncertainty analysis (use Monte Carlo simulations instead of point estimates) as our cost-effectiveness might be sensitive to it”. But you haven’t shown that GiveWell’s estimates are sensitive to a reliance on point estimates (have you?), so you haven’t (yet) demonstrated it’s worth doing the uncertainty analysis you propose after all. :)
More generally, if someone says “here’s a new, really complicated methodology we *could* use”, I think it’s incumbent on them to show that we *should* use it, given the extra effort involved.
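To make the point-estimate vs Monte Carlo distinction concrete, here is a minimal sketch. The model, parameter names, and distributions are all invented for illustration (this is not GiveWell’s actual model); the point is just that when a model divides by uncertain inputs, propagating distributions can give a different answer than plugging in best guesses:

```python
import random

random.seed(0)

# Made-up toy model: cost per life saved = cost_per_net / (coverage * mortality_reduction)
cost_per_net = 5.0

# Point estimate: plug the best guesses straight into the formula.
coverage_best, mortality_best = 0.5, 0.02
point_estimate = cost_per_net / (coverage_best * mortality_best)

# Monte Carlo: sample each uncertain input and push the whole
# distribution through the model instead of a single number.
samples = [
    cost_per_net / (random.uniform(0.3, 0.7) * random.uniform(0.01, 0.03))
    for _ in range(100_000)
]
mc_mean = sum(samples) / len(samples)

print(f"Point estimate:   ${point_estimate:,.0f} per life saved")
print(f"Monte Carlo mean: ${mc_mean:,.0f} per life saved")
# Because the model divides by uncertain quantities, the mean of the
# simulated distribution sits above the naive point estimate (Jensen's
# inequality) - exactly the kind of sensitivity at issue.
```

Whether GiveWell’s actual estimates exhibit this kind of sensitivity is, of course, the empirical question at stake.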
Well, how about starting “Tinder for sparerooms”?
I note your main project is writing a book on longtermism. Would you like to see the EA movement going in a direction where it focuses exclusively, or almost exclusively, on longtermist issues? If not, why not?
To explain the second question, it would seem answering ‘no’ to the first question would be in tension with advocating (strong) longtermism.
shows a major problem
You mean, shows a major finding, no? :)
suggesting a violation of transitivity
The (normal) person-affecting response here is to say that options 1 and 3 are incomparable in value to 2 - existence is neither better than, worse than, nor equally good as non-existence for someone. However, if Sam exists necessarily, then 2 isn’t an option, so we then say 3 is better than 1. Hence, no issues with transitivity.
(ii) Society currently privileges those who live today above those who will live in the future; and
(iii) We should take action to rectify that, and help ensure the long-run future goes well.
Do you mean Necessitarians wouldn’t accept (iii) above? Necessitarians will agree with (ii) but deny (iii). (Not sure if this is what you were referring to.)
I’m sympathetic to Necessitarianism, but I don’t know how fringe it is. It strikes me as the most philosophically defensible population axiology that rejects long-termism which leans me towards thinking the definition shouldn’t fall foul of it. (I think Hilary’s suggestion would fall foul of it, but yours would not).
An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs.
Just to make a brief, technical (pedantic?) comment, I don’t think this definition would give you what you want. (Strict) Necessitarianism holds that the only persons who matter are those who exist whatever we do. On such a view, the practical implication is, in effect, that only present people matter. The view is thus not longtermist on your chosen definition. However, Necessitarianism doesn’t discount for time per se (the discounting is only contingently related to time) and hence is longtermist on the quoted definition.
One idea for evening things out is to offer prizes for the best arguments against established EA donation targets!
This is a great idea!
Thanks for linking the article, which I enjoyed. I’ll jump straight to the three objections I have. I’ll first state these briefly and then explain them in greater detail. First, the Duty of Beneficence, as you state it, seems both ad hoc and under-demanding. Second, your first argument for Maximising Beneficence does not provide the support you claim it does. Third, insufficient explanation is offered of the relationship between, on the one hand, assessing problems by their scale, neglectedness, and tractability and, on the other, cost-effectiveness. I’m (obviously) an EA enthusiast, so I offer these in the spirit of trying to improve the arguments for EA.
1. The Duty of Beneficence
You state “Most middle or upper class people in rich countries have a duty to make helping others a significant part of their lives”, but why is it the case that “Most middle or upper class people in rich countries” have the duty, rather than everyone? Suppose I’m rich but in a poor country, or poor but in a rich country. Do I have no duty of beneficence? That seems implausible. What if I am rich and currently live in a rich country, but move to a poor country—does my duty of beneficence disappear?
Let’s call your version The Middle-Class-Rich-Country Duty of Beneficence (MCRC) and distinguish that from the General Duty of Beneficence: all individuals have a duty to make helping others a significant part of their lives. As you don’t defend the stronger, General Duty, I assume you’re taking the position that there is no General Duty, just MCRC.
I suppose you’re taking this as an argumentative strategy (rather than something you actually believe), and doing so because you assume your readers would accept MCRC but not the General Duty: presumably many people think that, at the very least, those who are globally fortunate ought to help others. Your (clever) move is to point out that the reader is themself likely part of this elite.
One problem with this move is that MCRC is arbitrary. I do not think you provide a justification for this particular specification, nor do I think there is one. Second, as noted, it is grossly under-demanding of those who are not both middle-class and in rich countries.
I worry the response to your argument from those who dislike Effective Altruism will be to (1) claim MCRC is implausible, (2) note you haven’t provided an argument for the General Duty, and thus (3) assume there is no general duty. I wonder if the better strategy is just to argue there must be a general duty to benefit others if the costs to us are small (a la Singer in ‘Famine, Affluence, and Morality’). You would then note the costs to the world’s rich are clearly small relative to the benefit they would give to others.
2. Maximising beneficence
Your first of two arguments for maximising beneficence is an appeal to cases. You write:
Suppose that, as a volunteer doctor in a resource-starved hospital in a poor country, you can do one of two things with your last day of work before you return home. First, you could perform surgery on an elderly man with prostate cancer, thereby saving his life. Or you could treat two children for malaria, thereby saving both their lives. If you had a personal attachment to the cause of fighting prostate cancer, would that give you sufficient reason to save the life of the elderly man rather than the two children? Clearly not. The importance of saving two lives rather than one, and of saving people who have much more to gain from their treatment, clearly outweighs whatever reason a personal attachment might bring.
Immediately after you state:
Yet this is morally analogous to the decisions that we actually face when we try to use our resources to do good. The only way in which it is morally disanalogous is with respect to what’s at stake. (emphasis added)
One important way in which the case is morally disanalogous is that it is about a doctor performing their duties. We often think people, acting in their professional roles, have duties that do not apply to individuals acting in their private lives. To highlight this, I imagine many people have the intuition that if this same doctor were to run a marathon for charity, that doctor would be morally permitted to fundraise for whatever cause they wanted, not just whatever cause saves lives most cost-effectively (or, more broadly, does the most good). When running the marathon, the doctor is merely a private citizen and, qua private citizen, they can do good any way they want. One could accept that the doctor must save the two rather than the one while denying that individuals are generally required to maximise beneficence.
Potentially, using a case where someone, acting as a private citizen, can save either one life or two at the same cost—e.g. you stop the trolley killing two people rather than one—would have the right intuitive pull whilst avoiding the objection that it is morally disanalogous.
I think there might be an objection to your second argument for maximising beneficence, but I haven’t been able to formulate it yet.
3. Scale, neglectedness, and tractability
You raise the problem of comparing effectiveness across different problems:
This is a legitimate concern, and effective altruists have developed an alternative heuristic framework for prioritising among causes, including when the impact within some of those causes is difficult to measure. According to this framework, the following factors are indicative of which causes are highest-priority [scale, neglectedness, tractability]
My concern is that in this essay you don’t offer an explanation to the reader as to why they should find it plausible that “the following three factors are indicative of which causes are highest-priority”. Why and exactly how should I use scale, neglectedness and tractability to compare farmed animal welfare to global health and development, two causes effective altruists find high priority?
You do provide a citation to your own book, Doing Good Better, but (as you and I have discussed before) it’s not very clear from that book either (a) why scale, neglectedness and tractability are indicative of which causes are highest priority or (b) how precisely those factors can be used by individuals to determine what the priorities are.
The most plausible precisification of the framework is the one offered by Owen Cotton-Barratt here, where the three factors are multiplied together to give marginal cost-effectiveness (you mention this version of the framework in your Palgrave article). I presume you didn’t want to get into the nitty-gritty in this essay, but I imagine some readers will be left confused about how the framework you mention in this article does the work of comparing causes. My only suggestion here is that you could consider also citing Owen’s and/or 80k’s explanations of the framework.
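In case it helps other readers, the rough shape of Owen’s precisification can be sketched as a telescoping product. This is my own gloss, and all the numbers below are invented purely to show how the intermediate units cancel:

```python
def marginal_cost_effectiveness(scale, tractability, neglectedness):
    """Good done per extra dollar, as the product of three ratios:
    scale:         good done per % of the problem solved
    tractability:  % of the problem solved per % increase in resources
    neglectedness: % increase in resources per extra dollar
                   (less crowded causes score higher here)
    The intermediate units cancel, leaving good done per dollar."""
    return scale * tractability * neglectedness

# Invented numbers for two hypothetical causes:
cause_a = marginal_cost_effectiveness(scale=1e6, tractability=0.01, neglectedness=1e-4)
cause_b = marginal_cost_effectiveness(scale=1e7, tractability=0.001, neglectedness=1e-5)

# Cause B is larger in scale but more crowded and less tractable at the
# margin, so cause A comes out roughly ten times more cost-effective
# (about 1.0 vs 0.1 units of good per extra dollar here).
print(cause_a, cause_b)
```

On this version, the framework really is just a factorisation of marginal cost-effectiveness, which is why the three factors have to be multiplied rather than, say, scored and summed.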
As an active forum user, I would also be curious to hear about this.
Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would “screen off” questions about the theory of well-being
Yes, this seems a sensible conclusion to me. I think we’re basically in agreement: varying one’s account of the good could lead to a new approach to prioritisation, but probably won’t make a practical difference given totalism and some further plausible empirical assumptions.
That said, I suspect doing research into how to improve the quality of lives long-term would be valuable and is potentially worth funding (even from a totalist viewpoint, assuming you think we have or will hit diminishing returns to X-risk research eventually).
FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad—because of the downsides you mention—for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically.
Oh I’m glad you agree—I don’t really want to tangle with all this on the HLI website. I thought about giving more details on the EA forum than were on the website itself, but that struck me as having the downside of looking sneaky and was a reason against doing so.
Thanks for this thoughtful and observant comment. Let me say a few things in reply. You raised quite a few points and my replies aren’t in a particular order.
I’m sympathetic to person-affecting views (on which creating people has no value) but still a bit unsure about this (I’m also unsure what the correct response to moral uncertainty is and hence uncertain about how to respond to this uncertainty). However, this view isn’t shared across all of HLI’s supporters and contributors, hence it isn’t true to say there is an ‘HLI view’. I don’t plan to insist on one either.
And perhaps an organization such as HLI is more useful as a broad tent that unites ‘near-term happiness maximizers’ irrespective of their reasons for why they focus on the near term.
I expect HLI’s primary audience to be those who have decided that they want to focus on near-term human happiness maximization. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer-term, as well as non-humans in the nearer- and longer-term. If you’re wondering why this might be of interest, note that one might hold a wide person-affecting view on which it’s good to increase the well-being of future lives that exist, whichever those lives are (just as one might care about the well-being of one’s future child, whichever child that turns out to be (i.e. de dicto rather than de re)). Or one could hold that creating lives can be good but still think it’s worth working on the quality of future lives, rather than just the quantity (reducing extinction risks being a clear way to increase the quantity of lives). Some of these issues are discussed in section 6 of the mental health cause profile.
However, I’m struck by what seems to me a complete absence of such explicit population ethical reasoning in your launch post
Internally, we did discuss whether we should make this explicit or not. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making happy people. In the end, we decided not to mention this. One reason is that, as noted above, it’s not (yet) totally clear what HLI will focus on, hence we don’t know what our colours are so as to be able to nail them to the mast, so to speak.
Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we were making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project; outside EA we doubt many people will have these alternatives to making people happier in mind. Working on the principle that you shouldn’t raise objections to your argument that your opponent wouldn’t consider, it seemed questionably useful to bring up the topic. To illustrate, if I were explaining what HLI is working on to a stranger I met in the pub, I would say ‘we’re focused on finding the best ways to make people happier’ rather than ‘we’re focused on near-term human happiness maximisation’, even though the latter is more accurate, as it will cause less confusion.
More generally, it’s unclear how much work HLI should put into defending a stance in population ethics vs assuming one and then seeing what follows if one applies new metrics for well-being. I lean towards the latter. Saliently, I don’t recall GiveWell taking a stance on population ethics so much as assuming its donors already care about global health and development and want to give to the best things in that category.
Much of the above applies equally to discussing the value of saving lives. I’m sympathetic to (although, again, not certain about) Epicureanism, on which living longer has no value, but I’m not sure anyone else in HLI shares that view (I haven’t asked around, actually). In section 5 of the mental health cause profile I do a cost-effectiveness comparison of saving lives to improving lives using the ‘standard’ view of the badness of death, deprivationism (the badness of your death is the amount of well-being you would have had if you had lived; hence saving 2-year-olds is better than saving 20-year-olds, all other things equal). I imagine we’ll set out how different views about the value of saving lives give you different priorities without committing, as an organisation, to a view, and leave readers to make up their own minds.
(Whereas, without such an explanation, I would be confused why someone would start their own organization “[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.”)
I don’t see why this is confusing. Holding one’s views on population ethics or the badness of death fixed, if one has a different view of what value is, or how it should be measured (or how it should be aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.
Thanks for your comments and engaging on this topic. If quite a few people flag similar concerns over time we may need to make a more explicit statement about such matters.
Hello Nathan. I think HLI will probably focus on what we can do for others. There is already quite a lot of work by psychologists on what individuals can do for themselves; see e.g. The How of Happiness by Lyubomirsky and what is called ‘positive psychology’ more broadly. Hence, our comparative advantage and counterfactual impact will be in how best to altruistically promote happiness.
Sure though there are some kinds of misery you don’t want to reduce
I think we should be maximising happiness over any organism’s whole lifespan; hence, some sadness now and then may be good for maximising happiness over the whole life. It’s an empirical question how much sadness is optimal for maximum lifetime happiness.
On the funeral point, I think you’re capturing an intuition about what we ought to do rather than what makes life go well for someone: you might think that not going to the funeral would make your life go better for you, but that you ought to go anyway. Hence, I don’t think your point counts against happiness being what makes your life go well for you (leaving other considerations to the side).
The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn’t matter as long as the individual is (more) satisfied.
Yes, the subjectivist could bite the bullet here. I doubt many(/any) subjectivists would deny this is a somewhat unpleasant bullet to bite.
Life satisfaction and preference satisfaction are different—the former refers to a judgement about one’s life, the latter to one’s preferences being satisfied in the sense that the world goes the way one wants it to. I think the example applies to both views. Suppose the grass counter is satisfied with his life and things are going the way he wants them to go: it still doesn’t seem that his life is going well. You’re right that preference satisfactionists often appeal to ‘laundered’ preferences—they have to prefer what their rationally ideal self would prefer, or something—but it’s hard and unsatisfying to spell out what this looks like. Further, it’s unclear how that would help in this case: if anyone is a rational agent, presumably Harvard mathematicians like the grass-counter are. What’s more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don’t count towards my well-being because they’re ‘irrational’, you don’t seem to be respecting the view that my well-being consists in whatever I say it does.
On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging that one’s life is going well, it doesn’t matter how you come to that judgement.
I’m not sure what Kahneman believes. I don’t think he’s publicly stated well-being consists in life satisfaction rather than happiness (or anything else). I don’t think his personal beliefs are significant for the (potential) view either way (unless one was making an appeal to authority).
Thanks for these.
I remember we discussed (1) a while back but I’m afraid I don’t really remember the details anymore. To check, what exactly is the bias you have in mind—that people inflate their self-report scores generally when they are being given treatment? Are there one or more studies you can point me to so I can read up on this, or is this a hypothetical concern?
I don’t think I understand what you’re getting at with (2): are you asking what we infer if some intervention increases consumption but doesn’t increase self-reported life satisfaction in a scenario S but does in other scenarios? That sounds like a normal case where we get contradictory evidence. Let me know if I’ve missed something here.
What evidence currently exists around the external validity of the links between outcomes and ultimate impact (i.e. life satisfaction)?
I’m not sure what you mean by this. Are you asking what the evidence is on the causes and correlates of life satisfaction? Dolan et al. (2008) have a much-cited paper on this.
RomeoStevens, thanks for this comment. I think you’re getting at something interesting, but I confess I found this quite hard to follow. Do you think you could possibly restate it, but more simply (i.e. with less jargon)? For instance, I don’t know how to make sense of
There seems to be strong status quo bias and typical mind fallacy with regard to hedonic set point.
In the ‘measuring happiness’ bit of HLI’s website we say
The ‘gold standard’ for measuring happiness is the experience sampling method (ESM), where participants are prompted to record their feelings and possibly their activities one or more times a day. While this is an accurate record of how people feel, it is expensive to implement and intrusive for respondents. A more viable approach is the day reconstruction method (DRM) where respondents use a time-diary to record and rate their previous day. DRM produces comparable results to ESM, but is less burdensome to use (Kahneman et al. 2004).
Further, I don’t think the fact that happiness is subjective or timing-dependent is problematic: what I think matters is how pleasant/unpleasant people feel throughout the moments of their life. (In fact, this is the view Kahneman argued for in his 1999 paper ‘Objective happiness’.)