On the second, the obvious counterargument is that it applies just as well to e.g. murder; in the case where the person is killed, “there is no sensible comparison to be made” between their status and that in the case where they are alive.
Person-affecting views are those which hold that not all possible people matter. Once you’ve decided who matters (the present, necessary or actual people), it’s then a different question how you think about the badness of death for those that matter. You can say creating people isn’t good/bad, but that it’s still bad if already existing people die early. FWIW, I also find Epicureanism about the badness of death rather plausible, i.e. I don’t think we can compare the value of living longer for someone. I recognise this makes me something of a ‘moral hipster’ but I think the arguments for it are pretty good, although I won’t get into that here. As such, I think death, whether by murder or other means, isn’t bad for someone. I think we tend to have the intuition that murder is wrong over and above what it deprives the deceased of, which is why we think it’s just as wrong to murder someone with 1 month vs 10 years left to live. Hence I think you’re getting at a deontological intuition, not one about value.
I find the stuff about posthumous harms and benefits very implausible. If Socrates wants us to say ‘Socrates’ and we do, does it really make his life go better?
I agree this makes more sense in terms of mission hedging
I don’t think my argument here is analogous to trying to beat the market. (i.e. I’m not arguing that AI research companies are currently undervalued.)
I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven’t considered some factor—the growth potential of AI companies—and that’s why they are such a good purchase relative to other stocks and shares.
Roger. Points taken.
Another thing I’d be interested in seeing would be the percentage changes in support for causes year-on-year as that would indicate what the internal dynamics of the movement are. I’m (at least) partly motivated to see this because mental health, which I’ve written quite a lot on, may be the smallest top priority cause, but this is also the first time it’s snuck into the list.
Thanks for this. Were there any causes you considered adding beyond those stated? Those seem like the main causes EAs support, but it would be nice to include ‘minor’ ones to see what the community feeling is about those, e.g. wild animal suffering, education, social justice, immigration reform, etc.
Yes, if the chance of death each year is constant, it turns out that remaining life expectancy is around 1/(annual chance of death).
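In case it helps, here’s a rough numeric check (my own sketch, not from the post), using the 1/1000 annual chance of death mentioned in the question:

```python
# Sketch: with a constant annual chance of death p, the number of further years
# lived is (roughly) geometrically distributed, so its expected value is about 1/p.

p = 1 / 1000  # illustrative constant annual chance of death

# Expected remaining years = sum over t of t * P(first death happens in year t)
expectancy = sum(t * p * (1 - p) ** (t - 1) for t in range(1, 100_000))

print(expectancy)  # ~1000, i.e. roughly 1/p
```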
Can you explain why this is the case? Sorry if this is obvious, but I’m not getting it and can’t think offhand how to do the maths.
On population ethics, for totalists it then seems the dominating concern will be how valuable it is to have a population with longer lives, which puts the emphasis in a different place from the value of keeping particular individuals alive longer.
Thanks for writing this.
Can you explain in a bit more detail, and without complicated formalisation, why life expectancy after LEV is 1000? I note life expectancy is 1000 and the chance of death in 1 year is 1/1000. Is that a coincidence, or is life expectancy post-LEV just 1/(annual chance of death)?
I know you’ve said you’re going to cover this later, but I want to flag how sensitive this is to population ethics. On totalism (the value of the outcome is the sum total of well-being of everyone who will ever live), it’s good to create lives, so it’s not necessarily a problem that there’s a higher ‘turnover’ of lives, i.e. people die and other people replace them. Totalists will want to know how longevity affects the long run for everyone, not just those that get to live longer. By contrast, if you’re a person-affecting deprivationist (there is no value in creating new lives, but for those lives that count, the badness of death is the amount of well-being they would have had had they lived), life extension looks super important!
Relevant to this, in the following article MacAskill provides the following account of what EA is:
What Is Effective Altruism?
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good. There are some defining characteristics of the effective altruist research project. The project is:
Maximizing. The point of the project is to try to do as much good as possible.
Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models.
Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
Impartial. Everyone’s welfare is to count equally.
Also, you’ve accidentally posted the same thing three times, if you hadn’t noticed already.
Hello Matthew and thanks for your points. I don’t think it counts as bias in favour of X if you chose to do X because you thought X was best!
On the first, I haven’t looked, but I wouldn’t consider that to be the right evidence. It seems pretty plausible people could be below hedonic/satisfaction neutrality and not want to kill themselves; I’d expect our evolutionary instinct is to keep living even in such circumstances—those who committed suicide easily would have had their genes removed from the pool.
On the second, I haven’t, but I’d welcome someone doing that research.
On the third, I am familiar with that stuff and am in regular communication with the economists who write the big reports, e.g. the World Happiness Report. However, given that there are people working on the policy problem, where I don’t have much to add, but there isn’t really anyone thinking about the EA-type questions of what the best things are for individuals to do with their time and money, I think I do more by contributing to this latter issue.
I couldn’t find a single mention of mental health. If someone finds something from them on this, please let me know!
On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness
Are you using the two ‘neglectedness’ words differently? Why would you calculate X if you already knew X in general?
This being said, when we don’t know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason to think AI is a more promising cause to work on.
I think that’s right. One method is to use scale and/or neglectedness as (weak) independent heuristics for cost-effectiveness if you haven’t or can’t calculate cost-effectiveness. It’s unclear how to use tractability as a heuristic without implicitly factoring in information about neglectedness or scale. Another (the other?) method, then, is to directly assess cost-effectiveness. Once you’ve done that, you’ve incorporated the ITN stuff and it would be double-counting to appeal to them again (“I know X is more cost-effective than Y, but Y is more neglected” etc).
Thanks for all these great points (Derek sent these to me privately and I suggested it would be valuable for him to share them here for other interested parties). My brief replies, in order, to those comments that weren’t just informative:
1. Fair cop. I think I was lazily using those as I first compiled these numbers back in 2015 (at the start of my PhD).
2. Agree it’s unclear what these breakthrough drugs imply for EA.
5. It makes sense to compare to GW because that’s who our audience is. People who already think GW is irrelevant and focus on e.g. the far future are unlikely to be interested in the analysis here.
6. Yes, there are probably flaws in the SM analysis. I look forward to mine being made obsolete in due course. I note that my points on negative spillovers should cause us to downgrade the effectiveness of anti-poverty charities.
8. Agreed, but this applies to mental health interventions too: their effects could also be larger if we take spillovers into account, e.g. reduced strain on family who care for them.
9. As I’m sympathetic to person-affecting views, I’m not too concerned about the long term anyway. Even if I were a long-termist, the problem with including indirect effects is that it tends to make the analysis incredibly ‘hand-wavey’ (“ah, saving lives speeds up growth, which is bad for climate change”, etc.). I think it makes sense to calculate what can easily be calculated first. If you can’t look anywhere else, at least look under the lamppost.
10. Probably correct. A better analysis would factor in how the LS of AMF recipients would change over their lives (presumably upwards, as societal conditions improve).
11. I agree LS is not the ideal thing. If we had affect scores, I would say we use those, but we don’t! (“slaves to the data” etc)
12. I also agree moving to affect would make mental health score better than poverty. I left that out because I thought the analysis was complicated enough already.
Hello Sanjay. I didn’t do this because I think the idea of comparing causes by numerically assigning scores to I, N and T is of illusory helpfulness and I wish we would all stop doing it(!). What we care about is knowing the expected value of the dollar you would donate (or, more complicatedly, the hour you would spend). I’ve produced some numbers by doing cost-effectiveness estimates of a charity you could donate to. Given that’s what we ultimately want, it’s unclear what the positive value is of representing things via the INT approach. I have a thesis chapter/EA Forum post forthcoming on this topic, but I’ll make a couple of points here.
First, note that on the 80k framework, INT literally is a cost-effectiveness calculation, and not, as Will uses it in Doing Good Better, three independent heuristics which somehow combine to give a rough idea of cost-effectiveness. Indeed, it’s more confusing to do expected value the way 80k suggests than the way I did it, as their method requires redundant and arbitrary steps. 80k specify neglectedness as “% increase in resources/extra person or dollar”. It is later defined as “How many people, or dollars, are currently being dedicated to solving the problem?” But deciding what counts as dollars being dedicated to “solving the problem” is arbitrary, hence there cannot be a precise answer to this question.
Further, if I wanted to put mental health in 80k’s framework, note that in addition to establishing an arbitrary neglectedness score, I’d have to ascertain solvability—found by asking “If we doubled direct effort on this problem, by what fraction of the remaining problem would we expect to solve?” How would I do that? I’d have to work out the total size of the problem, then assess how much of it would be solved by some given intervention. To do that, I’d need to work out the cost-effectiveness of a mental health intervention. But I’ve already done that, so I can only calculate the tractability/solvability number once I already have the information that is ultimately of interest to me.
I don’t see how it’s an improvement over the formula cost-effectiveness = effect/cost to say: cost-effectiveness = (effect / % of problem solved) × (% of problem solved / % increase in resources) × (% increase in resources / cost). As demonstrated, it’s (at least sometimes) harder to calculate cost-effectiveness this latter way. If we really think scale is important to keep in mind, we could have a two-factor model: scale (value of solving the whole problem) and solvability* (% of problem solved/cost).
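To make the cancellation explicit, here’s a toy numeric sketch (all figures invented purely for illustration):

```python
# Toy numbers (entirely made up) showing the three 80k-style factors cancel
# down to plain effect/cost.

effect = 100.0        # good done (arbitrary units)
pct_solved = 0.10     # fraction of the problem solved
pct_increase = 0.05   # fractional increase in resources devoted to the problem
cost = 1_000.0        # dollars producing that increase in resources

scale = effect / pct_solved               # value per unit of problem solved
solvability = pct_solved / pct_increase   # problem solved per unit increase in resources
neglectedness = pct_increase / cost       # increase in resources per dollar

print(scale * solvability * neglectedness)  # 0.1
print(effect / cost)                        # 0.1 (the intermediate terms cancel)
```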
Second, I don’t see what the point is of taking one ranking of scale/neglectedness/tractability for each of two causes and comparing those. What does it tell us that X is more neglected/tractable/larger than Y, if that is all we know about X and Y? By itself, it literally tells us nothing about the expected value of marginal resources to X vs Y. We only understand that once we’ve thought about how scale, neglectedness and tractability combine to give us cost-effectiveness. To bring this out, imagine you and I are having a conversation.
Sanjay: “mental health is more neglected than poverty”.
Michael: “and? That doesn’t tell me which one has higher expected value”.
S: “hmm. Poverty is bigger”.
M: “again? So what? That doesn’t tell me which one has higher expected value either”.
S: “Okay, well, poverty is more tractable than mental health”.
M: “and? So what? In fact, what do you mean by ‘tractable’? If you mean ‘has higher expected value’, then you’re just saying poverty is better than mental health, and I don’t know how you factored in neglectedness and size when assessing tractability. If by tractability you mean ‘if we doubled direct effort on this problem, by what fraction of the remaining problem would we expect to solve?’ then I only know which cause you think has higher expected value when you give me precise scores for scale, neglectedness and tractability and tell me how you’re combining those scores to give expected value.”
S: Michael, why are you always so difficult? [curtain falls]
By analogy, if we want to know the speed of some object (speed = distance/time), knowing just the distance it has travelled, or just the time it took, gives us absolutely no insight into its speed. Do objects which travel further tend to travel faster? Always travel faster?
Third, I don’t think it even makes sense to talk about comparing causes as opposed to comparing interventions. What we’re really doing when we do cause prioritisation is saying “there are problems of types A, B and C. I’m going to find the best intervention I can that tackles each of A, B and C. Then I’m going to compare the best item I’ve found in each ‘bucket’.” Given we can’t give money to poverty (the abstract noun), but we can give to interventions that reduce poverty, we should just think in terms of interventions instead of causes.
I may have misunderstood your first comment, but if I had estimated the effects for GiveDirectly, it would have been (on my best guess) less effective than the study showed. From the 2016 paper I inferred GD increased life satisfaction (LS) by 0.3/10 per person. In the Origins of Happiness, Clark et al. find a doubling of income increases LS by 0.12/10. IIRC (and I may not), the $750 transfer from GD is less than a doubling of household income. So the estimated effects would have been approx. 3 times smaller for GD.
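For what it’s worth, here’s the back-of-the-envelope version of that comparison (my own sketch, treating the transfer as at most a doubling of household income and assuming the usual log-income relationship):

```python
from math import log2

ls_from_gd_study = 0.3   # LS gain per person inferred from the 2016 GD paper (0-10 scale)
ls_per_doubling = 0.12   # LS gain per doubling of income (Clark et al.)
income_ratio = 2.0       # upper bound: treat the $750 transfer as at most doubling income

ls_predicted = ls_per_doubling * log2(income_ratio)  # at most 0.12
print(ls_from_gd_study / ls_predicted)               # 2.5, i.e. roughly 3x smaller
```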
Regarding StrongMinds’ treatment, Reay et al. (2012) have a 2-year study of how much of the benefits are retained for interpersonal group therapy (which is what StrongMinds delivers). I agree it is more appropriate to use this than the Wiles et al. (2016) model—which I interpret as a constant effect for 4 years and then nothing thereafter—as Wiles et al. is based on UK CBT, I think delivered individually. To account for this, in my spreadsheet, I do two estimates: one where I assume the treatment effect is constant and lasts only 4 years, another where 75% of the benefits are retained annually. This latter estimation method is taken from Halstead and Snowden’s Founders Pledge report on mental health, where they also assess StrongMinds. It turns out the estimates give practically identical results, so, in this case, the cost-effectiveness is not sensitive to how the duration of the effect is modelled.
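To illustrate why the two duration models barely differ, here’s a simplified sketch (my own reading of the two models, ignoring any discounting in the actual spreadsheet):

```python
initial_effect = 1.0  # first-year benefit, arbitrary units

# Model 1: constant effect for 4 years, nothing thereafter (my reading of Wiles et al.)
total_constant = initial_effect * 4

# Model 2: 75% of the previous year's benefit retained each year, summed over a long horizon
retention = 0.75
total_decay = sum(initial_effect * retention ** year for year in range(100))

print(total_constant)          # 4.0
print(round(total_decay, 2))   # ~4.0, since the geometric series sums to 1/(1 - 0.75) = 4
```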
I agree with you that the best current mental health charity is probably far less cost-effective, relative to whatever the best possible intervention is, than the best current development or physical health charities, on the grounds that more effort has been put into the latter. (As you and I have discussed) I am optimistic about finding/developing even better ways to provide mental health treatments. I didn’t stress this point on the grounds the reader was probably more interested in current interventions than hypothetical interventions, but that could have been an error on my part.
First, it’s unclear how many EAs are totalists or long-termists. I suppose this post is addressed to those who support global poverty and development, which is (from surveys) the majority of EAs. To support global poverty and development you could (this is not an exhaustive list) (a) be a person-affector, (b) be a totalist who is sceptical about the effectiveness of far-future stuff, or (c) be a long-termist who thinks near-term interventions have strong long-term impacts, such that they are cost-competitive with X-risk.
Second, on why I’m sympathetic to person-affecting views, the short answer is that I find the following two claims highly plausible.
First, the person-affecting restriction: an outcome can only be better or worse if it is better or worse for someone. (Parfit, Reasons and Persons attributes such a view to Narveson, explaining “On [Narveson’s] view, it is not good that people exist because their lives contain happiness. Rather, happiness is good because it is good for people”)
Second, non-comparativism about existence: non-existence is neither better than, worse than, nor equally good as existence for someone. Why believe this? For the personal betterness relation to hold (i.e. for an outcome to be better for someone), the person needs to exist in both of those outcomes. If the person only exists in one outcome, there is no comparison to be made. By analogy, to say “X is taller than Y”, X and Y need to have a height. If X or Y lacks the property of height, they cannot stand in the relationship of “being taller than”. It’s confused to say “the Eiffel Tower is taller than nothing”. “Nothing” lacks a height (rather than having a height of zero), thus the Eiffel Tower’s height is incomparable to the height of “nothing”. If we’re concerned with the personal betterness relation, we are comparing two states of the person (i.e. the person needs to exist and have some good-, bad-, or neutral-making properties). A non-existent entity cannot stand in the personal betterness relation with an existing person. There is no sensible comparison to be made; one cannot compare something with nothing.
Taken together, these two statements entail that creating new lives is incomparable in value to not creating them.
That’s about the quickest answer I can give.
Yes, I had a few paragraphs on the potential indirect effects of treating mental health but decided to cut them at the last moment as (a) I wasn’t sure how many people would be interested in them and (b) the whole analysis is just extremely handwavey.
It’s possible that someone could think focusing on mental health/happiness now could have very long-run effects and would be justified primarily by the impact it would have on future people. This also applies to bednets, economic development, etc., and it seems very hard to sensibly compare these things. My hunch is that if someone were taking this angle they would do more good by trying to get governments to measure policies by their SWB impact, rather than by treating more people for depression through developing-world micro-interventions.
I want to note a tension in this article. It was about being welcoming by, roughly, not assuming all the people you speak to are from a certain group. However, while ‘conservative’ is a general term, the conservatives under discussion were clearly conservatives in the USA; in the UK, from where I write, there isn’t much in the way of creationists, pro-lifers, or Trump supporters. As such, I would like to suggest that one way effective altruists can be welcoming is by not presuming everyone interested in effective altruism is a US citizen.