I don’t think that’d be the case. From the inside perspective of someone already prioritizing x-risk reduction, that cause can appear at least thousands of times more important than literally anything else. This is based on an idea formulated by the philosopher Nick Bostrom: astronomical stakes (the speaker in the linked video is Niel Bowerman, not Nick Bostrom). The ratio of resources that x-risk reducers think should be dedicated to x-risk relative to other causes is arbitrarily high. Lots of people think the argument is missing some important details, or ignoring major questions, but from their own inside view, x-risk reducers probably won’t be convinced by that. More effective altruists could try playing the double-crux game to find the source of disagreement about typical arguments for far-future causes. Otherwise, x-risk reducers will probably maintain that, in the ideal, as many resources as possible ought to be dedicated to x-risk reduction, though in practice they may endorse other viewpoints receiving support as well.
Talking about people in the abstract, or in a tone that treats them as some kind of “other”, is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don’t like being lumped together in a big bundle, because under those circumstances at least some people in your ingroup will feel strawmanned. That’s what my comment reads like. That’s not my intention.
I’m just trying to be frank. On the Effective Altruism Forum, I try to follow Grice’s maxims, because I think writing in that style heuristically optimizes the fidelity of our words to the sort of epistemic communication standards the EA community aspires to, especially as inspired by the rationality community. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job on here. I know this isn’t the only thing people value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody is showing others how their preferred norms would not just maximize the satisfaction of their own preferences, but maximize the total or average satisfaction of what everyone values out of discourse. That seems the utilitarian thing to do.
The effects of ingroup favouritism around competing cause selections don’t seem healthy for the EA ecosystem. If we want to get very specific, here’s how finely the EA community can be sliced up by cause-selection-as-group-identity:
global poverty EAs; climate change EAs?; social justice EAs...?
The list could go on forever. Everyone feels like they’re representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless we as a community make a conscientious effort to reach some shared discourse norms that are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There’s strawmanning and steelmanning, and then there are representations of concepts in EA which fall in between.
I think if we as a community expect everyone to impeccably steelman everyone else all the time, we’re being unrealistic. Rapid growth of the EA movement is what organizations across causes seem to be rooting for. That means lots of newcomers who aren’t going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the archaic codex that is evolved EA discourse norms, which aren’t written down anywhere, they’re going to exit fast. I’m not going anywhere, but if we aren’t willing to be more charitable to people we at first disagree with than they are to us, this movement won’t grow. People might be belligerent, or alarmed, by the challenges EA presents to their moral worldview, but they’re still curious. Spurning doesn’t lead to learning.
All of the above refers only to specialized discourse norms within effective altruism. This would be on top of the complexity of effective altruists’ private lives, all the usual identity politics, and otherwise the common decency and common sense we would expect of posters on the forum. All of that can already be difficult for diverse groups of people as is. But for all of us to go around assuming, via the illusion of transparency, that things are fine and dandy with regards to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.
Also, as of this comment, my parent comment above has net positive 1 upvote, so it’s all good.
I obviously can’t speak for GWWC but I can imagine some reasons it could reach different conclusions. For example, GWWC is a membership organization and might see itself as, in part, representing its members or having a duty to be responsive to their views. At times, listeners might understand statements by GWWC as reflecting the views of its membership.
80k’s mission seems to be research/advising so its users might have more of an expectation that statements by 80k reflect the current views of its staff.
Thanks for the correction on 80k. I’m pleased to hear 80k stopped doing this ages ago: I saw the new, totalist-y update and assumed it represented more of a switch in 80k’s position than it did. I’ll add a note.
I agree moral uncertainty is potentially important, but there are two issues.
I’m not sure expected moral value (EMV) maximisation is the best approach to moral uncertainty. I’ve been doing some work on meta-moral uncertainty and think I’ve found some new problems, which I hope to write up at some point.
I’m also not sure, even if you adopt an EMV approach, the result is that totalism becomes your effective axiology as Hilary and Toby suggest in their paper (http://users.ox.ac.uk/~mert2255/papers/mu-about-pe.pdf). I’m also working on a paper on this.
Those are basically holding responses which aren’t that helpful for the present discussion. Moving on then.
I disagree with your analysis that person-affecting views are committed to being very concerned about X-risks. Even supposing you take a person-affecting view, there’s still a choice to be made about your view of the badness of death. If you’re an Epicurean about death (death is bad for no one), you wouldn’t be concerned about something suddenly killing everyone (you’d still be concerned about the suffering as everyone died, though). I find both person-affecting views and Epicureanism pretty plausible: Epicureanism is basically just taking the person-affecting approach to creating lives and applying it to ending lives, so if you like one, you should like both. On my (heretical and obviously deeply implausible) axiology, X-risk doesn’t turn out to be important.
FWIW, I’m (emotionally) glad people are working on X-risk, because I’m not sure what to do about moral uncertainty either, but I don’t think I’m making a mistake in not valuing it. Hence I focus on trying to find the best ways to ‘improve lives’: increasing the happiness of currently living people while they are alive.
You’re right that if you combine person-affecting-ness and a deprivationist view of death (i.e. badness of death = years of happiness lost) you should still be concerned about X-risk to some extent. I won’t get into the implications of deprivationism here.
What I would say, regarding transparency, is that if you think everyone should be concerned about the far future because you endorse EMV as the right answer to moral uncertainty, you should probably state that somewhere too, because that belief is doing most of the prioritisation work. It’s not totally uncontentious, hence doesn’t meet the ‘moral inclusivity’ test.
I agree that if you accept both Epicureanism and the person-affecting view, then you don’t care about an xrisk that suddenly kills everyone, perhaps like AI.
However, you might still care a lot about pandemics or nuclear war due to their potential to inflict huge suffering on the present generation, and you’d still care about promoting EA and global priorities research. So even then, I think the main effect on our rankings would be to demote AI. And even then, AI might still rank due to the potential for non-xrisk AI disasters.
Moreover, this combination of views seems pretty rare, at least among our readers. I can’t think of anyone else who explicitly endorses it.
And this is all before taking account of moral uncertainty, which is an additional reason to put some value on future generations that most people haven’t already considered.
I agree it would be better if we could make all of this even more explicit, and we plan to, but I don’t think these questions are on the minds of many of our readers, and we rarely get asked about them in workshops and so on. In general, there’s a huge amount we could write about, and we try to address people’s most pressing questions first.
On transparency: if you want to be really transparent about what you value and why, I don’t think you can assume people agree with you on topics they’ve never considered, that you don’t mention, and that do basically all the work of cause prioritisation. The number of people worldwide who understand moral uncertainty well enough to explain it could fill one seminar room. If moral uncertainty is your “this is why everyone should agree with us” fallback, then it should presumably feature somewhere. Readers should know that’s why you put forward your cause areas, so they’re not surprised later on to realise that’s the reason.
On exclusivity, your response seems to amount to “most people want to focus on the far future and, what’s more, even if they don’t, they should because of moral uncertainty, so we’re just going to say it’s what really matters”. It’s not true that most EAs want to focus on the far future (see Ben Hurford’s post below). Given that it’s not true, saying people should focus on it is, in fact, quite exclusive.
The third part of my original post argued that we should want EA to be morally inclusive even if we endorse a particular moral theory. Do you disagree with that? Unless you do, it doesn’t matter whether people are, or should be, totalists: it’s worse from a totalist perspective for 80k to endorse only totalist-y causes.
Less important comments:
FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and ‘ordinary human unhappiness’ (that is, the sub-maximal happiness many people have even if they are entirely healthy and economically secure). Say a nuclear war kills everyone: then that’s just a few moments of suffering. Say it kills most people but leaves 10m who eke out a miserable existence in a post-apocalyptic world: then you’re just concerned with 10m people, which is 50 times fewer than the 500m who have either anxiety or depression worldwide.
I know some people who implicitly or explicitly endorse this, but I wouldn’t expect you to, and that’s one of my worries: if you come out in favour of theory X, you disproportionately attract those who agree with you, and that’s bad for truth-seeking. By analogy, I don’t imagine many people at a Jeremy Corbyn rally vote Tory, but I’m not sure Jeremy should take that as further evidence that (a) the Tories are wrong or (b) no one votes for them.
I’m curious where you get your 90% figure from. Is this from asking people if they would:
“Prevent one person from suffering next year.
Prevent 100 people from suffering (the same amount) 100 years from now.”?
I assume it is, because that’s how you put it in the advanced workshop at EAGxOX last year. If it is, it’s a pretty misleading question to ask, for a bunch of reasons that would take too long to type out fully. Briefly, one problem is that I think we should help the 100 people in 100 years if those people already exist today (both necessitarians and presentists get this result). So I ‘agree’ with your intuition pump but don’t buy your conclusions, which suggests the pump is faulty. Another problem is the Hawthorne effect. Another is that population ethics is a mess and you’ve cherry-picked a scenario that suits your conclusion. If I asked a room of undergraduate philosophers “would you rather relieve 100 living people of suffering or create 200 happy people?”, I doubt many would pick the latter.
I feel like I’m being interpreted uncharitably, so this is making me feel a bit defensive.
Let’s zoom out a bit. The key point is that we’re already morally inclusive in the way you suggest we should be, as I’ve shown.
You say:
for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of “these are the most important things”, it should say “these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the most important things [new list pops up]”.
Comparing global problems involves difficult judgement calls, so different people come to different conclusions. We made a tool that asks you some key questions, then re-ranks the lists based on your answers.
And provide this: https://80000hours.org/problem-quiz/
This produces alternative rankings given some key value judgements, i.e. it does exactly what you say we should do.
In general, 80k has a range of options, from most exclusive to least:
1) State our personal views about which causes are best
2) Also state the main judgement calls required to accept these views, so people can see whether to update or not.
3) Give alternative lists of causes for nearby moral views.
4) Give alternative lists of causes for all major moral views.
We currently do (1)-(3). I think (4) would be a lot of extra work, so not worth it, and it seems like you agree.
It seemed like your objection is more that within (3), we should put more emphasis on the person-affecting view. So, the other part of my response was to argue that I don’t think the rankings depend as much on that as it first seems. Moral uncertainty was only one reason—the bigger factor is that the scale scores don’t actually change that much if you stop valuing xrisk.
Your response was that you’re also Epicurean, but that’s such an unusual combination of views that it falls within (4) rather than (3).
But, finally, let’s accept epicureanism too. You claim:
FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and ‘ordinary human unhappiness’
For mental health, you give the figure of 500m. Suppose those lives have a disability weighting of 0.3, then that’s 150m QALYs per year, so would get 12 on our scale.
What about pandemics? The Spanish Flu infected 500m people, so let’s call that 250m QALYs of suffering (ignoring the QALYs lost by people who died, since we’re being Epicurean, and the suffering inflicted on non-infected people). If there’s a 50% chance that happens within 50 years, then that’s 2.5m expected QALYs lost per year, so it comes out at 9 on our scale. So it’s a factor of roughly 60 less, but not insignificant. (And this is ignoring engineered pandemics.)
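Laid out explicitly, the comparison in the last two paragraphs runs as follows (these are purely the illustrative figures from this thread, not real estimates):

```python
# Mental health: 500m people, with an assumed disability weight of 0.3.
mental_health_qalys_per_year = 500e6 * 0.3          # 150m QALYs/year

# Spanish-flu-scale pandemic: 500m infected, ~0.5 QALYs lost each
# (deaths ignored, per the Epicurean assumption), with a 50% chance
# of occurring within 50 years.
pandemic_qalys_per_event = 500e6 * 0.5              # 250m QALYs per event
pandemic_qalys_per_year = pandemic_qalys_per_event * 0.5 / 50

print(pandemic_qalys_per_year / 1e6)                          # -> 2.5 (million QALYs/year)
print(mental_health_qalys_per_year / pandemic_qalys_per_year)  # -> 60.0
```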
But, the bigger issue is that the cause ranking also depends on neglectedness and solvability.
We think pandemics only get $1-$10bn of spending per year, giving them a score of ~4 for neglectedness.
I’m not sure how much gets spent on mental health, but I’d guess it’s much larger. Just for starters, it seems like annual sales of antidepressants are well over $10bn, and that seems like a fairly small fraction of the overall effort that goes into it. The 500m people who have a mental health problem are probably already trying pretty hard to do something about it, whereas pandemics are a global coordination problem.
All the above is highly, highly approximate—it’s just meant to illustrate that, on your views, it’s not out of the question that the neglectedness of pandemics could make up for their lower scale, so pandemics might still be an urgent cause.
I think you could make a similar case for nuclear war (a nuclear war could easily leave 20% of people alive in a dystopia) and perhaps even AI. In general, our ranking is driven more by neglectedness than scale.
So, I don’t mean to be attacking you on these things. I’m responding to what you said in the comments above, and maybe more of a general impression, and perhaps not keeping in mind how 80k does things on the website; you write a bunch of (cool) stuff, I’ve probably forgotten the details, and I don’t think it would be useful to go back and engage in a ‘you wrote this here’ exercise to check.
A few quick things as this has already been a long exchange.
Given I accept I’m basically a moral hipster, I’d understand if you put my views in category (4) rather than (3).
If it’s of any interest, I’m happy to suggest how you might update your problem quiz to capture my views and other views in the area.
I wouldn’t think the same way about Spanish flu vs mental health. I’m assuming happiness is duration x intensity (#Bentham). What I think you’re discounting is the duration of mental illnesses: they are ‘full-time’ in that they take up your conscious space for much of the day, and they often last a long time. I don’t know what the distribution of durations is, but if you have chronic depression (anhedonia), that will make you less happy constantly. In contrast, the experience of having flu might be bad (although it’s not clear it’s worse, moment per moment, than, say, depression), but it doesn’t last very long. A couple of weeks? So we need to account for the fact that a case of Spanish flu has 1/26th the duration of a year of anhedonia, before we even factor in intensity. More generally, I think we suffer from something like scope insensitivity when we do affective forecasting: we tend to consider the intensity of events rather than their duration. Studies of the ‘peak-end’ effect show this is exactly how we remember things: our brains mostly remember the intensity of events, not how long they lasted.
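As a crude sketch of that duration point (the two-week flu duration and the one-year anhedonia window are the rough figures above; the intensity weights are hypothetical illustrations, not estimates):

```python
# Happiness lost ~ duration x intensity (the Benthamite assumption above).
weeks_per_year = 52
flu_weeks = 2                            # rough duration of a flu case
print(flu_weeks / weeks_per_year)        # -> ~0.0385, i.e. 1/26

# Even if flu were twice as intense per moment (hypothetical weight),
# a year of chronic anhedonia still dominates on this crude model:
flu_burden = 2.0 * flu_weeks             # intensity 2.0 x 2 weeks
anhedonia_burden = 1.0 * weeks_per_year  # intensity 1.0 x 52 weeks
print(anhedonia_burden / flu_burden)     # -> 13.0
```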
One conclusion I reach (on my axiology) is that the things which cause daily misery or happiness are the biggest in terms of scale. This is why I don’t think x-risks are the most important thing. I think a totalist should accept this sort of reasoning and bump up the scale of things like mental health, pain and ordinary human unhappiness, even though x-risk will be much bigger in scale on totalism. I accept I haven’t yet offered anything on solvability or neglectedness.
Thanks. Would you consider adding a note to the original post pointing out that 80k already does what you suggest re moral inclusivity? I find that people often don’t read the comment threads.
I’ll add a note saying you provide a decision tool, but I don’t think you do what I suggest (obviously, you don’t have to do what I suggest and can think I’m wrong!).
I don’t think it’s correct to call 80k morally inclusive, because you substantially pick a preferred outcome/theory and then provide the decision tool as a sort of afterthought. By my lights, being morally inclusive is incompatible with picking a preferred theory. You might think moral exclusivity is, all things considered, the right move, but we should at least be clear that’s the choice you’ve made. In the OP I suggested there were advantages to inclusivity over exclusivity, and I’d be interested to hear if/why you disagree.
I’m also not sure whether you disagree with me that the scale of suffering inflicted on the living by an X-risk disaster is probably quite small, and that the happiness lost to long-term conditions (mental health, chronic pain, ordinary human unhappiness) is of much larger scale than you’ve allowed. I’m very happy to discuss this with you in person to hear what, if anything, would cause you to change your views on this. It would be a bit of a surprise if every moral view agreed X-risks were the most important thing, and it’s also a bit odd if you’ve left some of the biggest problems (by scale) off the list. I accept I haven’t made substantial arguments for all of these in writing, but I’m not sure what evidence you’d consider relevant.
I’ve also offered to help rejig the decision tool (perhaps after discussing it with you), and that offer still stands. On a personal level, I’d like the decision tool to tell me what I think the most important problems are, and to better reflect the philosophical decision process! You may decide this isn’t worth your time.
Hi Michael,
I agree the issue of people presenting EA as about global poverty when they actually support other causes is a big problem.
80k stopped doing this in 2014 (not a couple of months ago, as you mention), with this post: https://80000hours.org/2014/01/which-cause-is-most-effective-300/ The page you link to listed other causes at least as early as 2015: https://web.archive.org/web/20150911083217/https://80000hours.org/articles/cause-selection/
My understanding is that the GWWC website is in the process of being updated, and the recommendations on where to give are now via the EA Funds, which include 4 cause areas.
These issues take a long time to fix though. First, it takes a long time to rewrite all your materials. Second, it takes people at least several years to catch up with your views. So we’re going to be stuck with this problem for a while.
In terms of how 80,000 Hours handles it:
Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don’t worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do, or pure time discounting).
This is a huge topic, but I disagree. Here are some quick reasons.
First, you should value the far future even if you only put some credence on theories like total utilitarianism.
E.g. someone who had 50% credence in the person-affecting view and 50% credence in total utilitarianism should still place significant value on the far future.
This is a better approximation of our approach: we’re not confident in total utilitarianism, but we put some weight on it due to moral uncertainty.
Second, even if you don’t put any value on the far future, it wouldn’t completely change our list.
First, the causes are assessed on scale, neglectedness and solvability. Only scale is affected by these value judgements.
Second, scale is (to simplify) assessed on three factors: GDP, QALYs and % xrisk reduction, as here: https://80000hours.org/articles/problem-framework/#how-to-assess-it
Even if you ignore the xrisk reduction column (which I think would be unreasonable due to moral uncertainty), you often find the rankings don’t change that much.
E.g. pandemic risk gets a scale score of 15 because it might pose an xrisk, but even if you ignored that, I think the expected annual death toll from pandemics could easily be 1 million per year right now, so it would still get a score of 12. If you think engineered pandemics are likely, you could argue for a higher figure. So, this would move pandemics from being a little more promising than regular global health to about the same, but it wouldn’t dramatically shift the rankings.
I think AI could be similar. It seems like there’s a 10%+ chance that AI is developed within the lifetimes of the present generation. Conditional on that, if there’s a 10% chance of a disaster, then the expected death toll is 75 million, or 1-2 million per year, which would also give it a score of 12 rather than 15. But it would remain one of the top ranked causes.
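Spelling out that arithmetic (a rough sketch using the thread’s illustrative probabilities; the ~7.5bn world population and 50-year window are my assumed round numbers, not figures stated above):

```python
# Rough expected-value sketch of the AI scale estimate above.
world_population = 7.5e9      # assumed round number
p_ai_this_generation = 0.10   # chance AI arrives within present lifetimes
p_disaster_given_ai = 0.10    # chance of disaster, conditional on AI arriving

expected_deaths = world_population * p_ai_this_generation * p_disaster_given_ai
print(expected_deaths / 1e6)  # -> 75.0 (million)

years = 50                    # rough length of "the present generation"
print(expected_deaths / years / 1e6)  # -> 1.5 (million per year)
```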
I think the choices of promoting EA and of global priorities research are even more robust to different value judgements.
We actively point out that the list depends on value judgements, and we provide this quiz to highlight some of the main ones: https://80000hours.org/problem-quiz/
Ben’s right that we’re in the process of updating the GWWC website to better reflect our cause-neutrality.
Hm, I’m a little sad about this. I always thought that it was nice to have GWWC presenting a more “conservative” face of EA, which is a lot easier for people to get on board with.
But I guess this is less true with the changes to the pledge—GWWC is more about the pledge than about global poverty.
That does make me think that there might be space for an EA org that explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.
You might think The Life You Can Save plays this role.
I’ve generally been surprised over the years by the extent to which the more general ‘helping others as much as we can, using evidence and reason’ has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I’m not actually convinced that’s the case anymore. And if it’s not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.
I hadn’t thought of TLYCS as an/the anti-poverty org. I guess I didn’t think about it because they’re not so present in my part of the EA blogosphere. Maybe it’s less of a problem if there are at least charities/orgs representing different worldviews (although this would require quite a lot of duplication of work, so it’s less than ideal).
And what are your/GWWC’s thoughts on moral inclusivity?
For as long as it’s the case that most of our members [edited to clarify: GWWC members, not members of the EA community in general] are primarily concerned with global health and development, content on our blog and social media is likely to reflect that to some degree.
But we also aim to be straightforward about our cause-neutrality as a project. For example, our top recommendation for donors is the EA Funds, which are designed to get people thinking about how they want to allocate between different causes rather than defaulting to one.
Thanks for the update. That’s helpful.
However, it does seem a bit hard to reconcile GWWC’s and 80k’s positions on this topic. GWWC (i.e. you) seem to be saying “most EAs care about poverty, so that’s what we’ll emphasise” whereas 80k (i.e. Ben Todd above) seems to be saying “most EAs do (/should?) care about X-risk, so that’s what we’ll emphasise”.
These conclusions seem to be in substantial tension, which may itself confuse new and old EAs.
I edited to clarify that I meant members of GWWC, not EAs in general.
80k is now separate from CEA, or is in the process of being separated from it. They are allowed to come to different conclusions.
We’re fiscally sponsored by CEA (so legally within the same entity) and have the same board of trustees, but we operate like a separate organisation.
Our career guide also doesn’t mention EA until the final article, so we’re not claiming that our views represent those of the EA movement. GWWC also doesn’t claim on the website to represent the EA movement.
The place where moral exclusivity would be most problematic is EA.org. But it mentions a range of causes without prioritising them, and links to this tool, which also does exactly what the original post recommends (and has been there for a year). https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/#which-cause https://www.effectivealtruism.org/cause-prioritization-tool/
I think it’s actually mentioned briefly at the end of Part 5: https://80000hours.org/career-guide/world-problems/
(In fact, the mention is so brief that you could easily remove it if your goal is to wait until the end to mention effective altruism.)
That’s right—we mention it as a cause to work on. That slipped my mind since that article was added only recently. Though I think it’s still true we don’t give the impression of representing the EA movement.
Might it be that 80k recommend X-risk because it’s neglected (even within EA) and that if more than 50% of EAs had X-risk as their highest priority it would no longer be as neglected?
I don’t think that’d be the case, as from inside the perspective of someone already prioritizing x-risk reduction, it can appear that the cause is at least thousands of times more important than literally anything else. This is based on an idea formulated by philosopher Nick Bostrom: astronomical stakes (this is Niel Bowerman in the linked video, not Nick Bostrom). The ratio x-risk reducers think is appropriate for resources dedicated to x-risk relative to other causes is arbitrarily high. Lots of people think the argument is missing some important details, or ignoring major questions, but I think from their own inside view x-risk reducers probably won’t be convinced by that. More effective altruists could try playing the double crux game to find the source of disagreement about typical arguments for far-future causes. Otherwise, x-risk reducers would probably maintain that in the ideal as many resources as possible ought to be dedicated to x-risk reduction, but in practice may endorse other viewpoints receiving support as well.
This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.
Talking about people in the abstract, or in a tone that treats them as some kind of “other”, is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don’t appreciate all being lumped together in a big bundle, because under those circumstances it feels like at least a bunch of people in your ingroup will feel strawmanned. That’s what my comment reads like. That’s not my intention.
I’m just trying to be frank. On the Effective Altruism Forum, I try to follow Grice’s Maxims because I think writing in that style heuristically optimizes the fidelity of our words to the sort of epistemic communication standards the EA community would aspire to, especially as inspired by the rationality community. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job on here. I know this isn’t the only thing people will value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody has established to others how their norms would not just maximize the satisfaction of their own preferences, but maximize the total or average satisfaction of what everyone values out of discourse. That would seem the utilitarian thing to do.
The effects of ingroup favouritism, in terms of competing cause selections in the community, don’t seem healthy for the EA ecosystem. If we want to get very specific, here’s how finely the EA community can be sliced up by cause-selection-as-group-identity:
vegan, vegetarian, reducetarian, omnivore/carnist
animal welfarist, animal liberationist, anti-speciesist, speciesist
AI safety, x-risk reducer (in general), s-risk reducer
classical utilitarian, negative utilitarian, hedonic utilitarian, preference utilitarian, virtue ethicist, deontologist, moral intuitionist/none-of-the-above
global poverty EAs; climate change EAs?; social justice EAs...?
The list could go on forever. Everyone feels like they’re representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless as a community we make a conscientious effort to reach towards some shared discourse norms which are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There’s strawmanning and steelmanning, and then there’s representations of concepts in EA which fall in between.
I think if we as a community expect everyone to impeccably steelman everyone all the time, we’re being unrealistic. Rapid growth of the EA movement is what organizations from various causes seem to be rooting for. That means lots of newcomers who aren’t going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the archaic codex that is evolved EA discourse norms, which aren’t written down anywhere, they’re going to exit fast. I’m not going anywhere, but if we aren’t willing to be more charitable to people we at first disagree with than they are to us, this movement won’t grow. That’s because people might be belligerent, or alarmed, by the challenges EA presents to their moral worldview, but they’re still curious. Spurning doesn’t lead to learning.
All of the above refers only to specialized discourse norms within effective altruism. This would be on top of the complicatedness of effective altruists’ private lives, all the usual identity politics, and otherwise the common decency and common sense we would expect of posters on the forum. All of that can already be difficult for diverse groups of people as is. But for all of us to go around assuming the illusion of transparency makes things fine and dandy with regards to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.
Also, as of this comment, my parent comment above has net positive 1 upvote, so it’s all good.
Sure. But in that case GWWC should take the same sort of line, presumably. I’m unsure how/why the two orgs should reach different conclusions.
I obviously can’t speak for GWWC but I can imagine some reasons it could reach different conclusions. For example, GWWC is a membership organization and might see itself as, in part, representing its members or having a duty to be responsive to their views. At times, listeners might understand statements by GWWC as reflecting the views of its membership.
80k’s mission seems to be research/advising so its users might have more of an expectation that statements by 80k reflect the current views of its staff.
Hello again Ben and thanks for the reply.
Thanks for the correction on 80k. I’m pleased to hear 80k stopped doing this ages ago: I saw the new, totalist-y update and assumed that was more of a switch in 80k’s position than I thought. I’ll add a note.
I agree moral uncertainty is potentially important, but there are two issues.
I’m not sure EMV is the best approach to moral uncertainty. I’ve been doing some stuff on meta-moral uncertainty and think I’ve found some new problems I hope to write up at some point.
I’m also not sure, even if you adopt an EMV approach, the result is that totalism becomes your effective axiology as Hilary and Toby suggest in their paper (http://users.ox.ac.uk/~mert2255/papers/mu-about-pe.pdf). I’m also working on a paper on this.
Those are basically holding responses which aren’t that helpful for the present discussion. Moving on then.
I disagree with your analysis that person-affecting views are committed to being very concerned about X-risks. Even supposing you’re taking a person-affecting view, there’s still a choice to be made about your view of the badness of death. If you’re an Epicurean about death (it’s bad for no one to die) you wouldn’t be concerned about something suddenly killing everyone (you’d still be concerned about the suffering as everyone died though). I find both person-affecting views and Epicureanism pretty plausible: Epicureanism is basically just taking a person-affecting view of creating lives and applying it to ending lives, so if you like one, you should like both. On my (heretical and obviously deeply implausible) axiology, X-risk doesn’t turn out to be important.
FWIW, I’m (emotionally) glad people are working on X-risk because I’m not sure what to do about moral uncertainty either, but I don’t think I’m making a mistake in not valuing it. Hence I focus on trying to find the best ways to ‘improve lives’ - increasing the happiness of currently living people whilst they are alive.
You’re right that if you combine person-affecting-ness and a deprivationist view of death (i.e. badness of death = years of happiness lost) you should still be concerned about X-risk to some extent. I won’t get into the implications of deprivationism here.
What I would say, regarding transparency, is that if you think everyone should be concerned about the far future because you endorse EMV as the right answer to moral uncertainty, you should probably state that somewhere too, because that belief is doing most of the prioritisation work. It’s not totally uncontentious, hence doesn’t meet the ‘moral inclusivity’ test.
Hi Michael,
I agree that if you accept both Epicureanism and the person-affecting view, then you don’t care about an xrisk that suddenly kills everyone, perhaps like AI.
However, you might still care a lot about pandemics or nuclear war due to their potential to inflict huge suffering on the present generation, and you’d still care about promoting EA and global priorities research. So even then, I think the main effect on our rankings would be to demote AI. And even then, AI might still rank due to the potential for non-xrisk AI disasters.
Moreover, this combination of views seems pretty rare, at least among our readers. I can’t think of anyone else who explicitly endorses it.
I think it’s far more common for people to put at least some value on future generations and/or to think it’s bad if people die. In our informal polls of people who attend our workshops, over 90% value future generations. So, I think it’s reasonable to take this as our starting point (like we say we do in the guide: https://80000hours.org/career-guide/how-much-difference-can-one-person-make/#what-does-it-mean-to-make-a-difference).
And this is all before taking account of moral uncertainty, which is an additional reason to put some value on future generations that most people haven’t already considered.
In terms of transparency, we describe our shift to focusing on future generations here: https://80000hours.org/career-guide/world-problems/#how-to-preserve-future-generations-8211-find-the-more-neglected-risks If someone doesn’t follow that shift, then it’s pretty obvious that they shouldn’t (necc) follow the recommendations in that section.
I agree it would be better if we could make all of this even more explicit, and we plan to, but I don’t think these questions are on the minds of many of our readers, and we rarely get asked about them in workshops and so on. In general, there’s a huge amount we could write about, and we try to address people’s most pressing questions first.
Hello Ben,
Main comments:
There are two things going on here.
On transparency, if you want to be really transparent about what you value and why, I don’t think you can assume people agree with you on topics they’ve never considered, that you don’t mention, and that do basically all the work of cause prioritisation. The number of people worldwide who understand moral uncertainty well enough to explain it could fill one seminar room. If moral uncertainty is your “this is why everyone should agree with us” fallback, then that should presumably feature somewhere. Readers should know that’s why you put forward your cause areas, so they’re not surprised later on to realise that’s the reason.
On exclusivity, your response seems to amount to “most people want to focus on the far future and, what’s more, even if they don’t, they should because of moral uncertainty, so we’re just going to say it’s what really matters”. It’s not true that most EAs want to focus on the far future—see Peter Hurford’s post below. Given that it’s not true, saying people should focus on it is, in fact, quite exclusive.
The third part of my original post argued that we should want EA to be morally inclusive even if we endorse a particular moral theory. Do you disagree with that? Unless you disagree, it doesn’t matter whether people are or should be totalists: it’s worse from a totalist perspective for 80k to only endorse totalist-y causes.
Less important comments:
FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and ‘ordinary human unhappiness’ (that is, the sub-maximal happiness many people have even if they are entirely healthy and economically secure). Say a nuclear war kills everyone: then that’s just a few moments of suffering. Say it kills most people, but leaves 10m left who eke out a miserable existence in a post-apocalyptic world: then you’re just concerned with 10m people, which is 50 times fewer than the 500m who have either anxiety or depression worldwide.
I know some people who implicitly or explicitly endorse this, but I wouldn’t expect you to, and that’s one of my worries: if you come out in favour of theory X, you disproportionately attract those who agree with you, and that’s bad for truth-seeking. By analogy, I don’t imagine many people at a Jeremy Corbyn rally vote Tory, but Jeremy shouldn’t take that as further evidence that a) the Tories are wrong or b) no one votes for them.
I’m curious where you get your 90% figure from. Is this from asking people if they would:
“Prevent one person from suffering next year. Prevent 100 people from suffering (the same amount) 100 years from now.”?
I assume it is, because that’s how you put it in the advanced workshop at EAGxOX last year. If it is, it’s a pretty misleading question to ask, for a bunch of reasons that would take too long to type out fully. Briefly, one problem is that I think we should help the 100 people in 100 years if those people already exist today (both necessitarians and presentists get this result). So I ‘agree’ with your intuition pump but don’t buy your conclusions, which suggests the pump is faulty. Another problem is the Hawthorne effect. Another is that population ethics is a mess and you’ve cherry-picked a scenario that suits your conclusion. If I asked a room of undergraduate philosophers “would you rather relieve 100 living people of suffering or create 200 happy people” I doubt many would pick the latter.
I feel like I’m being interpreted uncharitably, so this is making me feel a bit defensive.
Let’s zoom out a bit. The key point is that we’re already morally inclusive in the way you suggest we should be, as I’ve shown.
You say:
In the current materials, we describe the main judgement calls behind the selection in this article: https://80000hours.org/career-guide/world-problems/ and within the individual profiles.
Then on the page with the ranking, we say:
And provide this: https://80000hours.org/problem-quiz/ which produces alternative rankings given some key value judgements, i.e. it does exactly what you say we should do.
Moreover, we’ve been doing this since 2014, as you can see in the final section of this article: https://80000hours.org/2014/01/which-cause-is-most-effective-300/
In general, 80k has a range of options, from most exclusive to least:
1) State our personal views about which causes are best.
2) Also state the main judgement calls required to accept these views, so people can see whether to update or not.
3) Give alternative lists of causes for nearby moral views.
4) Give alternative lists of causes for all major moral views.
We currently do (1)-(3). I think (4) would be a lot of extra work, so not worth it, and it seems like you agree.
It seemed like your objection is more that within (3), we should put more emphasis on the person-affecting view. So, the other part of my response was to argue that I don’t think the rankings depend as much on that as it first seems. Moral uncertainty was only one reason—the bigger factor is that the scale scores don’t actually change that much if you stop valuing xrisk.
Your response was that you’re also epicurean, but then that’s such an unusual combination of views that it falls within (4) rather than (3).
But, finally, let’s accept epicureanism too. You claim:
For mental health, you give the figure of 500m. Suppose those lives have a disability weighting of 0.3, then that’s 150m QALYs per year, so would get 12 on our scale.
What about for pandemics? The Spanish Flu infected 500m people, so let’s call that 250m QALYs of suffering (ignoring the QALYs lost by people who died since we’re being Epicurean, or the suffering inflicted on non-infected people). If there’s a 50% chance that happens within 50 years, then that’s 2.5m expected QALYs lost per year, so it comes out 9 on our scale. So, it’s a factor of 60 less, but not insignificant. (And this is ignoring engineered pandemics.)
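To make the back-of-the-envelope arithmetic above explicit, here is a short sketch. All the inputs (the 0.3 disability weight, the 0.5 QALY loss per infection, the 50%/50-year pandemic assumption) are the rough assumptions from this exchange, not official 80,000 Hours figures:

```python
# Rough expected-QALY comparison using the assumptions from the comments above.

mental_health_sufferers = 500e6      # people with anxiety or depression worldwide
disability_weight = 0.3              # assumed QALY loss per person-year
mental_health_qalys_per_year = mental_health_sufferers * disability_weight
# roughly 150 million QALYs lost per year

spanish_flu_infected = 500e6         # people infected in 1918
qalys_per_infection = 0.5            # assumed suffering per infected person
pandemic_probability = 0.5           # chance of a comparable pandemic...
horizon_years = 50                   # ...within this many years
pandemic_qalys_per_year = (spanish_flu_infected * qalys_per_infection
                           * pandemic_probability / horizon_years)
# roughly 2.5 million expected QALYs lost per year

ratio = mental_health_qalys_per_year / pandemic_qalys_per_year
print(round(ratio))  # → 60
```

So on these (Epicurean, person-affecting) assumptions, mental health comes out around 60 times larger in annual scale than pandemics, before neglectedness and solvability are factored in.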
But, the bigger issue is that the cause ranking also depends on neglectedness and solvability.
We think pandemics only get $1-$10bn of spending per year, giving them a score of ~4 for neglectedness.
I’m not sure how much gets spent on mental health, but I’d guess it’s much larger. Just for starters, it seems like annual sales of antidepressants are well over $10bn, and that seems like a fairly small fraction of the overall effort that goes into it. The 500m people who have a mental health problem are probably already trying pretty hard to do something about it, whereas pandemics are a global coordination problem.
All the above is highly, highly approximate—it’s just meant to illustrate that, on your views, it’s not out of the question that the neglectedness of pandemics could make up for their lower scale, so pandemics might still be an urgent cause.
I think you could make a similar case for nuclear war (a nuclear war could easily leave 20% of people alive in a dystopia) and perhaps even AI. In general, our ranking is driven more by neglectedness than scale.
Hey.
So, I don’t mean to be attacking you on these things. I’m responding to what you said in the comments above and maybe more of a general impression, and perhaps not keeping in mind how 80k do things on their website; you write a bunch of (cool) stuff, I’ve probably forgotten the details and I don’t think it would be useful to go back and engage in a ‘you wrote this here’ to check.
A few quick things as this has already been a long exchange.
Given I accept I’m basically a moral hipster, I’d understand if you put my views in the (4) rather than (3) category.
If it’s of any interest, I’m happy to suggest how you might update your problem quiz to capture my views and views in the area.
I wouldn’t think the same way about Spanish flu vs mental health. I’m assuming happiness is duration x intensity (#Bentham). What I think you’re discounting is the duration of mental illnesses—they are ‘full-time’ in that they take up your conscious space for lots of the day. They often last a long time. I don’t know what the distribution of duration is, but if you have chronic depression (anhedonia) that will make you less happy constantly. In contrast, the experience of having flu might be bad (although it’s not clear it’s worse, moment per moment, than say, depression), but it doesn’t last very long. A couple of weeks? So we need to account for the fact that a case of Spanish flu has 1/26th of the duration of a year of anhedonia, before we even factor in intensity. More generally, I think we suffer from something like scope insensitivity when we do affective forecasting: we tend to consider the intensity of events rather than their duration. Studies into the ‘peak-end’ effect show this is exactly how we remember things: our brains only really remember the intensity of events.
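The duration-weighting point can be illustrated with stand-in numbers (the two-week flu case, the one-year anhedonia episode, and the equal moment-to-moment intensities are my illustrative assumptions, not measured values):

```python
# Duration-weighted suffering: happiness lost = intensity x duration (#Bentham).
# All figures are illustrative assumptions, not empirical estimates.

weeks_of_flu = 2                  # a case of flu lasts roughly two weeks
weeks_of_anhedonia = 52           # chronic depression, over just one year

flu_intensity = 1.0               # suppose flu and depression are equally
depression_intensity = 1.0        # bad moment-to-moment

flu_suffering = flu_intensity * weeks_of_flu
depression_suffering = depression_intensity * weeks_of_anhedonia

print(depression_suffering / flu_suffering)  # → 26.0, the 1/26 ratio above
```

Even granting flu a higher moment-to-moment intensity, the duration gap dominates unless flu is assumed to be many times worse per moment.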
One conclusion I reach (on my axiology) is that the things which cause daily misery/happiness are the biggest in terms of scale. This is why I don’t think x-risks are the most important thing. I think a totalist should accept this sort of reasoning and bump up the scale of things like mental health, pain and ordinary human unhappiness, even though x-risk will be much bigger in scale on totalism. I accept I haven’t offered anything to do with solvability or neglectedness yet.
Thanks. Would you consider adding a note to the original post pointing out that 80k already does what you suggest re moral inclusivity? I find that people often don’t read the comment threads.
I’ll add a note saying you provide a decision tool, but I don’t think you do what I suggest (obviously, you don’t have to do what I suggest and can think I’m wrong!).
I don’t think it’s correct to call 80k morally inclusive, because you substantially pick a preferred outcome/theory and then provide the decision tool as a sort of afterthought. By my lights, being morally inclusive is incompatible with picking a preferred theory. You might think moral exclusivity is, all things considered, the right move, but we should at least be clear that’s the choice you’ve made. In the OP I suggested there were advantages to inclusivity over exclusivity and I’d be interested to hear if/why you disagree.
I’m also not sure if you disagree with me that the scale of suffering inflicted on the living by an X-risk disaster is probably quite small, and that the happiness lost to long-term conditions (mental health, chronic pain, ordinary human unhappiness) is of much larger scale than you’ve allowed. I’m very happy to discuss this with you in person to hear what, if anything, would cause you to change your views on this. It would be a bit of a surprise if every moral view agreed X-risks were the most important thing, and it’s also a bit odd if you’ve left some of the biggest problems (by scale) off the list. I accept I haven’t made substantial arguments for all of these in writing, but I’m not sure what evidence you’d consider relevant.
I’ve also offered to help rejig the decision tool (perhaps subsequent to discussing it with you) and that offer still stands. On a personal level, I’d like the decision tool to tell me what I think the most important problems are and better reflect the philosophical decision process! You may decide this isn’t worth your time.
Finally, I think my point about moral uncertainty still stands. If you think it is really important, it should probably feature somewhere. I can’t see a mention of it here: https://80000hours.org/career-guide/world-problems/