I feel like I’m being interpreted uncharitably, which is making me feel a bit defensive.
Let’s zoom out a bit. The key point is that we’re already morally inclusive in the way you suggest we should be, as I’ve shown.
You say:
for instance, 80,000 Hours should be much more morally inclusive than they presently are. Instead of “these are the most important things”, it should be “these are the most important things if you believe A, but not everyone believes A. If you believe B, you should think these are the important things [new list pops up]”.
In the current materials, we describe the main judgement calls behind the selection in this article: https://80000hours.org/career-guide/world-problems/ and within the individual profiles.
Then, on the page with the ranking, we say:
Comparing global problems involves difficult judgement calls, so different people come to different conclusions. We made a tool that asks you some key questions, then re-ranks the lists based on your answers.
And we provide this: https://80000hours.org/problem-quiz/
This produces alternative rankings given some key value judgements, i.e. it does exactly what you say we should do.
Moreover, we’ve been doing this since 2014, as you can see in the final section of this article: https://80000hours.org/2014/01/which-cause-is-most-effective-300/
In general, 80k has a range of options, from most exclusive to least:
1) State our personal views about which causes are best
2) Also state the main judgement calls required to accept these views, so people can see whether to update or not.
3) Give alternative lists of causes for nearby moral views.
4) Give alternative lists of causes for all major moral views.
We currently do (1)-(3). I think (4) would be a lot of extra work, so not worth it, and it seems like you agree.
It seemed like your objection was more that, within (3), we should put more emphasis on the person-affecting view. So the other part of my response was to argue that I don’t think the rankings depend as much on that as it first seems. Moral uncertainty was only one reason; the bigger factor is that the scale scores don’t actually change that much if you stop valuing x-risk.
Your response was that you’re also Epicurean, but that’s such an unusual combination of views that it falls within (4) rather than (3).
But, finally, let’s accept epicureanism too. You claim:
FWIW, if you accept both person-affecting views and Epicureanism, you should find X-risk, pandemics or nuclear war pretty trivial in scale compared to things like mental illness, pain and ‘ordinary human unhappiness’
For mental health, you give the figure of 500m people. Suppose those lives have a disability weighting of 0.3; then that’s 150m QALYs per year, which would score 12 on our scale.
What about pandemics? The Spanish Flu infected 500m people, so let’s call that 250m QALYs of suffering (ignoring the QALYs lost by the people who died, since we’re being Epicurean, and the suffering inflicted on non-infected people). If there’s a 50% chance something like that happens within 50 years, that’s 2.5m expected QALYs lost per year, which comes out at 9 on our scale. So it’s a factor of 60 less, but not insignificant. (And this is ignoring engineered pandemics.)
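To make the comparison explicit, here is the back-of-the-envelope arithmetic as a sketch. All the figures (500m people, 0.3 disability weight, 250m QALYs per pandemic, 50% chance in 50 years) are the illustrative assumptions from the discussion above, not real estimates:

```python
# Rough annual QALY burdens, using the illustrative figures above.

# Mental health: 500m people affected, assumed disability weight of 0.3
mental_health_qalys_per_year = 500e6 * 0.3  # ~150m QALYs/year

# Spanish-flu-scale pandemic: assumed 250m QALYs of suffering per event,
# with a 50% chance of occurring within 50 years
pandemic_expected_qalys_per_year = 250e6 * 0.5 / 50  # 2.5m QALYs/year

# How many times larger is the mental health burden, on these numbers?
ratio = mental_health_qalys_per_year / pandemic_expected_qalys_per_year
print(round(ratio))  # 60
```

On these assumptions, mental health comes out roughly 60 times larger in annual scale, which is what drives the gap in the scale scores.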
But, the bigger issue is that the cause ranking also depends on neglectedness and solvability.
We think pandemics only get $1-$10bn of spending per year, giving them a score of ~4 for neglectedness.
I’m not sure how much gets spent on mental health, but I’d guess it’s much larger. Just for starters, it seems like annual sales of antidepressants are well over $10bn, and that seems like a fairly small fraction of the overall effort that goes into the problem. The 500m people who have a mental health problem are probably already trying pretty hard to do something about it, whereas pandemics are a global coordination problem.
All the above is highly, highly approximate—it’s just meant to illustrate that, on your views, it’s not out of the question that the neglectedness of pandemics could make up for their lower scale, so pandemics might still be an urgent cause.
I think you could make a similar case for nuclear war (a nuclear war could easily leave 20% of people alive in a dystopia) and perhaps even AI. In general, our ranking is driven more by neglectedness than scale.
Hey.
So, I don’t mean to be attacking you on these things. I’m responding to what you said in the comments above, and maybe to a more general impression, and perhaps not keeping in mind how 80k do things on their website; you write a bunch of (cool) stuff, I’ve probably forgotten the details, and I don’t think it would be useful to go back and engage in a ‘you wrote this here’ exercise to check.
A few quick things as this has already been a long exchange.
Given I accept I’m basically a moral hipster, I’d understand if you put my views in category (4) rather than (3).
If it’s of any interest, I’m happy to suggest how you might update your problem quiz to capture my views and other views in the area.
I wouldn’t think the same way about Spanish flu vs mental health. I’m assuming happiness is duration x intensity (#Bentham). What I think you’re discounting is the duration of mental illnesses: they are ‘full-time’ in that they take up your conscious space for much of the day, and they often last a long time. I don’t know what the distribution of durations is, but if you have chronic depression (anhedonia), that will make you less happy constantly. In contrast, the experience of having flu might be bad (although it’s not clear it’s worse, moment per moment, than, say, depression), but it doesn’t last very long: a couple of weeks? So we need to account for the fact that a case of Spanish flu has 1/26th the duration of anhedonia, before we even factor in intensity. More generally, I think we suffer from something like scope insensitivity when we do affective forecasting: we tend to consider the intensity of events rather than their duration. Studies of the ‘peak-end’ effect show this is exactly how we remember things: our brains only really remember the intensity of events.
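The duration point can be put in simple numbers. This is a sketch under my assumptions: a flu case lasts about 2 weeks, anhedonia is present all 52 weeks of the year, and the intensity figures are picked purely for illustration:

```python
# Duration-weighted comparison, assuming happiness = duration x intensity.
flu_weeks = 2         # assumed duration of a flu case
anhedonia_weeks = 52  # chronic anhedonia assumed present year-round

duration_ratio = flu_weeks / anhedonia_weeks
print(duration_ratio)  # about 1/26, before factoring in intensity

# Even granting flu twice the per-moment intensity (pure assumption),
# the annual per-person burden of anhedonia is still much larger:
flu_burden = flu_weeks * 2.0            # assumed intensity 2.0
anhedonia_burden = anhedonia_weeks * 1.0  # assumed intensity 1.0
print(anhedonia_burden / flu_burden)  # 13.0
```

The point of the sketch is just that duration dominates: even a generous intensity weighting for flu leaves the chronic condition an order of magnitude larger per person per year.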
One conclusion I reach (on my axiology) is that the things which cause daily misery or happiness are the biggest in terms of scale. This is why I don’t think x-risks are the most important thing. I think even a totalist should accept this sort of reasoning and bump up the scale of things like mental health, pain and ordinary human unhappiness, even though x-risk will be much bigger in scale on totalism. I accept I haven’t yet offered anything to do with solvability or neglectedness.
Thanks. Would you consider adding a note to the original post pointing out that 80k already does what you suggest re moral inclusivity? I find that people often don’t read the comment threads.
I’ll add a note saying you provide a decision tool, but I don’t think you do what I suggest (obviously, you don’t have to do what I suggest and can think I’m wrong!).
I don’t think it’s correct to call 80k morally inclusive, because you substantially pick a preferred outcome/theory and then provide the decision tool as a sort of afterthought. By my lights, being morally inclusive is incompatible with picking a preferred theory. You might think moral exclusivity is, all things considered, the right move, but we should at least be clear that that’s the choice you’ve made. In the OP I suggested there were advantages to inclusivity over exclusivity, and I’d be interested to hear if/why you disagree.
I’m also not sure if you disagree with me that the scale of suffering for the living from an x-risk disaster is probably quite small, and that the happiness lost to long-term conditions (mental health, chronic pain, ordinary human unhappiness) is of much larger scale than you’ve allowed. I’d be very happy to discuss this with you in person to hear what, if anything, would cause you to change your views on this. It would be a bit of a surprise if every moral view agreed x-risks were the most important thing, and it’s also a bit odd if you’ve left some of the biggest problems (by scale) off the list. I accept I haven’t made substantive arguments for all of these in writing, but I’m not sure what evidence you’d consider relevant.
I’ve also offered to help rejig the decision tool (perhaps after discussing it with you), and that offer still stands. On a personal level, I’d like the decision tool to tell me what I think the most important problems are and to better reflect the philosophical decision process! You may decide this isn’t worth your time.
Finally, I think my point about moral uncertainty still stands. If you think it is really important, it should probably feature somewhere. I can’t see a mention of it here: https://80000hours.org/career-guide/world-problems/