Thank you so much for this feedback! I’m sorry to hear our messaging has been discouraging. I want to be very clear that I think it’s harmful to discourage people from working on such important issues, and would like to minimise the extent to which we do that.
I wrote the newsletter you’re referencing, so I particularly wanted to reply to this. I also wrote the 80,000 Hours article on climate change, explaining our view that it’s less pressing than our highest priority areas.
I don’t consider myself fundamentally a longtermist. Instead, I try my best to be impartial and cause-neutral. I try to find the ways in which I can best help others – including others in future generations, and animals, as I think they are moral patients.
Here are some specifically relevant things that I currently believe:
Existential risks are the most pressing problems we currently face (where by “pressing” I mean some combination of importance, tractability, and neglectedness, with importance determined in part by the expected number of individuals that could be affected).
Climate change is less pressing than some other existential risks.
Cost-effectiveness is heavy-tailed. By trying to find the very best things to work on, you can substantially increase your impact.
It’s tractable to convince people to work on very important issues. It’s roughly as tractable to convince people to work on existential risks as on other very important issues.
Therefore, it’s good to convince people to work on very important problems, but even better to convince people to work on existential risks.
I wrote that existential risks are the biggest problems we face, and that climate change is less pressing than other existential risks, because I believe both of these things are true and that communicating them is a highly cost-effective way to do good.
I don’t think everyone should work on existential risk reduction – personal fit is really important, and if so many people worked on existential risks that they were no longer very neglected, I’d think it was less useful for additional people to work on them at the margin. Partly for these reasons, 80,000 Hours has generally promoted a range of areas – and has some positive evidence of people being convinced to work on poverty reduction and animal welfare as a result of 80,000 Hours.
On the newsletter audience
The 80,000 Hours newsletter is sent to a large audience, most of whom are unfamiliar with effective altruism. That’s why the newsletter spoke about the importance of poverty reduction and animal welfare. For example, I wrote that “my best guess is that the negative effects of factory farming alone make the world worse than it’s ever been.” It would be brilliant if that newsletter convinced people to work on poverty reduction and animal welfare.
The newsletter also explained that, as far as we can tell, there are even bigger problems than these two.
I think it’s unlikely that the 80,000 Hours newsletter on net discouraged work on poverty reduction or animal welfare, primarily because the vast majority (>99%) of newsletter subscribers aren’t working on any of poverty reduction, animal welfare, or existential risk reduction.
If it did convince someone with equally good personal fit to work on existential risk reduction when they would otherwise have worked on poverty reduction or animal welfare, that would be worse than convincing someone who wouldn’t otherwise have done anything very useful. However, since I think existential risks are the most pressing issues, I don’t think it’d be doing net expected harm.
On whether I / 80,000 Hours value(s) work on non-existential threats
We value people working on animal welfare and poverty reduction (as well as other causes that aren’t our top priorities) a lot. We just don’t think those issues are the very most pressing problems in the world.
For example, where we list factory farming and global health on the problem profiles page you cite, we say:
We’d also love to see more people working on the following issues, even though given our worldview and our understanding of the individual issues, we’d guess many of our readers could do even more good by focusing on the problems listed above.
Factory farming and global health are common focuses in the effective altruism community. These are important issues on which we could make a lot more progress.
It’s genuinely difficult to send the message that something seems more pressing than other things without implying that those other things are unimportant or that we wouldn’t want to see more people working on them. My colleague Arden, who wrote those two paragraphs above, also feels this way, and had this in mind when she wrote them.
On whether I / 80,000 Hours should defer more
One thing to consider is whether, given that many people disagree with 80,000 Hours on the relative importance of existential risks, we should lower our ranking.
I agree with this idea. Our ranking is post-deferral – we still think that existential risks seem more pressing than other issues, even after deferral. We have had conversations within the last year about whether, for example, factory farming should be included in our list of the top problems, and decided against making that change (for now), based on our assessment of its neglectedness and relative importance.
I also think that saying what we believe (all things considered) to be true is a good heuristic for deciding what to say. This is what the newsletter and problem profiles page try to do.
My personal current guess is that existential risk reduction is something like 100x more important than factory farming, and is also more neglected (although less tractable).
Because of our fundamental cause-neutrality, this is something that could (and hopefully will) change – for example, if existential risks become less neglected, or if the magnitude of these risks decreases.
Finally, on climate change
As I mentioned above, I think climate change is likely less pressing than other existential risks. Saying climate change is less pressing than the world’s literal biggest problems is a far cry from calling it “unimportant” – I think that climate change is a hugely important problem. It just seems less likely to cause an existential catastrophe, and is far less neglected, than other possible risks (like nuclear-, bio-, or AI-related risks). My article on climate change defends this at length, and I’ve also responded to critiques of that article on the forum, e.g. here.
Do you think x-risks are the most pressing problem even for non-longtermists?
(Personal views, not representing 80k)
My basic answer is “yes”.
Longer version:
I think this depends on what you mean.
By “longtermism”, I mean the idea that improving the long-run future is a key moral priority. By “longtermist” I mean someone who personally identifies as believing in longtermism.
I think x-risks are the most pressing problems from a cause-neutral perspective (although I’m not confident about this; there are a number of plausible alternatives, including factory farming).
I think longtermism is also (approximately) true from a cause-neutral perspective (I’m also not confident about this).
The implication between these two beliefs could go either way, depending on how you structure the argument. You could first argue that x-risks are pressing, which in turn implies that protecting the long-run future is a priority. Or you could argue the other way: that improving the long-run future is important, and that reducing x-risks is a tractable way of doing so.
Most importantly though, I think you can believe that x-risks are the most pressing issue, and indeed believe that improving the long-run future is a key moral priority of our time, without identifying as a “longtermist”.
Indeed, I think that there’s sufficient objectivity in the normative claims underlying the pressing-ness of x-risks that, according to my current meta-ethical and empirical beliefs, I just believe it’s true that x-risks are the most pressing problems (again, I’m not hugely confident in this claim). The truth of this statement is independent of the identity of the actor, hence my answer “yes”.
Caveat:
If, by your question, you mean “Do you think working on x-risks is the best thing to do for non-longtermists?”, the answer is “sometimes, but often no”. This is because a problem being pressing on average doesn’t imply that all work on that problem is equally valuable: personal fit and the choice of intervention both play an important role. I’d guess that it would be best for someone with lots of experience working on a particularly cost-effective animal welfare intervention to keep working on that intervention rather than move into x-risks.
Dear Benjamin, thank you so much for taking the time to write this thorough response. That’s certainly more than I ever expected. I hope you don’t feel like I meant to attack you personally for picking out copy you wrote—this was certainly not my intention and merely a coincidence.
I can only imagine how difficult it is for 80k to navigate all the different stakeholders and their opinions. And like I’ve said in many comments, I definitely think 80,000 Hours should pursue what they deem most important and right. However, I still wanted to raise this question, as I could really feel myself getting demotivated—it didn’t happen abruptly, but gradually with every piece of messaging I perceived to be devaluing the values I hold and the work I do. Of course I got biased over time. But then again, I know people who feel the same as or similar to me, and some people here on the forum apparently do as well.
I think the key issue might be that 80k ranks cause areas in a “rational” way in terms of their possible impact and neglectedness—but as a human, I think it’s natural to perceive this rather as a ranking of values (which in some sense it is), and of course having your personal values ranked “at the bottom” doesn’t exactly feel nice… Especially since, I guess, for many people the decision to work in a certain cause area is based mostly on personal interests and less on objective considerations. There are many exceptions, surely, but I think for many people “choosing” animal welfare over longtermism isn’t so much an active choice as a subconscious inclination that’s already set up long before you ever start to think about what you want to do. And when you then read that the thing you “chose” based on your intrinsic motivations isn’t “all that important”… well, that’s where the demoralisation kicks in. 80k never puts it that drastically, of course, quite the opposite—but we’re talking about deep seated values here, the very core of what we are. So it’s probably natural to be quite defensive of them.
So for me, the whole thing is partly about how these messages might influence future decisions on career choice, but also strongly about how they make people feel about the choices they’ve already made and the values they currently hold—which they probably often don’t have all that much control over, like I said. It’s quite frustrating to think “Well, I’d really like to care deeply about all this, but I just don’t and there’s nothing I can do to change that, since I’m not a 100% rational being”.
It’s certainly not my place to give you advice on how to do your job, and of course you have way more insight and experience in these trade-offs—but I feel the wording could sometimes be altered slightly to have less of a “rebuffing” effect whilst still presenting longtermism as the top cause area. At the same time, I promise to try to actively notice how you’re highlighting other cause areas instead of constantly nitpicking over examples where you don’t.
But a lot of people feel demotivated by EA/80k? A lot of left-wing criticisms have this effect when talking about EA. “How dare you talk down to my systemic poverty social movement” etc. I just don’t think 80k’s job is to make you feel good? It’s to communicate true information. The truth is sometimes demotivating. For instance, I’m incredibly grateful for the ACLU existing, but I don’t think it’s the most effective thing to go work for them right now.
Maybe your deep seated values don’t match the type of things happening in EA, and that’s perfectly ok and legitimate. People should feel comfortable disagreeing with 80k and making choices different from theirs. I think at the end of the day their job is to communicate their beliefs/the truth, not to motivate you.
Maybe your deep seated values don’t match the type of things happening in EA
Whoa okay, that’s a bit of an extreme statement—EA is incredibly broad, and obviously I care about certain cause areas that are deemed valid by the broader EA community—just not by 80k, apparently. But, as other commenters have pointed out, 80k doesn’t equal EA. Sure, longtermism plays a big part in the rest of EA (as I mentioned in my post), but it’s not EA’s top priority, as far as I know. Unlike 80k, I don’t think EA has a “top priority”, because that would imply that the whole movement agrees on it, which I don’t think is very likely to happen. So it’s a little offensive for you to suggest I’m not “suitable” for EA—when in fact I’m doing what the community always encourages you to do when you have an idea or feedback: share it.