I’m not sure exactly what this change will look like, but my current impression from this post leaves me disappointed. I say this as someone who now works on AI full-time and is mostly persuaded of strong longtermism. I think there’s enough uncertainty about the top cause, and enough value in a broad community, that central EA organizations should not go all-in on a single cause. This seems especially the case for 80,000 Hours, which brings people in by appealing to a general interest in doing good.
Some reasons for thinking cause diversification by the community/central orgs is good:
From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
Existential risk is not most self-identified EAs’ top cause, and about 30% of self-identified EAs say they would not have gotten involved if EA did not focus on their top cause (EA survey). So it does seem like you’d miss an audience here.
Organizations like 80,000 Hours set the tone for the community, and I think there are good rule-of-thumb reasons to think focusing on one issue is a mistake. As 80K’s problem profile on factory farming says, factory farming may be the greatest moral mistake humanity is currently making, and it’s good to put some weight on rules of thumb in addition to expected-value calculations.
Timelines have shortened, but it doesn’t seem obvious whether the case for AGI being an existential risk has gotten stronger or weaker. There are signs of both progress and setbacks, and evidence of shorter timelines but potentially slower takeoff.
I’m also a bit confused because 80K seemed to recently re-elevate some non-existential-risk causes on its problem profiles (great power war and factory farming; many more under emerging challenges). This seemed like the right call, and part of a broader shift away from the FTX-era tendency to go all-in on longtermism. I think that was a good move and that keeping an EA community that is not only about AGI is valuable.
Hey Zach,
(Responding as an 80k team member, though I’m quite new)
I appreciate this take; I was until recently working at CEA, and was in a lot of ways very glad that Zach Robinson was all in on general EA. It remains the case (as I see it) that, from a strategic and moral point of view, there’s a ton of value in EA in general: it says what’s true in a clear and inspiring way, a lot of people are looking for a worldview that makes sense, and there’s still a lot we don’t know about the future. (And, as you say, non-fanaticism and pluralistic elements have a lot to offer, and there are some lessons to be learned about this from the FTX era.)
At the same time, when I look around the EA community, I want to see a set of institutions, organizations, funders and people that are live players, responding to the world as they see it, making sure they aren’t missing the biggest thing currently happening (or, for an org like 80k whose main job includes communicating important things, that they aren’t letting their audiences miss it). Most importantly, I want people to act on their beliefs (with appropriate incorporation of heuristics, rules of thumb, outside views, etc.). And to the extent that 80k staff and leadership’s beliefs have changed with the new evidence, I’m excited for them to be acting on that.
I wasn’t involved in this strategic pivot, but when I was considering whether to join, I was excited to see a certain kind of leaping to action in the organization.
It could definitely be a mistake even within this framework (by causing 80k not to appeal to parts of its potential audience), or empirically (on the size of AI risk, or the sizes of other problems), or in the long term (because of the damage it does to the EA community or its intellectual lifeblood / eating the seed corn). In the past I’ve worried that various parts of the community were jumping too fast into what’s shiny and new, but 80k has been talking about this for more than a year, which is reassuring.
I think the 80k leadership have thoughts about all of these, but I agree that this blog post alone doesn’t fully make the case.
I think the right answer to these uncertainties is some combination of digging in and arguing about them (as you’ve started here; maybe there’s a longer conversation to be had) and waiting to see how these bets turn out.
Anyway, I appreciate considerations like the ones you’ve laid out because I think they’ll help 80k figure out if it’s making a mistake (now or in the future), even though I’m currently really energized and excited by the strategic pivot.
Thanks @ChanaMessinger, I appreciate this comment and think the tone here is healthier than the original announcement’s. Your well-written sentence below captures many of the important issues:
“It could definitely be a mistake even within this framework (by causing 80k not to appeal to parts of its potential audience), or empirically (on the size of AI risk, or the sizes of other problems), or in the long term (because of the damage it does to the EA community or its intellectual lifeblood / eating the seed corn).”
FWIW I think a clear mistake is the poor communication here: the most obvious and serious potential community impacts have been missed, and the tone is poor. If this had been presented in a way that showed the most serious potential downsides had been considered, I would both feel better about it and be more confident that 80k has done a deep SWOT analysis here, rather than the really basic framing of the post, which reads more like...
“AI risk is really bad and urgent, let’s go all in.”
This makes the decision seem not only insensitive but also poorly thought through, which I’m sure is not the case. I imagine the chief concerns of the commenters were discussed at the highest level.
I’m assuming there are comms people at 80k, and it surprises me that something like this would slip through.
Thanks for the feedback here. I mostly want to just echo Niel’s reply, which basically says what I would have wanted to say. But I also want to add, for transparency/accountability’s sake, that I reviewed this post before we published it with the aim of helping it communicate the shift well. I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but your feedback makes sense, and I wish I’d also done more to help it demonstrate the thought we’ve put into the tradeoffs involved and awareness of the costs. For what it’s worth, we don’t have dedicated comms staff at 80k; helping with comms is currently part of my role, which is to lead our web programme.
From an altruistic cause prioritization perspective, existential risk seems to require longtermism
No it doesn’t! Scott Alexander has a great post about how existential risk issues are actually perfectly well motivated without appealing to longtermism at all.
When I’m talking to non-philosophers, I prefer an “existential risk” framework to a “long-termism” framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it’s non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we’re all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)
Caring about existential risk does not require longtermism, but existential risk being the top EA priority probably requires longtermism or something like it. Factory farming interventions look much more cost-effective in the near term than x-risk interventions, and GiveWell top charities probably look more cost-effective too.
I’m not sure GiveWell top charities do. Preventing extinction is a lot of QALYs, and buying an extra year of time by funding Pause efforts might not cost more than a few $B per year (~$1/QALY!?)
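Purely as an illustration of where a figure like ~$1/QALY could come from, here is a rough back-of-envelope; every input is an assumption rather than a sourced number. Suppose one year of Pause funding at cost C ≈ $2B buys, with probability p, one extra year of survival for N ≈ 8 billion people:

$$
\text{cost per QALY} \;\approx\; \frac{C}{p \times N \times \Delta t} \;=\; \frac{\$2 \times 10^{9}}{p \times (8 \times 10^{9}\ \text{people}) \times (1\ \text{year})}
$$

With p = 1 this comes to roughly $0.25 per life-year; with p = 0.01 it is closer to $25. So whether this beats GiveWell’s marginal cost-effectiveness turns almost entirely on the assumed p, the assumed cost, and how one converts those life-years into QALYs.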
By my read, that post and the excerpt from it are about the rhetorical motivation for existential risk rather than the impartial ethical motivation. I basically agree that longtermism is not the right framing in most conversations, and it’s also not necessary for thinking existential risk work would be more valuable than the marginal public dollar.
I included the qualifier “From an altruistic cause prioritization perspective” because I think that from an impartial cause prioritization perspective, the case is different. If you’re comparing existential risk to animal welfare and global health, I think the links in my comment make the case pretty persuasively that you need longtermism.
It’s not “longtermist” or “fanatical” at all (or even altruistic) to try to prevent yourself and everyone else on the planet (humans and animals) from being killed in the near future by uncontrollable ASI[1] (quite possibly in a horrible, painful[2], way[3]).
[1] Indeed, there are many non-EAs who care a great deal about this issue now.
[2] I mention this as it’s a welfarist consideration, even if one doesn’t care about death in and of itself.
[3] Ripped apart by self-replicating computronium-building nanobots, anyone?
Strongly endorsing Greg Colbourn’s reply here.
When ordinary folks think seriously about AGI risks, they don’t need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
I’m not that surprised that the above comment has been downvoted to −4 without any replies (and this one will probably be buried by an even bigger avalanche of downvotes!), but it still makes me sad. EA will be ivory-tower-ing until the bitter end, it seems. It’s a form of avoidance. These things aren’t nice to think about. But it’s close now, so it’s reasonable for it to feel viscerally real. I guess it won’t be EA that saves us (from the mess it helped accelerate), if we do end up saved.
The comment you replied to:
- acknowledges the value of x-risk reduction in general from a non-longtermist perspective
- clarifies that it is making a point about the marginal altruistic value of x-risk vs AW or GHW work, and points to a post making this argument in more detail
Your response merely reiterates that x-risk prevention has substantial altruistic (and non-altruistic) value. This isn’t responsive to the claim about whether, under non-longtermist assumptions, that value is greater on the margin than AW or GHW work.
So even though I actually agree with the claims in your comment, I downvoted it (along with this one complaining about the downvotes) for being off-topic and not embodying the type of discourse I think the EA Forum should strive for.
Thanks for the explanation.
Whilst zdgroff’s comment “acknowledges the value of x-risk reduction in general from a non-longtermist perspective”, it downplays that value quite heavily imo (and the OP comment does so even more, using the pejorative “fanatical”).
I don’t think the linked post makes the point very persuasively. Looking at the table, at best there is an equivalence.
I think a rough estimate of the cost-effectiveness of pushing for a Pause is orders of magnitude higher.
You don’t need EAs Greg—you’ve got the general public!
Adding a bit more to my other comment:
For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I’m not totally sure—EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).
From an altruistic cause prioritization perspective, existential risk seems to require longtermism, including potentially fanatical views (see Christian Tarsney, Rethink Priorities). It seems like we should give some weight to causes that are non-fanatical.
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn’t accept fanatical views to prioritise them (though it may require caring some about potential future beings). (We have a bit on this here)
Existential risk is not most self-identified EAs’ top cause, and about 30% of self-identified EAs say they would not have gotten involved if EA did not focus on their top cause (EA survey). So it does seem like you’d miss an audience here.
I agree this means we will miss out on an audience we could have if we fronted content on more causes. We hope to also appeal to new audiences with this shift, such as older people who are less naturally drawn to our previous messaging, e.g. people who are more motivated by urgency. However, it seems plausible this shrinks our audience. This seems worth it because we’ll be telling people how urgent and pressing AI risks seem to us, and because it could still lead to more impact overall, since impact varies so much between careers, in part based on which causes people focus on.
I think that existential risks from various issues with AGI (especially if one includes trajectory changes) are high enough that one needn’t accept fanatical views to prioritise them
I think the argument you linked to is reasonable. I disagree, but not strongly. But I think it’s plausible enough that AGI concerns (from an impartial cause prioritization perspective) require fanaticism that there should still be significant worry about it. My take would be that this worry means an initially general EA org should not overwhelmingly prioritize AGI.
Hey Zach. I’m about to get on a plane so won’t have time to write a full response, sorry! But wanted to say a few quick things before I do.
Agree that it’s not certain or obvious that AI risk is the most pressing issue (though it is 80k’s best guess & my personal best guess, and I don’t personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues—wherever they think they can have the biggest positive impact.
However, our top commitment at 80k is to do our best to help people find careers that will allow them to have as much positive impact as they can. And we think that, to do that, more people should strongly consider and/or try out working on reducing the variety of risks that we think transformative AI poses. So we want to do much more to tell them that!
In particular, from a web-specific perspective, I feel that the website isn’t currently consistent with the possibility of short AI timelines, with the possibility that AI poses not only risks from catastrophic misalignment but other risks too, or with the fact that it will probably affect many other cause areas. Given the size of our team, I think we need to focus our new content capacity on changing that.
I think this post I wrote a while ago might also be relevant here!
https://forum.effectivealtruism.org/posts/iCDcJdqqmBa9QrEHv/faq-on-the-relationship-between-80-000-hours-and-the
Will circle back more tomorrow / when I’m off the flight!
Agree that it’s not certain or obvious that AI risk is the most pressing issue (though it is 80k’s best guess & my personal best guess
Yeah, FWIW, it’s mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K-EA community relationship feels very appropriate to me, so I think my disagreement is about the application.