Given 80k is a large and growing source of people hearing about and getting involved in EA, some people reading this might be worried that 80k will stop contributing to EA’s growth, given our new strategic focus on helping people work on safely navigating the transition to a world with AGI.
tl;dr: I don’t think it will stop, and it might continue much as before, though it’s possible it will be reduced somewhat.
More:
I’m not sure whether 80k’s contribution to building EA (in terms of sheer numbers of people getting involved) is likely to go down due to this new focus, compared with what it would otherwise be if we simply continued to scale our programmes as they currently are, without this change in direction.
My personal guess at this time is that it will reduce at least slightly.
Why would it?
- We will be more focused on helping people work on helping AGI go well, which means that e.g. university groups might be hesitant to recommend us to members who aren’t interested in AI safety as a cause area.
- At a prosaic level, some projects that would have been particularly useful for building EA (vs. helping with AGI in a more targeted way) are going to be de-prioritised. E.g. I personally dropped a project I’d begun of updating our “building EA” problem profile in order to focus more on AGI-targeted things.
- Our framings will probably change. It’s possible that the framings we use more going forward will emphasise EA-style thinking a little less than our current ones, though this is something we’re actively unsure of.
- We might sometimes link off to the AI safety community in places where we might have linked off to EA before (though it is much less developed, so we’re not sure).
However, I do expect us to continue to contribute significantly to building EA – and we might even continue to do so at a similar level to before. This is for a few reasons:
1. We still think EA values are important, so we still plan to talk about them a lot. E.g. we will use EA-style reasoning to explain *why* we’re especially concerned about AGI, emphasise the importance of impartiality and scope sensitivity, etc.
2. We don’t currently have any plans to reduce our links to the EA community – e.g. we don’t plan to stop linking to the EA Forum, or to stop using our newsletter to notify people about EAGs.
3. We still plan to list meta EA jobs on our job board, put advisees in touch with people from the EA community when it makes sense, and by default keep our library of content online.
4. We’re not sure whether, in terms of numbers, the changes we’re making will cause our audience to grow or shrink. On the one hand, it’s a narrower focus, so it will appeal less to people who aren’t interested in AI. On the other, we’re hoping to appeal more to AI-interested people, as well as to older people, who might not have been as interested in our previous framings.
5. This will probably lead, directly and indirectly, to a big chunk of our audience continuing to get involved in EA through engaging with us. This is valuable according to our new focus, because we think that getting involved in EA is often useful for being able to contribute positively to things going well with AGI.
To be clear, we also think EA growing is valuable for other reasons (we still think other cause areas matter, of course!). But it has actually never been an organisational target[1] of ours to build EA (or at least it hasn’t been since I joined the org 5 years ago); growing EA has always been a side effect of helping people pursue high impact careers (because, as above, we’ve long thought that getting involved in EA is one useful step toward pursuing a high impact career!).
Note on all the above: the implications of our new strategic focus for our programmes are still being worked out, so it’s possible that some of this will change.
Also relevant: our FAQ on the relationship between 80k & EA (from 2023, but I still agree with it).
[1] Except to the extent that helping people into careers building EA constitutes helping them pursue a high impact career – and it is one of many ways of doing that (along with all the other careers we recommend on the site, plus others). We do also sometimes use our impact on the growth of EA as one proxy for our total impact: the data is available, we think getting involved in EA is often a useful step toward having an impactful career, and it’s quite hard to gather data more directly on the people we’ve helped pursue high impact careers.
Thanks Arden!
I also agree that prima facie this strategic shift might seem worrying, given that 80K has been the powerhouse of EA movement growth for many years.
That said, I share your view that growth via 80K might reduce less than one would naively expect. In addition to the reasons you give above, another consideration is our finding that a large percentage of people get into EA via ‘passive’ outreach (e.g. someone googles “ethical career” and finds the 80K website) rather than active outreach – for 80K specifically, about 50% of recruitment was ‘passive’ – and it seems plausible that much of that could continue even after 80K’s strategic shift.
> Our framings will probably change. It’s possible that the framings we use more going forward will emphasise EA-style thinking a little less than our current ones, though this is something we’re actively unsure of.
As noted elsewhere, we plan to research this empirically. Fwiw, my guess is that broader EA messaging would be better (both on average and when comparing the best messaging from each class) at recruiting people to high levels of engagement in EA (this might differ when looking to recruit people directly into AI-related roles), though with a lot of variance within both classes of message.
I’m not sure the “passive” finding should be that reassuring.
I’m imagining someone googling “ethical career” 2 years from now and finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now. If AI-based careers don’t suit them (whether because of interest or skillset), they’ll just move on to somewhere else. Maybe they would have been a really good fit for an animal advocacy org, but if their first impressions don’t tell them that animal advocacy is still a large part of EA, they aren’t gonna know.
It could also be bad even for AI safety: there are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then over time and exposure they decided that the AI risk arguments made more sense than they initially thought, and started switching over. In a hypothetical future 80k, where malaria nets are de-emphasised, that person may bounce off the movement instantly.
> I’m imagining someone googling “ethical career” 2 years from now and finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now.
I definitely agree that would eventually become the case (all the older non-AI articles will eventually become out of date). I’m less sure it will be a big factor 2 years from now (though it depends on exactly how articles are arranged on the website, and so how salient it is that the non-AI articles are old).
> It could also be bad even for AI safety: there are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then over time and exposure they decided that the AI risk arguments made more sense than they initially thought, and started switching over.
I also think this is true in general (I don’t have a strong view about the net balance in the case of 80K’s outreach specifically).
Previous analyses we conducted suggested that over half of Longtermists (~60%) previously prioritised a different cause and that this is consistent across time.
You can see the overall self-reported flows (in 2019) here.
@titotal I’m curious whether, or to what extent, we substantively disagree, so I’d be interested in what specific numbers you’d anticipate, if you’re willing to share.
My guess is that we’ll most likely see a <30% reduction in people first hearing about EA from 80K next time we run the survey (though this might be confounded if 80K don’t promote the EA Survey as much, so we’d need to control for that).
Obviously we can’t directly observe this counterfactual, but I’d guess that if a form of outreach that was 100% active shut down, we’d observe close to a 100% reduction (e.g. if everyone stopped running EA Groups or EAGs, we’d soon see ~0% people hearing about EA from these sources).[1]
I don’t say strictly 0% only because I think there’s always the possibility for a few unusual cases, e.g. someone is googling how to do good and happens across an old post about EAG or their inactive local group.
Over half of longtermists starting on something else is kind of insane. Although, given the current landscape, I suspect many of those people, if they entered now, would have entered directly into longtermism. Looking forward to seeing the data unfold!
Thanks, that’s a useful reply, with your points 1 and 2 being quite reassuring.
Your no. 4 seems very optimistic. A narrower focus seems unlikely to increase interest across the whole spectrum of seekers coming to the site, when the default is 80k being the front page of the EA internet for all comers. That the number of AI-interested people getting hooked would increase by more than the falloff across all other areas seems pretty unlikely.
And I can’t really see a world where older people would be more attracted to a site which focuses on an emerging, largely young person’s issue.
Thanks as always for this valuable data!