At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety than we already have, I still think that AI safety eating EA would be a major loss.
I wonder how this would look different from the current status quo:
Wytham Abbey cost £15m, and its site advertises it as being primarily for AI/x-risk use (as far as I can see it doesn’t say what it’s been used for to date)
Projects already seem to be highly preferentially supported based on how longtermist/AI-themed they are. I recently had a conversation with someone at OpenPhil in which, if I understood/remembered correctly, they said the proportion of OP funding going to nonlongtermist stuff was about 10%. [ETA: sounds like this is wrong]
The global health and development fund seems to have been discontinued. The infrastructure fund, I’ve heard on the grapevine, strongly prioritises projects with a longtermist/AI focus. The other major source of money in the EA space is the Survival and Flourishing Fund, which lists its goal as ‘to bring financial support to organizations working to improve humanity’s long-term prospects for survival and flourishing’. The Nonlinear Network is also explicitly focused on AI safety, and the metacharity fund is nonspecific. The only EA-facing fund I know of that excludes longtermist concerns is the animal welfare one. Obviously there’s also GiveWell, but they’re not really part of the EA movement, inasmuch as they only support existing, well-developed and well-evidenced charities rather than EA startups/projects/infrastructure as the other funding groups mentioned do.
These three posts by very prominent EAs all make the claim that we should basically stop talking about either EA and/or longtermism and just tell people they’re highly likely to die from AI (thus guiding them to ignore the—to my mind comparable—risks that they might die from supervolcanoes, natural or weakly engineered pandemics, nuclear war, great power war, and all the other stuff that longtermists uniquely would consider to be of much lesser importance because of the lower extinction risk).
And anecdotally, I share the OP’s experience that AI risk dominates EA discussion at EA cocktail parties.
To me this picture makes everything but AI safety already look like an afterthought.
Regarding the funding aspect:
As far as I can tell, Open Phil has always given the majority of their budget to non-longtermist focus areas.
This is also true of the EA portfolio more broadly.
GiveWell has made grants to less established orgs for several years, and that amount has increased dramatically of late.
Holden also stated in his recent 80k podcast episode that <50% of OP’s grantmaking goes to longtermist areas.
I realise I didn’t make this distinction, so I’m shifting the goalposts slightly, but I think it’s worth distinguishing between ‘direct work’ organisations and EA infrastructure. It seems pretty clear from the OP that the latter is being strongly encouraged to primarily support EA/longtermist work.
I’m a bit confused about the grammar of the last sentence—are you saying that EA infrastructure is getting more emphasis than direct work, or that people interested in infrastructural work are being encouraged to primarily support longtermism?
Sorry—the latter.
I’d imagine it’s much harder to argue that something like community building is cost-effective within global health than within longtermist-focused areas? There’s much more capacity to turn money into direct work/bednets, and those direct options seem pretty hard to beat in terms of cost-effectiveness.
Community building can be nonspecific, where you try to build a group of people who have some common interest (such as something under big-tent EA), or specific, where you try to get people who are working on some specific thing (such as working on AI/longtermist projects, or moving in that direction). My sense is that, per the OP, community builders are being pressured to do the latter.
The theory of change for community building is much stronger for long-termist cause areas than for global poverty.
For global poverty, it’s much easier to take a bunch of money and just pay people outside of the community to do things like hand out bed nets.
For x-risk, it seems much more valuable to develop a community of people who deeply care about the problem so that you can hire people who will autonomously figure out what needs to be done. This compares favourably to just throwing money at the problem, in which case you’re just likely to get work that sounds good, rather than work advancing your objective.
Right, although one has to watch for a possible effect on community composition. If we’re not careful, we’ll end up with a community full of x-risk folks not necessarily because x-risk is the correct cause prioritization, but because that’s what was recruited for, due to the theory-of-change issue you identify.
This seems like a self-fulfilling prophecy. If we never put effort into building a community around ways to reduce global poverty, we’ll never know what value they could have generated.
Also, it seems a priori really implausible that longtermists could usefully do more in their sphere alone than EAs focusing on the whole of the rest of EA-concern-space could.
Well EA did build a community around it and we’ve seen that talent is a greater bottleneck for longtermism than it is for global poverty.
The flipside argument would be that funding is a greater bottleneck for global poverty than longtermism, and one might convince university students focused on global poverty to go into earning-to-give (including entrepreneurship-to-give). So the goals of community building may well be different between fields, and community building in each cause area should be primarily judged on its contribution to that cause area’s bottleneck.
I could see a world in which the maths works out for that.
I guess the tricky thing there is that you need the amount raised, with a discount factor applied, to exceed the cost, including the opportunity cost of the community builders potentially earning to give themselves.
And this seems to be a much tighter constraint than that imposed by longtermist theories of change.
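To make that constraint concrete, here’s a minimal sketch of the break-even check, with purely illustrative numbers (all the figures and variable names here are hypothetical placeholders, not estimates from anyone in this thread):

```python
# Toy break-even check for community building aimed at earning-to-give.
# Every figure below is a made-up placeholder, not an estimate.

programme_cost = 20_000            # direct cost of running the group for a year
organiser_counterfactual = 10_000  # what the organisers might have donated had they earned to give instead
raised = 150_000                   # donations attributed to the group over members' careers
discount = 0.3                     # haircut for attrition, counterfactual donors, and time value

effective_raised = raised * discount
total_cost = programme_cost + organiser_counterfactual

# The constraint: discounted money moved must exceed all-in costs,
# including the organisers' own forgone earning-to-give.
if effective_raised > total_cost:
    print(f"Clears the bar: {effective_raised:,.0f} > {total_cost:,.0f}")
else:
    print(f"Doesn't clear the bar: {effective_raised:,.0f} <= {total_cost:,.0f}")
```

On these made-up numbers it clears the bar; the reply below is essentially the observation that both programme_cost and organiser_counterfactual can be much smaller when the organisers are undergraduates paid student-worker wages.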
True—although I think the costs would be much lower for university groups run by (e.g.) undergraduate student organizers who were paid typical student-worker wages (at most). The opportunity costs would seem much stronger for community organizing by college graduates than by students working a few hours a week.