I really like this post and agree with the message.
I think this problem is exacerbated by the fact that AI safety does not seem to have a larger ecosystem of which EA is just one part. What % of AI safety funding/work is EA-affiliated? I would guess a very large share. Contrast this with GHD, which has a much larger ecosystem of funders/orgs/thought leadership of which EA is just a part. It is easy to think of EA as a valuable contributor to GHD, whereas AI safety feels like it is JUST EA. A natural consequence is that the choice to prioritize AI safety is viewed as “radical”, since hardly anyone outside EA seems to agree about it (not that I think EA is wrong, but I think this affects perceptions).
Say you had a random person who cares about the world and wants to have a positive impact, but has few priors about which causes should be prioritized. If we were to rank EA cause areas by how “believable” they are (1 being most believable), I think it would look something like:
Global Health and Development
Animal Welfare
Global Catastrophic Risks (not AI Safety)
GCR—AI Safety
When the community puts the least believable/most radical cause area front and center (again, not necessarily disagreeing with the analysis that leads to this), it becomes more difficult to bring people into the movement.
When I talk to people about EA as it relates to GHD and Animal Welfare, they tend to be fairly receptive to the arguments. I haven’t had any luck at all talking about longtermism/AI safety.
This all leads to a point I’ve been considering lately: I think EA as a whole would benefit from AI safety/longtermism “spinning off” into its own movement, to which EA still contributes. I know this isn’t really something that can be decided centrally; the only way it happens is if AI safety can attract non-EA funding/support/attention, which is of course a very difficult task!