Thanks, you make a compelling argument for AI safety movement building. I especially appreciate that you're drawing these conclusions from a lot of first-hand community-building experience. However, I think you might be (perhaps unintentionally) setting up a false dichotomy between general EA community building and AI safety community building.
I might be wrong, but perhaps you are saying that EA should intentionally support the budding AI alignment community more heavily than it does now, and that in some cases this community should be prioritised for funding over other EA groups? That would seem reasonable to me, at least. Your conclusion of “On the margin, I’d direct more resources towards AI safety movement building, though I still think EA movement-building can be very valuable and should continue to some extent” seems to back up my take?
It makes sense to me that EA funds could experiment with investing a decent amount in communities built specifically around AI safety, then gather data for a couple of years and see whether it produces both a consistent community and fruitful counterfactual AI safety efforts. It seems likely these communities would be intertwined and connected with current EA communities to different extents in different places, but they could also be very separate. This might already be an explicit plan which is happening and I’ve missed it.
Also, initial recruitment numbers only tell part of the effectiveness story. One of the strengths of EA is that people, once they join the community, often:
1. Devote a decent part of their life/time/resources to the community and the work
2. Have a decent likelihood of staying in it for the long term (this must be quantified somewhere too)
Whether these features would also be present in an AI safety community remains to be seen.
Like titotal said, I don’t think a drastic pivot pulling a huge amount of money away from EA community building and towards AI safety groups would be a great strategic move. Putting all our eggs in one basket and leaving established communities high and dry seems like a bad idea - mind you, I don’t think that will happen anyway.
Final question: “Anecdotally, impartial/future-focused altruism is not the primary motivation for a large portion of individuals working full-time on AI existential risk reduction (and maybe the majority).” If not this, then what is their motivation, beyond perhaps selfish fear for themselves or their families? I’m genuinely intrigued here.
Nice one!