Thanks for this, and great diagrams! To think about the relationship between EA and AI safety, it might help to think about what EA is for in general. I see a (or the) purpose of EA as helping people figure out how they can do the most good: to learn about the different paths, the options, and the landscape. In that sense, EA is a bit like a university, or a market, or maybe even just a signpost: once you’ve learnt what you needed, or found what you want and where to go, you don’t necessarily stick around. Maybe you need to ‘go out’ into the world to do what calls you.
This explains your Venn diagram: GHD and animal welfare are causes that exist prior to, and independently of, EA. They, rather than EA, are where the action is if you prioritise those things. AI safety, by contrast, grew up inside EA.
I imagine AI safety will naturally form its own ecosystem independent of EA. Much as you don’t need to participate in the EA community if you care about global development, a time will come when you won’t need to participate in EA to work on AI safety either.
This doesn’t mean that EA becomes irrelevant, just as a university doesn’t stop mattering when students graduate, and a market doesn’t cease to be useful once some people have found what they want. There will be further cohorts who want to learn, and some people will need to stick around to think about and highlight their options.