A further attempt at categorization that I think complements your “Respectable <-> Speculative” axis.
I’ve started to think of EA causes as sharing (among other things) a commitment to cosmopolitanism (i.e. neutrality with respect to the distance between the altruistic actor and beneficiary), but differing according to which dimension of distance is emphasized: i) spatial distance (global health, development), ii) temporal distance (alignment), or iii) “mindspace” distance (animal welfare).
I think a table of “speculativeness” vs “cosmopolitanism type” would classify initiatives/proposals pretty cleanly, and might provide more information than “neartermism vs longtermism”?
I like this categorization, but I’m not sure how well it accounts for the part of the community that is worried about x-risk for not especially cosmopolitan reasons. Like, if you think AI is 50% likely to kill everyone in the next 25 years, then you might choose to work on it even if you only care about your currently alive friends and family.
Which isn’t to say that people in this quadrant don’t care about the impact on other people, just that if the impact on people close to you is large and motivating enough, then the more cosmopolitan impacts might not be very relevant?
Fair point. I’m actually pretty comfortable calling such reasoning “non-EA”, even if it led to joining pretty idiosyncratically-EA projects like alignment.
Actually, I guess there could be people attracted to specific EA projects from “non-EA” lines of reasoning across basically all cause areas?
Very reasonable, since it’s not grounded in altruism!