Looking at recent EA forum posts in these areas, do EAs investigating how much juvenile insects matter relative to adult ones really have much more in common with ones working on loneliness than they do with ones evaluating whether a more resilient food supply could reduce x-risk?
I think a split along the lines of how “respectable” your cause area is might be possible (though still not a good idea):
Respectable: RCT-supported global health and animal work, development of vaccines, reducing risk from nukes or climate change, human- and animal-focused evaluation.
Intermediate: developing plant-based foods, monitoring for future pandemics.
Speculative: figuring out how to make life better for wild animals or align superintelligent AI, building refuges.
But in each of these buckets I’ve put at least one thing that I think would normally be called longtermist and one that wouldn’t.
I think “respectable” is kind of a loaded term that gives longtermism a slightly negative connotation. I feel like a more accurate measure would be how “galaxy brain” the cause area is: how much effort and time it takes to explain it to a regular person, or what percentage of normal people would be receptive to a pitch.
Other phrasings for this cluster include “low inferential distance”, “clearly valuable from a wide range of perspectives”, “normie”, “mainstream”, “lower risk”, “less neglected”, “more conventional”, “boring”, or “traditional”.
This is an excellent point that again highlights the problem of labeling something “Longtermist” when many expect it to transpire within their lifetimes.
Perhaps rather than a spectrum of “Respectable <-> Speculative”, the label could be a more neutral (though more of a mouthful) “High Uncertainty Discounting <-> Low Uncertainty Discounting”.
A further attempt at categorization that I think complements your “Respectable <-> Speculative” axis:
I’ve started to think of EA causes as sharing (among other things) a commitment to cosmopolitanism (i.e. neutrality with respect to the distance between the altruistic actor and the beneficiary), but differing according to which dimension of distance is emphasized: i) spatial distance (global health, development), ii) temporal distance (alignment), or iii) “mindspace” distance (animal welfare).
I think a table of “speculativeness” vs “cosmopolitanism type” would classify initiatives/proposals pretty cleanly, and might provide more information than “neartermism vs longtermism”?
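For concreteness, here’s a rough sketch of such a table using only examples already mentioned in this thread; the cell assignments are my own guesses, and the cells marked “?” are ones I couldn’t confidently fill:
Respectable: RCT-supported global health and development, vaccines (spatial) | nuclear and climate risk reduction (temporal) | RCT-supported animal work, animal-focused evaluation (mindspace)
Intermediate: ? (spatial) | monitoring for future pandemics (temporal) | developing plant-based foods (mindspace)
Speculative: ? (spatial) | aligning superintelligent AI, building refuges (temporal) | improving life for wild animals (mindspace)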
I like this categorization, but I’m not sure how well it accounts for the component of the community that is worried about x-risk for not especially cosmopolitan reasons. Like, if you think AI is 50% likely to kill everyone in the next 25 years, then you might choose to work on it even if you only care about your currently alive friends and family.
Which isn’t to say that people in this quadrant don’t care about the impact on other people, just that if the impact on the people close to them is large enough and motivating enough, then the more cosmopolitan impacts might not be very relevant?
Fair point. I’m actually pretty comfortable calling such reasoning “non-EA”, even if it led to joining pretty idiosyncratically-EA projects like alignment.
Actually, I guess there could be people attracted to specific EA projects from “non-EA” lines of reasoning across basically all cause areas?
Very reasonable, since it’s not grounded in altruism!