people don’t really evaluate a moral claim in isolation. [...] And that it’s much easier to convince them of the moral claim once you can point to tractable, action-relevant conclusions.
This seems right—I’ve definitely seen people come across longtermism before coming across x-risks, and have a reaction like, “Well, sure, but can we do anything about it?” I wonder if this means intro programs should at least flip the order of materials—put x-risks before longtermism.
My read from running intro fellowships is that lots of people find longtermism weird, and my implicit sense is that many people who ultimately end up identifying as longtermist still have a fair amount of doubt but are deferring to their perception of the EA consensus. Plus, even if your claim IS true, to me that would imply that we’re selecting intro fellows wrong!
Oh interesting, in my experience (from memory, which might be questionable) intro fellows tend to theoretically buy (at least weak?) longtermism pretty easily. And my vague impression is that a majority of professional self-identified longtermists are pretty comfortable with the idea—I haven’t met anyone who’s working on this stuff and says they’re deferring on the philosophy (while I feel like I’ve often heard that people feel iffy/confused about the empirical claims).
And interesting point about the self-selection effects being ones to try to avoid! I think those self-selection effects mostly come from the EA branding of the programs, so it’s not immediately clear to me how those self-selection effects can be eliminated without also losing out on some great self-selection effects (e.g., selection for analytical thinkers, or for people who are interested in spending their careers helping others).
I’d be pro giving them the argument for longtermism and some intuition pumps and seeing if it grabs people, so long as we also ensure that the message doesn’t implicitly feel like “and if you don’t agree with longtermism you also shouldn’t prioritise x-risk”. The latter is the main thing I’m protecting against here.
Yeah, that’s fair.
It is likely less efficient, but maybe only by 3-30x.
I’m sympathetic to something along these lines. But I think that’s a great case (from longtermists’ lights) for keeping longtermism in the curriculum. If one week of readings has a decent chance of boosting already-impactful people’s impact by, say, 10x (by convincing them to switch to 10x more impactful interventions), that seems like an extremely strong reason for keeping that week in the curriculum.
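To make the arithmetic of that argument concrete, here is a minimal sketch in Python. The 10x boost comes from the comment above; the 10% probability standing in for “a decent chance”, and the simplification that the week changes nothing else, are illustrative assumptions rather than figures from the discussion:

```python
# Minimal sketch of the expected-value argument above.
# Assumed numbers: a 10% chance the longtermism week causes a fellow to
# switch to a 10x more impactful intervention; otherwise nothing changes.

baseline_impact = 1.0    # a fellow's impact without the longtermism week (arbitrary units)
boost_multiplier = 10.0  # impact if they switch to the 10x more impactful intervention
p_switch = 0.10          # assumed "decent chance" that the week causes the switch

expected_with_week = (1 - p_switch) * baseline_impact + p_switch * boost_multiplier * baseline_impact
expected_gain = expected_with_week - baseline_impact

print(f"Expected impact with the week: {expected_with_week:.2f}x baseline")
print(f"Expected gain from one week of readings: {expected_gain:.2f}x baseline")
# Under these assumptions the single week adds ~0.9x of a fellow's baseline
# impact in expectation, which is the sense in which the reason is "extremely strong".
```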
I haven’t met anyone who’s working on this stuff and says they’re deferring on the philosophy (while I feel like I’ve often heard that people feel iffy/confused about the empirical claims).
Fair—maybe my sense is that people mostly buy ‘future people have non-zero worth and extinction sure is bad’, but are more uncertain about a totalising view like ‘almost all value is in the far future, stuff today doesn’t really matter, moral worth scales with the total number of future people, which could easily be >=10^20’.
I’m sympathetic to something along these lines. But I think that’s a great case (from longtermists’ lights) for keeping longtermism in the curriculum. If one week of readings has a decent chance of boosting already-impactful people’s impact by, say, 10x (by convincing them to switch to 10x more impactful interventions), that seems like an extremely strong reason for keeping that week in the curriculum.
Agreed! (Well, by the lights of longtermism at least—I’m at least convinced that extinction is 10x worse than a temporary civilisational collapse, but maybe not 10^10x worse.) At this point I feel like we mostly agree—keeping a fraction of the content on longtermism, after x-risks, and making it clear that it’s totally legit to work on x-risk without buying longtermism would make me happy.
Thanks! Great points.