I think that outlining the case for longtermism (and for EA principles more broadly) is better for building a community of people who will reliably choose the highest-priority actions and paths to do the most good, and that this is better for the world and for keeping x-risk low in the long run.
Broadly agree; nitpick follows.
I’m persuaded of all this, except for the “better for the world” part, which I’m not sure about and which I think you didn’t argue for. That is, you’ve persuasively argued that emphasising the process over the conclusions has benefits for community epistemics and long-term community health; but this trades off against other metrics one might have, like the growth and capacity of individual x-risk-related fields, and you don’t comment on the current margin.
For example, if you adopt David Nash’s lens of EA as an incubator and coordinator of other communities, “it’s possible that by focusing on EA as a whole rather than specific causes, we are holding back the growth of these fields.”
The low-fidelity message “holy shit, x-risk” may be an appropriate pitch in some situations, given that people have limited attention and that ‘getting people into EA per se’ is not what we directly care about. For example, among mid-career people with relevant skills, or others whom we expect to be collaborators with EA more than participants in it.
The high-fidelity message sequence “EA → Longtermism → x-risk”, being a more complicated idea, is better suited to building the cause-prioritisation community: the meta-community that coordinates the others. For example, when fishing for future highly-engaged EAs at universities.
This still leaves open the question of which of these should be the visible outer layer of EA that people encounter first in the media etc., and on that I think the current margin (which emphasises longtermism over x-risk) is OK. But my takeaway from David Nash’s post is that we should make sure to maintain pathways within EA — even ‘deep within’, e.g. at conferences — that provide value and action-relevance for people who aren’t going to consider themselves EAs, but who will go on to be informed and affected by EA for a long time (as opposed to having the implicit endpoint be “do direct work for an EA org”). If these people know they can find each other within EA, that’s also good for the community’s breadth of knowledge.
Thanks! Yes, I’m sympathetic to the idea that I’m anchoring too hard on EA growth being strongly correlated with more good being done in the world, which might be wrong. Also agree that we should test out different messages, and welcome people who are convinced by some but not others.