This is an interesting perspective. It makes me wonder whether, and how, there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on currently living beings," or "human-centered" vs. "those who place significant weight on non-human lives."
As within Christianity, specific values and interpretations can (and arguably should) be diverse, which naturally leads to sub-groups. But there is a sort of "meta-value" that all sub-groups hold: that we should use our resources to do the most good we can. It is vague enough to be interpreted in many ways, yet specific enough to keep the community organized.
I think the fact that I could come up with (vaguely defined) examples of sub-groups indicates that, in some sense, the EA community already has sub-communities. I agree with the original post that too much value-alignment risks stagnation or other negative consequences. However, in my two years of reading and learning about EA, I've never thought that EAs were unaware of or overly confident in their beliefs; it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints.
I personally have never felt that not wanting (or being unable to imagine) an AI singleton that brings stability to humanity meant I wasn't an EA.