This is an interesting perspective. It makes me wonder whether, and how, there could be reasonably well-defined sub-groups that EAs can easily identify, e.g. “long-termists interested in things like AI” vs. “short-termists who place significantly more weight on currently living beings”—OR—“human-centered” vs. “those who place significant weight on non-human lives.”
As within Christianity, specific values and interpretations can (and arguably should) be diverse, which leads to sub-groups. But there is a sort of “meta-value” that all sub-groups hold: that we should use our resources to do the most good we can. It is vague enough to be interpreted in many ways, but specific enough to keep the community organized.
I think the fact that I could come up with (vaguely defined) examples of sub-groups indicates that, in some sense, the EA community already has sub-communities. I agree with the original post that too much value-alignment risks stagnation or other negative consequences. However, in my two years of reading and learning about EA, I have never found EAs to be unaware of or overconfident in their beliefs; it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints.
I personally never felt that not wanting (nor being able to imagine) an AI singleton that brings stability to humanity meant that I wasn’t an EA.
It’s possible that OPS could be useful to EA, but as stated in the post, its validity is not established. It’s hard for me to see how OPS has more predictive ability for mental illness (and subsequent treatment) than any other model of personality. The key feature that makes OPS unique seems to be that it tracks changing personality throughout the day—but what is it about that feature that makes you believe it could be a better model with more predictive power? Just more granularity?
What are the key first steps that an EA could take? Are you looking for funding? Looking to connect with an established researcher in psychology, or an established institution?
“While they [Dave & Shannon] have taken steps to move to a more scientific approach, some have argued that they fall short of a truly scientific methodology. Nevertheless, that does not make their system invalid.”
This is probably the biggest bottleneck in convincing an EA to get involved here. Have Dave & Shannon published peer-reviewed papers with results that can be replicated? Have they tried to make contact with established institutions? What if the best next step is for Dave & Shannon to enroll in graduate school and pursue a PhD with this as their research?