I basically agree with Sarah Constantin's and Ben Hoffman's critiques. The community is too large and distributed to avoid principal-agent problems and Ribbonfarm-Sociopaths. The more people are involved, the worse the decision-making processes get. So I'd prefer to split the community in two: one part focused on externally facing projects that primarily interact with non-EAs, and another that's smaller, denser, and inward-facing, and that can be arbitrarily ambitious. The second group has to avoid the forces that attract Sociopaths and Ra, which means it must be relatively small, must be highly socially interconnected, must expand organically, and must have very high standards.
As a related mechanism toward the same end, I would want the community to stop agreeing to disagree on cause areas and on how to spend money. The returns to focusing on a cause are superlinear in the amount of thought and resources that go into it. As such, we're paying a tax in epistemics and outcomes in order to have a wider community, and I don't think the wider community gives us all that much in return.
More or less all of the people I interact with are associated with the effective altruism and/or rationality communities. I’m connected to the MIRI/CFAR cluster of people, though I’m not generally directly involved in what they do.
Anonymous #10: