One constructive project might be to outline a "pipeline"-style framework for how an idea becomes an EA cause area. What is the "epistemic bar" for:
Thinking about a potential cause area for more than 10 minutes?
Broaching a topic in informal conversation?
Investing 10 hours researching it in depth?
Posting about it on the EA forum?
Seeking grant funding?
Right now, I think we have a bifurcation caused by feedback loops. A popular EA cause area (say, AI risk or global health) becomes an attractor in a way that goes beyond the strength of the argument in its favor. It's normal and fine for an EA to pursue global health, while there's little formal EA support for some of the ideas on this list. Cause areas that have been normalized benefit from accumulating evidence and infrastructure that keep them in the spotlight. Causes that haven't benefited from that norming languish in the dark.
This may be good or bad. The pro is that it’s important to get things done, and concentrating our efforts in a few consensus areas that are imperfect but good enough may ultimately help us organize and establish a track record of success over the long run. In addition, maybe we want to consider “enough founder energy to demand attention” as part of what makes a neglected idea “tractable” to elevate into a cause area.
The con is that, in principle, we'd want to focus extra attention on those neglected (yet important and tractable) ideas; doing so would be consistent with the very heuristics we used to elevate the original cause areas in the first place. And it's possible that conventional EA is monopolizing resources, so that it's harder for someone in 2022 to "found" a new EA cause area than it was in 2008.
So I hope that raising this meta-issue doesn't seem like a distraction from the object-level proposals on the list.