Agree with almost all of this except: the bar for proposing candidates should be far, far lower than the bar for getting them funded, staffed, and esteemed. I feel you are applying the latter bar to the former purpose.
Legibility is great! The reason I promoted Griffes’ list of terse/illegible claims is that I know they’re made in good faith, and that they make the disturbing claim that our legibility/plausibility sensor is broken. In fact, if you look at his past Forum posts you’ll see that a couple of them are already expanded. I don’t know what mix of “x was investigated silently and discarded” and “the movement has a blind spot for x” explains the reception, but then, neither does anyone else.
Current vs. claimed-optimal person allocation is a good idea, but I think I know why we don’t do them: almost no one has a good sense of how large current efforts are, once we go any more granular than “the ~20 big cause areas”.
Very sketchy BOTEC for the ideas I liked:
#5: Currently >= 2 people working on this? Plus lots of outsiders who want to use it as a weapon against longtermism. Seems worth a dozen people thinking out loud and another dozen thinking quietly.
#10: Currently >= 3 people thinking about it, which I only know because of this post. Seems worth dozens of extra nuke people, which might come from the recent Longview push anyway.
#13: Currently around 30(?) people, including my own minor effort. I think this could boost the movement’s effects by 10%, so 250 people would be fine.
#20: Currently I’d guess >30 people are thinking about it, going to India to recruit, etc. Counting student groups in non-focus places, maybe 300. But this one is more about redirecting some of the thousands already in movement building.
That was hard and probably off by an order of magnitude, because most people’s work is quiet and unindexed if not actively private.
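To make the shape of the guesses above explicit, here’s a toy tally. All the numbers are my own hypothetical point estimates from the BOTEC (easily off by an order of magnitude, as noted), not measured data:

```python
# Rough current vs. proposed headcounts guessed above.
# All figures are hypothetical BOTEC estimates, not measurements.
botec = {
    "#5":  {"current": 2,   "proposed": 24},   # a dozen thinking aloud + a dozen quietly
    "#10": {"current": 3,   "proposed": 36},   # "dozens" of extra nuke people
    "#13": {"current": 30,  "proposed": 250},  # if it boosts movement effects ~10%
    "#20": {"current": 300, "proposed": 300},  # redirect existing movement builders
}

# Net extra people the guesses imply across the four ideas
extra = sum(v["proposed"] - v["current"] for v in botec.values())
print(f"Net extra people implied: {extra}")
```

Even this crude tally suggests the proposals mostly imply tens to low hundreds of extra people, not a wholesale reallocation of the movement.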
One constructive project might be to outline a sort of “pipeline” framework for how an idea becomes an EA cause area. What is the “epistemic bar” for:

- Thinking about a candidate cause area for more than 10 minutes?
- Broaching it in informal conversation?
- Investing 10 hours researching it in depth?
- Posting about it on the EA Forum?
- Seeking grant funding?
Right now, I think we have a bifurcation caused by positive feedback loops. A popular EA cause area (say, AI risk or global health) becomes an attractor in a way that goes beyond the strength of the argument in its favor. It’s normal and fine for an EA to pursue global health, while there’s little formal EA support for some of the ideas on this list. Cause areas that have been normed accumulate evidence and infrastructure that keep them in the spotlight; causes that haven’t benefited from EA norming languish in the dark.
This may be good or bad. The pro is that it’s important to get things done, and concentrating our efforts in a few consensus areas that are imperfect but good enough may ultimately help us organize and establish a track record of success over the long run. In addition, maybe we want to consider “enough founder energy to demand attention” as part of what makes a neglected idea “tractable” to elevate into a cause area.
The con is that it seems like, in theory, we’d want to actually focus extra attention on those neglected (and important, tractable) ideas—that seems like a self-consistent principle with the heuristics we used to elevate the original cause areas in the first place. And it’s possible that conventional EA is monopolizing resources, so that it’s harder for someone in 2022 to “found” a new EA cause area than it was in 2008.
Hopefully, raising this meta-issue doesn’t come across as a distraction from the object-level proposals on the list.