One estimate from 2019 is that EA has 2315 “highly-engaged” EAs and 6500 “active EAs in the community.”
So a way of making your claims more precise is to estimate how many of these people should drop some or all of what they’re doing now to focus on these cause areas. It would also be helpful to specify what sorts of projects you think they’d be stopping in order to do that. If you think it would cause an influx of new members, they could be included in the analysis as well. Finally, I know that some of these issues already receive attention from within EA (Michael Plant’s wellbeing research, for example), so accounting for that would be beneficial too.
To be clear, I think it would be best if every argument that a cause is neglected did this. Arguments in favor of the status quo should do so as well.
I also think it’s important to address why the issue in question is pressing enough that it needs a “boost” from EA relative to what it already receives from non-EAs. For example, there’s a fair amount of attention paid to nuclear risk in the non-EA governance and research communities. Or, in the case of “taking dharma seriously,” which I might interpret as the idea that religious observance is in fact the central purpose of human life, why are the world’s religious institutions doing such an inadequate job in this area that EA needs to get involved?
I realize this is just a list on Twitter, a sort of brainstorm or precursor to a deeper argument. That’s a fine place to start. Without an explicit argument on the pros and cons of any given point, though, this list is almost completely illegible on its own. And it would not surprise me at all if any given list of 22 interdependent bullet-point-length project ideas and cause areas contained zero items that really should cause EA to shift its priorities.
Maybe there are other articles out there making deeper arguments for turning these into EA cause areas. If so, then it seems to me that we should make an effort to center the conversation on those, rather than “regressing” to Twitter claims.
Alternatively, if this is where we’re at, then I’d encourage the author, or anyone whose intuition is that these are neglected, to make a convincing argument for them. That’s sort of one of the “epistemic rules” of EA.
In fact, I think that’s sort of the movement’s brand. EA isn’t strictly about “doing the most good.” How could we ever know that for sure?
Instead, it’s about centering issues for which the strongest, most legible case can be made. This may indeed cause some inefficiencies, as you say. Some weird issues that are even more important than the legible ones we support may be ignored by EA, simply because they depend on so much illegible information to make their importance clear.
Hopefully, those issues will find support outside of EA. I think the examples of “dharma” and the “implications of psychedelics” are possibly subject to this dilemma. But I personally think EA is better when it confines itself to legible cause areas. There’s already a lot of intuition-and-passion-based activism and charity out there.
If anyone thinks EA ought to encompass illegible cause areas, I would be quite interested to read a (legible!) argument explaining why!
Agree with almost all of this, except: the bar for proposing candidates should be way, way lower than the bar for getting them funded, staffed, and esteemed. I feel you are applying the latter bar to the former purpose.
Legibility is great! I promoted Griffes’ list of terse/illegible claims because I know they’re made in good faith and because they make the disturbing claim that our legibility/plausibility sensor is broken. In fact, if you look at his past Forum posts you’ll see that a couple of them are already expanded. I don’t know what mix of “x was investigated silently and discarded” and “the movement has a blind spot for x” explains the reception, but then nor does anyone else.
Current vs. claimed-optimal person allocation is a good idea, but I think I know why we don’t do it: almost no one has a good idea of how large current efforts are once we go any more granular than the “big 20” cause areas.
Very sketchy BOTEC (back-of-the-envelope calculation) for the ideas I liked:
#5: Currently >= 2 people working on this? Plus lots of outsiders who want to use it as a weapon against longtermism. Seems worth a dozen people thinking out loud and another dozen thinking quietly.
#10: Currently >= 3 people thinking about it, which I only know because of this post. Seems worth dozens of extra nuke people, which might come from the recent Longview push anyway.
#13: Currently around 30(?) people, including my own minor effort. I think this could boost the movement’s effects by 10%, so 250 people would be fine.
#20: Currently I guess >30 people are thinking about it, going to India to recruit, etc. Counting student groups in non-focus places, maybe 300. But this one is more about redirecting some of the thousands already in movement building, I guess.
That was hard and probably off by an order of magnitude, because most people’s work is quiet and unindexed if not actively private.
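To spell out the arithmetic behind the #13 guess, here’s a minimal sketch in Python. The break-even framing and the exact numbers are as rough and assumption-laden as everything above:

```python
# Sketch of the #13 BOTEC. Assumption: an idea that boosts the whole movement's
# output by X% is worth devoting up to roughly X% of highly-engaged EAs to.

HIGHLY_ENGAGED_EAS = 2315  # 2019 estimate quoted at the top of this thread


def breakeven_headcount(boost_fraction: float, movement_size: int = HIGHLY_ENGAGED_EAS) -> int:
    """People worth allocating to an idea that boosts the movement's effects by boost_fraction."""
    return round(boost_fraction * movement_size)


# A guessed ~10% boost comes out around 230 people, the same ballpark as
# "250 people would be fine" above.
print(breakeven_headcount(0.10))  # 232
```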
One constructive project might be to outline a “pipeline”-style framework for how an idea becomes an EA cause area. What is the “epistemic bar” for:
Thinking about an EA cause area for more than 10 minutes?
Broaching a topic in informal conversation?
Investing 10 hours researching it in depth?
Posting about it on the EA forum?
Seeking grant funding?
Right now, I think we have a bifurcation caused by positive feedback loops. A popular EA cause area (say, AI risk or global health) becomes an attractor in a way that goes beyond the depth of the argument in its favor. It’s normal and fine for an EA to pursue global health, while there’s little formal EA support for some of the ideas on this list. Cause areas that have been normed accumulate the evidence and infrastructure that keep them in the spotlight; causes that haven’t benefited from EA norming languish in the dark.
This may be good or bad. The pro is that it’s important to get things done, and concentrating our efforts in a few consensus areas that are imperfect but good enough may ultimately help us organize and establish a track record of success over the long run. In addition, maybe we want to consider “enough founder energy to demand attention” as part of what makes a neglected idea “tractable” to elevate into a cause area.
The con is that, in theory, we’d want to focus extra attention on exactly those neglected (and important, tractable) ideas; that seems consistent with the heuristics we used to elevate the original cause areas in the first place. And it’s possible that conventional EA is monopolizing resources, so that it’s harder for someone in 2022 to “found” a new EA cause area than it was in 2008.
So I hope raising this meta-issue doesn’t seem like a distraction from the object-level proposals on the list.