To be frank, I think most of these criticisms are nonsense, and I am happy that the EA community is not spending its time engaging with whatever the "metaphysical implications of the psychedelic experience" are.
I get a sense of déjà vu reading this criticism, as I feel I've seen sixteen variants of it over the years: EA has this psychological problem, that deep Nietzschean struggle, and fails to value <author's pet interest>.
If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned. Doing the most good doesn't require others' approval. I would even wager that if someone wrote a convincing case for why we should be "taking dharma seriously", then many would start taking it seriously.
Yes, there are biases and blindspots in EA that lead us to have less accurate beliefs, but by and large I think the primary reason many of these topics aren't taken seriously is that the case for doing so usually isn't all that great.
There are definitely more charitable readings than I give here, but after seeing variants of this criticism again and again, I don't think the charitable interpretations are the most accurate. The EA community has a thousand flaws, but I don't think these are it.
It would be extremely surprising if all of them were being given the correct amount of attention. (For a start, #10 is vanilla and highly plausible, and while I've heard it before, I've never given it proper attention. #5 should worry us a lot.) Even highly liquid markets don't manage to price everything right all the time, when it comes to weird things.
What would the source of EA's perfect efficiency be? The grantmakers (who openly say that they have a sorta tenuous grasp on impact even in concrete domains)? The perfectly independent reasoning of each EA, including very new EAs? The philosophers, who sometimes throw up their hands and say "ah, hold up, we don't understand enough yet, let's wait and think instead"?
For about 4 years I've spent most of my time on EA, and 7 of these ideas are new to me. Even if they weren't, lack of novelty is no objection. Repetition is only waste if you assume that our epistemics are so good that we're processing everything right the first (or fourth) time we see it.
What do you think EA's biases and blindspots are?
One estimate from 2019 is that EA has 2315 "highly-engaged" EAs and 6500 "active EAs in the community."
So a way of making your claims more precise is to estimate how many of these people should drop some or all of what they're doing now to focus on these cause areas. It would also be helpful to specify what sorts of projects you think they'd be stopping in order to do that. If you think it would cause an influx of new members, they could be included in the analysis as well. Finally, I know that some of these issues do already receive attention from within EA (Michael Plant's wellbeing research, for example), so accounting for that would be beneficial.
To be clear, I think it would be best if all arguments about causes being neglected did this, and that arguments in favor of the status quo should do so as well.
I also think it's important to address why the issue in question is pressing enough that it needs a "boost" from EA relative to what it receives from non-EAs. For example, there's a fair amount of attention paid to nuclear risk already in the non-EA governance and research communities. Or in the case of "taking dharma seriously," which I might interpret as the idea that religious observance is in fact the central purpose of human life: why are the religious institutions of the world doing an inadequate job in this area, such that EA needs to get involved?
I realize this is just a list on Twitter, a sort of brainstorm or precursor to a deeper argument. That's a fine place to start. Without an explicit argument on the pros and cons of any given point, though, this list is almost completely illegible on its own. And it would not surprise me at all if any given list of 22 interdependent bullet-point-length project ideas and cause areas contained zero items that really should cause EA to shift its priorities.
Maybe there are other articles out there making deeper arguments in favor of making these into EA cause areas. If so, then it seems to me that we should make efforts to center conversation on those, rather than "regressing" to Twitter claims.
Alternatively, if this is where we're at, then I'd encourage the author, or anyone whose intuition is that these are neglected, to make a convincing argument for them. These are sort of the "epistemic rules" of EA.
In fact, I think that's sort of the movement's brand. EA isn't strictly about "doing the most good." How could we ever know that for sure?
Instead, it's about centering issues for which the strongest, most legible case can be made. This may indeed cause some inefficiencies, as you say. Some weird issues that are even more important than the legible ones we support may be ignored by EA, simply because they depend on so much illegible information to make their importance clear.
Hopefully, those issues will find support outside of EA. I think the examples of "dharma" or the "implications of psychedelics" are possibly subject to this dilemma. But I personally think EA is better when it confines itself to legible cause areas. There's already a lot of intuition-and-passion-based activism and charity out there.
If anyone thinks EA ought to encompass illegible cause areas, I would be quite interested to read a (legible!) argument explaining why!
Agree with almost all of this except: the bar for proposing candidates should be way way lower than the bar for getting them funded and staffed and esteemed. I feel you are applying the latter bar to the former purpose.
Legibility is great! The reason I promoted Griffes' list of terse/illegible claims is because I know they're made in good faith and because they make the disturbing claim that our legibility/plausibility sensor is broken. In fact, if you look at his past Forum posts you'll see that a couple of them are expanded already. I don't know what mix of "x was investigated silently and discarded" and "movement has a blindspot for x" explains the reception, but hey, nor does anyone.
Current vs claimed optimal person allocation is a good idea, but I think I know why we don't do 'em: because almost no one has a good idea of how large efforts currently are, once we go any more granular than "big 20 cause area".
Very sketchy BOTEC for the ideas I liked:
#5: Currently >= 2 people working on this? Plus lots of outsiders who want to use it as a weapon against longtermism. Seems worth a dozen people thinking out loud and another dozen thinking quietly.
#10: Currently >= 3 people thinking about it, which I only know because of this post. Seems worth dozens of extra nuke people, which might come from the recent Longview push anyway.
#13: Currently around 30? people, including my own minor effort. I think this could boost the movement's effects by 10%, so 250 people would be fine.
#20: Currently I guess >30 people are thinking about it, going to India to recruit, etc. Counting student groups in non-focus places, maybe 300. But this one is more about redirecting some of the thousands in movement building I guess.
That was hard and probably off by an order of magnitude, because most people's work is quiet and unindexed if not actively private.
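A very rough tabulation of the guesses above, for concreteness (a minimal sketch only: the exact figures, like reading "dozens" as about three dozen, are loose interpretations of the estimates in this comment rather than anything authoritative):

```python
# Rough current-vs-suggested headcount guesses from the BOTEC above.
# All figures are guesses (or loose readings of them), not measured data.

community_size = 2315  # "highly-engaged" EAs, per the 2019 estimate cited earlier

# (idea, rough current headcount, rough suggested headcount)
guesses = [
    ("#5", 2, 24),      # a dozen thinking out loud + a dozen thinking quietly
    ("#10", 3, 36),     # "dozens of extra nuke people", read here as ~3 dozen
    ("#13", 30, 250),   # ~10% boost to the movement's effects, so ~250 people
    ("#20", 300, 300),  # mostly redirecting existing movement-building effort
]

total_suggested = sum(suggested for _, _, suggested in guesses)
print(f"Suggested total: {total_suggested} people "
      f"(~{total_suggested / community_size:.0%} of highly-engaged EAs)")
```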
One constructive project might be to outline a sort of "pipeline"-like framework for how an idea becomes an EA cause area. What is the "epistemic bar" for:
Thinking about an EA cause area for more than 10 minutes?
Broaching a topic in informal conversation?
Investing 10 hours researching it in depth?
Posting about it on the EA forum?
Seeking grant funding?
Right now, I think that we have a bifurcation caused by self-reinforcing feedback loops. A popular EA cause area (say, AI risk or global health) becomes an attractor in a way that goes beyond the depth of the argument in favor of it. It's normal and fine for an EA to pursue global health, while there's little formal EA support for some of the ideas on this list. Cause areas that have been normed benefit from accumulating evidence and infrastructure to stay in the spotlight. Causes that haven't benefited from EA norming languish in the dark.
This may be good or bad. The pro is that it's important to get things done, and concentrating our efforts in a few consensus areas that are imperfect but good enough may ultimately help us organize and establish a track record of success over the long run. In addition, maybe we want to consider "enough founder energy to demand attention" as part of what makes a neglected idea "tractable" to elevate into a cause area.
The con is that it seems like, in theory, we'd want to actually focus extra attention on those neglected (and important, tractable) ideas; that seems consistent with the heuristics we used to elevate the original cause areas in the first place. And it's possible that conventional EA is monopolizing resources, so that it's harder for someone in 2022 to "found" a new EA cause area than it was in 2008.
So hopefully it doesn't seem like a distraction from the object-level proposals on the list to bring up this meta-issue.
To be frank, I think most of these criticisms are nonsense, and I am happy that the EA community is not spending its time engaging with whatever the "metaphysical implications of the psychedelic experience" are.
...
If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned... I would even wager that if someone wrote a convincing case for why we should be "taking dharma seriously", then many would start taking it seriously.
These two bits seem fairly contradictory to me.
If you think a position is "nonsense" and you're "happy that the EA community is not spending its time engaging with" it, is someone actually "very welcome" to do a write-up about it on the EA Forum?
In a world where a convincing case can be written for a weird view, should we really expect EAs to take that view seriously, if they're starting from your stated position that the view is nonsense and not worth the time to engage with? (Can you describe the process by which a hypothetical weird-but-correct view would see widespread adoption?)
And, who would take the time to try & write up such a case? Milan said he thinks EA "basically can't hear other flavors of important feedback", suggesting a sense in which he agrees with your first paragraph: EAs tend to think these views are nonsense and not worth engaging with, therefore there is no point in defending them at length because no one is listening.
I'm reminded of this post, which stated:
We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism... It was these same people that then tried to prevent this paper from being published.
It doesn't feel contradictory to me, but I think I see where you're coming from. I hold the following two beliefs, which may seem contradictory:
1. Many of the aforementioned blindspots seem like nonsense, and I would be surprised if extensive research in any would produce much of value.
2. By and large, people should form and act on their own beliefs rather than deferring to what is accepted by some authority.
There's an endless number of things which could turn out to be important. All else equal, EAs should prioritise researching the things which seem the most likely to turn out to be important.
This is why I am happy that the EA community is not spending time engaging with many of these research directions, as I think they're unlikely to bear fruit. That doesn't mean I'm not willing to change my mind if I were presented with a really good case for their importance!
If someone disagrees with my assessment, then I would very much welcome research and write-ups, after which I would not be paying the cost of
"should I (or someone else) prioritise researching psychedelics over this other really important thing?"
but rather
"should I prioritise reading this paper/writeup over the many other potentially less important papers?"
If everyone refused to engage with even a short writeup on the topic, I would agree that there was a problem, and to be fair, I think there are some issues with misprioritisation due to poor use of proxies such as "does the field sound too weird" or "is the author high status". But I think in the vast majority of cases, what happens is simply that the writeup wasn't sufficiently convincing to justify moving resources away from other important research fields to engage further. This will of course seem like a mistake to the people who are convinced of the topic's importance, but like the correct action to those who aren't.
It's worth pointing out that #5 will not be news to EAs who have come across Bostrom's paper The Vulnerable World Hypothesis, which is featured on the Future of Humanity Institute's website. It also generated quite a bit of discussion here.
As for #10, it sounds like people at CSER are investigating similar issues, as per the comment by MMMaas elsewhere in this thread.
I'm not convinced any of the ideas mentioned are very important blindspots.
Rethink also have something coming on #10, apparently. But this is then some evidence for Griffes' nose.
Even if none of these were blindspots, it's worth actively looking for the ones we no doubt have (from good-faith sources).
Maybe, but if multiple people have come across an idea then that may be evidence it's not very hard to come across...
Absolutely.