Hey Stefan,
Thanks for the comment. I think this describes a pretty common view in EA that I want to push back against.
Let’s start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than those individuals simply deferring to large EA orgs. Moving to a more practical example: older, more experienced organizations and people actually recommended against starting many organizations (CE being one of them and FTX being another). Those organizations’ actions and projects seem extremely high value relative to the alternatives, for example, a chapter leader who basically follows the standard script (a pattern I personally could easily have fallen into). Something often forgotten is the extremely high upside of doing something outside the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this: if EA had deferred only to GiveWell, or only to more traditional philanthropic actors, how impactful would that have been?
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of other epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty comfortable deferring based on expertise (e.g. “this doctor knows what a CAT scan looks like better than me”, or “GiveWell has considered the effects of malaria much more than me”), and I think these situations do lend themselves to higher deference. But questions like “how much ethical value do I ascribe to animals” or “what is my trade-off between income and health” are 1) way less considered, and 2) much harder to resolve through deeper research. I see a lot of deference resting on exactly this sort of thing, e.g. the assumption that GiveWell or GPI do not build in pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people who arrive with a specific set of viewpoints) or communication effects (e.g. I engaged considerably less with EA when I thought direct work was the most impactful thing than when I thought meta work was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has actually gone into a given topic relative to the hours spent on it: for example, many people assume more careful, broad-based cause consideration has been done than really has been. When you have a more detailed view of what different EA organizations are working on, you see a different picture.
On the philosophical-side paragraph: totally agree, and this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to a divergence in the kinds of work people do, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to stay alive and open to new things that might be important. Perhaps on the margin an individual’s most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.
Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that’s just the way it should be, though.
I do think that higher-deference cultures are better at cooperating and getting things done, and these are no easy tasks for large movements. But there have been movements with these properties that accidentally did terrible things, and there have been movements with these properties that did wonderful things.
I’d guess there may be a correlation between thinking there should be more deference and being in the “row” camp, and between thinking there should be less and being in the “steer” camp (or another camp) described here.
I worry a bit that these discussions become anecdotal, and that the arguments rely on examples where it’s not quite clear what role deference, or its absence, actually played. No doubt there are examples where people would have done better had they deferred less, but that need not change the overall picture much.
Fwiw, I think one thing that’s important to keep in mind is that deference doesn’t necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deferring to that advice thus means starting an independent project rather than working within a big one.