I also think that EA consensus views are often unusually well-grounded, meaning there are unusually strong reasons to defer to them. (But obviously this may reflect my own biases.)
Fwiw I think many effective altruists defer too little rather than too much.
Could you give a few specific examples of times you have seen EAs deferring too little?
My view is that when you are considering whether to take some action and are weighing up its effects, you shouldn’t in general put special weight on your own beliefs about those effects (there are some complicating factors here, but that’s a decent first approximation). Instead you should put the same weight on your own beliefs as on others’. I think most people don’t do that, but put much too much weight on their own beliefs relative to others’. Effective altruists have shifted away from that human default, but in my view it’s unlikely—in the light of the general human tendency to overweight our own beliefs—that we’ve shifted as far in the direction of greater deference as we ideally should. (I think that it may not be possible to attain that level of deference, but it’s nevertheless good to be clear about what the right direction is.) This varies a bit within the community, though—my sense is that highly engaged professional effective altruists, e.g. at the largest orgs, are closer to the optimal level of deference than the community at large.
I won’t be able to give you examples where I demonstrate that there was too little deference. But since you asked for examples, I’ll point to some instances where my opinion is that there’s too little deference.
Whether you think someone deferred too little or too much regarding some particular decisions will often depend on your object-level views on what’s effective. In my view, quite a few interventions pursued by effective altruists are substantially less effective than the most effective interventions; and those who pursue those less effective interventions would normally increase their impact if they deferred more, and shifted to interventions that are closer to the effective altruist consensus. But obviously, readers who disagree with my cause priorities (i.e. longtermism, of a fairly conventional kind) may disagree with that analysis of deference as well.
Relatedly, one pattern that I’ve noticed is that people on the forum—including people who aren’t deeply immersed in effective altruist thinking—criticise some longstanding effective altruist practices or strategies with arguments that are unconvincing to me. In such cases, my reaction tends to be that they should take another look and ask: “maybe they’ve thought more about this than I have—maybe there is something that I’ve missed?” More often than not, very smart people have thought very extensively about most such issues, and it’s therefore unlikely that someone who has thought substantially less about them would be more likely to be right about them. I think that perspective is missing in some of the forum commentary. But again, whether you agree with me on this will depend on your view on the object-level criticisms. If you think these criticisms are in fact convincing, then you’re probably less likely to believe that the critics should defer to the effective altruist consensus.
Hey Stefan,
Thanks for the comment, I think this describes a pretty common view in EA that I want to push back against.
Let’s start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations’ actions and projects seem pretty insanely high value relative to others, for example a chapter leader who basically follows the same script (a pattern I definitely personally could have fallen into). I think something that is often forgotten about is the extremely high upside value of doing something outside of the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this; e.g. if EA had deferred only to GiveWell, or only to more traditional philanthropic actors, how impactful would this have been?
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of other epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty positive when deferring based on expertise (e.g. “this doctor knows what a CAT scan looks like better than me”, or “GiveWell has considered the impact effects of malaria much more than me”). I think these sorts of situations lend themselves to higher deference. Something like “how much ethical value do I ascribe to animals” or “what is my tradeoff of income to health” is: 1) way less considered, and 2) much harder to gain clarity on through deeper research. I see a lot of deferrals based on this sort of thing, e.g. deferrals made on the assumption that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people with a specific set of viewpoints coming in) or communication effects (e.g. I engaged considerably less in EA when I thought direct work was the most impactful thing than when I thought meta work was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to its hours (e.g. many people assume more careful, broad-based cause consideration has been done than really has been done; when you have a more detailed view of what different EA organizations are working on, you see a different picture).
On the philosophical paragraph—totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence in the kinds of work pursued, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to stay alive and open to new things that might be important. Perhaps on the margin an individual’s most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.
Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that’s just the way it should be though.
I do think that higher-deference cultures are better at cooperating and getting things done—and these are no easy tasks for large movements. But there have been movements with these properties that accidentally did terrible things, and there have been movements with these properties that did wonderful things.
I’d guess there may be a correlation between thinking there should be more deference and being in the “row” camp, and between thinking there should be less and being in the “steer” camp (or another of the camps described here).
I worry a bit that these discussions become anecdotal, and that the arguments rely on examples where it’s not quite clear what the role of deference or its absence was. No doubt there are examples where people would have done better if they had deferred less. That need not change the overall picture much.
Fwiw, I think one thing that’s important to keep in mind is that deference doesn’t necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deference to such advice thus means starting an independent project rather than working within a big project or org.
I think there are several things wrong with the Equal Weight View, but I think this is the easiest way to see it:
Let’s say I have O(H)=2:1 which I updated from a prior of 1:6. Now I meet someone who A) I trust to be rational as much as myself, and B) I know started with the same prior as me, and C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of evidence I have seen.
They say O(H)=1:2.
Then I can infer that they updated from 1:6 to 1:2 by multiplying by a likelihood ratio of 3:1 (just as I must have updated by a likelihood ratio of 12:1 to get from 1:6 to 2:1). And because of C and D, I can update on their likelihood ratio as well, ending up with a posterior of O(H) = 2:1 × 3:1 = 6:1.
The equal weight view would have me adjust down, whereas Bayes tells me to adjust up.
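The odds arithmetic above can be checked mechanically. Here is a minimal sketch (my own, not from the original comment) using exact rational odds:

```python
from fractions import Fraction

# Shared prior and the two reported posteriors, as odds O(H).
prior = Fraction(1, 6)            # O(H) = 1:6
my_posterior = Fraction(2, 1)     # my O(H) = 2:1
their_posterior = Fraction(1, 2)  # their reported O(H) = 1:2

# In odds form, posterior = prior * likelihood ratio, so each
# person's likelihood ratio can be recovered by division.
my_lr = my_posterior / prior        # 12:1
their_lr = their_posterior / prior  # 3:1

# By assumptions C and D the two evidence sets are disjoint and
# independent, so the likelihood ratios simply multiply.
combined = prior * my_lr * their_lr

print(combined)  # 6, i.e. O(H) = 6:1, up from 2:1 rather than down
```

Averaging the two reports, as the equal weight view suggests, would land somewhere between 2:1 and 1:2 (roughly 1:1), whereas the Bayesian aggregate ends up above both reports.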
This post seems to amount to replying “No” to Vaidehi’s question since it is very long but does not include a specific example.
> I won’t be able to give you examples where I demonstrate that there was too little deference
I don’t think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It’s just useful to give examples that illustrate your own subjective experience on the topic. Doing so would have conveyed more information and perspective than the above post.
Agreed—except that on the margin I’d rather encourage EAs to defer less than more. :) But of course some should defer less, and others more, and also it depends on the situation, etc. etc.