Could you give a few specific examples of times you have seen EAs deferring too little?
My view is that when you are considering whether to take some action and are weighing up its effects, you shouldn't in general put special weight on your own beliefs about those effects (there are some complicating factors here, but that's a decent first approximation). Instead you should put the same weight on your own and others' beliefs. I think most people don't do that, but put much too much weight on their own beliefs relative to others'. Effective altruists have shifted away from that human default, but in my view it's unlikely, in light of the general human tendency to overweight our own beliefs, that we've shifted as far in the direction of greater deference as we ideally should. (I think that it may not be possible to attain that level of deference, but it's nevertheless good to be clear about what the right direction is.) This varies a bit within the community, though: my sense is that highly engaged professional effective altruists, e.g. at the largest orgs, are closer to the optimal level of deference than the community at large.
I won't be able to give you examples where I demonstrate that there was too little deference. But since you asked, I'll point to some instances where, in my opinion, there was too little deference.
Whether you think someone deferred too little or too much regarding some particular decisions will often depend on your object-level views on what's effective. In my view, quite a few interventions pursued by effective altruists are substantially less effective than the most effective interventions; and those who pursue those less effective interventions would normally increase their impact if they deferred more, and shifted to interventions that are closer to the effective altruist consensus. But obviously, readers who disagree with my cause priorities (i.e. longtermism, of a fairly conventional kind) may disagree with that analysis of deference as well.
Relatedly, one pattern that I've noticed is that people on the forum (including people who aren't deeply immersed in effective altruist thinking) criticise some longstanding effective altruist practices or strategies with arguments that are unconvincing to me. In such cases, my reaction tends to be that they should have another go and think: "maybe they've thought more about this than I have; maybe there is something that I've missed?" More often than not, very smart people have thought very extensively about most such issues, and it's therefore unlikely that someone who has thought substantially less about them would be more likely to be right about them. I think that perspective is missing in some of the forum commentary. But again, whether you agree with me on this will depend on your view of the object-level criticisms. If you think these criticisms are in fact convincing, then you're probably less likely to believe that the critics should defer to the effective altruist consensus.
Hey Stefan,
Thanks for the comment. I think this describes a pretty common view in EA that I want to push back against.
Let's start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than if those individuals had simply deferred to large EA orgs. Moving to a more practical example: older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations' actions and projects seem insanely high value relative to the alternatives, for example a chapter leader who basically follows the same script (a pattern I definitely could have fallen into personally). I think something that is often forgotten is the extremely high upside value of doing something outside the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this: e.g. if EA had deferred only to GiveWell or only to more traditional philanthropic actors, how impactful would that have been?
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of other epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty positive when deferring based on expertise (e.g. "this doctor knows what a CAT scan looks like better than me", or "GiveWell has considered the effects of malaria much more than me"). I think these sorts of situations lend themselves to higher deference. Questions like "how much ethical value do I ascribe to animals" or "what is my tradeoff between income and health" are: 1) way less considered, and 2) much harder to gain clarity on through deeper research. I see a lot of deferral based on this sort of thing, e.g. the assumption that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people who come in with a specific set of viewpoints) or communication effects (e.g. I engaged considerably less with EA when I thought direct work was the most impactful thing than when I thought meta work was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to its hours (e.g. many people assume more careful, broad-based cause consideration has been done than really has been; when you have a more detailed view of what different EA organizations are working on, you see a different picture).
On the philosophical-side paragraph: totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence in the kinds of work people do, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to be alive and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.
Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that's just the way it should be, though.
I do think that higher-deference cultures are better at cooperating and getting things done, and these are no easy tasks for large movements. Movements with these properties have accidentally done terrible things in the past; movements with these properties have also done wonderful things.
I'd guess there is a correlation between thinking there should be more deference and being in the "row" camp, and between thinking there should be less and being in the "steer" camp, or another of the camps described here.
I worry that these discussions become somewhat anecdotal, and that the arguments rely on examples where it's not quite clear what role deference, or its absence, actually played. No doubt there are examples where people would have done better if they had deferred less. That need not change the overall picture much.
Fwiw, I think one thing that's important to keep in mind is that deference doesn't necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deference to such advice thus means starting an independent project rather than working within a big project or org.
I think there are several things wrong with the Equal Weight View, but this is the easiest way to see it:
Let's say I have O(H) = 2:1, which I updated from a prior of 1:6. Now I meet someone whom A) I trust to be as rational as myself, and who B) I know started with the same prior as me, C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of the evidence I have seen.
They say O(H)=1:2.
Then I can infer that they updated from 1:6 to 1:2 by multiplying by a likelihood ratio of 3:1. And because of C and D, I can update on that likelihood ratio myself, multiplying my 2:1 odds by 3:1 to end up with a posterior of O(H) = 6:1.
The equal weight view would have me adjust down, whereas Bayes tells me to adjust up.
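To make the arithmetic explicit, here is a minimal sketch in Python (the function and variable names are my own, purely for illustration), assuming odds and likelihood ratios are represented as plain ratios:

```python
def combine_odds(prior_odds, *likelihood_ratios):
    """Posterior odds = prior odds times each independent likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1 / 6            # shared prior, O(H) = 1:6
my_posterior = 2 / 1     # my posterior, O(H) = 2:1
their_posterior = 1 / 2  # their reported posterior, O(H) = 1:2

# Back out each person's likelihood ratio from their update.
my_lr = my_posterior / prior        # 12:1
their_lr = their_posterior / prior  # 3:1

# Because of C and D, the two bodies of evidence are disjoint and
# independent, so the likelihood ratios simply multiply.
print(combine_odds(prior, my_lr, their_lr))  # 6.0, i.e. O(H) = 6:1
```

Averaging the two posteriors, as the Equal Weight View suggests, would instead pull my 2:1 estimate down toward their 1:2, which is exactly the divergence from the Bayesian answer that the example illustrates.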
This post seems to amount to replying "No" to Vaidehi's question, since it is very long but does not include a specific example.
> I won't be able to give you examples where I demonstrate that there was too little deference
I don't think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It's just useful to give examples that illustrate your own subjective experience on the topic. That would have conveyed more information and perspective than the above post.