Deference Culture in EA
As has been noted by other writers, the EA (Effective Altruism) movement has a pretty strong deference culture. In many ways this makes sense, as lots of EAs come from a background of reading about and being compelled by organizations like GiveWell. These organizations are built on the premise that charity evaluation is hard and benefits from a full-time team doing it, and that a typical donor has the most impact by deferring to that research. This culture of deference has become pretty strong in EA; I fairly frequently have conversations with highly involved EAs who are still deferring on major topics (e.g. cause area, career choice) without investigating them personally.
On the one hand, it is impossible to be an expert in everything, and making hard decisions around doing good is no different. On the other hand, a culture that is too high in deference or defers using the wrong metrics can become homogenized and lead to great opportunities being missed. Often, pieces written about deferral end up falling on the "you should defer more" side or the "you should defer less" side.
I think the optimal amount of deferral depends on background expertise and expected time investment. (I don't think any previous writers would disagree with this, but their posts are typically not interpreted that way.) A tinkerer who has put thousands of hours into learning about cars is in a different position, in terms of deference to a mechanic, than someone who drives but has never looked under the hood. The same piece of advice, "defer less" or "defer more", would not be equally applicable to those two people. One might defer too little (e.g. the person who has never opened the hood being pretty confident they can fix something); the other might defer too much (the tinkerer being disappointed to learn that the local mechanic knows far less than expected about a rare engine part).
Another component of this is discerning which questions are more objectively answerable versus ones which rest on values or unclear epistemic trade-offs. To use GiveWell as an example: if you want to save the most lives possible with a high degree of confidence, one of their top charities fighting malaria is a really strong bet, and deferring to their research is advisable. However, they are far less confident about their trade-offs between income and lives saved, and thus it makes less sense to defer on that topic.
So when does it concretely make sense to defer in EA? Let's examine some clear examples on either side and then work our way to more ambiguous cases.
High deference: New EA
John is brand new to EA and has read a single book on the topic. Although he loves the concepts, he feels overwhelmed by all the new information and does not plan on engaging with it super deeply. He is already well into a solid career and does not imagine EA becoming a big part of his life. Nonetheless, he wants his donations to have the maximum impact under a fairly standard view of saving lives and reducing pain. He defers to the EA community and ends up donating 10% to GiveWell-recommended charities, seeing this as a safe, impactful option that does not take a ton of time.
I think in this case John has made the ideal call: an optimal decision given the amount of time and energy he wants to put into the topic. But let's look at the same amount of deference from a much more involved EA.
High deference: Experienced EA
Sally has been involved in the EA movement for a number of years; she led her local university chapter for a couple of years before joining an EA organization full time. She has spent several hundred hours engaging with EA content and has a pretty deep understanding of where the cruxes of disagreement between EAs lie. However, when it comes to donating she still feels uncertain. She sees problems with the movement and its granting, and knows of some unique opportunities that most EAs are not aware of. She puts in several dozen hours to investigate a couple of opportunities. However, she also knows that the full-time grantmakers are even more experienced in this area and likely have access to even more information. She thus decides to donate evenly between the EA Funds, trusting that they will ultimately have better judgment than she does.
Although this ends up at close to the same level of deferral, it seems like a real loss to me. Sally fits the profile of someone who could have been a helpful grantmaker had she happened to take a different job, and she would likely have far more impact by independently considering opportunities to find the best one. She is like the tinkerer in the car example above. In addition, the judgment calls made by the EA funders are considerably more value-sensitive than John's rough alignment with GiveWell. Sally might decide to fund one of the funds fully after considering the debate between cause areas, or donate to a specific, unnoticed opportunity that larger grantmakers might miss.
A central claim here is that someone's deference should decrease as they become more knowledgeable in an area. Someone who has been working full time in EA for years should probably take the time to thoroughly think through their cause prioritization. Someone who is going to pick a career primarily based on impact should likely do enough research to have a good sense of the options, not just pick something from the top of a list. Let's look at some examples of questions where it might make sense to use an informed view rather than deferring as your experience in the EA movement goes up. Similar to my room for more funding post, I do not expect this table to be perfectly accurate or cross-applicable, but I do think it's a more helpful guide or frame of reference than the more generic "everyone should defer more" or "everyone should defer less" advice. In this table, when something is in the "choices to investigate" column, the action would involve looking at the original sources of arguments and the best critiques (e.g. in the first case I think it would be reading some of GiveWell's content, spot-checking a few of their assumptions, looking up critics of GiveWell, and looking up the other big charity evaluators to see how they differ). I do not mean just asking your local EA chapter leader "Is GiveWell the best charity evaluator?" That would really just be a different deferral, and I am suggesting direct consideration would be valuable.
| Experience level | Description | Example choices to defer on | Example choices to investigate |
| --- | --- | --- | --- |
| Low | An EA who has read one book and has put in ~1 hour or less a week for under a year | What are the best specific charities? | Is GiveWell the best charity evaluator? |
| Medium | An EA who has read three books on the topic and been involved in a chapter for one to two years | What Cause X areas are worth considering? | What EA cause area is best to focus on? |
| High | An EA who has led a chapter for two years and worked at an EA org for one | What should a specific organization's plan be? | What are my ethical and epistemic trade-offs? |
| Very high | An EA who has been working full time in EA and considering meta-issues for years | Sub-comparisons between charities doing similar work (e.g. AMF vs Malaria Consortium) | What are the biggest weaknesses of current EA views, and how should my actions change based on that? |
This table shows how, as someone gains more expertise in an area, they should defer less and less, particularly on topics that might be value-sensitive or that relatively few EAs are considering independently. It's also worth noting that EA is a young movement and there are likely lots of things that the movement as a whole is missing. A culture of deference means that only a relatively small number of people are in a position to notice these gaps. With more informed, independent thinkers, however, gaps can be noticed that would otherwise be missed. There are lots of reasons why a high-deferral community might create bad norms.
Overall I think EA would benefit from a more spectrum-based understanding of deferral, with specific questions and levels of knowledge (like the table above) being the factors discussed, instead of overall views or vague claims about when and when not to defer.
EA has a high deference culture? Compared to what other cultures? Idk, but I feel like the difference between EA and other groups of people I've been in (grad students, City Year people, law students...) may not be that EAs defer more on average but rather that they are much more likely to explicitly flag when they are doing so. In EA the default expectation is that you do your own thinking and back up your decisions and claims with evidence*, and deference is a legitimate source of evidence, so people cite it. But in other communities people would just say "I think X" or "I'm doing X" and not bother to explain why (and perhaps not even know why, because they didn't really think that much about it).
*Other communities have this norm too, I think, but not to the same extent.
EA has a high deference culture compared to the epistemic norms it claims to adhere to, i.e. compared to the standards it aspires to and claims to follow, I'd say. This can be true independently of the difference from other groups of people that you described (which I think is also a true description).
Yeah, I agree with that. On the margin I think more EAs should defer less. I've been frustrated with this in particular on topics I know a lot about, such as AI timelines.
tl;dr: I think deference is more concerning for EA than for other cultures. Relative to how much we should expect EAs to defer, they defer way too much.
1) We should expect EA to have much less of a deference culture than other cultures, since a lot of EA claims are based on things like answers to philosophical questions, long-term future predictions, etc. These kinds of things are really hard to answer, and I don't think it's the case that most experts have a much better shot at answering them than some relatively smart and quantitative university students. Questions about moral philosophy are the exact kinds of questions you would expect to get a super wide range of answers to, so the number of EAs who claim they're longtermist is kind of surprising and unexpected. I think this is a sign there's more deference than there should be.
On the other hand, for more concrete and established scientific fields where experts do have a much better chance at making decisions than students, it makes way more sense to defer to them about what things are important.
2) EAs are optimizing for altruism, so decisions on what to work on require lots of thought. I'm guessing most non-EA people choose to work on things they enjoy or are emotionally invested in.
I can easily tell you, without any evidence or deference, what things I think are fun and am emotionally invested in. But it takes a lot more time and research to come up with what I think is most impactful.
I think EAs having more evidence and reasoning to back up what we're working on just naturally arises from being an EA, and doesn't necessarily mean we have better epistemics than other communities.
3) Explicitly saying when you're deferring to someone seems like it does a better job of convincing people "wow! these EA people seem more correct than most other communities" and a worse job at actually being more correct than most other communities. Being explicit about when we defer to people still means we might defer way too much.
4) Edit: I think this point is not actually about deference. Also, I know very little about MIRI and have no idea if this is in any way realistic. I'm guessing you could replace MIRI with some other org and this kind of story would be true, but I'm not totally sure.
Also, idk, I feel like some things that look like original, detailed thinking actually end up being closer to deference than I'd like. I think a story that's perhaps happened before is: "MIRI researcher thinks hard about AI stuff, and comes up with some original thoughts with lots of evidence. Writes on the Alignment Forum. Tons of karma, yay."
Sure, the thinking is original, has evidence to back it up, and looks really nice, pretty, and useful. That being said, even if this is original thinking, I'm guessing that if you looked at how this person was using the opinions of other people to shape their own opinions, it would look like:
Talking to other MIRI people: 80%
Talking to non-MIRI EAs: 10%
Reading books/opinions written by non-EAs relevant to what they're working on: 5%
Talking to non-EAs: 5%
So even if this thinking looks really original and intelligent, it still seems like a deference problem. Not deferring to other MIRI researchers an unhealthy amount would probably look more like getting more insight from mainstream academia and non-EAs.
I guess the point here is that it's much easier to look like you're not deferring to people too much than to actually not defer to people too much.
5) I think people in general defer way too much and do not think hard enough about what to work on. I think EAs defer too much and occasionally don't think hard enough about what to work on. Being better than the former doesn't really mean I'm satisfied with the latter.
FWIW I agree that EAs should probably defer less on average. So e.g. I agree with your point 5.
I don't like the example you gave about MIRI: I think filter bubbles & related issues are real problems but distinct from deference; nothing in the example you gave seems like deference to me. (Also, in my experience the people from MIRI defer less than pretty much anyone in EA. If anyone is deferring too little, it's them.)
Yeah, you're right, it does seem separate, although sort of an adjacent problem? I think the larger problem here is something like "EA opinions are influenced by other EAs more than I'd like them to be". Over-deference and filter bubbles are two ways in which I think getting too sucked into EA can create bad epistemics.
I didn't mean to call out MIRI specifically, and just tried to choose an EA org where I could picture filter bubbles happening (since MIRI seems pretty isolated from other places). I know very little about what MIRI work *actually* looks like. I'll change the original comment to reflect this.
I also think that EA consensus views are often unusually well-grounded, meaning there are unusually strong reasons to defer to them. (But obviously this may reflect my own biases.)
Fwiw I think many effective altruists defer too little rather than too much.
Could you give a few specific examples of times you have seen EAs deferring too little?
My view is that when you are considering whether to take some action and are weighing up its effects, you shouldn't in general put special weight on your own beliefs about those effects (there are some complicating factors here, but that's a decent first approximation). Instead you should put the same weight on your own and others' beliefs. I think most people don't do that, but put much too much weight on their own beliefs relative to others'. Effective altruists have shifted away from that human default, but in my view it's unlikely, in light of the general human tendency to overweight our own beliefs, that we've shifted as far in the direction of greater deference as we ideally should. (I think that it may not be possible to attain that level of deference, but it's nevertheless good to be clear about what the right direction is.) This varies a bit within the community, though; my sense is that highly engaged professional effective altruists, e.g. at the largest orgs, are closer to the optimal level of deference than the community at large.
I won't be able to give you examples where I demonstrate that there was too little deference. But since you asked for examples, I'll point to some instances where my opinion is that there's too little deference.
Whether you think someone deferred too little or too much regarding some particular decisions will often depend on your object-level views on what's effective. In my view, quite a few interventions pursued by effective altruists are substantially less effective than the most effective interventions; and those who pursue those less effective interventions would normally increase their impact if they deferred more, and shifted to interventions that are closer to the effective altruist consensus. But obviously, readers who disagree with my cause priorities (i.e. longtermism, of a fairly conventional kind) may disagree with that analysis of deference as well.
Relatedly, one pattern I've noticed is that people on the forum, including people who aren't deeply immersed in effective altruist thinking, criticise some longstanding effective altruist practices or strategies with arguments that are unconvincing to me. In such cases, my reaction tends to be that they should have another go and think, "Maybe they've thought more about this than I have; maybe there is something I've missed?" More often than not, very smart people have thought very extensively about most such issues, and it's therefore unlikely that someone who has thought substantially less about them would be more likely to be right about them. I think that perspective is missing in some of the forum commentary. But again, whether you agree with me on this will depend on your view of the object-level criticisms. If you think these criticisms are in fact convincing, then you're probably less likely to believe that the critics should defer to the effective altruist consensus.
Hey Stefan,
Thanks for the comment; I think this describes a pretty common view in EA that I want to push back against.
Let's start with the question of how much you have found practical criticism of EA valuable. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations' actions and projects seem pretty insanely high value relative to others, for example, relative to a chapter leader who basically follows the same script (a pattern I definitely personally could have fallen into). I think something that is often forgotten is the extremely high upside value of doing something outside of the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this; e.g. if EA had deferred only to GiveWell or only to more traditional philanthropic actors, how impactful would this have been?
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of your epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty positive when deferring based on expertise (e.g. "this doctor knows what a CAT scan looks like better than me", or "GiveWell has considered the impact effects of malaria much more than me"). I think these sorts of situations lend themselves to higher deference. Something like "how much ethical value do I ascribe to animals" or "what is my trade-off of income to health" is: 1) way less considered, and 2) much harder to gain clarity on from deeper research. I see a lot of deferrals based on this sort of thing, e.g. the assumption that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people who come in with a specific set of viewpoints) or communication effects (e.g. I engaged considerably less with EA when I thought direct work was the most impactful thing, compared to when I thought meta work was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to the hours spent on it (e.g. many people assume more careful, broad-based cause consideration has been done than really has been; when you have a more detailed view of what different EA organizations are working on, you see a different picture).
On the philosophical-side paragraph: totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence in kinds of work, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to be alive and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference.
Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that's just the way it should be, though.
I do think that higher-deference cultures are better at cooperating and getting things done, and these are no easy tasks for large movements. There have also been movements with these properties that have accidentally done terrible things in the past, and movements with these properties that have done wonderful things.
I'd guess there may be a correlation between people who think there should be more deference and the "row" camp, and between people who think there should be less and the "steer" camp (or another camp) described here.
I worry a bit that these discussions become a bit anecdotal, and that the arguments rely on examples where it's not quite clear what the role of deference or its absence was. No doubt there are examples where people would have done better if they had deferred less. That need not change the overall picture that much.
Fwiw, I think one thing that's important to keep in mind is that deference doesn't necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deference to such advice thus means starting an independent project rather than working within a big project or org.
I think there are several things wrong with the Equal Weight View, but I think this is the easiest way to see it:
Let's say I have O(H) = 2:1, which I updated from a prior of 1:6. Now I meet someone who A) I trust to be as rational as myself, B) I know started with the same prior as me, C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of the evidence I have seen.
They say O(H)=1:2.
Then I can infer that they updated from 1:6 to 1:2 by multiplying by a likelihood ratio of 3:1. And because of C and D, I can update on that likelihood ratio myself in order to end up with a posterior of O(H) = 6:1.
The equal weight view would have me adjust down, whereas Bayes tells me to adjust up.
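For concreteness, here is a minimal sketch of the odds arithmetic in this example (the numbers are the ones given above; the variable names are just illustrative):

```python
# Minimal sketch of the odds-form Bayes update described above.
from fractions import Fraction

prior = Fraction(1, 6)            # shared prior odds, 1:6
my_posterior = Fraction(2, 1)     # my posterior odds, 2:1
their_posterior = Fraction(1, 2)  # their reported posterior odds, 1:2

# Posterior odds = prior odds * likelihood ratio, so recover each likelihood ratio.
my_lr = my_posterior / prior        # 12:1
their_lr = their_posterior / prior  # 3:1

# Because of C and D their evidence is independent of mine, so I multiply my
# posterior odds by their likelihood ratio rather than averaging our posteriors.
combined = my_posterior * their_lr  # 6:1 -- Bayes says adjust up, not down

print(my_lr, their_lr, combined)    # 12, 3, 6
```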
This post seems to amount to replying "No" to Vaidehi's question, since it is very long but does not include a specific example.
> I won't be able to give you examples where I demonstrate that there was too little deference
I don't think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It's just useful to give examples that illustrate your own subjective experience on the topic. It would have conveyed more information and perspective than the above post.
Agreed, except that on the margin I'd rather encourage EAs to defer less than more. :) But of course some should defer less, and others more, and also it depends on the situation, etc. etc.
Here are choice parts of my model of deference:
Whether you should defer or not depends not only on your estimation of relative expertise but also on what kind of role you want to fill in the community, in order to increase the altruistic impact of the community. I call it role-based social epistemology, and I really should write it up at length at some point.
You can think of the roles as occupying different points on the production possibilities frontier for the explore-exploit trade-off. If you think of rationality as an individual project, you might reason that you should aim for a healthy balance between exploring and exploiting due to potential diminishing returns to either one. But if you instead take the perspective of "how can I coordinate with my community in order to maximize the impact we produce?", you start to see why specializing could be optimal.
If you are a Decision-Maker, you're optimizing for allocating resources efficiently (e.g. money, work, power, etc.), and the impact of your allocation depends on how accurate your related beliefs are. And because accurate beliefs are so important to your decisions, you should opportunistically defer to people whenever you think they might have better information than you (Aumann-agreement style), as long as you think you're decently calibrated and you're deferring to advice with sufficient bandwidth. You should be Exploiting existing knowledge and expertise by deferring to it. But because you frequently defer to others, you may not be safe to defer to in turn, due to the potential negative externalities associated with information cascades that can be hard to correct.
If you are an Explorer, your job is to optimize for the chance of discovering important insights that can help the community make progress on important open problems. This is fundamentally a different project from just trying to acquire accurate beliefs. Now you want to actively avoid ending up with the same belief states as other people, to some extent. Notice that the problems are still open, which means that existing tools and angles of attack may be insufficient for the task. Evaluate paradigms/approaches for how neglected they are. Remember, it doesn't matter whether you're right about what other people are right about, as long as you are extremely right about what other people are wrong about. So if you want to maximize the chance that the community ends up solving the problem, you want to coordinate with other explorers in order to search separate parts of the idea-tree. What matters is that the right fruits are picked, not that you end up picking them. We're in a parallel tree-search paradigm, and this has implications for how we individually should balance the explore-exploit trade-off.
If you are an Expert/Forecaster, your job is to acquire accurate beliefs that are safe to defer to. If there's a difficult and important question (a crucial consideration) for which better forecasts could marginally improve the careers/donations of a lot of people, this could be an important way to produce impact. Your impact here depends on the accuracy of your beliefs, so unlike the Explorer, you don't have strong reasons to avoid common belief states. Your impact also depends on how safe you are to defer to, because you can potentially do a lot of harm by reinforcing false information cascades. And these considerations are newcomblike, so you should act by that rule which, when followed by the proportion of other experts you predict will follow it due to the same reasoning as you, maximizes community impact. Sometimes that means you want to report your independent impressions, and sometimes that means you want to share and elicit likelihood ratios instead of posterior beliefs. A common failure mode here is to over-optimize for making your beliefs legible, which in extreme cases turns into a race to the bottom, and in median cases turns into myopic empiricism, where you predictably end up astray because you refuse to update on a large class of illegible (but Bayesian) evidence.
The limiting case of a Decision-Maker always reporting their independent impressions is (roughly) an Expert. But only insofar as it's psychologically feasible to maintain a long-term separation between independent and all-things-considered impressions, and I have my doubts.
What kind of knowledge work you want to do depends not only on your comparative advantages but also on your model of how the community produces altruistic impact. If, on your model, community impact is marginally bottlenecked by insights, you should probably consider aiming for ambitious insight-production. If, on the other hand, you think you can have more impact by contributing to marginally better forecasts about which problems are most important to work on, maybe consider aiming to produce deference-safe predictions. And if you just happen to have a bunch of money lying around, you don't have the luxury of recklessly diverging from expert consensus, and you should use everything in your toolbox to make sure you're allocating it efficiently.
No one is purely any of these. The roles are separated by which optimization criteria they use, and you optimize for different things in different areas of your life, and over your lifetime. But I think it's useful to carve out the roles, so you can notice when you need to put which hat on, and what that implies for how you should play.
I found this to be an interesting way to think about this that I hadn't considered before. Thanks for taking the time to write it up.
Thanks for detailing this aspect of EA. I think much of the deference culture is driven by early EA orgs like GiveWell, as you mention. There is a tendency to map the strong deference that GiveWell merits in global health onto other cause areas where it may not apply. For instance, GWWC recommends giving to several funds in different cause areas. The presentation suggests that the funds are roughly equal in quality for their respective cause areas. Yet GiveWell has ~5x more staff than Animal Charity Evaluators, and 10x+ more staff than either the EA Infrastructure Fund or Founders Pledge's climate team. To the extent that a larger team size means more research hours, and more research hours mean better funding decisions, there is a significant difference in funding quality among the different fund recommendations. This difference isn't communicated in public-facing EA media like the GWWC webpage and videos.
As someone who is an expert in a cause area where the EA fund has comparatively little analytical capacity (climate change mitigation), I find the deference to, and marketing of, the climate fund as the most effective giving option a continual source of frustration. I've written about that here and here. I'm also worried about people mapping weak deference onto causes where they should have greater deference: many people early in their EA engagement care about climate change as a cause area. If they have some level of expertise, they may find the climate fund recommendations underwhelming and then incorrectly assume that funds in other cause areas have similarly low levels of research behind them. There may be some attrition in getting more people more involved in EA because of this, though it is a tiny niche. I don't think the answer to the comparative deference problem is to do something like delisting fund options from the GWWC page. But we do need some way to communicate the differential level of rigor.
I like that you contrast deference with investigation, rather than with unilateralism. So many discussions and posts about deference devolve into discussion of unilateralism. Example: https://forum.effectivealtruism.org/posts/Jx6ncakmergiC74kG/deference-culture-in-ea?commentId=epR5HxT6nkdSCtMCf
But arguments against unilateralism can't be applied as arguments against investigation. Investigation grows the intellectual commons. Empirically, it's clear there is much to investigate. EAs generally agree that AI risk is the most important problem, yet there is no plan for moving forward (aside from helping out OpenAI and hoping that this somehow turns out to be a good idea instead of an apocalyptic one).
I like this breakdown a lot. Another related reason for deferring less and building your own inside view is for figuring out your career within a field.
Choosing research questions, deciding which roles and orgs to apply to, finding role models and plotting a career trajectory, and proposing new projects can be parts of your job in just about any field, and it'll be hard to do them well if you're constantly deferring to experts. On niche topics, it's even difficult to learn who the experts are and what they believe.
Personally, I've deferred to 80,000 Hours on which high-level cause areas offer the highest potential for impact. But after spending a few months to years learning about a single cause area, I feel much less clueless about the field and have a real inside view.
"the EA (Effective Altruism) movement has a pretty strong deference culture."
Is this some kind of demographic thing? I haven't noticed it except in terms of college students/recent grads being a bit too attached to the idea of working for EA orgs. I defer when I don't feel like I have the appropriate knowledge and can't acquire it in reasonable time, and don't otherwise.
As someone who was a solo EA, without knowing there was a whole EA movement, for well over a decade, it's really nice to be able to rely on other people's judgment sometimes instead of having to analyze every little thing for myself. But that deference comes from some intuitive sense of the cost-benefit trade-offs involved in investing my time to dive deeper into something, not from a general idea that I should be deferential, and it goes away the moment I sense that the cost-benefit analysis has flipped. And I don't feel like some kind of outlier for doing this. Another EA once called me an SBF bootlicker just for supporting Carrick Flynn, for example.
When someone defers, they defer to a person or group. And presumably, as someone learns more about a person it becomes easier to decide when to defer to their judgment.
I agree that learning more about a subject would help someone assess any person/group who claims to have knowledge about that subject.
I feel like this captures a lot, and kinda helps to bridge the gap between how I feel about EA and how genius Silicon Valley people feel about EA.
Thanks Joey!