The illusion of consensus about EA celebrities
Epistemic status: speaking for myself and hoping it generalises
I don't like everyone that I'm supposed to like:
I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss,
[redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this,
[redacted] says many sensible things but has a writing style that I find intensely irritating and struggle to get through; [redacted] is similar, but not as sensible,
[redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts.
Why did I redact all those names? Well, my criticisms are often some mixture of:
half-baked; I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on,
based on justifications that are not very legible or easy to communicate,
not always totally central to their point or fatal to their work,
kind of upsetting or discouraging to hear,
often not that actionable.
I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may contribute to a misleading picture of how we regard our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons:
it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people,
it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes.
What to do about it?
I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to find posting on the Forum a stressful experience already, and tipping that balance in a more brutal direction seems likely to cost more than it gains.
I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here are smaller, simpler things that could help:
Write a forum post about it (this one's taken, sorry),
Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? You can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you!
There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) impression that the famous people deposit their work on the Forum and leave for higher pursuits, and then we in the peanut gallery argue over it.
I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences.
Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably good to have these for your own sake.)
As is often the case, though, I feel more convinced of my description of the problem than my proposals to address it. Interested to hear others' thoughts.
I feel like this post is missing something. I would expect one of the strongest predictors of the aforementioned behaviors to be age. Do you know any people in their thirties who are prone to hero-worshipping?
I don't consider hero-worshipping an EA problem as such, but a young people problem. Of course EA is full of young people!
This seems like good advice to me, but I expect it to work better if you're aware that the reason you need to talk about these things with someone is that they're young.
This is a great point. I also think there's a further effect, which is that older EAs were around when the current 'heroes' were much less impressive university students or similar, which I think leads to a much less idealising frame towards them.
But I can definitely see that if you yourself are young and you enter a movement with all these older, established, impressive people... hero-worshipping is much more tempting.
Michael: interesting point. EA is a very unusual movement in that the founders (Will MacAskill, Toby Ord, etc.) were very young when they launched the movement, and are still only in their mid-30s to early 40s. They got some guidance & inspiration from older philosophers (e.g. Derek Parfit, Peter Singer), but mostly they recruited people even younger than them into the movement... and then eventually some older folks like me joined as well.
So, EA's demographics are quite youth-heavy, but there's also much less correlation between age and prestige in EA than in most moral/activist movements.
Hmm I find the correlation plausible but I'm not sure I'm moved to act differently by it. I wouldn't guess it's a strong enough effect that all young people need this conversation or all older people don't, so I'm still going to focus on what people say to judge whether they are making this mistake or not.
Also, to the extent that we're worried that the illusion of consensus harms our credibility, that's going to be more of a problem with older people, I expect.
I do think there's a big difference in how much various high-status people engage on the forum. And I think that the people who do engage feel like they're more 'part of' the community... or at least that small part of it that actually uses the forum! It also gives them more opportunity to say stupid things and get downvoted, which is very humanising!
(An example that comes to mind is Oliver Habryka, who comments a lot on posts, not just his own, so feels quite present.)
But I definitely don't think there should be a norm that you should engage on the forum. It seems pretty unlikely to be the best use of your time, and can be a big time sink. Maybe you've got some actually useful object-level work to do! Then posting and leaving is probably the right choice. So I don't think that's a good solution, although I think it could be somewhat effective if people did it.
Yeah, I agree that for many people, not engaging is the right choice. I don't intend to suggest that all or even most technical debates or philosophical discussions happen here, just that keeping a sprinkling of them here helps give a more accurate impression of how these things evolve.
I agree with you that people should be much more willing to disagree, and we need to foster a culture that encourages this. An absence of disagreement is a sign of insufficient debate, not of a well-mapped landscape. That said, I think EAs in general should think way less about who said what and focus much more on whether the arguments themselves hold water.
I find it striking that all the examples in the post are about some redacted entity, when all of them could just as well have been rephrased to be about object-level reality itself. For example, '[redacted] often seems to have kind of sloppy thinking about things like this' could be rephrased as a direct claim about the topic itself.
To me it seems that just having the debate on <topic> is more interesting than the meta debate of <is org's thinking on topic sloppy>. Thinking a lot about the views of specific persons or organizations has its time and place, but the right split of thinking about reality versus social reality is probably closer to 90-10 than 10-90.
I think it's not primarily a question of how much to disagree: as I said, we see plenty of disagreement every day on the forum. The issue I'm trying to address is:
with whom we disagree,
how visible those disagreements are,
and particularly I'm trying to highlight that many internal disagreements will not be made public. The main epistemic benefit of disagreement is there even in private, but there's a secondary benefit which needs the disagreement to be public, and that's the one I'm trying to address.
The necessity of thinking about the second question is clearest when deciding who to fund, who to work for, who to hire, etc.
Makes sense, agree completely.
I appreciate this point, but personally I am probably more like 70-30 for general thinking, with variance depending on the topic. So much of thinking about the world is trust-based. My views on historical explanations virtually never depend on my reading of primary documents; they depend on my assessment of what the proportional consensus of expert historians thinks. Same with economics, or physics, or lots of things.
When I'm dealing directly with an issue, like biosecurity, it makes sense to have a higher split (80-20 or 90-10), but it's still much easier to navigate if you know the landscape of views. For something like AI, I just don't trust my own take on many arguments; I really rely a lot on the different communities of AI experts (such as they are).
I think most people, most of the time, don't know enough about an issue to justify a 90-10 split in issue vs. view thinking. However, I should note that all of this concerns the right split of personal attention; for public debate, I can understand wanting a greater focus on the object level (because the view level should hopefully be served well by good object-level work anyway).
There is nobody you, or any other EA, are 'supposed' to like. Apologies if I'm over-interpreting what's meant to be a turn of phrase, but I really want to push back on the idea that to be an EA you have to like specific people in the movement. Our movement is about the ideas it generates, not the people who generate them.[1] This is not to say that you can't admire or like certain people; I definitely do! But liking them is not a requirement in any sense to be a part of the movement, or at least it shouldn't be.
I definitely agree with this. It's prima facie obvious that senior EAs won't align 100% with each other on every philosophical issue, and that's ok. I think the Redwood/Anthropic idea is another good one. In general I think adversarial collaboration might be a good route to pursue in some cases; I know not everyone in the community is a fan of them, but I feel like at the margin the Forum may benefit from a few more of them.
I also co-sign Mathias's post, that many of the [redacted] claims could probably be re-framed as object-level concerns. But I also don't think you should be shy of saying you disagree with a point of view, and disagree with a high-level EA who holds that view, as long as you do so in good faith. A case in point: one of my favourite 80k podcast episodes is the blockbuster one with David Chalmers, but listening to the 'Vulcan Trolley Problem' section I came away with the impression that David was spot on, and Rob's point of view (both Wiblin in this one, and Long in the recent one) wasn't that tenable in comparison. But my disagreement there is really an object-level one; it doesn't make me appreciate the podcast any less.
Clarification: obviously people matter! I mean this in the sense that anyone should be able to come up with good ideas in EA, regardless of background, hierarchy, seniority, etc.
Nobody says you should like everyone. No one says you should agree with everyone either, even those who are high-profile in the community.
It sounds to me like this boils down to 'beware of logical fallacies', especially ad hominem. Don't criticize people; criticize ideas. Here are a couple of tactical things that have helped me:
Pretend that a different person (e.g. one for whom you have positive regard) was making the same point.
Distill their main points down to neutral-tone language (I haven't tried it, but ChatGPT or other LLMs might be a good tool for this).
Regarding the complaints you listed (wrong focus, sloppy thinking, irritating tone, mediocre performance), I'm a big fan of leadership by filling the vacuum. If you see something that could be improved and no one else is stepping up, maybe it's time for you to step up and take a stab at it. It might not be perfect, but it will be better than doing nothing about it.