If everyone makes the same criticism, the opposite criticism is more likely to be true
I frequently hear people say EAs rely too much on quantifying uncertain variables, but I basically never hear the opposite criticism. If everyone believes you shouldn’t quantify, then nobody’s doing it, so it can’t possibly be true that people quantify too much, and in fact the opposite is probably true.
Obviously I could make various counterarguments, like maybe the people who think we don’t quantify enough just aren’t writing essays about how we need to quantify more. Generally speaking, I don’t think this counterargument is correct, but arguing for or against it is harder, so I don’t have much to say about it.
It’s like Lake Wobegon, where all the children are above average. It’s impossible for every single person in the community to believe that the community is not X enough.
Another example: everyone says we need to care more about systemic change
Saw a Twitter post “EAs way under-update on thought experiments” and I thought, damn that’s a spicy take. Then I realized I misread it and they actually said “over-update” and I thought...wow what a boring take that’s been said a thousand times already
They gave the simulation argument and Roko’s Basilisk as examples. As far as I know, nobody has ever changed their behavior based on either of those arguments. It would be pretty much impossible for people to update less on them than they have
I’m sure there are some people somewhere who have updated based on the simulation argument but I’ve never met them
“People under-update on thought experiments” would have been a much more interesting take because people basically don’t update on thought experiments
By a shocking coincidence, I take the opposite side on all these examples: I think EAs should use more quantitative estimates, should care less about systemic change, and should update more on thought experiments
Are there any issues where I make the same criticism as everyone else, and I’m actually wrong? Probably, idk
I can think of some non-EA-related examples of this phenomenon, but I’m not as interested in those
By analogy, the moment when the most people agree the stock market is going to go up is the exact moment when the market is at its peak. The price can’t go higher because there’s no one left to buy. If everyone agrees, everyone /must/ be wrong
Relevant Scott Alexander: https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/ and https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/. He said it better than me, but my post isn’t about exactly the same thing so I figured it might be worth publishing.
(Note: The way I usually write essays is by writing outlines like this, and then fleshing them out into full posts. For a lot of the outlines I write, like this one, I never flesh them out because it doesn’t seem worth the time. But I figured for Draft Amnesty Day, I could just publish my outline, and most people will get the idea.)
I think this outline needs major revisions.
The above is the clearest example of why I think this post’s argument fails. It is definitely possible for all members to believe something about the community is inadequate. For example, say a sports team is bad at defence, and every member of the team believes they need to improve their defence. The fact that all members believe it does not disprove the empirical fact that the team’s defence is inadequate.
Where this impossibility claim might have legs is where the group belief is about group belief itself. For example, it may be impossible for every member of the team to believe that every member of the team does not think about defence enough. But the examples you talked about are not like this—they concern actual action, not group belief.
You are conflating noticing/talking/writing about an action with the action itself. This is especially apparent for issues that are larger than individual action. For the example of systemic change: every member could believe sincerely that the community as a whole should ‘do more work on systemic change’, but reasonably continue their normal, everyday, non-systemic work. In that case, everyone agrees that more systemic change work is needed, but no systemic change work actually gets done.
I think this is fairly common in EA counter-criticism, where people point to an old blog post about issue X, proving that EAs are already aware of X, and so, they argue, the criticism fails. While relevant, awareness of X pales in comparison to actually dealing with X itself.
This argument is further weakened by the fact that few critical stances are endorsed by near-100% of the community. There are significant numbers of dissenters from most interesting critical claims. So someone making a critical claim is usually not in the situation of ‘everyone already agrees with this’.
Finally, there’s the common sense rebuttal that if a criticism is being made by many, that should (all else equal) increase your credence that the criticism is true. Contrarianism for contrarianism’s sake is useful for checks and balances, but as a personal strategy is antithetical to epistemic modesty.
Yeah, it’s mostly a heuristic argument, and the best you can do might be to just carefully look at the object level instead of trying to infer based on what people are saying.
I disagree. I think this community in particular has a contrarian bias, probably as a result of its ties to the Rationalist community. A lot of people are here for the fun of discussion, and it is way more interesting to debate and discuss wacky and strange ideas than it is to go over the minutiae of the most efficient malaria nets or whatever. Unfortunately, most of the time the boring, mainstream take also happens to be the true one.
My sense is if you look at “wacky and strange ideas being explored by highly educated contrarians” as a historical reference class, they’ve been important enough to be worth paying attention to. I would put pre-WWW discussion & exploration of hypermedia in this category, for instance. And the first wiki was a rather wacky and strange thing. I think you could argue that the big ideas underpinning EA (RCTs, veganism, existential risk) were all once wacky and strange. (Existential risk was certainly wacky and strange about 10-15 years ago.)
I think it’s good to discuss wacky and strange ideas, because on the occasions where they actually are true, it can lead to great things. A lot of great movements and foundations are built on disruptive ideas that were strange at the time but obvious in retrospect.
However, that doesn’t really change my point that usually the reason a new idea seems wacky and strange is because it’s wrong. And if you glorify the rare victories too much, you might start forgetting the many, many failures, leading towards a bias for accepting ideas that are somewhat half-baked.
I think seeming wacky and strange is mainly a function of difference, not wrongness per se.
I’d argue that the best way to evaluate the merits of a wacky idea is usually to consider it directly. And discussing wacky ideas is what brings them from half-baked to fully-baked.
If you can find a good way to count up the historical reference class of “wacky and strange ideas being explored by highly educated contrarians” and quantify the percentage of such ideas which were verifiable duds, I’d be very interested to see that post. (The “highly educated” part is doing a lot of work here btw—I know there’s a lot of random occult type stuff that never goes anywhere.) I don’t think we’re going to get anywhere talking about biases—my view is that people are biased in the other direction! (Maybe that’s the correct bias to have if you aren’t experienced in the ways of highly educated contrarianism, though.)
I mean, we can start with this list here. I guarantee you there are highly educated people who buy into pretty much every conspiracy on that list. It’s not at all hard to find, for example, engineers who think 9/11 was an inside job. Ted Kaczynski was a mathematics professor, etc.; you get the point.
The list of possible wrong beliefs outnumbers the list of possible correct beliefs by many orders of magnitude. That stands for status quo opinions as well, but they have the advantage of withstanding challenges and holding for a longer period of time. That’s the reason that if someone claims they’ve come up with a free energy machine, it’s okay to dismiss them, unless you’re feeling really bored that day.
Now, EA is exploring status quo ideas that are much less tested and firm than physics, so finding holes is much easier and more worthwhile, and so I agree that strange ideas are worth considering. But most of them are still gonna be wrong, because they are untested.
I disagree with this disagreement.
EA is built on a foundation of rejecting the status quo. EA might only do that in places where the status quo is woefully inadequate or false in some way, but the status quo is still the status quo, and it will strike back at people who challenge it.
The phenomenon described above is a side effect of optimization, not “contrarian bias”. Contrarian bias is also a problem that many people in EA, and especially rationalists, have, but the only common factor is that these aren’t the kind of people who assume that everything is all right and go along with it.
I disagree with your disagreement of my disagreement!
The foundation of EA is (or at least should be) finding the truth. We should only reject the status quo if the status quo is wrong.
I don’t have a problem with EA trying out hot takes and contrarian ideas, because finding cases where the status quo is genuinely wrong is valuable and gives a large competitive advantage. But I think this very fact leads to a bias towards accepting such ideas, even if they are not strictly true.
This is actually true, to a first approximation. Yet on the rare occasions when the mainstream is wrong, it really matters, so a version of this post still stands, EV-wise.
Interesting argument!
I’m not fully persuaded, because I think we’re dealing with heterogeneous sub-populations.
Consider the statement “As a non-EA, I believe that EA funders don’t allocate enough capital to funding development econ research”. I don’t think we can conclude from this statement that the opposite is true, and EA funders allocate too much capital to development econ research.
The heterogeneous subpopulations perspective suggests that people who think development econ research is the most promising cause may be self-selecting out of the “dedicated EA” subpopulation. I think criticism can be helpful in mitigating self-selection effects of this kind.
Of course, we can’t conclude that people who self-select out of EA on the basis of some disagreement are taking the correct side of that disagreement. The point is that criticism allows us to hear their perspective even if they’re not heavily involved.
BTW, I thought the outline format was fine for this post. Some individual sentences were choppy, but that was fine after I decided to read less thoroughly because it was a draft.
I’ve mentioned this before several times, e.g. here and here. Scott Alexander has also said this.