This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process of aggregating information, allowing individuals to integrate it, and update, and decide what to do with it.
I think a lot of disagreements in the comments here and on LW stem from people having an implicit assumption that the conversation here is about “should [any particular person in this article] be socially punished?”. In my preferred world, before you get to that phase there should be at least some period focused on “information aggregation and Original Seeing.”
It’s pretty tricky, since in the default world, “social punishment?” is indeed the conversation people jump to. And in practice, it’s hard to keep a conversation focused purely on epistemic evaluation without it sliding into judgment, or without speech acts becoming “moves” in a social conflict.
But, I think it’s useful to at least (individually) inhabit the frame of “what is true, here?” without asking questions like “what do those truths imply?”.
With that in mind, some generally useful epistemic advice that I think is relevant here:
Try to have Multiple Hypotheses
It’s useful to have at least two, and preferably three, hypotheses for what’s going on in cases like this. (Or, generally whenever you’re faced with a confusing situation where you’re not sure what’s true). If you only have one hypothesis, you may be tempted to shoehorn evidence into being evidence for/against that hypothesis, and you may be anchored on it.
If you have at least two hypotheses (and, like, “real ones” that both seem plausible to you), I find it easier to take in new bits of data and then ask “okay, how would this fit into two different plausible scenarios?”, which activates my “actually check” process.
I think three hypotheses is better than two, because with two you can still end up in an “all the evidence weighs in on a one-dimensional spectrum” situation. Three hypotheses a) help you do ‘triangulation’, and b) help remind you to actually ask “what frame should I be using here? what additional hypotheses might I not have thought of yet?”
Multiple things can be going on at once
If two people have a conflict, it could be the case that one person is at-fault, or both people are at-fault, or neither (i.e. it was a miscommunication or something).
If one person does an action, it could be true, simultaneously, that:
They are somewhat motivated by [Virtuous Motive A]
They are somewhat motivated by [Suspicious Motive B]
They are somewhat motivated by [Random Innocuous Motive C]
I once was arguing with someone, and they said “your body posture tells me you aren’t even trying to listen to me or reason correctly, you’re just trying to do a status monkey smackdown and put me in my place.” And, I was like “what? No, I have good introspective access and I just checked whether I’m trying to make a reasoned argument. I can tell the difference between doing The Social Monkey thing and the “actually figure out the truth” thing.”
What I later realized is that I was, like, 65% motivated by “actually wanna figure out the truth”, and like 25% motivated by “socially punish this person” (which was a slightly different flavor of “socially punish” than, say, when I’m having a really tribally motivated facebook fight, so I didn’t recognize it as easily).
Original Seeing vs Hypothesis Evaluation vs Judgment
OODA loops have four steps: Observe, Orient, Decide, Act.
Often people skip over steps. They think they’ve already observed enough and don’t bother looking for new observations, or it doesn’t even occur to them to do that explicitly. (I’ve noticed that I often skip to the orient step, where I figure out “how do I organize my information? what sort of decision am I about to make?”, and don’t actually do the observe step, where I’m purely focused on gaining raw data.)
When you’ve already decided on a schema-for-thinking-about-a-problem, you’re more likely to take new info that comes in and put it in a bucket you think you already understand.
Original Seeing is different from “organizing information”.
They are both different from “evaluating which hypothesis is true”
They are both different from “deciding what to do, given Hypothesis A is true”
Which is in turn different from “actually taking actions, given that you’ve decided what to do.”
I have a sort of idealistic dream that someday, a healthy rationalist/EA community could collectively be capable of raising hypotheses without people anchoring on them, and people could share information in a way they robustly trust won’t get automatically leveraged into a conflict/political move. I don’t think we’re close enough to that world to advocate for it in-the-moment, but I do think it’s still good practice for people individually to spend at least some of their time in each node of the OODA loop, and to track which node they’re currently focusing on.
My take: rank-and-file-EAs (and most EA local communities) should be oriented around donor lotteries.
Background beliefs:
I think EA is vetting constrained
Much of the direct work that needs doing is network constrained (i.e. requires mentorship, in part to help people gain context they need to form good plans)
The Middle of the Middle of the EA community should focus on getting good at thinking.
There’s only so much space in the movement for direct work, and it’s unhealthy to set expectations that direct work is what people are “supposed to be.”
I think the “default action” for most EAs should be something that is:
Simple, easy, and reasonably impactful
Provides a route for people who want to put in more effort to do so, while practicing building actual models of the EA ecosystem.
I don’t think it’s really worth it for someone donating a few thousand dollars to put a lot of effort into evaluating where to donate. But if 50 people each put $2000 into a donation lottery, then they collectively have $100,000, which is enough to justify at least one person’s time in thinking seriously about where to put it. (It’s also enough to angel-invest in a new person or org, allowing them to vet new orgs as well as existing ones)
I think it’s probably more useful for one person to put serious effort into allocating $100,000, than 50 people to put token effort into allocating $2000.
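The pooling arithmetic above can be sketched as a quick sanity check (the numbers are the ones from the text; the proportional-odds rule is how donor lotteries are typically run):

```python
# Donor lottery sketch: 50 donors each contribute $2,000.
# Each donor's chance of being selected to allocate the whole pool
# is proportional to their contribution, so each donor's *expected*
# money moved is unchanged -- only the evaluation effort is pooled.

donors = 50
contribution = 2_000

pool = donors * contribution           # total to be allocated
win_probability = contribution / pool  # one donor's odds of allocating it
expected_allocation = win_probability * pool  # equals their contribution

print(pool)                 # 100000
print(win_probability)      # 0.02
print(expected_allocation)  # 2000.0
```

The point of the last line: in expectation each donor still directs exactly what they put in, but with 2% probability they direct $100,000, which is enough to justify serious evaluation effort.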
This seems better to me than generic Earning to Give (except for people who earn enough that donating, say, $25,000 or more is realistic).