For example, if you’d asked me, I would have told you that your comment reads to me like “Will is so selfish” rather than “Will and I have major disagreements on the strategies he should pursue, but I believe he’s well-intentioned”, because of things like:
I am actively trying to avoid relying on concepts like “well-intentioned”, and I don’t know whether he is well-intentioned, and as such saying “but I believe he’s well-intentioned” would be inaccurate (and also actively distract from my central point).
Like, I think it’s quite plausible Sam Bankman-Fried was also well-intentioned. I honestly feel confused enough about how people treat “well-intentionedness” that I don’t really know how to communicate around this topic.
I don’t think whether SBF was well-intentioned changes how the community should relate to him that much (though it is of course a cognitively relevant fact about him that might help you predict the details of a bunch of his behavior, but I don’t think that should be super relevant given what a more outside-view perspective says about the benefits of engaging with him).
A few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent.
A recent and very public case was that of Sam Bankman-Fried, where many seem interested in the question of Sam’s mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey’s interview, but haven’t done the virtuous thing of digging up links.)
It seems to me that local theories of Sam’s mental state cluster, very roughly, along lines like the following (phrased somewhat hyperbolically):
Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said “I’m going to exploit these EAs for everything they’re worth.”
Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn’t conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good.
One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who’s merely being reckless or foolish. So they care a lot about the mental state behind the act.
(As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.)
A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence.
On this theory, most people who are in effect trying to exploit resources from your community won’t be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community will often act so as to exploit your community, while internally telling themselves lots of stories in which what they’re doing is justified and fine.
Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don’t quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, even as they wander down the incentive landscape towards the resources you can provide them. They can sub- or semi-consciously mold their inner workings in ways that avoid tripping your malice-detectors, while still managing to exploit you.
And, well, there’s mild versions of the above paragraph that apply to almost everyone, and I’m not sure how to sharpen it. (Who among us doesn’t subconsciously follow incentives, and live under the influence of some self-serving blind spots?)
I personally have found that focusing the conversation on whether someone was “well-intentioned” is usually pretty counterproductive. Almost no one is fully ill-intentioned towards other people. People have a story in their head for why what they are doing is good and fair. It’s not that it never happens, but I have never encountered a case within the EA or Rationality community of someone who caused harm and didn’t also have a compelling inner narrative for why they were actually well-intentioned.
I don’t know what is going on inside of Will. I think he has many good qualities. He seems pretty smart, he is a good conversationalist, and he has done many things that I do think are good for the world. I also think he isn’t a good central figurehead for the EA community, and that a bunch of his actions in relation to the EA community have been pretty bad for the world.
This is not how professionals tend to talk about each other—especially in public—unless they really don’t think there’s anything positive about someone.
I don’t think you are the arbiter of what “professionals” do. I am a “professional”, as far as I can tell, and I talk this way. Many professionals I work with daily also communicate more like this. My guess is you are overgeneralizing from a specific culture you are familiar with, and I feel like your comment is trying to create some kind of implicit social consensus against my communication norms by invoking some greater “professionalism” authority, which doesn’t seem great to me.
I am happy to argue the benefits of being careful about communicating negative takes, and the benefits of carefully worded and non-adversarial language, but I am not particularly interested in doing so from a starting-point of you trying to invoke some set of vaguely-defined “professionalism” norms that I didn’t opt-into.
But my sense is that the kinds of things I’ve mentioned above resulted in a comment that came across as shockingly unprofessional and unconstructive to many people (popular, clearly, but I don’t think people’s upvotes/likes correlate particularly well with what they deem constructive), especially given the context of one EA leader publicly kicking another while they’re down, and I’d like to see us do better.
The incentives against saying things like this are already pretty strong. Indeed, I am far from the only person holding roughly this set of opinions, though I do appear to be the only person who has communicated them at all to the broader EA community, despite them seeming highly relevant to the large part of the community that has less access than the leadership to the details of what is happening in EA.
I do think there are bad incentives in this vicinity which result in everyone shit-talking each other all the time as well, but I think on the margin we could really use more people voicing the criticisms they have of others, especially ones that are not hot takes but opinions they have already discussed extensively and shared with others without encountering any obvious and direct refutations, as is the case with my takes above.
The best resource I know on this is Nate’s most recent post, “Enemies vs. Malefactors”.