“This post is anonymised because I don’t want to have to deal with the interpersonal consequences of the beliefs I hold; I don’t want people knowing that I hold my beliefs, and would rather trick people into associating with me in ways they might not if they actually knew my true stance.”
Duncan Sabien
Selfish piggyback plug for the concept of sazen.
The essay itself is the argument for why EAs shouldn’t steelman things like the TIME piece.
(I understand you’re disagreeing with the essay and that’s :thumbsup: but, like.)
If you set out to steelman things that were generated by a process antithetical to truth, what you end up with is something like [justifications for Christianity]; privileging-the-hypothesis is an unwise move.
If one has independent reasons to think that many of the major claims in the article are true, then I think the course most likely to not-mislead one is to follow those independent reasons, and not spend a lot of time anchored on words coming from a source that’s pretty clearly not putting truth first on the priority list.
This language is inflammatory (“overwhelming”, “incestuous”), but we can boil this down to a more sterile sounding claim
A major part of the premise of the OP is something like “the inflammatory nature is a feature, not a bug; sure, you can boil it down to a more sterile sounding claim, but most of the audience will not; they will instead follow the connotation and thus people will essentially ‘get away’ with the stronger claim that they merely implied.”
The accuser doesn’t offer concrete behaviors, but rather leaves the badness as general associations. They don’t make explicit accusations, but rather implicit ones. The true darkness is hinted at, not named. They speculate about my bad traits without taking the risk of making a claim. They frame things in a way that increases my perceived culpability.
I think it is a mistake to steelman things like the TIME piece, for precisely this reason, and it’s also a mistake to think that most people are steelmanning as they consume it.
So pointing out that it could imply something reasonable is sort of beside the point—it doesn’t, in practice.
I am at best 1/1000th as “famous” as the OP, but the first ten paragraphs ring ABSOLUTELY TRUE from my own personal experience, and generic credulousness on the part of people who are willing to entertain ludicrous falsehoods without any sort of skepticism has done me a lot of damage.
I mean, I don’t have this hypothetical document made in my head (or I would’ve posted it myself).
But an easy example is something of the shape:
[EDIT: The below was off-the-cuff and, on reflection, I endorse the specific suggestion much less. The structural thing it was trying to gesture at, though (something clear and concrete and observable), is still what I would be looking for, and is a prerequisite for enduring endorsement.]

“We commit to spending at least 2% of our operational budgets on outreach to [racial group/gender group/otherwise unrepresented group] for the next 5 years.”
Maybe the number is 1%, or 10%, or something else; maybe it’s 1 year or 10 years or instead of years it’s “until X members of our group/board/whatever are from [nondominant demographic].”
The thing that I like about the above example in contrast with the OP is that it’s clear, concrete, specific, and evaluable, and not just an applause light.
I would like for all involved to consider this, basically, a bet, on “making and publishing this pledge” being an effective intervention on … something.
I’m not sure whether the something is “actual racism and sexism and other bigotry within EA,” or “the median EA’s discomfort at their uncertainty about whether racism and sexism are a part of EA,” or what.
But (in the spirit of the E in EA) I’d like that bet to be more clear, so since you were willing to leave a comment above: would you be willing to state with a little more detail which problem this was intended to solve, and how confident you (the group involved) are that it will be a good intervention?
I am opposed to this.
I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as “fear”].
Here are some things that are true:
Racism is harmful and bad.
Sexism is harmful and bad.
Other “isms” such as homophobia or religious oppression are harmful and bad.
To the extent that people can justify their racist, sexist, or otherwise bigoted behavior, they are almost always abusing information in a disingenuous fashion, e.g. “we showed a 1% difference in the medians of the bell curves for these two populations, thereby ‘proving’ one of those populations to be fundamentally superior!” This is bullshit from a truth-seeking perspective, and it’s bullshit from a social progress perspective, and in most circumstances it doesn’t need to be entertained or debated at all. In practice, the burden of proof is already overwhelmingly on the person starting such a discussion, to demonstrate that they a) are genuinely well-intentioned, and b) have something real to talk about.
However:
Intelligent, moral, and well-meaning people will frequently disagree about to-what-extent a given situation is explained by various bigotries as opposed to other factors. Intelligent, moral, and well-meaning people will frequently disagree about which actions are wise and appropriate to take, in response to the presence of various bigotries.
By taking anti-racism and anti-sexism and other anti-bigotry positions which are already overwhelmingly popular and overwhelmingly agreed-upon within the Effective Altruism community, and attempting to convert them to Anti-Racism™, Anti-Sexism™, and Anti-Bigotry™ applause lights with no clear content underneath them, all that’s happening is the creation of a motte-and-bailey, ripe for future abuse.
There were versions of the above proposal which were not contentless and empty, which staked out clear and specific positions, and which I would have been glad to see, enthusiastically supported, and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it’s a bad idea for various pragmatic or logistical reasons, it’s pretty easy to imagine that being rounded off to “the opposition is racist.”
(I can imagine people saying “we won’t do that!” and my response is “great—you won’t. Are you claiming no one will? Because at the level of 1000+ person groups, this is how this always goes.”)
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It’s saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.
The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.
I would support that.
The community is quite capable of dealing with actual “bs” by downvoting it into oblivion.
I disagree that the community is doing anything remotely close to a good job of distinguishing bs from non-bs via downvotes. That the community does not find the bulk of burner-poster posts to be “bs” is a true statement, and it is precisely what reveals the problem.
It is also very easy for users who do not want to engage with burner posts to skip on past them.
This is straight false; they’re showing up on all sorts of posts WAY more than they used to.
I didn’t need to post, and it’s quite unpleasant to do so.
Then … please stop.
There are low-bs forums such as Hacker News and slatestarcodex where most people don’t use their real names.
In those forums, reputation accrues to username; little (or at least less) attention is paid to brand-new accounts.
Here, a lot of accounts are trying to recruit/use the “I’m a for real serious longtime member of this community” reputational/seriousness boost, while being who the heck knows who.
It’s also full of insinuation and implication and “X may mean Y [which is damning]” in a way that’s attempting to get the benefit of “X = Y” without having to actually demonstrate it.
In my opinion, “you have to use a burner account to put forth this kind of ‘thinking’ and ‘reasoning’ and ‘argument’” is actually a point in EA culture’s favor.
It greatly increases the odds of the forum being flooded with unaccountable bs; it removes/breaks the feedback loop of reputation.
Deciding to be silent isn’t tricking people. Posting anonymously because you don’t want to be associated with your own views (but you want to inject those views without paying the reputational cost of having them) is.
Pro-EA posts made anonymously creep me out 98% as much; I personally would rather (most) such posts not happen at all than happen anonymously. See above for my caveat to that general position.
In other words:
“I’d rather extract money/support from people who wouldn’t willingly give it to me, if I were being candid.”
:/ :/ :/ :/
Like, I’m not saying I don’t get it, but the “it” that I’m getting seems super duper sad. People’s money/support/endorsement should be theirs to give, and tricking people into giving it to you when they wouldn’t if they knew your true positions seems … not great.
Note that I specifically wanted to hit the failure mode where there is, in reality, a clear-cut binary (e.g. totally innocent or totally guilty).
But yeah, correct that this is not what was going on with SBF or Nate’s assessments. More of a “this made me think of that,” I guess.
So for lack of knowing how to walk that line, I can at least comment on the problem in this footnote.
Very important/good footnote imo.
This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to try to form a unified picture of the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill”.
There’s a thing here which didn’t make its way into Lessons, perhaps because it’s not a lesson that Nate in particular needed, or perhaps because it’s basically lumped into “don’t pit your evidence against itself.”
But, stating it more clearly for others:
There is a very common and very bad mistake that both individuals and groups tend to make a lot in my experience, whereby they compress (e.g.) “a 60% chance of total guilt and a 40% chance of total innocence” into something like “a 100% chance that the guy is 60% guilty, i.e. kinda sketchy/scummy.”
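To make the compression concrete, here is a minimal sketch with made-up numbers (hypothetical, purely for illustration, not drawn from any real case): the mixture view and the collapsed view share the same expected “guilt level,” which is exactly why the compression feels harmless, but they describe very different sets of possible worlds and license very different updates.

```python
# Hypothetical numbers for illustration only.
P_GUILTY = 0.6             # credence that the person is *totally* guilty
P_INNOCENT = 1 - P_GUILTY  # credence that the person is *totally* innocent

# Mixture view: two possible worlds, each of them extreme.
# In no possible world is the person "60% guilty".
mixture = {"totally_guilty": P_GUILTY, "totally_innocent": P_INNOCENT}

# Collapsed view: one certain, intermediate world ("kinda sketchy").
collapsed = {"kinda_sketchy": 1.0}

# Both views give the same expected guilt level (0.6)...
expected_guilt_mixture = P_GUILTY * 1.0 + P_INNOCENT * 0.0
expected_guilt_collapsed = 1.0 * 0.6

# ...but they make different predictions: under the mixture, further evidence
# should eventually push you toward 0 or 1; under the collapsed view,
# "mildly sketchy" is treated as an already-settled fact.
print(mixture, collapsed, expected_guilt_mixture, expected_guilt_collapsed)
```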
I think something like DO NOT DO THIS, or at the very least NOTICE THIS PATTERN, is maybe important enough to be a Lesson for the median person here, although plausibly this is not among the important takeaways for Nate.
Another way to think about this (imo) is “do you screen falsehoods immediately, such that none ever enter, or do you prune them later at leisure?”
Sometimes, assembling false things (such as rough approximations or heuristics!) can give you insight as to the general shape of a new Actually True thing, but discovering the new Actually True thing using only absolutely pure definite grounded vetted airtight parts would be way harder and wouldn’t happen in expectation.
And if you’re trying to (e.g.) go “okay, men are stronger than women, and adults are smarter than kids” and somebody interrupts to go “aCtUaLlY this is false” because they have a genuinely correct point about, e.g., the variance present in bell curves, and there being some specific women who are stronger than many men and some specific children who are smarter than many adults … this whole thing just derails the central train of thought that was trying to go somewhere.
(And if the “aCtUaLlY” happens so reliably that you can viscerally feel it coming, as you start to type out your rough premises, you get demoralized before you even begin, close your draft, and go do something else instead.)