I am at best 1/1000th as “famous” as the OP, but the first ten paragraphs ring ABSOLUTELY TRUE from my own personal experience, and generic credulousness on the part of people who are willing to entertain ludicrous falsehoods without any sort of skepticism has done me a lot of damage.
Duncan Sabien
(12 Mar 2023 18:41 UTC; 38 points; comment on “Share the burden”)
This language is inflammatory (“overwhelming”, “incestuous”), but we can boil this down to a more sterile-sounding claim.
A major part of the premise of the OP is something like “the inflammatory nature is a feature, not a bug; sure, you can boil it down to a more sterile sounding claim, but most of the audience will not; they will instead follow the connotation and thus people will essentially ‘get away’ with the stronger claim that they merely implied.”
The accuser doesn’t offer concrete behaviors, but rather leaves the badness as general associations. They don’t make explicit accusations, but rather implicit ones. The true darkness is hinted at, not named. They speculate about my bad traits without taking the risk of making a claim. They frame things in a way that increases my perceived culpability.
I think it is a mistake to steelman things like the TIME piece, for precisely this reason, and it’s also a mistake to think that most people are steelmanning as they consume it.
So pointing out that it could imply something reasonable is sort of beside the point—it doesn’t, in practice.
What follows is a tangent, but it feels like a relevant tangent. Like, I do not claim this is quite the same conversation as the above; it goes in a slightly different direction, but it’s not fully a non-sequitur.
Forgive the slightly-not-normal-for-this-venue language; this was originally a personal Facebook comment.
Here is a point that I don’t think gets made often enough:
It doesn’t matter whether other people are inferior, when it comes to talking about their fundamental dignity and the rights that a civilized society should grant them.
Like, often Nazis or misogynists or whatever will try to start demonstrating that [some group] is objectively inferior on [some axis], and often the opposition will come right back with NUH-UH, [group] IS EVERY BIT AS CAPABLE—
I think there’s a mistake, there, and I think that mistake is *acting like that would matter,* even if true. Playing into the frame of the bigot, letting them set the terms of the debate, implicitly conceding that the question of two different groups’ equality or inequality is the *crux* of the issue.
It isn’t.
I happen to think that it’s *false* that [race] or [gender] or whatever is inferior; my sense is that even if the bell curves for different groups peak in slightly different places and have their tails in slightly different places, they basically cover the same ground and overwhelmingly overlap anyway, so whatever.
But even if it were *demonstrably true* that [group] were inferior, that wouldn’t change my sense of moral obligation toward its members, and it wouldn’t change my beliefs about what kinds of treatment are fair or unfair.
I know for a fact that I have more raw intelligence than most humans! Even in nerd circles, I’m more-than-half-the-time in the upper quartile of whatever room I’m in, and guess what! Doesn’t matter! Practically every human outstrips me in some domain or other anyway! I can’t step to someone’s unique expertise, nor can I compete with them along domains orthogonal to intelligence (e.g. physical prowess), and even if I were superior to someone along 10 out of 10 of the *most* important axes …
… EVEN THEN, I do not think that gives me the right to dictate the terms of their existence, cut them off from opportunity, or take a larger share of the social pie.
The whole *point* of civilization is moving away from a state of base natural anarchy, where your value is tied to your capability. The whole point of building a safe, stable, cooperative society is making it so that you *don’t* have to pull your whole weight every second of every day or else be abandoned to the wolves or enslaved by strongmen.
The thing we’re trying to build here is a world where the absolutely inferior—
(To the extent that’s even a category that exists; a lot depends on your point of view and what axes you consider relevant)
The thing we’re trying to build here is a world where *even the absolutely inferior* get to have the maximum achievable amount of sovereignty, and agency, and happiness, and health, and get to participate in society to the greatest possible degree permitted by their personal limitations and the technology we have available (both literal technology and social/metaphorical tech).
IDGAF if you can “prove” some group’s inferiority. It means nothing to me. It changes nothing. It was never the key hinge of the conversation for me. Superiority is not the foundation of my sense of my fellow humans’ dignity.
(And that’s setting *aside* the fact that even if you’ve proven a difference between groups at the statistical level, you’ve done very little to demonstrate the relevance of that statistical difference on individual members; bell curves are not their averages.)
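(To make the “bell curves are not their averages” point concrete: here’s a hypothetical numeric sketch. All of the numbers are invented for illustration; the point is just that even a real difference in group means can coexist with enormous overlap, so the group statistic tells you very little about any individual.)

```python
from statistics import NormalDist

# Two hypothetical groups whose trait distributions differ slightly
# at the mean but have the same spread. Numbers are invented.
group_a = NormalDist(mu=100, sigma=15)
group_b = NormalDist(mu=103, sigma=15)  # a 3-point shift in the mean

# Overlapping coefficient: the shared area under the two curves.
overlap = group_a.overlap(group_b)
print(f"Distribution overlap: {overlap:.1%}")  # ~92%

# Probability that a randomly chosen member of the "lower" group
# nonetheless scores above the "higher" group's mean.
p_above = 1 - group_a.cdf(group_b.mean)
print(f"Share of group A above group B's mean: {p_above:.1%}")  # ~42%
```

So even granting a “proven” 3-point difference in means, a randomly chosen member of the lower-mean group still outscores the higher group’s average more than two times in five.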
I think it’s good to push back on bigots when they are spreading straightforward falsehoods. I’m not saying “don’t fire back with facts” in these conversations.
But the *fire* with which people fire back seems to me to be counterproductive and wrong, and it worries me. Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—
I kind of fear that those people are closer to the bigots than I might wish. That they’re responding with such fervor because they *do* believe, on some gut level, that if the groups are different, then the moral standards must necessarily also be different. They don’t want to conclude that the moral standards should be different, and so they object with *desperation* to any evidence that threatens to show actual differences between groups.
Potential competence differences between groups don’t matter on a moral level. Or at least, let me-and-my-philosophy be an existence proof to you: they don’t HAVE to matter.
You can build a society that doesn’t give a fuck if people are fundamentally inferior, and that does its best to be fair and moral toward them anyway.
That’s the society you *should* be trying to build. If for no other reason than the fact that that’s going to be you one day, when you break a leg or have a stroke or just succumb to the vicissitudes of time. If for no other reason than the fact that that could be your kid, or the kid of someone you care about.
(There are other reasons, too, but that’s the one that’s hopefully at least a little bit persuasive even to selfish egotists.)
Competence is not the measure of worth. Fundamental equality is *not* the justification for fair and moral treatment.
Build your ethics on firmer ground, please.
So for lack of knowing how to walk that line, I can at least comment on the problem in this footnote.
Very important/good footnote imo.
This was an error of reasoning. I had some impression that Sam had altruistic intent, and I had some second-hand reports that he was mean and untrustworthy in his pursuits. And instead of assembling this evidence to try to form a unified picture of the truth, I pit my evidence against itself, and settled on some middle-ground “I’m not sure if he’s a force for good or for ill”.
There’s a thing here which didn’t make its way into Lessons, perhaps because it’s not a lesson that Nate in particular needed, or perhaps because it’s basically lumped into “don’t pit your evidence against itself.”
But, stating it more clearly for others:
There is a very common and very bad mistake that both individuals and groups tend to make a lot in my experience, whereby they compress (e.g.) “a 60% chance of total guilt and a 40% chance of total innocence” into something like “a 100% chance that the guy is 60% guilty, i.e. kinda sketchy/scummy.”
I think something like DO NOT DO THIS or at the very least NOTICE THIS PATTERN maybe is important enough to be a Lesson for the median person here, although plausibly this is not among the important takeaways for Nate.
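To spell out why the compression is a mistake and not just a stylistic choice: the two models behave completely differently when new evidence arrives. A minimal sketch, with all numbers invented for illustration:

```python
# The correct model: a binary world -- 60% chance of total guilt,
# 40% chance of total innocence. (Contrast with the compressed model,
# "definitely 60% guilty, i.e. kinda sketchy," which is a point
# estimate with no machinery for updating at all.)
p_guilty = 0.60

# Suppose a hypothetical piece of evidence comes back "clean" --
# something that happens 90% of the time for innocent people and
# only 20% of the time for guilty ones.
p_clean_given_innocent = 0.90
p_clean_given_guilty = 0.20

# Bayes' rule: P(guilty | clean evidence)
p_clean = (p_guilty * p_clean_given_guilty
           + (1 - p_guilty) * p_clean_given_innocent)
p_guilty_given_clean = p_guilty * p_clean_given_guilty / p_clean
print(f"P(guilty | clean evidence): {p_guilty_given_clean:.2f}")  # 0.25
```

Under the correct binary model, the probability collapses toward one extreme as evidence comes in; the compressed “100% chance he’s 60% guilty” model just sits there, permanently treating the person as kinda-sketchy.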
Another way to think about this (imo) is “do you screen falsehoods immediately, such that none ever enter, or do you prune them later at leisure?”
Sometimes, assembling false things (such as rough approximations or heuristics!) can give you insight as to the general shape of a new Actually True thing, but discovering the new Actually True thing using only absolutely pure definite grounded vetted airtight parts would be way harder and wouldn’t happen in expectation.
And if you’re trying to (e.g.) go “okay, men are stronger than women, and adults are smarter than kids” and somebody interrupts to go “aCtUaLlY this is false” because they have a genuinely correct point about, e.g., the variance present in bell curves, and there being some specific women who are stronger than many men and some specific children who are smarter than many adults … this whole thing just derails the central train of thought that was trying to go somewhere.
(And if the “aCtUaLlY” happens so reliably that you can viscerally feel it coming, as you start to type out your rough premises, you get demoralized before you even begin, close your draft, and go do something else instead.)
Yep, basically endorsed; this is like the next layer of nuance and consideration to be laid down; I suspect I was subconsciously thinking that one couldn’t easily get the-audience-I-was-speaking-to across both inferential leaps at once?
There’s also something about the difference between triaged and limited systems (which we are, in fact, in) and ultimate utopian ideals. I think that in the ultimate utopian ideal we do not give people less moral weight based on their capacity, but I agree that in the meantime scarce resources do indeed sometimes need dividing.
It’s also full of insinuation and implication and “X may mean Y [which is damning]” in a way that’s attempting to get the benefit of “X = Y” without having to actually demonstrate it.
In my opinion, “you have to use a burner account to put forth this kind of ‘thinking’ and ‘reasoning’ and ‘argument’” is actually a point in EA culture’s favor.
“if there are going to be sociopaths in power, they might as well be EA sociopaths”.
Just noting that my shoulder Nate is fairly calibrated/detailed thanks to over a decade of interaction with Nate, and is incapable of endorsing the sentence “if there are going to be sociopaths in power, they might as well be EA sociopaths.”
My shoulder Nate insists quite adamantly that this is Not A Good Thing To Boil Down To A Binary Yes-No, but that if someone’s forcing him to say yes or no, then he will say no. And before someone argues that picking no sounds dumb, [insert coherent explanation of why the frame is badwrong].
Pro-EA posts made anonymously creep me out 98% as much; I personally would rather (most) such posts not happen at all than happen anonymously. See above for my caveat to that general position.
Anonymous accounts created within the very recent past I found just from skimming the posts and comments from the past few days for, like, five minutes:
BurnerAcct
OutsideView
whistleblower9
anonymousEA20
Sick_of_this
AnonymousAccount
temp_
Burner1989
AnonymousQualy
ConcernedEAs
Selfish piggyback plug for the concept of sazen.
I would like for all involved to consider this, basically, a bet, on “making and publishing this pledge” being an effective intervention on … something.
I’m not sure whether the something is “actual racism and sexism and other bigotry within EA,” or “the median EA’s discomfort at their uncertainty about whether racism and sexism are a part of EA,” or what.
But (in the spirit of the E in EA) I’d like that bet to be more clear, so since you were willing to leave a comment above: would you be willing to state with a little more detail which problem this was intended to solve, and how confident you (the group involved) are that it will be a good intervention?
A few relevant thoughts from a Facebook thread of mine yesterday:
Man, I feel *super* creeped out by how many posts and comments I’m seeing on the EA forum from anonymous this, burneraccount that, groupofnamelessconcernedcitizens, etc.
Seems like a sign of *something* real bad, even though I haven’t quite put my finger on what, and doesn’t seem to me like “the light as people come out of the tunnel” in the sense that it doesn’t feel to me like a precursor to something good, or people finally overcoming coordination challenges in a bad environment, or whatever.
I wonder if there’s a Bizarro Duncan out there who’s, like, super stoked to see so many people choosing to anonymously share their heretical thoughts or whatever.
.
I think, trying to introspect on the creepies, that a big part of the problem is something like:
“Here I am, in substantial disagreement/criticism of a subculture, but I don’t want my large and fairly crucial disagreement attached to my name because if people understood my *true* beliefs they might not want to hire me at their org.”
.
To be clear, I think [things like] Chatham room [are] amazing; I think that it’s quite good/important to have *clearly defined contexts* in which people can be anonymous.
I would not at all mind a single, promoted, pinned/stickied top-level post that was, like, this is the thread for anonymous posting of things you think are true; mods will moderate for discourse norms and like Actual Dangerous Or Crazy Stuff but otherwise let’s make room for people to Say Stuff.
But like. It seems that the tide is turning toward “oh, flooding the EA forum with anonymous sniping from the sidelines is the Cool And Correct Thing To Do Now” and that seems like two or three distinct kinds of bad.
There’s always a tension between assimilating into a preexisting culture, and attempting to change that culture, and I just generally think that the incentives are better *overall* when people have common-knowledge skin in the game; it’s just *so easy* for bad stuff to proliferate when it becomes open-season on anonymous chime-ins, and the EA forum over the past week has felt like it was moving in that direction.
There are low-bs forums such as Hacker News and slatestarcodex where most people don’t use their real names.
In those forums, reputation accrues to username; little (or at least less) attention is paid to brand-new accounts.
Here, a lot of accounts are trying to recruit/use the “I’m a for real serious longtime member of this community” reputational/seriousness boost, while being who the heck knows who.
It greatly increases the odds of the forum being flooded with unaccountable bs; it removes/breaks the feedback loop of reputation.
Deciding to be silent isn’t tricking people. Posting anonymously because you don’t want to be associated with your own views (but you want to inject those views without paying the reputational cost of having them) is.
I mean, I don’t have this hypothetical document made in my head (or I would’ve posted it myself).
But an easy example is something of the shape:
[EDIT: The below was off-the-cuff and, on reflection, I endorse the specific suggestion much less. The structural thing it was trying to gesture at, though, of something clear and concrete and observable, is still the thing I would be looking for, that is a prerequisite for enduring endorsement.]

“We commit to spending at least 2% of our operational budgets on outreach to [racial group/gender group/otherwise underrepresented group] for the next 5 years.”
Maybe the number is 1%, or 10%, or something else; maybe it’s 1 year or 10 years or instead of years it’s “until X members of our group/board/whatever are from [nondominant demographic].”
The thing that I like about the above example in contrast with the OP is that it’s clear, concrete, specific, and evaluable, and not just an applause light.
The community is quite capable of dealing with actual “bs” by downvoting it into oblivion.
I disagree that the community is doing anything remotely close to a good job of distinguishing bs from non-bs via downvotes. [The evidence is that the community does not find the bulk of burner-poster posts to be “bs”] is a true statement, and it reveals the problem.
It is also very easy for users who do not want to engage with burner posts to skip on past them.
This is straight false; they’re showing up on all sorts of posts WAY more than they used to.
Note that I specifically wanted to hit the failure mode where there is, in reality, a clear-cut binary (e.g. totally innocent or totally guilty).
But yeah, correct that this is not what was going on with SBF or Nate’s assessments. More of a “this made me think of that,” I guess.
I am opposed to this.
I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as “fear”].
Here are some things that are true:
Racism is harmful and bad.
Sexism is harmful and bad.
Other “isms” such as homophobia or religious oppression are harmful and bad.
To the extent that people can justify their racist, sexist, or otherwise bigoted behavior, they are almost always abusing information, in a disingenuous fashion. e.g. “we showed a 1% difference in the medians of the bell curves for these two populations, thereby ‘proving’ one of those populations to be fundamentally superior!” This is bullshit from a truth-seeking perspective, and it’s bullshit from a social progress perspective, and in most circumstances it doesn’t need to be entertained or debated at all. In practice, it is already the case that the burden of proof on someone wanting to have a discussion about these things is overwhelmingly on the person starting the discussion, to demonstrate that they are both a) genuinely well-intentioned, and b) have something real to talk about.
However:
Intelligent, moral, and well-meaning people will frequently disagree about to-what-extent a given situation is explained by various bigotries as opposed to other factors. Intelligent, moral, and well-meaning people will frequently disagree about which actions are wise and appropriate to take, in response to the presence of various bigotries.
By taking anti-racism and anti-sexism and other anti-bigotry positions which are already overwhelmingly popular and overwhelmingly agreed-upon within the Effective Altruism community, and attempting to convert them to Anti-Racism™, Anti-Sexism™, and Anti-Bigotry™ applause lights with no clear content underneath them, all that’s happening is the creation of a motte-and-bailey, ripe for future abuse.
There were versions of the above proposal which were not contentless and empty, which stake out clear and specific positions, which I would’ve been glad to see and enthusiastically supported and considered concrete progress for the community. It is indeed true that EA as a whole can do better, and that there exist new norms and new commitments that would represent an improvement over the current status quo.
But by just saying “hey, [thing] is bad! We’re going to create social pressure to be vocally Anti-[thing]!” you are making the world worse, not better. Now, there is a List Of Right-Minded People Who Were Wise Enough To Sign The Thing, and all of the possible reasons to have felt hesitant to sign the thing are compressible to “oh, so you’re NOT opposed to bigotry, huh?”
Similarly, if four-out-of-five signatories of The Anti-Racist Pledge think we should take action X, but four-out-of-five non-signatories think it’s a bad idea for various pragmatic or logistical reasons, it’s pretty easy to imagine that being rounded off to “the opposition is racist.”
(I can imagine people saying “we won’t do that!” and my response is “great—you won’t. Are you claiming no one will? Because at the level of 1000+ person groups, this is how this always goes.”)
The best possible outcome from this document is that everybody recognizes it as a basically meaningless non-thing, and nobody really pays attention to it in the future, and thus having signed it means basically nothing. This is also a bad outcome, though, because it saps momentum for creating and signing useful versions of such a pledge. It’s saturating the space, and inoculating us against progress of this form; the next time someone tries to make a pledge that actually furthers equity and equality, the audience will be that much less likely to click, and that much less willing to believe that anything useful will result.
The road to hell is paved with good intentions. This is clearly a good intention. It does not manage to avoid being a pavestone.