IMO the amount of diligence someone ought to perform on their counterparties’ character varies with circumstances. “This person is one of hundreds of people I transact with every week” carries different obligations than “This person is one of the four big donors who fund my organization”, which in turn carries different obligations than “This person has been my only source of income for the past two years”. Different EAs were at different points along this spectrum.
Sarah Levin
See Samo’s essay series here for the definition of “intellectual legitimacy” as it’s being used in the OP:
An idea has intellectual legitimacy insofar as it is recognized by society as respectable and reasonable. An intellectually legitimate idea does not need to be recognized as credible by all people, or even by very many people at all. There only needs to be a general perception that society at large holds the idea to be legitimate. Powerful institutions and individuals are seen as tolerating or endorsing it. Such a perception isn’t necessarily coupled to whether an idea is true.
...
individuals routinely use legitimacy as a shortcut for evaluating the quality of the ideas around them. What may intuitively feel like evaluating an idea on its merits has oftentimes already factored in how an idea is communicated, and who is communicating it. We do this because it is harder for us to assess claims in fields that are outside our areas of expertise and so, instead, we learn from experience which sources to rely on. Evaluating an idea’s intellectual legitimacy is often safer and easier than evaluating the idea itself, and in a healthy society, the shortcut works. This makes the shortcut an efficient and effective heuristic for individuals. Even then, though, an intellectually valid idea that is correctly perceived as legitimate might ultimately still turn out to be false.
In most of the USA and Europe, the smallpox vaccine is currently widely available as part of the response to monkeypox, which the same vaccine counters. Local policy varies; in some cities the vaccine is available to everyone now that the available supply far outstrips demand, while in other cities it is still available only to those who claim to engage in particular sexual behavior, to hold particular sexual identities, and/or to be employed in particular professions (AFAIK all the vaccine clinics accept self-report on sexual matters).
It seems to me that getting the smallpox vaccine now, while there’s no pressure on the stockpiles or the clinics, is both prosocial and individually beneficial. I’ve gotten it myself.
This is true. Another reason I think public fears of professional retaliation are overstated is that “I’m afraid of professional retaliation” is generally taken as a legitimate reason to hide, whereas lots of other fears are not, and so many other fears get justified in terms that will be well-received. Like, if saying “I’m posting anonymously because I’m afraid of being looked at funny” is seen as cowardly but saying “I’m posting anonymously because I’m afraid of professional retaliation” is seen as sympathetic, then I expect both types of people will claim to fear professional retaliation.
(I do think EA institutions have a totally-normal-for-white-collar-professional degree of retaliation for not toeing the line. I just think the discourse here overweights how much of it comes from, like, posts on this forum, whereas all the cases I know about were because of more substantive causes like materially supporting a disfavored institution, or normal bureaucratic power struggles, or something.)
Yeah, pseudonyms are great. There have been recent debates about people using one-off burner accounts to make accusations, but those don’t reflect at all on the merits of using durable pseudonyms for general conversation.
The degree of reputation and accountability that durable pseudonyms provide might be less than using a wallet name, but it’s still substantial, and in practice it’s a perfectly sufficient foundation for good discourse.
Ultimately, my overall point is that one reason for using a burner account (as in my case) is that if you don’t belong to the “inner circle” of funders and grantees, then I believe different rules apply to you. If you want to join that inner circle, you must not question grants, even by directly emailing OP. And once you’re inside the inner circle, if you want to criticise grants, you must use a burner account or risk being de-funded or blacklisted.
Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical. Which would be bad! Don’t do that! Normally I don’t go around demanding that people must be willing to make personal sacrifices for the greater good if they want to be taken seriously, but this is literally a forum for self-declared altruists.
Several times I’ve received lucrative offers and overtures from sources (including one EA fund) that seemed corrupt in ways that resemble how you think your funder is corrupt. Each time my reaction has been “I’ve gotta end this relationship ASAP. This will be used to pressure me into going along with corruption. Better to remove their power over me on my terms.” This was clearly correct in hindsight; it saved me and my team from some entanglements that would have made it harder to pursue our mission, and also it left me free to talk about the bad stuff I saw as much as I want to. While I did pass up a lot of money for myself and my organization, we’re doing fine now. None of this was some crazy-advanced Sun Tzu maneuver; it’s common knowledge that refusing dirty money is the right thing to do but you have to pass up money to do it.
I dunno, a lot of these burner account accusations just strike me as trying to provoke a fight that the poster themselves lacks the courage and conviction to actually participate in, and I have very little patience for “let’s you and him fight”. I assume that the point of posting this stuff is to advocate for some sort of change, but that can’t happen unless specific people lead the charge. And if you’re not willing to bear any costs at all, then why should anyone else pick up your banner? Even if I wanted to, how would I lead the charge against “my friend who I won’t name got the impression that someone else who I won’t name did something bad, based on circumstantial evidence that you can’t check”? Questions of right and wrong aside, this plan just won’t work, you can’t actually lead from the rear like this.
Given your stated beliefs, your moral duty is to either become a “troublemaker” even if the risk to your career is real or else cut yourself off from the dirty money and go do something that’s not compromised. Personally I’ve usually chosen the latter option when I’ve faced similar dilemmas but I have a ton of respect for good-faith troublemakers.
There was one incorrect claim (“AI safetyists encourage work at AGI companies”)
“AI safetyists” absolutely do encourage work at AGI companies. To take one of many examples, 80,000 Hours are “AI safetyists”, and their job board currently encourages work at OpenAI, Deepmind, and Anthropic, which are AGI companies.
(I haven’t watched the video.)
The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts
We will also never know about serious issues if people are too afraid to speak up in a way that can be trusted and acted on. Creating a burner account out of fear might be a psychologically understandable reaction (although I suspect its prevalence is overstated), but it is not an effective or tactically appropriate reaction. Burner account accusations get upvotes and public sympathy, but they don’t accomplish much else. Actual change requires someone to stick their neck out, whether in a public post or through influential backchannels. There is no substitute for courage.
Can you name examples of this working? Because I’ve seen a good number of anonymous public accusations on this forum and I don’t recall any that led to the outcome you describe. I understand this theory of change but it sure doesn’t seem to work that way in real life.
In contrast I know of many cases where backchannel reporting to trusted third parties has led to results. If someone is not willing to speak up publicly, then using whisper networks or official reporting channels has a much better track record compared to making burner accusations on the EA forum. I am somewhat worried about people making an ineffective burner account post and feeling like they’ve done their job when otherwise they would’ve mustered up their courage and told the conference organizer.
I actually do know the real names of the people who wrote about Brent. It’s one of those “community insiders know who they were but it’s hard to tell from the outside” situations, like the one I described with pre-doxxing Scott Alexander. If the authors had been anonymous for real then I don’t think it would’ve worked anywhere near as well. This approach avoids most of the downsides of actually-unknown-and-unaccountable burner accounts and I do not object to it.
As many have noted, this recommendation will usually yield good results when the org responds cooperatively and bad results when the org responds defensively. It is an org’s responsibility to demonstrate that they will respond cooperatively, not a critic’s responsibility to assume. Defensive responses aren’t, like, rare.
To be more concrete, I personally would write to GiveWell before posting a critique of their work because they have responded to past critiques with deep technical engagement, blog posts celebrating the critics, large cash prizes, etc. I would not write to CEA before posting a critique of their work because they have responded to exactly this situation by breaking a confidentiality request in order to better prepare an adversarial public response to the critic’s upcoming post. People who aren’t familiar with deep EA lore won’t know all this stuff and shouldn’t be expected to take a leap of faith.
This does mean that posts with half-cocked accusations will get more attention than they deserve. This is certainly a problem! My own preferred solution to this would be to stop trusting unverifiable accusations from burner accounts. Any solution will face tradeoffs.
(For someone in OP’s situation, where he has extensive and long-time knowledge of many key EA figures, and further is protected from most retaliation because he’s married to Julia Wise, who is a very influential community leader, I do indeed think that running critical posts by EA orgs will often be the right decision.)
within the community we’re working towards the same goals: you’re not trying to win a fight, you’re trying to help us all get closer to the truth.
This is an aside, but it’s an important one:
Sometimes we’re fighting! Very often it’s a fight over methods between people who share goals, e.g. fights about whether or not to emphasize unobjectionable global health interventions and downplay the weird stuff in official communication. Occasionally it’s a good-faith fight between people with explicit value differences, e.g. fights about whether to serve meat at EA conferences. Sometimes it’s a boring old struggle for power, e.g. SBF’s response to the EAs who attempted to oust him from Alameda in ~2018.
Personally I think that some amount of fighting is critical for any healthy community. Maybe you disagree. Maybe you wish EA didn’t have any fighting. But acting as if that wish were descriptively true, rather than merely aspirational, is clearly incorrect.
Looking back five months later, can you say anything about whether this program ended up making grants, and if so how much/how many? Thanks!
Looking back five months later, can you say anything about whether this program ended up matching people with new jobs or opportunities, and if so how many? Thanks!
Great, this is useful data.
Results demonstrated that FTX had decreased satisfaction by 0.5-1 points on a 10-point scale within the EA community, but overall community sentiment remained positive at ~7.5/10
That’s a big drop! In practice I’ve only ever seen this type of satisfaction scale give results between about 7/10 and 9.5/10 (which makes sense, right, if my satisfaction with EA is 3/10 then I’m probably not sticking around the community and answering member surveys), so that decline is a real big chunk of the scale’s de facto range.
I suppose it’s not surprising that the impact on perception is much bigger inside EA, where there’s (appropriately) been tons of discourse on this, than in the general public.
...What on earth does “90% probability, with medium confidence” mean? Do you think it’s 90% likely or not?
Your “90% confidence interval” of… what, exactly? This looks like a confidence interval over the value of your own subjective probability estimate? And “90% as the mean” of… a bunch of different guesses you’ve taken at your “true” subjective probability? I can’t imagine why anyone would do that but I can’t think what else this could coherently mean…?
If I can be blunt, I suspect you might be repeating probabilistic terms without really tracking their technical meaning, as though you’re just inserting nontechnical hedges. Maybe it’s worth taking the time to reread the map/territory stuff and then run through some calibration practice problems while thinking closely about what you’re doing. Or maybe just use nontechnical hedges more; they work perfectly well for expressing things like this.
I think trying to figure out the common thread “explaining datapoints like FTX, Leverage Research, [and] the LaSota crew” won’t yield much of worth because those three things aren’t especially similar to each other, either in their internal workings or in their external effects. “World-scale financial crime,” “cause a nervous breakdown in your employee,” and “stab your landlord with a sword” aren’t similar to each other and I don’t get why you’d expect to find a common cause. “All happy families are alike; each unhappy family is unhappy in its own way.”
There’s a separate question of why EAs and rationalists tolerate weirdos, which is more fruitful. But an answer there is also gonna have to explain why they welcome controversial figures like Peter Singer or Eliezer Yudkowsky, and why extremely ideological group houses like
early Toby Ord’s [EDIT: Nope, false] or more recently the Karnofsky/Amodei household exercise such strong intellectual influence in ways that mainstream society wouldn’t accept. And frankly, if you took away the tolerance for weirdos, there wouldn’t be much left of either movement.
Historical nitpick: Schindler ran a Nazi munitions factory, but did not actually produce functioning shells. He delivered duds, and on a few occasions bought working shells from other factories to deliver to the Nazis in order to deflect suspicion, but AFAIK was careful not to actually increase the counterfactual supply of Nazi weapons.
This does not affect your argument, since Schindler obviously did many other things that would be “morally dodgy” in normal circumstances, like fraud and bribery and buying chattel.