Robi Rahman

Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.
Can you explain how the EA community is intensely hierarchical? From what I’ve seen, EA tends to have a relatively flat organizational structure and a very high tolerance for contradicting or questioning authority figures, but maybe others have had different experiences with this than I have.
I don’t really see anything in the article to support the headline claim, and the anonymous sources don’t actually work at NIST, do they?
I strongly disagree with your implication that “these things” (presumably “sexism, racism, and other toxic ideologies” as mentioned in the original post) are “accepted” within this movement, and I’m tired of stuff like this being brought up and distracting us from the mission we’re all here for, which is to help others.
If you think pandemic response is the key issue, Dr. Harder is a highly experienced doctor who used to run the Oregon Medical Board. Medical and policy experience: maybe you still think your guy will be better, but by how much?
The FDA has hundreds of highly experienced doctors and still had such a disastrous response to the pandemic that they probably caused millions of extra deaths. They completely blocked challenge trials and delayed vaccine deployment by six months. What matters is not whether the people in government are doctors; it’s the policies on how the government behaves when an important problem arises. And crucially, the key issue isn’t pandemic response, it’s pandemic prevention. Carrick Flynn is the only congressional candidate I know of who’s running on that.
As has been noted many times, EA is currently about 70% male, whilst environmentalism/animal advocacy is majority women. I would be fairly confident that a more balanced gender ratio would mean less misogyny towards women.
I don’t think the 70/30 gender ratio causes misogyny. I think it amplifies experiences of it among women because they are the minority here. Imagine a group of 100 EAs, 70 men and 30 women, and a group of 100 environmentalists, 30 men and 70 women. Suppose 10% of all men do something misogynistic towards a random woman in their group. Then 23% of EA women experience misogyny compared to only 4% of environmentalist women, even though each individual man in each group is equally likely to have behaved misogynistically.
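To make the arithmetic explicit, here’s a minimal sketch of that toy model (it counts expected incidents per woman, which is the approximation behind the 23% and 4% figures; it slightly overstates the share of women affected if two men happen to target the same woman):

```python
# Toy model from the comment above: 100 people per group, and 10% of the men
# each direct one misogynistic act at a random woman in their own group.
ea_men, ea_women = 70, 30
env_men, env_women = 30, 70
misogynist_rate = 0.10

# Expected misogynistic incidents per woman in each group.
ea_rate = ea_men * misogynist_rate / ea_women      # 7 incidents / 30 women ≈ 0.23
env_rate = env_men * misogynist_rate / env_women   # 3 incidents / 70 women ≈ 0.04
print(f"EA women: ~{ea_rate:.0%}, environmentalist women: ~{env_rate:.0%}")
```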
(Prior to seeing this post, I’d have conjectured that men in EA are less likely than men elsewhere to behave misogynistically, and maybe that’s still true, but these reports are really alarming.)
> Phrasings like “if $58,000 of all inclusive world travel plus $1000 a month stipend is a $70,000 salary” for what is evidently a fully paid, luxurious work & travel experience… tanks the quality of the comment.

Huh? No, that is a succinct and accurate description of a disputed interpretation, and I think Nonlinear’s interpretation is wrong there. They keep saying in their defense that they paid Alice (the equivalent of) $72,000 when they didn’t; it’s really not the same thing at all if 80% of it is comped flights, food, and hotels. At least for me, the amount of cash that would be of equivalent value to Alice’s compensation package is something like $30,000 to $40,000.
Respectfully, I have to disagree that most of these examples are any reason to distrust communications from EA. Someone has already addressed Ben Todd’s answer on whether EA is utilitarian (saying it’s utilitarian-ish is the most accurate answer, and it’s not deceptive), so I’ll comment on the career advice you saw:
Philosophy, but also not Philosophy?
I took a look at global priorities research. It was one of five top-recommended career paths on 80K’s website and called for researchers in philosophy. From the 80K website at the time of writing:
> In general, for foundational global priorities research the best graduate subject is an economics PhD. The next most useful subject is philosophy
This article contrasts sharply with the 80K page on philosophy:
> the academic job market for philosophy is extremely challenging. Moreover, the career capital you acquire working toward a career in philosophy isn’t particularly transferable. For these reasons we currently believe that, for the large majority of people who are considering it, pursuing philosophy professionally is unlikely to be the best choice.
It seems like there are significant risks to pursuing further study in philosophy that 80K are well aware of, and it does not look great that they mention them in the context of general philosophical research (which they presumably aren’t invested in their readers pursuing) but omit them when discussing a career path they are eager for their readers to pursue. Spending 7 years getting a philosophy PhD because you want to research global priorities and then failing to find a position (the overwhelmingly likely outcome) does not sound like much fun.
This is a particularly clear example of a more general experience I’ve had with 80K material, namely being encouraged to make major life choices without an adequate treatment of the risks involved. I think readers deserve this information upfront.
They aren’t being dishonest here, they’re answering two different questions. The first page says that the best background for global priorities research, one of their most-recommended career options, is economics followed by philosophy. The second page, on philosophy as a career path, correctly points out that the job market for philosophy is very challenging. They’re not telling lots of people they should go into philosophy in the hopes that some of them will then do global priorities research. They’re saying you should not do philosophy, but if you did, then global priorities research is a highly valuable thing your background would be suitable for, which I’d say are good recommendations all around.
I think it’s not actually accurate to say that
> The vast majority of what they gave is disputing the evidence
as it’s constantly interspersed with stuff like how great it is to work in a hot tub.
Jeff was probably not asking what “sacred cow” means; more likely he was asking in what way polyamory is a sacred cow of EA. I will grant that EA is more tolerant of most personal traits than society typically is, and is therefore more supportive of polyamory than other groups just by not being against it, but it’s not anywhere in any canonical EA materials, and it’s certainly not a sacred cow. Plenty of EAs are criticizing it in this very thread.
> [Alice] chose to pay herself an annualized ~$72,000 per year—more than anyone else at the org, and far more than the ~minimum wage she earned in previous jobs.
>
> This is more than most people make at OpenPhil, according to Glassdoor.
This seems unlikely—these numbers on Glassdoor are way lower than I’d expect for most of these job titles. Can anyone from OP corroborate?
> The specificity of Naia’s allegations (the part about “Will basically threatened Tara” seems particularly important/bad)
To the contrary, this strikes me as really unspecific. What does it mean to “basically threaten” someone? What was the implied consequence of going against Will and/or Sam? What did Will say? The article raises a lot of questions.
Tengo una oferta permanente disponible para cualquiera que lea esto: elige un tema de altruismo efectivo sobre el que no se haya escrito en español, y te apuesto $20 a que puedo escribir un artículo mejor que tú. No soy un hablante nativo de español, ¡así que esta apuesta debería ser fácil de ganar! Nombra un juez neutral con el que ambos estemos de acuerdo, les mostraremos nuestros artículos de forma anónima y él elegirá al ganador. Algunos temas que me emocionaría especialmente tratar para este concurso:
- un discurso de ascensor de sesenta segundos sobre los riesgos de la inteligencia artificial
- el argumento del niño ahogado
- explicación de los principales programas de caridad de GiveWell y por qué los apoyamos

---
I have a standing offer available to anyone reading this: pick an effective altruism topic that hasn’t been written about in Spanish, and I’ll bet you $20 I can write a better piece than you can. I’m not a native Spanish speaker, so this bet should be easy to win! Name a neutral judge we can both agree on; we’ll show them both of our articles anonymously, and they’ll pick the winner. Some topics I would be especially excited to try for this contest:
- a sixty-second elevator pitch about risks from artificial intelligence
- the drowning child argument
- explanation of GiveWell’s top charity programs and why we support those
Can the people who agreement-downvoted this explain yourselves? Bogdan has a good point: if we really believe in short timelines to transformative AI we should either be spending our entire AI-philanthropy capital endowment now, or possibly investing it in something that will be useful after TAI exists. What does not make sense is trying to set up a slow funding stream for 50 years of AI alignment research if we’ll have AGI in 20 years.
(Edit: the comment above had very negative net agreement when I wrote this.)
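As a toy illustration of the point about the slow funding stream (the 50-year and 20-year horizons are the ones mentioned above; the even spend rate is my own simplifying assumption):

```python
# If an endowment is disbursed evenly over a 50-year funding stream but
# transformative AI arrives in year 20, most of the money arrives too late.
ENDOWMENT = 1.0        # normalized total AI-philanthropy capital
STREAM_YEARS = 50      # length of the proposed slow funding stream
YEARS_UNTIL_AGI = 20   # assumed short timeline

spent_in_time = ENDOWMENT * min(YEARS_UNTIL_AGI / STREAM_YEARS, 1.0)
print(f"Share spent while it can still matter: {spent_in_time:.0%}")  # 40%
```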
I don’t follow. Can you explain how Will Aldred’s comment was preposterously naive?
> A paperclip maximiser and a pencil maximiser cannot “agree to disagree”. One of them will get to tile the universe with their chosen stationery implement, and one of them will be destroyed. They are mortal enemies with each other, and both of them are mortal enemies of the stapler maximiser, and the eraser maximiser, and so on. Even a different paperclip maximiser is the enemy, if their designs are different. The plastic paperclipper and the metal paperclipper must, sooner or later, battle to the death.
>
> The inevitable result of a world with lots of different malevolent AGI’s is a bare-knuckle, vicious, battle royale to the death between every intelligent entity. In the end, only one goal can win.
Are you familiar with the concept of values handshakes? An AI programmed to maximize red paperclips and an AI programmed to maximize blue paperclips and who know that each would prefer to destroy each other might instead agree on some intermediate goal based on their relative power and initial utility functions, e.g. they agree to maximize purple paperclips together, or tile the universe with 70% red paperclips and 30% blue paperclips.
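Here’s a minimal numerical sketch of that handshake; the win probability and the cost of conflict below are purely hypothetical numbers I’ve made up for illustration:

```python
# Each maximizer compares its expected share of the universe from fighting
# against a guaranteed share from a negotiated "values handshake".
P_RED_WINS = 0.7   # hypothetical probability the red-paperclip AI wins a war
WAR_LOSS = 0.2     # hypothetical fraction of resources destroyed by conflict

ev_red_fight = P_RED_WINS * (1 - WAR_LOSS)          # 0.56 of the universe
ev_blue_fight = (1 - P_RED_WINS) * (1 - WAR_LOSS)   # 0.24 of the universe

# A 70/30 split of paperclip production beats fighting for both sides
# (0.70 > 0.56 and 0.30 > 0.24), so both prefer the handshake to war.
red_share, blue_share = P_RED_WINS, 1 - P_RED_WINS
print(f"red: {red_share:.0%} vs {ev_red_fight:.0%} from war, "
      f"blue: {blue_share:.0%} vs {ev_blue_fight:.0%} from war")
```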
> [Objection] from Robi Rahman: “A person choosing to eat 1kg less chicken results in 0.6 kg less expected chicken produced in the long run, which averts 20 days of chicken suffering. A comparable sacrifice would be to turn off your air conditioning for 3 days, which in expectation reduces future global warming by 10^(-14) °C and reduces suffering by zero.”
>
> Without quibbling with the precise numbers, I think this is fundamentally a point about the importance of the two cause areas.
Actually, what I meant was fundamentally not a point about the importance of either cause area. I think that even if total harms from climate are greater than total harms from factory farming, the marginal harm reduction from changing individual behavior on diet is probably greater than the marginal harm reduction from changing personal energy consumption. I still think you’re right overall that individual action on animal welfare is over-emphasized relative to individual action on climate or political/technology interventions on animal welfare, but this is one possible justification for the behavior of a lot of EAs I’ve met who put lots of effort into changing their diet but none into reducing their energy usage.
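For what it’s worth, here’s roughly where numbers like the ones in the quoted objection could come from. The 0.6 supply response is the figure quoted above; the broiler weight and lifespan are my own rough assumptions, so treat this as an order-of-magnitude sketch only:

```python
# Back-of-the-envelope: marginal chicken suffering averted per kg not eaten.
KG_NOT_EATEN = 1.0
SUPPLY_RESPONSE = 0.6     # kg less produced per kg not bought (quoted figure)
MEAT_PER_BIRD_KG = 1.5    # assumed edible meat per broiler
LIFESPAN_DAYS = 45        # assumed days a broiler lives before slaughter

birds_averted = KG_NOT_EATEN * SUPPLY_RESPONSE / MEAT_PER_BIRD_KG
suffering_days = birds_averted * LIFESPAN_DAYS
print(f"~{suffering_days:.0f} chicken-days of suffering averted")
# ~18 under these assumptions, the same ballpark as the quoted ~20 days.
```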
Uh oh, better reduce the humor by 20% or we’re courting peril.
I don’t see Shapley values mentioned anywhere in your post. I think you’ve made a mistake in attributing the value of work that multiple people contributed to, and Shapley values would help you fix that mistake.
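For anyone unfamiliar, here’s a minimal sketch of Shapley-value attribution with a toy, made-up characteristic function; it just averages each contributor’s marginal contribution over all join orders:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            after = value(frozenset(coalition))
            totals[p] += after - before
    return {p: totals[p] / len(orderings) for p in players}

# Toy example: a funder and a charity each achieve nothing alone, but 100
# units of impact together. Naive attribution credits each with the full
# 100; Shapley attribution splits it 50/50.
v = lambda s: 100.0 if s == frozenset({"funder", "charity"}) else 0.0
print(shapley_values(["funder", "charity"], v))  # {'funder': 50.0, 'charity': 50.0}
```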
If the flooding is predictable, are we causing moral hazard by subsidizing farming in flood-prone areas?
Who are the EAs claiming that race-and-IQ conversations are untouched by white supremacism? I have never seen an effective altruist claim anything like that.
Discussions about race and IQ that are instigated by white supremacists often mention results showing that black people score lower than white people while omitting results showing that white people score lower than Asians; discussions in academic psychology are more likely to mention all of those results together. And I’ve never seen anyone in effective altruism mention anything on this topic, until this post and your comment just now.
I don’t see how it’s relevant to effective altruism. Looking for group differences on IQ tests doesn’t seem to help with fundraising, preventing pandemics, or distributing bednets, so unsurprisingly it never comes up here.