When we were talking about this in 2012 we called it the “poor meat-eater problem”, which I think is clearer.
Seems like the marginal value is much higher.
I’ve done this.
I think this is a very good use of time and encourage people to do it.
Yeah, I totally agree.
Alex Wellerstein notes the age distribution of Manhattan Project employees:
Sometimes people criticize EA for having too many young people; I think that this age distribution is interesting context for that.
[Thanks to Nate Thomas for sending me this graph.]
Note: When an earlier private version of these notes was circulated, a senior figure in technical AI safety strongly contested my description. They believe the Anthropic SAE work is much more valuable than the independent SAE work: although both were published around the same time, the Anthropic work provided evidence strong enough to be worth extending by other researchers, whereas the independent research was not dispositive.
For the record, if the researcher here was COI’d (e.g. working at Anthropic), I think you should say so, and you should also substantially discount what they said.
I’d bet against that, but I’m not confident.
I don’t think he says anything in the manifesto about why AI is going to go better if he starts a “hedge fund/think tank”.
I haven’t heard a strong case for him doing this project, but it seems plausibly reasonable. My guess is that I’d think it was a suboptimal choice if I heard his arguments and thought about it, but I don’t know.
I’d bet that he didn’t mean black people here.
For what it’s worth, I’m 75% confident that Hanania didn’t mean black people with the “animals” comment.
I think it’s generally bad form not to take people at their word about the meaning of their statements. That said, I’m very sympathetic to the possibility of provocateurs exploiting that charity to get away with dogwhistles (and I think Hanania deserves more suspicion of this than most), so I feel mixed about you using it as an example here.
I don’t think Hanania is exactly well-positioned to build support on the right; he constantly talks about how much contempt he has for conservatives.
He lays out the relevant part of his perspective in “The Free World Must Prevail” and “Superalignment” in his recent manifesto.
I think it’s pretty unreasonable to call him a Nazi—he’d hate Nazis, because he loves Jews and generally dislikes dumb conservatives.
I agree that he seems pretty racist.
Most importantly, it seems to me that the people in EA leadership who I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn’t live up to their ethical standards, or because they burned out trying to effect change, and this recent period has been very stressful.
Who on your list matches this description? Maybe Becca if you think she’s thoughtful on these issues? But isn’t that one at most?
I think one reason this isn’t done is that the people with the best access to such metrics might not think it’s actually that important to disseminate them to the broader EA community, rather than just sharing them as needed with the people for whom these facts are most obviously action-relevant.
I think you’re right that my original comment was rude; I apologize. I edited my comment a bit.
I didn’t mean to say that the global poverty EAs aren’t interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell’s meticulous reasoning. I’ve edited my comment to make it sound less like I’m saying that the global poverty EAs are dumb or uninterested in thinking.
But I do stand by the claim that you’ll understand EA better if you think of “promote AMF” and “try to reduce AI x-risk” as results of two fairly different reasoning processes, rather than as results of the same reasoning process. Like, if you ask someone why they’re promoting AMF rather than e.g. insect suffering prevention, the answer usually isn’t “I thought really hard about insect suffering and decided that the math doesn’t work out”; it’s “I decided to (at least substantially) reject the reasoning process which leads to seriously considering prioritizing insect suffering over bednets”.
(Another example of this is the “curse of cryonics”.)
I don’t think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you’re advocating here. My oversimplified model of the situation is more like:
Some EAs don’t feel very persuaded by this kind of reasoning, and end up donating to global development stuff like bednets.
Some EAs are moved by this kind of reasoning, and decide not to engage with global development because this kind of reasoning suggests higher impact alternatives. They don’t really spend much time thinking about how to best address global development, because they’re doing things they think are more important.
(I think the EAs in the latter category have their own failure modes and wouldn’t obviously have gotten the malaria thing right (assuming you’re right that a mistake was made) if they had really tried to get it right, tbc.)
Note that L was the only example in your list which was specifically related to EA. I believe that that accusation was false. See here for previous discussion.
Well-known EA sympathizer Richard Hanania writes about his donation to the Shrimp Welfare Project.