Does EA get the “best” people? Hypotheses + call for discussion
My mental model of the rationality community (and, thus, some of EA) is “lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides.”
Given this, I’m pessimistic that, in our current setup, we’re able to attract the absolute “best and brightest and also most ethical and also most epistemically rigorous people” that exist on Earth.
Ignoring for a moment that it’s just hard to find people with all of those qualities combined… what about finding people who are actually top-percentile in any one of those things?
The most “ethical” (like professional-ethics, personal integrity, not “actually creates the most good consequences”) people are probably doing some cached thing like “non-corrupt official” or “religious leader” or “activist”.
The most “bright” (like raw intelligence/cleverness/working-memory) people are probably doing some typical thing like “quantum physicist” or “galaxy-brained mathematician”.
The most “epistemically rigorous” people are writing blog posts, which may or may not even make enough money for them to do that full-time. If they’re not already part of the broader “community” (including forecasters and I guess some real-money traders), they might be an analyst tucked away in government or academia.
A broader problem might be something like: promote EA --> some people join it --> the other competent people think “ah, EA has all those weird problems handled, so I can keep doing my normal job” --> EA doesn’t get the best and brightest.
(This was originally a comment, but I think it deserves more in-depth discussion.)
EA’s meta-strategy is ‘simp for tech billionaires and AI companies’. EA systematically attracts people who enjoy this strategy. So no, it does not attract the best. Maybe a version of EA with more integrity would attract the best people.
I think this is entirely legitimate criticism. It’s not at all clear to me that the net impact of Effective Altruism, from end to end, has even been positive. And if it has been negative, it has been negative BECAUSE of the impact the movement has had on AI timelines.
This should prompt FAR more reflection than I have seen within the community. People should be racking their brains for what went wrong and crying mea culpa. And working for OpenAI/Anthropic/etc/etc should not be seen as “effective”. (Well, maybe now it’s okay. Cat’s out of the bag. But certainly being an AI capabilities researcher in 2020 did a lot of harm.)
As far as I can tell, the “Don’t Build the Torment Nexus” community went ahead and built the Torment Nexus because it was both intellectually interesting and a path for individuals to acquire more power. Oops.
And to be clear, the harms from the FTX debacle or the sexual abuse scandals pale in comparison, in my mind at least, to this. And that is not in any way a trivialization of either of those harms, both of which were also pretty severe. “Accelerate AI timelines” is just that bad.
We have a higher bar for taking moderation action against criticism, but considering that sapphire was warned two days ago, we have decided to ban sapphire for one month for breaking forum norms multiple times.
I strongly, strongly, strongly disagree with this decision.
Per my own values and style of communication, I think that welcoming people like sapphire or Sabs, who a) are or can be intensely disagreeable, and b) have points worth sharing and processing, is strongly worth doing, even if c) they make other people uncomfortable, and d) they occasionally misfire, and even if they are wrong most of the time, as long as the expected value of the stuff they say remains high.
In particular, I think that doing so is good for arriving at correct beliefs and for becoming stronger, which I value a whole lot. It is the kind of communication we use in my forecasting group, where the goal is to arrive at correct beliefs.
I understand that the EA Forum moderators may have different values, and that they may want to make the forum a less spiky place. Know that this has the predictable consequence of losing a Nuño, and it is part of the reason why I’ve bothered to create a blog and add comments to it in a way which I expect to be fairly uncensorable[1].
Separately, I do think it is the case that EA “simps” for tech billionaires[2]. An answer I would have preferred to see would be a steelmanning of why that is good, or an argument for why it isn’t the case.
[1] Uncensorable by others: I am hosting the blog on top of njal.la and the comments on my own servers. Not uncensorable by me; I can and will censor stuff that I think is low value by my own utilitarian/consequentialist lights.
[2] Less sure about AI companies, but you could also make the case; e.g., 80,000 Hours does recommend positions at OpenAI (<https://jobs.80000hours.org/?query=OpenAI>).
I’m conflicted on this: on the one hand, I agree that it’s worth listening to people who aren’t skilled at politeness or aren’t putting enough effort into it. On the other hand, I think someone like sapphire is capable of communicating the same information in a more polite way, and a ban incentivizes people to put more effort into politeness, which will make the community nicer.
Yeah, you also see this with criticism: for any given piece of criticism, you could put more effort into it and make it more effective. But having that as a standard (even as a personal one) means that criticism will happen less.
So I don’t think we disagree on the fact that there is a demand curve? Maybe we disagree in that I want more sapphires and less politeness, on the margin?
The mods can’t realistically call different strike zones based on whether or not “expected value of the stuff [a poster says] remains high.” Not only does that make them look non-impartial, it actually is non-impartial.
Plus, warnings and bans are the primary methods by which the mods give substance to the floor of what forum norms require. That educative function requires a fairly consistent floor. If a comment doesn’t draw a warning, it’s at least a weak signal that the comment doesn’t cross the line.
I do think a history of positive contributions is relevant to the sanction.
Can you say which norms the comment in question breaks? It was not clear to me even after reading the comment and looking at the forum norms again.
Sorry for the late reply.
The comment was unnecessarily rude and antagonistic — it didn’t meet the minimum bar for civility. (See the Forum norm “Stay civil, at the minimum”.)
In isolation, this comment is a mild norm violation. But having a lot of mildly-bad (unnecessarily antagonistic) comments often corrodes the quality of Forum discourse more than a single terrible comment.
It’s hard to know how to respond to someone who seems to have a pattern of posting such comments. There’s often no “smoking gun” comment that clearly deserves a ban. That’s why we have our current setup — we generally give warnings and then proceed to bans if nothing changes.
I think we’ve not been responding to cases like this enough, recently. At the same time, I wish we could figure out a more collaborative approach than our current one, and it’s possible that a 1-month ban was too long; we’re discussing it in the moderation team.
(Note: some parts of this comment, as with some other comments that moderators post, were written by other moderators, but I personally believe what I’m posting. This seems worth flagging, given that I’m sharing these opinions as my own. I don’t know if all the people on the moderation team agree with everything as I put it here.)
The meaning of “simp” differs from place to place, but it’s not particularly civil, and it decidedly isn’t in this context. I support a suspension in light of the recent warning, but given the dissimilar type of violation, maybe a week or two would have been sufficient.
https://www.cnn.com/2021/02/19/health/what-is-simp-teen-slang-wellness/index.html