Personal feelings:
I thought Karnofsky was one of the good ones! He has opinions on AI safety, and I agree with most of them! Nooooooooooo!
Object-level:
My mental model of the rationality community (and, thus, some of EA) is “lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides.”
Given this, I’m pessimistic that, in our current setup, we’re able to attract the absolute “best and brightest and also most ethical and also most epistemically rigorous people” that exist on Earth.
Ignoring for a moment that it’s just hard to find people with all of those qualities combined… what about finding people who are actually top-percentile in any one of those things?
The most “ethical” (like professional ethics and personal integrity, not “actually creates the most good consequences”) people are probably doing some cached thing like “non-corrupt official” or “religious leader” or “activist”.
The most “bright” (like raw intelligence/cleverness/working-memory) people are probably doing some typical thing like “quantum physicist” or “galaxy-brained mathematician”.
The most “epistemically rigorous” people are writing blog posts, which may or may not even make enough money for them to do that full-time. If they’re not already part of the broader “community” (including forecasters and I guess some real-money traders), they might be analysts tucked away in government or academia.
A broader problem might be something like: promote EA --> some people join it --> other competent people think “ah, EA has all those weird problems handled, so I can keep doing my normal job” --> EA doesn’t get the best and brightest.
I think a common maladaptive pattern is to assume that the rationality community and/or EA is unusually good at “increasing our rationality, comprehending big problems”, and I really, really, really doubt that “the most ‘epistemically rigorous’ people are writing blog posts”.