[not trying to take a position on the whole issue at hand in this post] I think I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical-seeming one who didn’t. And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists. Claims offered without justification, mostly because I find it helpful to articulate my beliefs out loud for myself:
I don’t think people generally having correct beliefs on irrelevant social issues is very correlated with having correct beliefs in their area of expertise
I think in most cases, having unpopular and unconventional beliefs is wrong (most contrarians are not correct contrarians)
A bunch of unpopular and unconventional things are true, so to be maximally correct you have to be a correct contrarian
Some people aren’t really able to entertain unpopular and unconventional ideas at all, which is very anticorrelated with the ability to have important insights and make huge contributions to a field
But lots of people have very domain-specific ability to have unpopular and unconventional ideas while not having/not trusting/not saying those ideas in other domains.
A large subset of the above are both top-tier in terms of ground-breaking insights in their domain of expertise, and put off by groups that are maximally open to unpopular and unconventional beliefs (which are often shitty and costly to associate with)
I think people who are top-tier in terms of ability to have ground-breaking insights in their domain disproportionately like discussing unpopular and unconventional beliefs from many different domains, but I don’t know if, among people who are top-tier in terms of ground-breaking insights in a given domain, the majority prefer to be in more or less domain-agnostically-edgy crowds.
> And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists.
I think this is kind of funny because I (directionally) agree with a lot of your list, at least within the observed range of human cognitive ability, but I think that strong decoupling norms are mostly agnostic to questions like trusting AI researchers who supported Lysenkoism when it was popular. Of course it’s informative that they did so, but it can be substantially screened off by examining the quality of their current research (and, if you must, its relationship to whatever the dominant paradigms in the current field are).
> I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical-seeming one who didn’t.
How far are you willing to go with this?
What about a researcher who genuinely believes all the most popular political positions:
- gender is a social construct and you can change your gender just by identifying differently
- affirmative action and diversity makes companies and teams more efficient and this is a solid scientific fact
- there are no heritable differences in cognitive traits, all cognitive differences in humans are 100% exclusively the result of environment
- we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits
- there’s a climate emergency that will definitely kill most people on the planet in the next 10 years if we don’t immediately change our lifestyles to consume less energy
- all drugs should be legal and ideally available for free from the state
Do you think that people who genuinely believe these things will create an intellectual environment that is conducive to solving hard problems?
Two points:
(1) I don’t think “we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits” or “all drugs should be legal and ideally available for free from the state” are the most popular political positions in the US, or even close to it, even among D-voters.
(2) Your original question was about supporting things (e.g. Lysenkoism) and publicly associating with them, not about what they “genuinely believe”.
But yes, per my earlier point: suppose you told me, for example, “there are three new researchers with PhDs from the same prestigious university in [a field unrelated to any of the above positions, let’s say bacteriology]; the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will improve the odds of their lab making a bacteriology-related breakthrough the most.” I would say the difference between them is small, i.e. these differences are only weakly correlated with the odds of their lab making a breakthrough and don’t have much explanatory power. And, assuming you meant “support” rather than “genuinely believe”, and cutting the two bullets I claim aren’t even majority positions among, for example, D-voters, I’d say B > A > C, but barely.
Unfortunately, I think a subject like bacteriology is more resistant to bad epistemics than something like AI alignment or effective altruism.
And I think this critique generalizes to a fairly broad critique of EA: if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
But if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies, because correct strategies for improving the world tend to require an accurate world model, including being accurate about things that are controversial.
Communism is perhaps the prime example of this from the 20th century: many people seriously believed that communism was good, and they believed it so strongly that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
HBD denial (race communism) is the communism of the present.
I agree with

> if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
and
> if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies, because correct strategies for improving the world tend to require an accurate world model, including being accurate about things that are controversial.
and
> many people seriously believed that communism was good, and they believed it so strongly that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than e.g. “finding EA cause X” or “thinking about whether newly invented systems of government will work well”. This seems more true if I imagine for my AI alignment researcher someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic, more likely to require correct contrarianism in a variety of domains. But I think most AI researchers are more in the former category, and will be increasingly so.
If you mean some parts of technical alignment, then perhaps that’s true, but I mean alignment in the broad sense of creating good outcomes.
> AI researchers are more in the former category
Yeah, but people in the former category don’t matter much in terms of outcomes. Making a better sparse autoencoder won’t change the world at the margin, just like technocrats working to make Soviet central planning better ultimately didn’t change the world, because they were making incremental progress in the wrong direction.