If someone is interested in (3), they’ll hopefully understand that a lot of things that are pressing problems today will either no longer matter in 1-20 years (because we’re all dead) or be more easily solvable with the help of aligned, powerful AIs.
Actually, I think that people’s thinking about AI has become somewhat broken, and I’m starting to see the same dark-side epistemology that gave us HBD-denialism seep into the AI community.
But, take a step back.
Suppose you have a community of biologists who all believe in Lysenkoism. Then, despite their repeated failures to improve crop yields, the country is bailed out by a large external food source.
Would you be willing to overlook their belief in Lysenkoism and have these people start working on cancer biology, aging and other areas?
Or, look at another example. You ask a self-professed mathematician whether he thinks that all continuous functions are differentiable. He says they are, and that it’s so obvious it requires no proof. Do you trust this mathematician to generally provide good advice?
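For readers outside math, a quick note: the claim is false, and the standard counterexample is the absolute-value function.

```latex
% f(x) = |x| is continuous on all of R but not differentiable at x = 0,
% because the one-sided difference quotients disagree:
\[
  \lim_{h \to 0^{+}} \frac{|0 + h| - |0|}{h} = 1,
  \qquad
  \lim_{h \to 0^{-}} \frac{|0 + h| - |0|}{h} = -1 .
\]
% Weierstrass-type functions go further: continuous everywhere, differentiable nowhere.
```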
My point is that process matters in science and epistemology. You can’t sweep the bad process of creationists under the carpet and expect them to continue to produce good results on other issues. Their process is broken.
I made the following edit to my comment upthread:
[Edit: To be clear, by “HBD crowd” I don’t mean people who believe and say things like “intelligence is heritable” or “embryo selection towards smarter babies seems potentially very good if implemented well.” I thought this was obvious, but someone pointed out that people might file different claims under the umbrella “HBD”.]
I’m not sure this changes anything about your response, but my perspective is that a policy of “let’s not get obsessed over mapping out all possible group differences and whether they’re genetic” is (by itself) unlikely to start us down a slippery slope that ends in something like Lysenkoism.
For illustration, I feel like my social environment has tons of people with whom you can have reasonable discussions about e.g., applications of embryo selection, but they mostly don’t want to associate with people who talk about IQ differences between groups a whole lot and act like it’s a big deal if true. So, it seems like these things are easy to keep separate (at least in some environments).
Also, I personally think the best way to make any sort of dialogue saner is NOT by picking the most controversial true thing you can think of, and then announcing with loudspeakers that you’re ready to die on that hill. (In a way, that sort of behavior would even send an “untrue” [in a “misdirection” sense discussed here] signal: Usually people die on hills that are worthy causes. So, if you’re sending the signal “group differences discourse is worth dying over,” you’re implicitly signalling that this is an important topic. But, as I argued, I don’t think it is, and creating an aura of it being important is part of what I find objectionable and where I think the label “racist” can be appropriate, if that’s the sort of motivation that draws people to these topics. So, even in terms of wanting to convey true things, I think it would be a failure of prioritization to focus on this swamp of topics.)
> “group differences discourse is worth dying over,” … implicitly signalling that this is an important topic. But, as I argued, I don’t think it is
Human group differences are probably the most important topic in the world outside of AI/Singularity. The reason people are so keen to censor the topic is that it is important.
> I personally think the best way to make any sort of dialogue saner is NOT by picking the most controversial true thing you can think of
Making dialogue saner is a nice goal, but people can unilaterally make dialogue insane by demanding that a topic be banned, or that certain opinions are immoral, etc.
Personally, I would trust an AI researcher even if they weren’t racist.

Would you trust an AI alignment researcher who supported Lysenkoism in the era when it was popular in the Soviet Union?

[Not trying to take a position on the whole issue at hand in this post.] I think I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical-seeming one who didn’t. And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists. Claims offered without justification, mostly because I find it helpful to articulate my beliefs aloud for myself:
- I don’t think people generally having correct beliefs on irrelevant social issues is very correlated with having correct beliefs in their area of expertise.
- I think in most cases, having unpopular and unconventional beliefs is wrong (most contrarians are not correct contrarians).
- A bunch of unpopular and unconventional things are true, so to be maximally correct you have to be a correct contrarian.
- Some people aren’t really able to entertain unpopular and unconventional ideas at all, which is very anticorrelated with the ability to have important insights and make huge contributions to a field.
- But lots of people have a very domain-specific ability to have unpopular and unconventional ideas while not having/not trusting/not saying those ideas in other domains.
- A large subset of the above are both top-tier in terms of ground-breaking insights in their domain of expertise and put off by groups that are maximally open to unpopular and unconventional beliefs (which are often shitty and costly to associate with).
- I think people who are top-tier in terms of ability to have ground-breaking insights in their domain disproportionately like discussing unpopular and unconventional beliefs from many different domains, but I don’t know if, among people who are top-tier in terms of ground-breaking insights in a given domain, the majority prefer to be in more or less domain-agnostically-edgy crowds.
> And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists.
I think this is kind of funny, because I (directionally) agree with a lot of your list, at least within the observed range of human cognitive ability, but think that strong decoupling norms are mostly agnostic to questions like trusting AI researchers who supported Lysenkoism when it was popular. Of course it’s informative that they did so, but it can be substantially screened off by examining the quality of their current research (and, if you must, its relationship to whatever the dominant paradigms in the current field are).
> I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical-seeming one who didn’t.
How far are you willing to go with this?
What about a researcher who genuinely believes all the most popular political positions:
- gender is a social construct and you can change your gender just by identifying differently
- affirmative action and diversity makes companies and teams more efficient and this is a solid scientific fact
- there are no heritable differences in cognitive traits, all cognitive differences in humans are 100% exclusively the result of environment
- we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits
- there’s a climate emergency that will definitely kill most people on the planet in the next 10 years if we don’t immediately change our lifestyles to consume less energy
- all drugs should be legal and ideally available for free from the state
Do you think that people who genuinely believe these things will create an intellectual environment that is conducive to solving hard problems?
Two points:

(1) I don’t think “we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits” or “all drugs should be legal and ideally available for free from the state” are the most popular political positions in the US, nor close to them, even for D-voters.
(2) Your original question was about supporting things (e.g., Lysenkoism) and publicly associating with them, not about what researchers “genuinely believe.”
But yes, per my earlier point: if you told me, for example, “there are three new researchers with PhDs from the same prestigious university in [a field unrelated to any of the above positions, let’s say bacteriology]; the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will improve the odds of their lab making a bacteriology-related breakthrough the most,” I would say the difference between them is small, i.e., these differences are only weakly correlated with the odds of their lab making a breakthrough and don’t have much explanatory power. And, assuming you meant “support” rather than “genuinely believe,” and cutting the two bullets I claim aren’t even majority positions among, for example, D-voters, I’d guess B > A > C, but barely.
Unfortunately, I think that a subject like bacteriology is more resistant to bad epistemics than something like AI alignment or effective altruism.
And I think this critique sort of generalizes into a broader critique of EA: if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
But if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies, because correct strategies for improving the world tend to require an accurate world model, including being accurate about things that are controversial.
Communism is perhaps the prime example of this from the 20th century: many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
HBD denial (race communism) is the communism of the present.
I agree with

> if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
and
> if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies, because correct strategies for improving the world tend to require an accurate world model, including being accurate about things that are controversial.
and
> many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than e.g. “finding EA cause X” or “thinking about whether newly invented systems of government will work well”. This seems more true if I imagine my AI alignment researcher as someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic, more likely to require correct contrarianism in a variety of domains. But I think most AI researchers are more in the former category, and will be increasingly so.
If you mean some parts of technical alignment, then perhaps that’s true, but I mean alignment in the broad sense of creating good outcomes.
> AI researchers are more in the former category
Yeah, but people in the former category don’t matter much in terms of outcomes. Making a better sparse autoencoder won’t change the world at the margin, just like technocrats working to make Soviet central planning better ultimately didn’t change the world, because they were making incremental progress in the wrong direction.