If the Effective Altruism movement turns into a human biodiversity denial movement, the harm from that will definitely outstrip all the potential good it could do, with the possible exception of AI alignment.
Human biodiversity and human capital are the hidden variables that drive almost all outcomes in our world. This is not really something you can adopt beliefs about in an unrigorous, signaling-driven manner and expect to be right about the world. I would encourage people to read about Lysenkoism to see what happens when ideology overrides science as the basis for epistemology.
https://en.wikipedia.org/wiki/Lysenkoism
I’m personally very turned off by the HBD crowd.
[Edit: To be clear, by “HBD crowd” I don’t mean people who believe and say things like “intelligence is heritable” or “embryo selection towards smarter babies seems potentially very good if implemented well.” I thought this was obvious, but someone pointed out that people might file different claims under the umbrella “HBD”.]
For me, it’s not necessarily because I think they’re wrong about most of the factual claims they’re making.
Instead, I’m turned off by the attitude that these are important questions to focus intellectual pursuits on. The existence and origin of group differences seem to me obviously not of great practical importance, so when people obsess over this, I’m suspicious that it’s coming either from a place of edginess/wanting to feel superior to those who “cannot face the truth”, or (worse) a darker place of entitlement and wanting to externalize bad feelings about one’s own life by blaming some outgroup that has received “undeserved” support.
When thinking about how to make the world better for humans (excluding non-human animals for the moment), I see basically three major cause areas (very simplified):
(1) Evidence-based, immediate-outcome-focused interventions that improve things on some legible metric, like school attendance, medicines successfully administered, etc.
(2) Longer-term structural reform via politics.
(3) Focusing on technological breakthroughs and risks that either improve or worsen things for everyone.
If someone is interested in (1), HBD doesn’t change anything about evidence-based progress on legible metrics. We’d continue to want to support evidence-based interventions in all kinds of contexts that make things better for individuals on some concrete variables. (The focus on evidence-based metrics is great because it helps us sideline a lot of politics-inspired storytelling that turns out to be wrong, such as the claim that poor people will make poor choices if you give them money [GiveDirectly example].)
If someone is interested in (3), they’ll hopefully understand that a lot of things that are pressing problems today will either no longer matter in 1-20 years because we’re all dead, or they’ll be more easily solvable with the help of aligned powerful AIs and radical technologically-aided re-structuring of society.
Lastly, if someone is interested in (2), then good luck: It seems like the EA community has failed to find convincing interventions in this area. If you know of some intervention that would be extremely cost-effective, where beliefs about HBD that you consider false are the only crux standing in the way of us doing it, then that would sound interesting to talk about. But this isn’t the case, is it? I think structural reform is intrinsically hard.
I can see how HBD questions might have some tangential relevance for policy reform, but emphasis on tangential, and I also think that we’re so far away from doing sensible things under (2) that this seems unlikely to be an important crux. (Also, if I were to prioritize something in this space, it would be meta-level interventions like improving the news landscape.)
In this context of structural reform, I should flag that I’m also very much against wokeism, and I agree that there are parallels to Lysenkoism. But I don’t think “being against wokeism” implies “we should be interested in HBD questions.” In fact, I think I am against both of these for related reasons. I think it’s often not productive to view everything in terms of “group vs group.” I think we should spend resources on causes where we can point to concrete benefits for individuals, no matter their group. There’s so much to do on that front already that other things feel like a bit of a distraction, both in general and especially when considering the mind-killing effects of political controversies.
So, to summarize, your comment about HBD being important seems very wrong to me.
Edit: I guess a steelman of your point is that you’re not necessarily saying HBD is in itself important, you’re just saying it would be bad to actively deny it (presumably because this would lend momentum to wokeism or new types of Lysenkoism). I have more sympathy for that, but the way I see it, it’s more like maybe HBD and wokeism are two sides of a toxic dynamic where it would be better if we could get back to other concerns.
If someone is interested in (3), they’ll hopefully understand that a lot of things that are pressing problems today will either no longer matter in 1-20 years because we’re all dead, or they’ll be more easily solvable with the help of aligned powerful AIs
Actually I think that people’s thinking about AI has become somewhat broken and I’m starting to see the same dark-side epistemology that gave us HBD-denialism seep into the AI community.
But, take a step back.
Suppose you have a community of biologists who all believe in Lysenkoism. Then, despite their repeated failures to improve crop yields, the country is bailed out by a large external food source.
Would you be willing to overlook their belief in Lysenkoism and have these people start working on cancer biology, aging and other areas?
Or, look at another example. You ask a self-professed mathematician whether he thinks that all continuous functions are differentiable. He says they are, and that it’s so obvious it requires no proof. Do you trust this mathematician to generally provide good advice?
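For concreteness, one standard counterexample (spelled out here as an illustration, not taken from the comment above) is the absolute value function, which is continuous on the whole real line but not differentiable at zero:

```latex
% f(x) = |x| is continuous everywhere, but its one-sided difference
% quotients at 0 disagree, so f'(0) does not exist.
\[
  f(x) = |x|, \qquad
  \lim_{h \to 0^{+}} \frac{f(h) - f(0)}{h} = 1, \qquad
  \lim_{h \to 0^{-}} \frac{f(h) - f(0)}{h} = -1 .
\]
% Stronger counterexamples (e.g. the Weierstrass function) are continuous
% everywhere yet differentiable nowhere.
```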
My point is that process matters in science and epistemology. You can’t sweep the bad process of creationists under the carpet and expect them to continue to produce good results on other issues. Their process is broken.
I made the following edit to my comment above-thread:
[Edit: To be clear, by “HBD crowd” I don’t mean people who believe and say things like “intelligence is heritable” or “embryo selection towards smarter babies seems potentially very good if implemented well.” I thought this was obvious, but someone pointed out that people might file different claims under the umbrella “HBD”.]
I’m not sure this changes anything about your response, but my perspective is that a policy of “let’s not get obsessed over mapping out all possible group differences and whether they’re genetic” is (by itself) unlikely to start us down a slippery slope that ends in something like Lysenkoism.
For illustration, I feel like my social environment has tons of people with whom you can have reasonable discussions about e.g., applications of embryo selection, but they mostly don’t want to associate with people who talk about IQ differences between groups a whole lot and act like it’s a big deal if true. So, it seems like these things are easy to keep separate (at least in some environments).
Also, I personally think the best way to make any sort of dialogue saner is NOT by picking the most controversial true thing you can think of, and then announcing with loudspeakers that you’re ready to die on that hill. (In a way, that sort of behavior would even send an “untrue” [in a “misdirection” sense discussed here] signal: Usually people die on hills that are worthy causes. So, if you’re sending the signal “group differences discourse is worth dying over,” you’re implicitly signalling that this is an important topic. But, as I argued, I don’t think it is, and creating an aura of it being important is part of what I find objectionable and where I think the label “racist” can be appropriate, if that’s the sort of motivation that draws people to these topics. So, even in terms of wanting to convey true things, I think it would be a failure of prioritization to focus on this swamp of topics.)
“group differences discourse is worth dying over,” … implicitly signalling that this is an important topic. But, as I argued, I don’t think it is
Human group differences are probably the most important topic in the world outside of AI/Singularity. The reason people are so keen to censor the topic is that it is important.
I personally think the best way to make any sort of dialogue saner is NOT by picking the most controversial true thing you can think of
Making dialogue saner is a nice goal, but people can unilaterally make dialogue insane by demanding that a topic be banned, or that certain opinions are immoral, etc.
Personally, I would trust an AI researcher even if they weren’t racist.
Would you trust an AI alignment researcher who supported Lysenkoism in the era when it was popular in the Soviet Union?
[not trying to take a position on the whole issue at hand in this post] I think I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical seeming one who didn’t. And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists. Claims without justification, mostly because I find it helpful to articulate my beliefs aloud for myself:
- I don’t think people generally having correct beliefs on irrelevant social issues is very correlated with having correct beliefs in their area of expertise.
- I think in most cases, having unpopular and unconventional beliefs is wrong (most contrarians are not correct contrarians).
- A bunch of unpopular and unconventional things are true, so to be maximally correct you have to be a correct contrarian.
- Some people aren’t really able to entertain unpopular and unconventional ideas at all, which is very anticorrelated with the ability to have important insights and make huge contributions to a field.
- But lots of people have a very domain-specific ability to have unpopular and unconventional ideas while not having/not trusting/not saying those ideas in other domains.
- A large subset of the above are both top-tier in terms of ground-breaking insights in their domain of expertise and put off by groups that are maximally open to unpopular and unconventional beliefs (which are often shitty and costly to associate with).
- I think people who are top-tier in terms of ability to have ground-breaking insights in their domain disproportionately like discussing unpopular and unconventional beliefs from many different domains, but I don’t know if, among people who are top-tier in terms of ground-breaking insights in a given domain, the majority prefer to be in more or less domain-agnostically-edgy crowds.
And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists.
I think this is kind of funny because I (directionally) agree with a lot of your list, at least within the observed range of human cognitive ability, but I think that strong decoupling norms are mostly agnostic to questions like trusting AI researchers who supported Lysenkoism when it was popular. Of course it’s informative that they did so, but that information can be substantially screened off by examining the quality of their current research (and, if you must, its relationship to whatever the dominant paradigms in the current field are).
I would trust an AI alignment researcher who supported Lysenkoism almost as much as an otherwise-identical seeming one who didn’t.
How far are you willing to go with this?
What about a researcher who genuinely believes all the most popular political positions:
- gender is a social construct and you can change your gender just by identifying differently
- affirmative action and diversity makes companies and teams more efficient and this is a solid scientific fact
- there are no heritable differences in cognitive traits, all cognitive differences in humans are 100% exclusively the result of environment
- we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits
- there’s a climate emergency that will definitely kill most people on the planet in the next 10 years if we don’t immediately change our lifestyles to consume less energy
- all drugs should be legal and ideally available for free from the state
Do you think that people who genuinely believe these things will create an intellectual environment that is conducive to solving hard problems?
Two points:
(1) I don’t think “we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits” or “all drugs should be legal and ideally available for free from the state” are the most popular political positions in the US, nor close to them, even for D-voters.
(2) Your original question was about supporting things (e.g., Lysenkoism) and publicly associating with them, not about what researchers “genuinely believe”.
But yes, per my earlier point: if you told me, for example, “there are three new researchers with PhDs from the same prestigious university in [a field unrelated to any of the above positions, let’s say bacteriology]; the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will most improve the odds of their lab making a bacteriology-related breakthrough,” I would say the difference between them is small, i.e., these differences are only weakly correlated with the odds of their lab making a breakthrough and don’t have much explanatory power. And, assuming you meant “support” rather than “genuinely believe,” and cutting the two bullets I claim aren’t even majority positions among, for example, D-voters: B > A > C, but barely.
Unfortunately, I think that a subject like bacteriology is more resistant to bad epistemics than something like AI alignment or effective altruism.
And I think this critique just sort of generalizes to a fairly general critique of EA: if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
But if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies because correct strategies for improving the world tend to require an accurate world model including being accurate about things that are controversial.
Communism is perhaps the prime example of this from the 20th century: many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
HBD denial (race communism) is the communism of the present.
I agree with
if you want to make a widget that’s 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work.
and
if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies because correct strategies for improving the world tend to require an accurate world model including being accurate about things that are controversial.
and
many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.
A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than e.g. “finding EA cause X” or “thinking about whether newly invented systems of government will work well”. This seems more true if I imagine for my AI alignment researcher someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic more likely to require correct contrarianism in a variety of domains. But I think most AI researchers are more in the former category, and will be increasingly so.
If you mean some parts of technical alignment, then perhaps that’s true, but I mean alignment in the broad sense of creating good outcomes.
> AI researchers are more in the former category
Yeah, but people in the former category don’t matter much in terms of outcomes. Making a better sparse autoencoder won’t change the world at the margin, just like technocrats working to make Soviet central planning better ultimately didn’t change the world, because they were making incremental progress in the wrong direction.
it’s more like maybe HBD and wokeism are two sides of a toxic dynamic where it would be better if we could get back to other concerns.
Unfortunately, this “toxic dynamic” is also known as truth-based versus consequences-based epistemology, and it is a dynamic that you absolutely cannot escape, because the ability to alter social consensus beliefs for the benefit of special interests is just a generic problem. It will also pop up in the AI debate.
How is HBD action-relevant for EA in a pre-AGI world? Do you think getting people to accept HBD is one of the top 50 interventions for making progress on AI safety and governance?
How is HBD action-relevant for EA in a pre-AGI world?
I don’t think it is, because I think AI will replace humans in all economic roles within 5-15 years. But I think the same dark-side intellectual tactics that gave rise to HBD-denialism will contaminate our thinking about AI, just in different ways.