Could you elaborate on why you're so quick to associate racism with truth-seekingness? You're at least the third person to do so in this discussion, and I think this demands an explanation. What's the relationship between the two? Have you investigated racist assertions and concluded they are truthful?
Here's where I see this association coming from. People vary in many ways, some directly visible (height, facial structure, speed, melanin) and some less so (compassion, facility with mathematics, creativity, musicality). Most directly visible ones clearly have a genetic component: you can see the differences between populations, cross-group adoptees are visibly much more similar to their birth parents than their adoptive parents, etc. With the non-visible variation it's harder to tell how much is genetic, but evidence from situations like twins raised apart tells us that some is.
Getting closer to the edge, it's likely that there are population-level genetic differences on non-visible traits: different populations have been under different selection pressures in ways that impacted visible traits, and it would be surprising if these pressures didn't impact non-visible traits. One could go looking into this, try to figure out what is actually true, and if so what those differences are. If I did this I might find that some common racist stereotypes are backed up by reality, or I might find that they were not. Since by my values and temperament I would need to talk about what I found, whichever direction it was, and I don't see much value in learning these answers, I'm not going to look into this. A general commitment to seeking truth doesn't obligate one to investigate every possible question. I think a lot of people reason this way about low-payoff controversial areas and avoid them.
Say someone does value seeking truth so highly that they're willing to go into these areas despite the risk of social censure should they end up with politically difficult beliefs. If they encounter strong evidence that this aspect of reality has seriously unfortunate implications, they have two main options: delude themselves into thinking reality is otherwise, or accept reality and with it the implications. The same good epistemic norms we need elsewhere for handling a messy world mean that if someone really does find themselves in that situation, I think they should bite the bullet and do the latter.
Of course someone can also end up with racist beliefs through garden-variety stereotyping, close-mindedness, and bigotry. Since these are relatively common, most people saying racist things didn't get there via an unusually strong commitment to seeking truth regardless of the social consequences. And even someone who has a scientific-sounding justification for their claims may have done a poor job of finding out what's really true (or not even attempted to), instead poking through some papers and ending up with their initial stereotypes strengthened through confirmation bias. So I think it's generally incorrect to go from learning that a person has racist beliefs to increasing your sense of how truth-seeking they are, though it may still make sense if (a) your priors on reality being unfortunate here are high enough and (b) you know enough other things about this person that this path seems much more likely than the more common path.
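The (a)/(b) caveat is just Bayes' rule over two explanatory paths. Here is a toy numeric sketch of my own; every probability in it is an illustrative assumption, not an estimate of anything real:

```python
# Toy Bayes update: does observing a racist belief raise P(unusually truth-seeking)?
# All numbers below are invented for illustration.

def posterior_truth_seeking(prior_ts, p_belief_given_ts, p_belief_given_common):
    """P(truth-seeking | belief) via Bayes' rule over two explanatory paths:
    an unusually truth-seeking person vs. the common path (stereotyping etc.)."""
    joint_ts = prior_ts * p_belief_given_ts
    joint_common = (1 - prior_ts) * p_belief_given_common
    return joint_ts / (joint_ts + joint_common)

# Default picture: truth-seekers rarely end up with the belief (reality is
# probably not unfortunate), while the common path produces it often.
# The posterior actually drops below the 5% prior.
low = posterior_truth_seeking(prior_ts=0.05, p_belief_given_ts=0.05,
                              p_belief_given_common=0.15)

# Conditions (a) and (b): a high prior that reality is unfortunate raises the
# truth-seeking likelihood, and knowing the person raises the prior and lowers
# the common-path likelihood. Now the posterior is large.
high = posterior_truth_seeking(prior_ts=0.5, p_belief_given_ts=0.6,
                               p_belief_given_common=0.05)

print(round(low, 3), round(high, 3))
```

Under the first set of assumptions the observation is evidence against truth-seeking; only under both (a) and (b) does it flip.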
I love this comment, it really helped me think about this.
To explore a little more, I had a small issue with this sentiment:
"Since by my values and temperament I would need to talk about what I found, whichever direction it was, and I don't see much value in learning these answers, I'm not going to look into this. A general commitment to seeking truth doesn't obligate one to investigate every possible question. I think a lot of people reason this way about low-payoff controversial areas and avoid them."
I completely agree with this as a guiding principle, and think it should probably usually be the default option for most people. "A general commitment to seeking truth doesn't obligate one to investigate every possible question."
I think, however, that sticking to talking about every truth we find may not be a good idea, and I would bet you probably don't actually talk about every uncomfortable finding you have come across. "Since by my values and temperament I would need to talk about what I found, whichever direction it was"
I get the general principle of talking about what we discover along the way rather than staying quiet, but I think there can be exceptions. If we do stumble across meaningful uncomfortable outcomes, whether through our own research or on the internet or whatever, I think the best option might be to avoid talking about the issue at all. I'm not sure we ever "need" to talk about a research finding.
I agree with this statement ("they have two main options: delude themselves into thinking reality is otherwise or accept reality and with it the implications") but think that in some cases we can accept reality and still choose not to talk about it, or even think about it very much, especially if talking about it is unlikely to lead to any helpful outcome.
I think the world in general is extremely unfair and there are quite a number of "unfortunate" and awkward truths even outside the realm of genetics, some of which might be best to avoid talking about.
You're right that I don't have to talk about everything that I find. To take an uncontroversial example, if in my day job I find an easy way to make a bioweapon, I'm not going to blog how to do that.
But if you're not going to talk about it if you conclude X, are you also not going to talk about it if you conclude not-X? If not, then you're making it much harder for other people to figure out what is true (more).
I feel one is always allowed not to speak about what they don't want to, but that if one does decide to speak about something, they should never make a statement they know is a lie. This is sad, because depending on the issue and how it relates to your career and other stuff, you might not be able to just keep quiet, and besides, your silence is going to be interpreted uncharitably. People who have consistently shown that they value and practice truth-telling should be allowed some sort of leeway, like "I will only answer n randomly chosen questions today (n also randomized) and you are not entitled to press further on anything I don't answer".
>If we do stumble across meaningful uncomfortable outcomes, whether through our own research or on the internet or whatever, I think the best option might be to avoid talking about the issue at all.
You can't ignore reality this selectively and expect reasonable outcomes. If I have two health problems, but I'm only allowed to treat one because the other is socially unacceptable, the other will get worse and worse. To be clear: I think there's little value in discussing the whole genetic thing. But I think most people outraged by it are ignoring why it comes up.
If you want to avoid talking about the issue, then you have to move that removal up a level. So we refuse to consider that there are racial differences in genetics: okay, then you need to move that up a level, and racial differences in anything become unacceptable topics. No more concern about statistical differences in, say, homeownership, graduation rates, or crime rates. Making certain causes verboten means the symptoms cannot be properly addressed either.
I believe this kind of absolute and strict colorblindness would be an improvement for society. But I suspect that most of the people complaining about Hanania would not agree.
For what it's worth, I find Hanania an irritating troll and I don't get the appeal to the Manifest crowd, except in the most cynical manner: that he's a right-winger who mostly shits on other right-wingers. A sort of guilty indulgence, like a comedian who makes jokes mostly about people you already don't like.
This isn't directly responsive to your comment, but I've gone to that particular edge of the map and poked around a bit. I think people who avoid looking into the question for the above reason typically sound like they expect that there may plausibly be dragons. This is a PSA that I saw no dragons, so the reader should consider the dragons less plausible.
There certainly are differences in individual intelligence due to genetics. And at the species level, genes are what cause humans to be smarter than, say, turtles. It's also true that there's no law of reality that prevents unfortunate things like one group of sapients being noticeably smarter than another due to genetics. However, I'm pretty sure that this is not a world where that happened with continent-scale populations of Homo sapiens[1]. I think it's more likely that the standard evidence presented in favor instead indicates psychiatrists' difficulty in accounting for all non-genetic factors.
I don't mean to argue for spending time reading about this. The argument against checking every question still applies, and I don't expect to update anyone's expectations of what they'd find by a huge amount. But my impression is that people sound like their expectations are rather gloomy[2]. I'd like to stake some of my credibility to nudge those expectations towards "probably fine".
I feel like I ought to give a brief and partial explanation of why: human evolutionary history shows an enormous "hunger" for higher intelligence. Mutations that increase intelligence at only a moderate cost would tend to rapidly spread across populations, even relatively isolated ones, much like lactose tolerance is doing. It would be strange if this pressure had dropped off in some locations after human populations diverged.
It's possible that there were differing environmental pressures that pushed different tradeoffs over aspects of intelligence. E.g., perhaps at very high altitudes it's more favorable to consider distant dangers with very thorough system-2 assessments, and in lowlands it's better to make system-2 faster but less careful. However, at the scale corresponding to the term "race" (i.e. roughly continent-scale), I struggle to think of large or moderate environmental trends that would affect optimal cognition style, whereas continent-scale trends that affect optimal skin pigmentation are pretty clear.
Adding to this, our understanding of genetics is rapidly growing. If there were a major difference in cognition-affecting mutations corresponding to racial groupings, I'd have bet that a group of scientists would have stumbled on them by now and caused an uproar I'd hear about. As time goes on, the lack of uproars is becoming stronger evidence.
I suspect this is due to a reporting bias among non-experts who talk about this question. Those who perceive "dragons on the map" will often feel their integrity is at stake unless they speak up. Those who didn't find any will lose interest and won't feel their integrity is at stake, so they won't speak up. So people who calmly state facts on the matter, instead of shouting about bias, are disproportionately the ones convinced of the genetic differences, which heuristically over-weights their position.
The asymmetry that @Ben Millwood points to below is important, but it goes further. Imagine a hundred well-intentioned people look into whether there are dragons. They look in different places, make different errors, and there are a lot of things that could be confused for dragons or things dragons could be confused for, so this is a noisy process. Unless the evidence is overwhelming in one direction or another, some will come to believe that there are dragons, while others will believe that there are not.
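That split can be sketched with a small simulation; the number of observations per person and the evidence strengths are invented purely for illustration:

```python
# Sketch: many investigators each see noisy evidence about a ground truth of
# "no dragons"; unless the evidence is strong, conclusions split.
# All parameters are illustrative assumptions.
import random

def investigate(n_people, evidence_strength, seed=0):
    """Each person sums a few noisy observations and concludes from the sign.
    Returns the fraction who wrongly conclude 'dragons' when the truth is
    'no dragons' (higher evidence_strength = clearer evidence)."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(n_people):
        # Each observation points toward the truth on average, plus unit noise.
        signal = sum(-evidence_strength + rng.gauss(0, 1) for _ in range(5))
        if signal > 0:  # this person reads their evidence as favoring dragons
            wrong += 1
    return wrong / n_people

print(investigate(100, evidence_strength=0.1))  # weak evidence: sizeable dissent
print(investigate(100, evidence_strength=2.0))  # overwhelming: near-consensus
```

With weak evidence a large minority ends up believing the wrong thing in good faith; only overwhelming evidence produces near-unanimity.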
While humanity is not perfect at uncovering the truth in confusing situations, our best approach to the truth is for people to report back what they've found and have open discussion of the evidence. Perhaps some evidence A finds is very convincing to them, but then B shows how they've been misinterpreting it. Except this doesn't work on taboo topics:
Many sensible people have (what I interpret as) @NickLaing's perspective, and people with that perspective will only participate in the public evidence reconciliation process if they failed to find dragons. I don't know, for example, whether this is your perspective.
You wrote essentially the opposite ("Those who perceive 'dragons on the map' will often feel their integrity is at stake unless they speak up. Those who didn't find any will lose interest and won't feel their integrity is at stake, so they won't speak up.") and I agree some people will think this way, but I think this is many fewer people than are willing to publicly argue for generally-accepted-as-good positions but not generally-accepted-as-evil ones.
Many people really do or don't want dragons to exist, and so will argue for/against them without much real engagement with the evidence.
Good faith participation in a serious debate on the existence of dragons risks your reputation and jeopardizes your ability to contribute in many places.
So I will continue not engaging, publicly or privately, with evidence or arguments on whether there are dragons.
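The failure mode described here is a selection effect on who reports, which a toy model makes vivid; the counts and silence rates below are invented for illustration:

```python
# Sketch of the selection effect: the public record is a filtered sample of
# private conclusions, so the reported mix can diverge sharply from the true
# mix. All numbers are illustrative assumptions.

def public_tally(found_dragons, silence_rate_if_found, silence_rate_if_not):
    """found_dragons: list of booleans (private conclusions).
    Returns (reported_yes, reported_no) after each side self-censors at its
    own rate, deterministically for illustration."""
    yes = sum(found_dragons)
    no = len(found_dragons) - yes
    reported_yes = round(yes * (1 - silence_rate_if_found))
    reported_no = round(no * (1 - silence_rate_if_not))
    return reported_yes, reported_no

# Suppose 30 of 100 privately conclude "dragons". If dragon-finders mostly
# stay silent (taboo), the public record looks like near-consensus against...
conclusions = [True] * 30 + [False] * 70
print(public_tally(conclusions, silence_rate_if_found=0.9, silence_rate_if_not=0.3))
# ...while the opposite filter makes the same private data look pro-dragon.
print(public_tally(conclusions, silence_rate_if_found=0.2, silence_rate_if_not=0.9))
```

The same 30/70 split of private conclusions can be made to look like either consensus, depending only on which side feels free to speak.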
>Imagine a hundred well-intentioned people look into whether there are dragons. They look in different places, make different errors, and there are a lot of things that could be confused for dragons or things dragons could be confused for, so this is a noisy process. Unless the evidence is overwhelming in one direction or another, some will come to believe that there are dragons, while others will believe that there are not.
>While humanity is not perfect at uncovering the truth in confusing situations, our best approach to the truth is for people to report back what they've found and have open discussion of the evidence. Perhaps some evidence A finds is very convincing to them, but then B shows how they've been misinterpreting it.
This is a bit discourteous here.
I am not claiming that A is convincing to me in isolation. I am claiming that after a hundred similarly smart people fit different evidence together, there's so much model uncertainty that I'm conservatively downgrading A from "overwhelmingly obvious" to "pretty sure". I am claiming that if we could somehow make a prediction market that would resolve on the actual truth of the matter, I might bet only half my savings on A, just in case I missed something drastic.
You're free to dismiss this as overconfidence, of course. But this isn't amateur hour; I understand the implications of what I'm saying and intend my words to be meaningful.
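As an editorial aside, one way to read the "half my savings" figure (this gloss is an assumption of mine, not the commenter's stated math): under Kelly bet sizing on an even-money binary market, the optimal stake fraction is 2p - 1, so staking half one's bankroll corresponds to roughly 75% confidence, which does line up with "pretty sure":

```python
# Kelly bet sizing for a binary bet at net odds b (b=1.0 means even money).
# Used here only as a rough gloss on "bet half my savings", an assumption
# of this note rather than anything the commenter specified.

def kelly_fraction(p, b=1.0):
    """Optimal Kelly stake fraction for win probability p at net odds b;
    never negative (don't bet when the edge is against you)."""
    return max(0.0, (b * p - (1 - p)) / b)

print(kelly_fraction(0.75))  # stakes half the bankroll
print(kelly_fraction(0.95))  # "overwhelmingly obvious" would stake far more
print(kelly_fraction(0.30))  # unfavorable: stake nothing
```

So "half my savings" really is the betting behavior of someone who is sure-but-not-certain, which is exactly the downgrade being described.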
>Many sensible people have (what I interpret as) @NickLaing's perspective, and people with that perspective will only participate in the public evidence reconciliation process if they failed to find dragons. I don't know, for example, whether this is your perspective.
>You wrote essentially the opposite… and I agree some people will think this way, but I think this is many fewer people than are willing to publicly argue for generally-accepted-as-good positions but not generally-accepted-as-evil ones.
I think this largely depends on whether a given forum is anonymous or not. In an alternate universe where the dragon scenario was true, I think I'd end up arguing for it anonymously at some point, though likely not on this forum.
I was not particularly tracking my named-ness as a point of evidence, except insofar as it could be used to determine my engagement with EA & rationality and make updates about my epistemics & good faith.
>Good faith participation in a serious debate on the existence of dragons risks your reputation and jeopardizes your ability to contribute in many places.
Sure. I understand it's epistemically rude to take debate pot-shots when an opposing team would be so disadvantaged, and there's a reason to ignore one-sided information. There's no obligation to update or engage if this comes across as adversarial.
But I really am approaching this as cooperatively communicating information. I found I had nonzero stress about the perceived possibility of dragons here, and I expect others do as well. I think a principled refusal to look does have nonzero reputational harm. There will be situations where that's the best we can manage, but there's also such a thing as a p(dragon) low enough that it's no longer a good strategy. If it is the case that there are obviously no dragons somewhere, it'd be a good idea for a high-trust group to have a way to call "all clear".
So this is my best shot. Hey, anyone reading this? I know this is unilateral and all, but I think we're good.
Thanks. There's an asymmetry, though, where you can either find out that what everyone already thinks is true (which feels like a bit of a waste of time), or you can find out something deeply uncomfortable. Even if you think the former is where most of the probability is, it's still not a very appealing prospect.
(I'm not sure what the rhetorical import of this is, or what conclusions we should draw from it; I just felt like explaining why a lot of people find investigating distasteful even if they think it won't change their mind.)
I think I wasn't entirely clear; the recommendation was that if my claim sounded rational, people should update their probability, not that people should change their asymmetric question policy. Edited a bit to make it clearer.
>I feel one is always allowed not to speak about what they don't want to.

I 100 percent agree with that, which is where the wisdom comes in to choose not to speak about many things.

>just felt like explaining why a lot of people find investigating distasteful even if they think it won't change their mind.

Agreed.