Yes, I said “anyone else”, but that was in the context of discussing academic research. But if we were to think more broadly and adjust for demographic variables like level of education (or years of education) and so on, as well as maybe a few additional variables like how interested someone is in science or economics, I don’t really believe that people in effective altruism would do particularly better at reducing their own bias.
I don’t think people in EA are, in general or across the board, particularly good at reducing their bias. If you do something like bring up a clear methodological flaw in a survey question, some people tend to circle the wagons and try to deflect or downplay the criticism rather than simply acknowledge the mistake and correct it.
I think some people in EA (not all, and not necessarily most) sometimes (not all the time, and not necessarily most of the time) criticize others for perceived psychological bias or poor epistemic practices and act intellectually superior, but then make these sorts of mistakes (or worse ones) themselves, and there’s often a lack of self-reflection or a resistance to criticism, disagreement, and scrutiny.
I worry that perceiving oneself as intellectually superior can lead to self-licensing, that is, people think of themselves as more brilliant and unbiased than everyone else, so they are overconfident in their views and overly dismissive of legitimate criticism and disagreement. They are also less likely to examine themselves for psychological bias and poor epistemic practices.
But what I just said about self-licensing is just a hunch. I worry that it’s true, but I don’t know whether it actually is.
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don’t know why you would think that.
I’m not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices, such as:
Expressing extremely strong views, getting proven wrong, and never discussing them again — not admitting he was wrong, not doing a post-mortem on what his mistakes were, just silence, forever
Responding to criticism of his ideas by declaring that the critic is stupid or evil (or similarly casting aspersions), without engaging in the object-level debate, or only engaging superficially
Being dismissive of experts in areas where he is not an expert, and not backing down from his level of extreme confidence and his sense of intellectual superiority even when he makes fairly basic mistakes that an expert in that area would not make
Thinking of himself as possibly literally the smartest person in the world, and definitely the smartest person in the world working on AI alignment, to whom nobody else is even close. Going back many years, he has thought of himself as, in that sense, the most important person in the world, and possibly the most important person who has ever lived: the fate of the entire world depends on him, personally, and only him. (In any other subject area, such as, say, pandemics or asteroids, and in any serious, credible intellectual community outside the rationalist community or EA, this would be seen as a sign of complete delusion.)
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don’t know why you would think that.
Maybe “aware” is not the right word, then. But I do think that EAs updated more quickly on the idea that the replication crisis was a big problem. I think this is somewhat understandable, as academics have strong incentives to get statistically significant results in order to publish papers, and they also have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
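To make that incentive point concrete, here is a minimal sketch (in Python, with made-up parameters; none of this is from our exchange) of how easy it is to get a “statistically significant” finding from pure noise just by running enough comparisons:

```python
import numpy as np
from scipy.stats import ttest_ind

# Minimal sketch (hypothetical parameters): simulate a "study" that runs 20
# independent two-group comparisons where the true effect is always zero,
# and check how often at least one comparison comes out "significant" at p < 0.05.

rng = np.random.default_rng(0)

def study_finds_something(n_tests=20, n_per_group=30, alpha=0.05):
    for _ in range(n_tests):
        control = rng.normal(size=n_per_group)    # no real effect in either group
        treatment = rng.normal(size=n_per_group)
        _, p_value = ttest_ind(control, treatment)
        if p_value < alpha:
            return True                           # a "significant" result found by chance
    return False

n_sims = 2000
hits = sum(study_finds_something() for _ in range(n_sims))
print(f"Share of null 'studies' reporting a significant result: {hits / n_sims:.2f}")
# Expect roughly 1 - 0.95**20, about 0.64: most "studies" find something
# publishable even though every true effect is zero.
```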
I’m not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices …
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic, and has, at least historically, updated his opinions a lot, such as realizing early on that AI might not be all positive (and recognizing that he had been wrong).
Thank you for clarifying your views and getting into the weeds.
But I do think that EAs updated more quickly on the idea that the replication crisis was a big problem. … Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I don’t know how you would go about proving that to someone who (like me) is skeptical.
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh.
The sort of problems with Yudkowsky’s epistemic practices that I’m referring to have existed for much longer than the last few years. Here’s an example from 2017. Another significant example, from around 2015-2017, is that he quietly changed his view from being skeptical of deep learning as a path to AGI (still leaning toward symbolic AI or GOFAI as the path to AGI) to being all-in on deep learning, but never publicly explained why.[1] This couldn’t be more central to his life’s work, so that’s very odd.
This blog post from 2015 criticizes some of the irrationalities in Yudkowsky’s Sequences, which were written in 2006-2009.
If you go back to Yudkowsky’s even earlier writings from the late 1990s and early 2000s, some of the very same problems are there.
So, really, these are problems that go back at least 7 years or so, and arguably much longer than that, perhaps as long as about 25 years.
[1] Around 2015-2017, I talked to Yudkowsky about this in a Facebook group about AI x-risk, which is part of why I remember it so vividly.