I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don’t know why you would think that.
Maybe “aware” is not the right word, then. But I do think EAs updated more quickly toward the view that the replication crisis was a big problem. That seems somewhat understandable: academics have strong incentives to get statistically significant results in order to publish papers, and they also tend to have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I’m not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices…
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic, and he has, at least historically, updated his opinions substantially, such as realizing early on that AI might not be entirely positive (and acknowledging that he had been wrong).
Thank you for clarifying your views and getting into the weeds.
But I do think EAs updated more quickly toward the view that the replication crisis was a big problem. … Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I don’t know how you would go about proving that to someone who (like me) is skeptical.
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh.
The sort of problems with Yudkowsky’s epistemic practices that I’m referring to have existed for much longer than the last few years. Here’s an example from 2017. Another significant example, from around 2015-2017, is that he quietly changed his view: he went from being skeptical of deep learning as a path to AGI (and still leaning toward symbolic AI or GOFAI as the path to AGI) to being all-in on deep learning, but he never publicly explained why.[1] This couldn’t be more central to his life’s work, so that’s very odd.
This blog post from 2015 criticizes some of the irrationalities in Yudkowsky’s Sequences, which were written in 2006-2009.
If you go back to Yudkowsky’s even earlier writings from the late 1990s and early 2000s, some of the very same problems are there.
So, really, these are problems that go back at least seven years or so, and arguably much longer than that, perhaps as long as about 25 years.
[1] Around 2015-2017, I talked to Yudkowsky about this in a Facebook group about AI x-risk, which is part of why I remember it so vividly.