Thank you for clarifying your views and getting into the weeds.
> But I do think that EAs updated more quickly that the replication crisis was a big problem. … Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I don’t know how you would go about proving that to someone who (like me) is skeptical.
> I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh.
The sort of problems with Yudkowsky’s epistemic practices that I’m referring to have existed for much longer than the last few years. Here’s an example from 2017. Another significant example, from around 2015-2017, is that he quietly changed his view from being skeptical of deep learning as a path to AGI (and still leaning toward symbolic AI or GOFAI as the path to AGI) to being all-in on deep learning, but he never publicly explained why.[1] This couldn’t be more central to his life’s work, so that’s very odd.
This blog post from 2015 criticizes some of the irrationalities in Yudkowsky’s Sequences, which were written in 2006-2009.
If you go back to Yudkowsky’s even earlier writings from the late 1990s and early 2000s, some of the very same problems are there.
So, really, these are problems that go back at least 7 years or so, and arguably much longer than that, possibly as long as about 25 years.
[1] Around 2015-2017, I talked to Yudkowsky about this in a Facebook group about AI x-risk, which is part of why I remember it so vividly.