This is a good thing to flag. I actually agree re: anthropic reasoning (though, frankly, I always feel a bit unsettled by its fundamentally unscientific nature).
My main claim re: AI, as I saw it, was that the contours of the AI risk claim match quite closely to messianic prophecies, just in modern secular clothing (I’ll note that people both agreed and disagreed with me on this point, and interested readers should read my short post and the comments). I still stand by that, fwiw; I think it’s at minimum an exceptional coincidence.
One underrated response that I have been thinking about was by Jason Wagner, who paraphrased one reading of my claim as:
“AI might or might not be a real worry, but it’s suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy. Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend.”
In this reading, AI risk is real, but no one has a great sense of how to explain it because much of its nature is unknown and simply weird, so we fall back on narratives we understand: Christian-ish, messiah-type stories.
Hey Ryan,
I think I agree with the religious comparison: they do seem similar to me, and I liked that part of your post. I just think failed apocalyptic predictions don’t give much evidence for discounting future apocalyptic predictions.
Religious apocalypses are maybe a little different, because I think (but don’t know) that most people who predict the end of the world via God are claiming that all possible worlds end, not just predicting an event that would occur through ordinary causes.
I mostly think anthropic reasoning is good (but there is a voice in my head telling me I’m crazy whenever I try to apply it).
I’ll re-word my comment to clarify the part re: “the dangers of anthropic reasoning”. I always forget whether “anthropic” refers to making claims without conditioning on our existence, or to the claim that we must condition on our existence when making claims.
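The conditioning point can be made concrete with a toy Monte Carlo (this is my own sketch, not anything from the original posts; `observer_view` and its parameters are purely illustrative). The idea: observers only exist in worlds that survived, so every surviving observer sees a track record of 100% failed doomsday predictions, whatever the true per-period risk actually was.

```python
import random

def observer_view(p_doom, periods=10, worlds=100_000, seed=0):
    """Simulate many worlds, each facing per-period extinction
    probability p_doom over `periods` periods.

    Returns (fraction of worlds with a surviving observer,
             fraction of past doom predictions that look 'failed'
             to those survivors)."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(worlds):
        # The world survives only if doom fails to arrive every period.
        if all(rng.random() > p_doom for _ in range(periods)):
            survivors += 1
    # By construction, anyone still around has seen only failed
    # predictions -- this is 1.0 regardless of p_doom (when p_doom < 1).
    failed_fraction = 1.0 if survivors else None
    return survivors / worlds, failed_fraction

print(observer_view(0.5))   # few surviving worlds, all seeing 100% failures
print(observer_view(0.01))  # many surviving worlds, same observed track record
```

The survivor counts differ enormously between the two runs, but the *observed* record of failed predictions is identical, which is why a history of failed apocalyptic predictions is weak evidence once you condition on being around to tally it.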