Here are a couple thoughts on messianic-ness specifically:
With the classic messiah story, the whole point is that you know the god’s intentions and values. Versus of course the whole point of the AI worry is that we ourselves might create a godlike being (rather than a preexisting being arriving), and its values might be unknown or bizarre/incomprehensible. This is an important narrative difference (it makes the AI worry more like stories of sorcerers summoning demons or explorers awakening mad Lovecraftian forces), even though the EA community still thinks it can predict some things about the AI and suggest some actions we can take now to prepare.
How many independent messianic claims are there, really? Christianity is the big, obvious example. Judaism (but not Islam?) is another. Most religions (especially when you count all the little tribal/animistic ones) are not actually super-messianic—they might have Hero’s Journey figures (like Rama from the Ramayana) but that’s different from the epic Christian story about a hidden god about to return and transform the world.
I am interpreting you as saying: “Messianic stories are a human cultural universal, humans just always fall for this messianic crap, so we should be on guard against suspiciously persuasive neo-messianic stories, like that radio astronomy might be on the verge of contacting an advanced alien race, or that we might be on the verge of discovering that we live in a simulation.” (Why are we worried about AI and not about those other equally messianic possibilities? Presumably AI is the most plausible messianic story around? Or maybe it’s just more tractable since we’re designing the AI vs there’s nothing we can do about aliens or simulation overlords.)
But per my second bullet point, I don’t think that messianic stories are a huge human universal. I would prefer a story where we recognize that Christianity is by far the biggest messianic story out there, and that it is probably influencing or causing the perceived abundance of other messianic stories in culture (like all the messianic tropes in literature like Dune, or when people see political figures like Trump or Obama or Elon as “savior figures”). This leads to a different interpretation:
“AI might or might not be a real worry, but it’s suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy. Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend.”
This take is interesting to me, as some people (Robin Hanson, slow-takeoff people like Paul Christiano) have predicted a more decentralized version of the AI x-risk story, where there is a lot of talk about economic doubling times and whether humans will still complement AI economically in the far future, instead of talk about individual superintelligent systems making treacherous turns and being highly agentic. It’s plausible to me that the decentralized-AI-capabilities story is underrated because it is more complicated / less viral / less familiar a narrative. These kinds of biases are definitely at work when people, e.g., bizarrely misinterpret AI worry as part of a political fight about “capitalism”. It seems like almost any highly technical worry is vulnerable to being outcompeted by a message that’s more based around familiar narrative tropes like human conflict and good-vs-evil morality plays.
But ultimately, while interesting to think about, I’m not sure how far this kind of “base-rate tennis” gets us. Maybe we decide to be a little more skeptical of the AI story, or lean a little towards the slow-takeoff camp. But this is a pretty tiny update compared to just learning about different cause areas and forming an inside view based on the actual details of each cause.