“Humanity has seen many claims of this form.”
What exactly is your reference class here? Are you referring just to religious claims of impending apocalypse (plus EA claims about AI technology)? Or are you referring more broadly to any claim of transformative near-term change?
I agree with you that claims of supernatural apocalypse have a bad track record, but such a narrow reference class doesn’t (IMO) include the pretty technically-grounded concerns about AI. Meanwhile, I think that a wider reference class including other seemingly-unbelievable claims of impending transformation would include a couple of important hits. Consider:
It’s 1942. A physicist tells you, “Listen, this is a really technical subject that most people don’t know about, but atomic weapons are really coming. I don’t know when—could be 10 years or 100—but if we don’t prepare now, humanity might go extinct.”
It’s January 2020 (or the beginning of any pandemic in history). A random doctor tells you, “Hey, I don’t know if this new disease will have 1% mortality or 10% or 0.1%. But if we don’t lock down this entire province today, it could spread to the entire world and cause millions of deaths.”
It’s 1519. One of your empire’s scouts tells you that a bunch of white-skinned people have arrived on the eastern coast in giant boats, and a few priests think maybe it’s the return of Quetzalcoatl or something. You decide that this is obviously crazy—religion-based forecasting has a terrible track record, I mean these priests have LITERALLY been telling you for years that maybe the sun won’t come up tomorrow, and they’ve been wrong every single time. But sure enough, soon the European invaders have slaughtered their way to your capital and destroyed your civilization.
Although the Aztec case is particularly dramatic, many non-European cultures have the experience of suddenly being invaded by a technologically superior foe powered by an exponentially self-improving economic engine—that story sounds at least as similar to AI worries as the Christian apocalypse story you place in the same class. There might even be more stories of sudden European invasion than predictions of religious apocalypse, which would tilt your base-rate prediction decisively towards believing that transformational changes do sometimes happen.
I appreciate the pushback. I’m thinking of all claims that go roughly like this: “a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish.” This is narrower than “all transformative change” but broader than something that conditions on a specific kind of technology. To me personally, this feels like the natural opening position when considering concerns about AGI.
I think we probably agree that claims of this type are rarely correct, and I understand that some people have inside view evidence that sways them towards still believing the claim. That’s totally okay. My goal was not to try to dissuade people from believing that AGI poses a possibly large risk to humanity, it was to point to the degree to which this kind of claim is messianic. I find that interesting. At minimum, people who care a lot about AGI risk might benefit from realizing that at least some people view them as making messianic claims.
I do think Jackson’s example of what it might feel like, for non-European cultures with lower military tech, to have white conquerors arrive with overwhelming force is a surprisingly fitting case study for this paragraph.
I can think of far more examples of terrible things happening than realized examples analogous to “If we do the right things before it arrives, we will experience heaven on earth” (the Perry Expedition is perhaps the closest example that comes to mind). But I think it was not wrong to believe, ex ante, that the new technology could be used for lots of good, and that the foreigners could at least in theory be negotiated with.
I wasn’t really trying to say “See, messianic stories about arriving gods really work!”, so much as to say “Look, there are a lot of stories about huge dramatic changes; AI is no more similar to the story of Christianity than it is to stories about new technologies or plagues or a foreign invasion.” I think the story of European world conquest is particularly appropriate not because it resembles anyone’s religious prophecies, but because it is an example where large societies were overwhelmed and destroyed by the tech+knowledge advantages of tiny groups. This is similar to AI, which would start out outnumbered by all of humanity but might have a huge intelligence + technological advantage.
Responding to your request for times when knowledge of European invasion was actionable for natives: The “Musket Wars” in New Zealand were “a series of as many as 3,000 battles and raids fought among Māori between 1807 and 1837, after Māori first obtained muskets and then engaged in an intertribal arms race in order to gain territory or seek revenge for past defeats”. The bloodshed was hugely net-negative for the Māori as a whole, but individual tribes who were ahead in the arms race could expand their territory at the expense of enemy groups.
Obviously this is not a very inspiring story if we are thinking about potential arms races in AI capabilities:
Māori began acquiring European muskets in the early 19th century from Sydney-based flax and timber merchants. Because they had never had projectile weapons, they initially sought guns for hunting. Ngāpuhi chief Hongi Hika in 1818 used newly acquired muskets to launch devastating raids from his Northland base into the Bay of Plenty, where local Māori were still relying on traditional weapons of wood and stone. In the following years he launched equally successful raids on iwi in Auckland, Thames, Waikato and Lake Rotorua, taking large numbers of his enemies as slaves, who were put to work cultivating and dressing flax to trade with Europeans for more muskets. His success prompted other iwi to procure firearms in order to mount effective methods of defence and deterrence and the spiral of violence peaked in 1832 and 1833, by which time it had spread to all parts of the country except the inland area of the North Island later known as the King Country and remote bays and valleys of Fiordland in the South Island. In 1835 the fighting went offshore as Ngāti Mutunga and Ngāti Tama launched devastating raids on the pacifist Moriori in the Chatham Islands.
I am commenting here and upvoting this specifically because you wrote “I appreciate the pushback.” I really like seeing people disagree while being friendly/civil, and I want to encourage us to do even more of that. I like how you are exploring and elaborating ideas while being polite and respectful.
Here are a couple thoughts on messianic-ness specifically:
With the classic messiah story, the whole point is that you know the god’s intentions and values. Versus of course the whole point of the AI worry is that we ourselves might create a godlike being (rather than a preexisting being arriving), and its values might be unknown or bizarre/incomprehensible. This is an important narrative difference (it makes the AI worry more like stories of sorcerers summoning demons or explorers awakening mad Lovecraftian forces), even though the EA community still thinks it can predict some things about the AI and suggest some actions we can take now to prepare.
How many independent messianic claims are there, really? Christianity is the big, obvious example. Judaism (but not Islam?) is another. Most religions (especially when you count all the little tribal/animistic ones) are not actually super-messianic—they might have Hero’s Journey figures (like Rama from the Ramayana) but that’s different from the epic Christian story about a hidden god about to return and transform the world.
I am interpreting you as saying: “Messianic stories are a human cultural universal, humans just always fall for this messianic crap, so we should be on guard against suspiciously persuasive neo-messianic stories, like that radio astronomy might be on the verge of contacting an advanced alien race, or that we might be on the verge of discovering that we live in a simulation.” (Why are we worried about AI and not about those other equally messianic possibilities? Presumably AI is the most plausible messianic story around? Or maybe it’s just more tractable since we’re designing the AI, whereas there’s nothing we can do about aliens or simulation overlords.)
But per my second bullet point, I don’t think that messianic stories are a huge human universal. I would prefer a story where we recognize that Christianity is by far the biggest messianic story out there, and that it is probably influencing/causing the perceived abundance of other messianic stories in culture (like all the messianic tropes in literature like Dune, or when people see political figures like Trump or Obama or Elon as “savior figures”). This leads to a different interpretation:
“AI might or might not be a real worry, but it’s suspicious that people are ramming it into the Christian-influenced narrative format of the messianic prophecy. Maybe people are misinterpreting the true AI risk in order to fit it into this classic narrative format; I should think twice about anthropomorphizing the danger and instead try to see this as a more abstract technological/economic trend.”
This take is interesting to me, as some people (Robin Hanson, slow-takeoff people like Paul Christiano) have predicted a more decentralized version of the AI x-risk story, where there is a lot of talk about economic doubling times and whether humans will still complement AI economically in the far future, instead of talk about individual superintelligent systems making treacherous turns and being highly agentic. It’s plausible to me that the decentralized-AI-capabilities story is underrated because it is a more complicated / less viral / less familiar narrative. These kinds of biases are definitely at work when people, e.g., bizarrely misinterpret AI worry as part of a political fight about “capitalism”. It seems like almost any highly technical worry is vulnerable to being outcompeted by a message built around familiar narrative tropes like human conflict and good-vs-evil morality plays.
But ultimately, while interesting to think about, I’m not sure how far this kind of “base-rate tennis” gets us. Maybe we decide to be a little more skeptical of the AI story, or lean a little towards the slow-takeoff camp. But this is a pretty tiny update compared to just learning about different cause areas and forming an inside view based on the actual details of each cause.