I think there are some risks which have “common mini-versions,” to coin a phrase, and others which don’t. Asteroids have mini-versions (10%-killer-versions), and depending on how common they are, the 10%-killers might be more likely than the 100%-killers, or vice versa. I actually don’t know which is more likely in that case.
AI risk is the sort of thing that doesn’t have common mini-versions, I think. An AI with the means and motive to kill 10% of humanity probably also has the means and motive to kill 100%.
Natural pandemics DO have common mini-versions, as you point out.
It’s less clear with engineered pandemics. That depends on how easy they are to engineer to kill everyone vs. how easy they are to engineer to kill not-everyone-but-at-least-10%, and it depends on how motivated various potential engineers are.
Accidental physics risks (like igniting the atmosphere, creating a false vacuum collapse or black hole or something with a particle collider) are way more likely to kill 100% of humanity than 10%. They do not have common mini-versions.
So what about unknown risks? Well, we don’t know. But from the track record of known risks, it seems that probably there are many diverse unknown risks, and so probably at least a few of them do not have common mini-versions.
And by the argument you just gave, the “unknown” risks that have common mini-versions won’t actually be unknown, since we’ll see their mini-versions. So “unknown” risks are going to be disproportionately the kind of risk that doesn’t have common mini-versions.
...
As for what I meant about making the exact same argument in the past: I was just saying that we’ve discovered various risks that don’t have common mini-versions, which at one point were unknown and then became known. Your argument basically rules out discovering such things ever again. Had we listened to your argument before learning about AI, for example, we would have concluded that AI was impossible, or that somehow AIs which have the means and motive to kill 10% of people are more likely than AIs which pose existential threats.
I think I agree with this general approach to thinking about this.
From what I’ve seen of AI risk discussions, I think I’d stand by my prior statement, which I’d paraphrase now as: There are a variety of different types of AI catastrophe scenario that have been discussed. Some seem like they might be more likely, or similarly likely, to totally wipe us out than to cause a 5-25% death toll. But some don’t. And I haven’t seen super strong arguments for considering the former much more likely than the latter. And it seems like the AI safety community as a whole has become more diverse in their thinking on this sort of thing over the last few years.
For engineered pandemics, it still seems to me that literally 100% of people dying from the pathogens themselves is much less likely than a very high number dying, perhaps even enough to cause existential catastrophe slightly “indirectly”. However “well” engineered, pathogens themselves aren’t agents which explicitly seek the complete extinction of humanity. (Again, Defence in Depth seems relevant here.) Though this is slightly different from a conversation about the relative likelihood of 10% vs other percentages. (Also, I feel hesitant to discuss this in great detail, for vague information hazards reasons.)
I agree regarding accidental physics risks. But I think the risks from those are far lower than the risks from AI and bio, and probably nanotech, nuclear, etc. (I don’t really bring any independent evidence to the table; this is just based on the views I’ve seen from x-risk researchers.)
from the track record of known risks, it seems that probably there are many diverse unknown risks, and so probably at least a few of them do not have common mini-versions.
I think that’d logically follow from your prior statements. But I’m not strongly convinced about those statements, except regarding accidental physics risks, which seem very unlikely.
And by the argument you just gave, the “unknown” risks that have common mini-versions won’t actually be unknown, since we’ll see their mini-versions. So “unknown” risks are going to be disproportionately the kind of risk that doesn’t have common mini-versions.
I think this is an interesting point. It does tentatively update me towards thinking that, conditional on there indeed being “unknown risks” that are already “in play”, they’re more likely than I’d otherwise think to jump straight to 100%, without “mini-versions”.
However, I think the most concerning sources of “unknown risks” are new technologies or new actions (risks that aren’t yet “in play”): the unknown equivalents of risks from nanotech, space exploration, unprecedented consolidation of governments across the globe, etc. “Drawing a new ball from the urn”, in Bostrom’s metaphor. So even if such risks do have “common mini-versions”, we wouldn’t yet have seen them.
Also, regarding the portion of unknown risks that are in play, it seems appropriate to respond to the argument “Most risks have common mini-versions, but we haven’t seen these for unknown risks (pretty much by definition)” partly by updating towards thinking the unknown risks lack such common mini-versions, but also partly by updating towards thinking unknown risks are unlikely. We aren’t forced to fully take the former interpretation.
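To make that two-way update concrete, here’s a toy Bayesian sketch. The hypotheses, priors, and likelihoods are entirely made-up illustrative numbers (not anyone’s actual estimates), just to show how a single observation of “no mini-versions so far” can shift weight both towards “the risk lacks mini-versions” and towards “the risk isn’t in play at all”:

```python
# Toy Bayesian update with three hypotheses about a candidate "unknown risk".
# All numbers below are illustrative placeholders, not real estimates.

priors = {
    "in play, has common mini-versions": 0.4,
    "in play, no common mini-versions": 0.2,
    "not in play at all": 0.4,
}

# Probability of having observed *no* mini-versions so far, under each hypothesis.
likelihood_no_minis = {
    "in play, has common mini-versions": 0.1,  # mini-versions would probably have shown up by now
    "in play, no common mini-versions": 1.0,   # nothing smaller to see, by definition
    "not in play at all": 1.0,                 # nothing to see either
}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalise.
unnormalised = {h: priors[h] * likelihood_no_minis[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: weight / total for h, weight in unnormalised.items()}

for h in priors:
    print(f"{h}: prior {priors[h]:.2f} -> posterior {posteriors[h]:.2f}")
```

With these made-up numbers, “has common mini-versions” falls from 0.40 to about 0.06, while both “no common mini-versions” (0.20 to about 0.31) and “not in play at all” (0.40 to about 0.63) rise; how the update splits between the latter two depends entirely on the priors, which is the point above.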
Tobias’ original point was: “Also, if engineered pandemics, or ‘unforeseen’ and ‘other’ anthropogenic risks have a chance of 3% each of causing extinction, wouldn’t you expect to see smaller versions of these risks (that kill, say, 10% of people, but don’t result in extinction) much more frequently? But we don’t observe that.”
Thus he is saying there aren’t any “unknown” risks that do have common mini-versions but just haven’t had time to develop yet. That’s way too strong a claim, I think. Perhaps in my argument against this claim I ended up making claims that were also too strong. But I think my central point is still right: Tobias’ argument rules out things arising in the future that clearly shouldn’t be ruled out, because if we had run that argument in the past it would have ruled out various things (e.g. AI, nukes, physics risks, and come to think of it even asteroid strikes and pandemics if we go far enough back in the past) that in fact happened.
1. I interpreted the original claim (“wouldn’t you expect”) as being basically one in which observation X was evidence against hypothesis Y. Not conclusive evidence, just an update. I didn’t interpret it as “ruling things out” (in a strong way) or saying that there aren’t any unknown risks without common mini-versions (just that it’s less likely that there are than one would otherwise think). Note that his point seemed to be in defence of “Ord’s estimates seem too high to me”, rather than “the risks are 0”.
2. I do think that Tobias’ point, even interpreted that way, was probably too strong, or missing a key detail, in that the key sources of risk are probably emerging or new things, so we wouldn’t expect to have observed their mini-versions yet. Though I do tentatively think I’d expect to see mini-versions before the “full thing”, once the new things do start arising. (I’m aware this is all pretty hand-wavey phrasing.)
3i. As I went into more in my other comment, I think the general expectation that we’ll see very small versions before, and more often than, small ones, which we’ll see before and more often than medium ones, which we’ll see before and more often than large ones, etc., probably would’ve served well in the past. There was progressively more advanced tech before AI, and AI itself is advancing progressively. There were progressively more advanced weapons, progressively more destructive wars, progressively larger numbers of nukes, etc. I’d guess the biggest pandemics and asteroid strikes weren’t the first, because the biggest are rare. (There’s a rough illustrative sketch of this prior after these numbered points.)
3ii. AI is the least clear of those examples, because:
(a) it seems like destruction from AI so far has been very minimal (a handful of fatalities from driverless cars, the “flash crash”, etc.), yet it seems plausible major destruction could occur in future
(b) we do have specific arguments, though of somewhat unclear strength, that the same AI might actively avoid causing any destruction for a while, and then suddenly seize a decisive strategic advantage, etc.
But on (a), I do think most relevant researchers would say the risk this month from AI is extremely low; the risks will rise in future as systems become more capable. So there’s still time in which we may see mini-versions.
And on (b), I’d consider that a case where a specific argument updates us away from a generally pretty handy prior that we’ll see small things earlier and more often than extremely large things. And we also don’t yet have super strong reason to believe that those arguments are really painting the right picture, as far as I’m aware.
3iii. I think if we interpreted Tobias’ point as something like “We’ll never see anything that’s unlike the past”, then yes, of course that’s ridiculous. So as I mentioned elsewhere, I think it partly depends on how we carve up reality, how we define things, etc. E.g., do we put nukes in a totally new bucket, or consider them part of trends in weaponry/warfare/explosives?
But in any case, my interpretation of Tobias’ point, where it’s just about it being unlikely to see extreme things before smaller versions, would seem to work with e.g. nukes, even if we put them in their own special category: we’d be surprised by the first nuke, but we’d indeed see one nuke before there are thousands, and two detonations on cities before there’s a full-scale nuclear war (if there ever is one, which hopefully and plausibly there won’t be).
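As a very rough illustration of the “small before large” prior from point 3i (and not a model of any particular risk), here’s a sketch that assumes event severities are drawn independently from a heavy-tailed Pareto distribution (that distributional choice and all the parameters are my own assumptions, purely for illustration) and checks how often the worst event in a simulated history was preceded by at least one “mini-version” of a tenth its size:

```python
# Rough illustration of the "small things come earlier and more often" prior.
# Assumes event severities are i.i.d. draws from a heavy-tailed (Pareto) distribution;
# that choice, and every number here, is an illustrative assumption.
import random

random.seed(0)

def simulate_history(n_events=200, alpha=1.5):
    """Return a list of event severities in the order they occur."""
    return [random.paretovariate(alpha) for _ in range(n_events)]

trials = 10_000
preceded_by_smaller = 0
for _ in range(trials):
    history = simulate_history()
    worst_index = max(range(len(history)), key=lambda i: history[i])
    # Count earlier events that were at least 10% as severe as the worst one.
    earlier_minis = sum(1 for s in history[:worst_index] if s >= 0.1 * history[worst_index])
    if earlier_minis >= 1:
        preceded_by_smaller += 1

print(f"Fraction of histories where the worst event had at least one earlier "
      f"'mini-version': {preceded_by_smaller / trials:.2%}")
```

Under these assumptions the worst event is usually, though not always, preceded by smaller ones, which is roughly the shape of the prior being described; the specific arguments in 3ii(b) are the sort of thing that could break that pattern for AI.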
In general I think you’ve thought this through more carefully than me so without having read all your points I’m just gonna agree with you.
So yeah, I think the main problem with Tobias’ original point was that unknown risks are probably mostly new things that haven’t arisen yet and thus the lack of observed mini-versions of them is no evidence against them. But I still think it’s also true that some risks just don’t have mini-versions, or rather are as likely or more likely to have big versions than mini-versions. I agree that most risks are not like this, including some of the examples I reached for initially.
As for what I meant about making the exact same argument in the past: I was just saying that we’ve discovered various risks that don’t have common mini-versions, which at one point were unknown and then became known. Your argument basically rules out discovering such things ever again. Had we listened to your argument before learning about AI, for example, we would have concluded that AI was impossible, or that somehow AIs which have the means and motive to kill 10% of people are more likely than AIs which pose existential threats.
Hmm. I’m not sure I’m understanding you correctly. But I’ll respond to what I think you’re saying.
Firstly, the risk of natural pandemics, which the Spanish Flu was a strong example of, did have “common mini-versions”. In fact, Wikipedia says the Black Death was the “most fatal pandemic recorded in human history”. So I really don’t think we’d have ruled out the Spanish Flu happening by using the sort of argument I’m discussing (which I’m not sure I’d call “my argument”); we’d have seen it as unlikely in any given year, which would’ve been correct, and I imagine we’d have given it approximately the correct ex ante probability.
Secondly, even nuclear weapons, which I assume is what your reference to 1944 is about, seem like they could fit neatly in this sort of argument. It’s a new weapon, but weapons and wars existed for a long time. And the first nukes really couldn’t have killed everyone. And then we gradually had more nukes, more test explosions, more Cold War events, as we got closer and closer to the point where 100% of people could die from this source. And we haven’t had 100% die. So it again seems like we wouldn’t have ruled out what ended up happening.
Likewise, AI can arguably be seen as a continuation of past technological, intellectual, scientific, etc. progress in various ways. Of course, various trends might change in shape, speed up, etc. But so far they do seem to have mostly done so somewhat gradually, such that none of the developments would’ve been “ruled out” by expecting the future to look roughly similar to the past, or to the past plus extrapolation. (I’m not an expert on this, but I think this is roughly the conclusion AI Impacts is arriving at based on their research.)
Perhaps a key point is that we indeed shouldn’t say “The future will be exactly like the past.” But instead “The future seems likely to typically be fairly well modelled as a rough extrapolation of some macro trends. But there’ll be black swans sometimes. And we can’t totally rule out totally surprising things, especially if we do very new things.”
This is essentially me trying to lay out a certain way of looking at things. It’s not necessarily the one I strongly adopt. (I actually hadn’t thought about this viewpoint much before, so I’ve found trying to lay it out/defend it here interesting.)
In fact, as I said, I (at least sort-of) disagreed with Tobias’ original comment, and I’m very concerned about existential risks. And I think a key point is that new technologies and actions can change the distributions we’re drawing from, in ways that we don’t understand. I’m just saying it still seems quite plausible to me (and probably likely, though not guaranteed) that we’d see a 5-25%-style catastrophe from a particular type of risk before a 100% catastrophe from it. And I think history seems consistent with that, and that that idea probably would’ve done fairly well in the past.
(Also, as I noted in my other comment, I haven’t yet seen very strong evidence or arguments against the idea that “somehow AIs which have the means and motive to kill 10% of people are more likely than AIs which pose existential threats”, or more specifically, that AI might result in that, whether or not it has the “motive” to do so. It seems to me the jury is still out. So I don’t think I’d use the fact that an argument reaches that conclusion as a point against that argument.)
Likewise, AI can arguably be seen as a continuation of past technological, intellectual, scientific, etc. progress in various ways. Of course, various trends might change in shape, speed up, etc. But so far they do seem to have mostly done so somewhat gradually, such that none of the developments would’ve been “ruled out” by expecting the future to look roughly similar to the past, or to the past plus extrapolation. (I’m not an expert on this, but I think this is roughly the conclusion AI Impacts is arriving at based on their research.)
I agree with all this and don’t think it significantly undermines anything I said.
I think the community has indeed developed more diverse views over the years, but I still think the original take (as seen in Bostrom’s Superintelligence) is the closest to the truth. The fact that the community has gotten more diverse can be easily explained as the result of it growing a lot bigger and having a lot more time to think. (Having a lot more time to think means more scenarios can be considered, more distinctions made, etc. More time for disagreements to arise and more time for those disagreements to seem like big deals when really they are fairly minor; the important things are mostly agreed on but not discussed anymore.) Or maybe you are right and this is evidence that Bostrom is wrong. Idk. But currently I think it is weak evidence, given the above.
Yeah, in retrospect I really shouldn’t have picked nukes and natural pandemics as my two examples. Natural pandemics do have common mini-versions, and nukes, well, the jury is still out on that one. (I think it could go either way. I think that nukes maybe can kill everyone, because the people who survive the initial blasts might die from various other causes, e.g. civilizational collapse or nuclear winter. But insofar as we think that isn’t plausible, then yeah, killing 10% is way more likely than killing 100%. (I’m assuming we count killing 99% as killing 10% here?))
I think AI, climate change tail risks, physics risks, grey goo, etc. would be better examples for me to talk about.
With nukes, I do share the view that they could plausibly kill everyone. If there’s a nuclear war, followed by nuclear winter, and everyone dies during that winter, rather than most people dying and then the rest succumbing 10 years later from something else or never recovering, I’d consider that nuclear war causing 100% deaths.
My point was instead that that really couldn’t have happened in 1945. So there was one nuke, and a couple of explosions, and gradually more nukes and test explosions, etc., before there was a present risk of 100% of people dying from this source. So we did see something like “mini-versions” (Hiroshima and Nagasaki, test explosions, the Cuban Missile Crisis) before we saw 100% (which indeed, we still haven’t and hopefully won’t).
With climate change, we’re already seeing mini-versions. I do think it’s plausible that there could be a relatively sudden jump due to amplifying feedback loops. But “relatively sudden” might mean over months or years or something like that. And it wouldn’t be a total bolt from the blue in any case; the damage is already accruing and increasing, and likely would continue to do so in the lead-up to such tail risks.
AI, physics risks, and nanotech are all plausible cases where there’d be a sudden jump. And I’m very concerned about AI and somewhat about nanotech. But note that we don’t actually have clear evidence that those things could cause such sudden jumps. I obviously don’t think we should wait for such evidence, because if it came we’d be dead. But it just seems worth remembering that before using “Hypothesis X predicts no sudden jump in destruction from Y” as an argument against hypothesis X.
Also, as I mentioned in my other comment, I’m now thinking maybe the best way to look at that is as specific arguments (in the case of AI, physics risks, and nanotech) updating us away from the generally useful prior that we’ll see small things before extreme versions of the same things.