We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks. That point being: even if you only look at probabilities over the next few years and only care about people alive today, these issues appear to be the most salient policy areas. I’ve noted in a recent draft that the velocity of the increase in risk (e.g. from some 0.0001% risk this year to, say, 10% per year in 50 years) means that issues with such probability trajectories are invisible to the 2-year national risk assessments used at present, even though the area under the curve is greater in aggregate than for every other risk, and in a sense the risk is potentially ‘inevitable’ (for the demonstration risk profiles I dreamed up) over a human lifetime. This raises the question of how to monitor the trajectory: surely one role of national risk assessment is to invest in ‘fire alarms’, but that requires these risks to be included in the assessment in the first place so the monitoring can be prioritized. Persuading policymakers is definitely going to be easier by leveraging decade-long actuarial tables than by having esoteric discussions about total utilitarianism.
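To make the ‘invisible to a 2-year assessment yet near-inevitable over a lifetime’ point concrete, here is a rough Python sketch of one such trajectory. The numbers are just the demonstration profile above (0.0001% this year rising to 10% per year at year 50), plus one extra assumption I’m adding for this sketch: that the annual risk plateaus at 10% after year 50 rather than continuing to grow.

```python
# Rough illustration (made-up demonstration numbers, not real risk estimates):
# an annual catastrophe risk growing from 0.0001% this year to ~10%/year at
# year 50 (then assumed to plateau at 10%/year) is essentially invisible to a
# 2-year assessment window, yet over a lifetime the cumulative probability of
# at least one occurrence dominates.
import math

p_now, p_year50, lifetime = 1e-6, 0.10, 80   # 0.0001% now -> 10%/yr at year 50; 80-year horizon
growth = (p_year50 / p_now) ** (1 / 50)      # annual growth factor implied by the two endpoints

# Per-year probabilities: exponential growth, capped (by assumption) at 10%/yr after year 50.
annual = [min(p_now * growth**t, p_year50) for t in range(lifetime)]

def cumulative(ps):
    """P(at least one event) = 1 - product of the per-year survival probabilities."""
    return 1 - math.prod(1 - p for p in ps)

print(f"2-year window:  {cumulative(annual[:2]):.6%}")    # ~0.0002% -- negligible on its own
print(f"50-year window: {cumulative(annual[:50]):.1%}")   # ~33%
print(f"80-year window: {cumulative(annual):.1%}")        # ~97% -- close to 'inevitable'
```

The exact shape of the curve matters much less than the qualitative result: any short assessment window sits on the flat early part of the trajectory, so the risk never registers unless the assessment is explicitly tracking the trend.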
Additionally, in the recent FLI ‘World Building Contest’, the winning entry from Mako Yass made quite a point of the fact that, in the world he built, the impetus for AI safety and global cooperation on this issue came from the development of very clear and very specific scenarios of how exactly AI could come to kill everyone. This is analogous to Carl Sagan and Richard Turco’s work on nuclear winter in the early 1980s: a specific, concrete picture changed minds. We need this for AI.
We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks.
This can be taken further: if your main priority is people alive today (or yourself), then near-term catastrophic risks that aren’t x-risks become just as important. So, for example, while it may be improbable for a pandemic to kill everyone, I think it’s much more probable that one kills, say, at least 90% of people. On the other hand, I’m not sure the probability of AI killing at least 90% of people is much higher than the probability of it killing everyone.
Then again, AI can be misused far more severely than other technologies. So maybe the chance that it doesn’t kill me but still, for example, lets a totalitarian government enslave me is pretty big?