Further development of a mathematical model to clarify how important re-evolution timelines are.
Re-evolution timelines have another interesting effect on overall risk — all else equal, the more confident one is that intelligence will re-evolve, the more confident one should be that we will be able to build AGI,* which should increase one’s estimate of existential risk from AI.
So it seems that AI risk gets a twofold ‘boost’ from evidence for a speedy re-emergence of intelligent life:
Relative AI risk increases, since risk from most other sources is discounted a bit.
Absolute AI risk increases, since it pushes towards shorter AGI timelines.
*Shulman & Bostrom 2012 discuss this type of argument, and some complexities in adjusting for observation selection effects.
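To make the twofold effect concrete, here is a toy numerical sketch. All probabilities and the 1.5x nudge are hypothetical, chosen only to show the direction of both effects, not estimates from the post or the comment.

```python
# Toy sketch of the "twofold boost" to AI risk from evidence of speedy re-evolution.
# All numbers are hypothetical and only illustrate the direction of the effect.

def expected_value_loss(p_ai, p_other, recovery_discount):
    """Expected long-run value lost to existential catastrophe.

    Non-AI extinction is discounted by `recovery_discount`, since re-evolved
    intelligence may recover some of the lost value; an AI catastrophe is
    treated as unrecoverable (e.g. a misaligned AGI persists indefinitely).
    """
    return p_ai + p_other * (1.0 - recovery_discount)

p_ai, p_other = 0.05, 0.10  # illustrative baseline chances of AI / non-AI catastrophe

# Case 1: re-evolution looks slow or unlikely, so little value is recovered.
slow_loss = expected_value_loss(p_ai, p_other, recovery_discount=0.1)

# Case 2: evidence for speedy re-evolution.
#   (a) non-AI risks are discounted more heavily (more value recovered), and
#   (b) intelligence looks easier to produce, so we nudge P(AGI is built),
#       and with it absolute AI risk, upwards.
p_ai_fast = p_ai * 1.5
fast_loss = expected_value_loss(p_ai_fast, p_other, recovery_discount=0.6)

for label, p, loss in [("slow re-evolution", p_ai, slow_loss),
                       ("speedy re-evolution", p_ai_fast, fast_loss)]:
    print(f"{label}: absolute AI risk {p:.3f}, AI share of expected loss {p / loss:.0%}")
```

In this sketch both numbers move together: absolute AI risk rises with the confidence nudge, and AI's share of the expected loss rises because the non-AI risks are discounted more heavily.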
Thanks for your comment, Matthew. This is definitely an interesting effect which I had not considered. I wonder, though, whether even if absolute AI risk increases, it would change our actions, since we would have no way to influence the development of AI by future intelligent life once we are extinct. The only way I can think of to affect the risk from AI built by future life would be to create an aligned AGI ourselves before humanity goes extinct!