If you think the expected value is negative regardless of any move you could make, you should of course become the existential risk[1].
But actually estimating whether humanity will be net-negative requires knowing what you value, which is something you're probably fuzzy about. We don't yet have the technology to extract terminal goals from people, and you'd want that before taking any irrevocable action.
Future-you might resent past-you for publicly doubting the merits of humanity, since I reckon you’d want to be a secret existential risk.
I would like to humbly suggest that people not engage in active plots to destroy humanity based on their personal back-of-the-envelope moral calculations.
I think the other 8 billion of us might want a say, and I'd guess we'd not be particularly happy if we got collectively eviscerated because some random person made a math error.