The case for risk that you sketch isn’t the only case that one can lay out, but if we are focussing on this case, then your response is not unreasonable. But do you want to give up, or do you want to try? The immediate response to your last suggestion is surely: why devote limited resources to some other problem if this is the one that destroys humanity anyway?
You might relate to the following recent good posts:
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
https://www.lesswrong.com/posts/uFNgRumrDTpBfQGrs/let-s-think-about-slowing-down-ai
“But do you want to give up or do you want to try?”
I suppose my instinctive reaction is that if there’s very little reason to suppose we’ll succeed, we’d be better off allocating our resources to other causes and improving human life while it exists. But I recognise that this isn’t a universal intuition.
Thank you for the links, I will have a look :)