(It’s also possible that it wouldn’t even be a good idea if we could [prevent the development of transformative AI] — after all, that would mean forgoing the benefits as well as preventing the risks.)
I think this is a good point. The goal is maximising the expected value of the future, not minimising the probability of the worst outcome.
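To make the distinction concrete, here's a toy calculation with entirely made-up numbers: a policy that halves the probability of the worst outcome can still lower the expected value of the future if it forgoes enough of the upside.

```python
# Toy illustration with hypothetical numbers: comparing two policies
# by expected value rather than by probability of the worst outcome.

def expected_value(p_catastrophe: float,
                   value_if_ok: float,
                   value_if_catastrophe: float = 0.0) -> float:
    """Expected value under a simple two-outcome model of the future."""
    return (1 - p_catastrophe) * value_if_ok + p_catastrophe * value_if_catastrophe

# Policy A: develop transformative AI. Higher catastrophe risk, larger upside.
ev_develop = expected_value(p_catastrophe=0.10, value_if_ok=100.0)  # = 90.0

# Policy B: prevent development. Lower catastrophe risk, but most upside forgone.
ev_prevent = expected_value(p_catastrophe=0.05, value_if_ok=50.0)   # = 47.5

# Minimising worst-case probability favours B; maximising expected value favours A.
print(f"EV(develop) = {ev_develop}, EV(prevent) = {ev_prevent}")
```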