This seems like a case of different prior distributions. I think it’s a specific hypothesis to say that strong optimisers won’t happen (i.e. there has to be a specific reason for this, otherwise it’s the default, for convergent instrumental reasons).
Yes, high uncertainty here. The problem is that your credence in AI being a strong optimiser is a ceiling on p(doom | AGI) under every scenario I've read.
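(The ceiling follows from the law of total probability, on the assumption that every doom scenario routes through strong optimisation. Writing $S$ for "the AI is a strong optimiser":

$$
p(\text{doom} \mid \text{AGI}) = p(\text{doom} \mid \text{AGI}, S)\,p(S \mid \text{AGI}) + p(\text{doom} \mid \text{AGI}, \neg S)\,p(\neg S \mid \text{AGI}) \le p(S \mid \text{AGI}),
$$

where the inequality holds because $p(\text{doom} \mid \text{AGI}, \neg S) = 0$ under that assumption and $p(\text{doom} \mid \text{AGI}, S) \le 1$.)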
What makes you think it’s unlikely that strong optimisers will come about?
Prior: most specific hypotheses are wrong. Update: we don't have strong evidence in either direction. Conclusion: this hypothesis is more likely than not wrong.