You might be right re forecasting (though someone willing in general to frequently bet on 2% scenarios manifesting should fairly quickly outperform someone who frequently bets against them—if their credences are actually more accurate).
I think you’re wrong about UK AISI not putting much credence on extinction scenarios? I’ve seen job adverts from AISI talking about loss of control risk (i.e., AI takeover), and how ‘the risks from AI are not sci-fi, they are urgent.’ And I know people working at AISI who, last I spoke to them, put ≫10% on extinction.
The two job adverts you mention only refer to ‘loss of control’ as one concern among many: ‘risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyber-attacks, enable crimes such as fraud, and the possibility of loss of control.’
I’m not claiming that these orgs don’t or shouldn’t take both the lesser risks and the extreme tail risks seriously (I think they should, and do). I’m denying the claim that people who ‘think seriously’ about AI risks necessarily lean towards high extinction probabilities.