Signal boosting my twitter poll, which I am very curious to have answered:
https://twitter.com/BondeKirk/status/1758884801954582990
Basically, the question I'm trying to get at is whether having hands-on experience training LLMs (a proxy for technical expertise) makes you more or less likely to take existential risks from AI seriously.
I think this leaves out what is perhaps the most important step in making a quality forecast: considering the base rates!
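To make the base-rate point concrete, here is a minimal sketch of how a forecast anchored on a base rate gets updated with Bayes' rule. All numbers are hypothetical, chosen only to show the mechanics rather than to reflect any actual estimate of AI risk:

```python
# Hypothetical illustration: start from a base rate (prior), then update on evidence.
# All probabilities below are made up for demonstration purposes.
base_rate = 0.05          # prior: fraction of comparable predictions that came true
p_evidence_if_true = 0.7  # P(observed evidence | event happens)
p_evidence_if_false = 0.2 # P(observed evidence | event does not happen)

# Bayes' rule: posterior = P(E|H) * P(H) / P(E)
posterior = (p_evidence_if_true * base_rate) / (
    p_evidence_if_true * base_rate + p_evidence_if_false * (1 - base_rate)
)
print(round(posterior, 3))  # prints 0.156
```

Even fairly strong evidence only moves the estimate from 5% to about 16% here, which is the usual lesson: ignoring the base rate and jumping straight to the likelihoods tends to dramatically overstate the final probability.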