I agree that not everyone already knows what they need to know. Our crux is probably “who needs to get it, and how will they learn it?” I think we have more than enough evidence to teach the public and to model what taking it seriously looks like. I think you think we need to make a very respectable and detailed case to convince elites. I think there are multiple routes to influencing elites, and that they will be more receptive once the reality of AI risk is a more popular view. I don’t think timelines are a great tool for convincing either group: they create such a sense of panic, and they invite quibbling with the forecasts instead of facing the thrust of the evidence.
I definitely agree there are plenty of ways we should reach elites and non-elites alike that aren’t statistical models of timelines. And insofar as the resources going toward timeline models (talent, funding, bandwidth) are fungible with the resources going toward other things, maybe I agree that more effort should go toward the other things. But I’m not sure: I really think the timeline models have been useful for our community’s strategy and for informing other audiences.
But also, they only sometimes create a sense of panic; I could see specificity helping people get out of the mode of “it’s vaguely inevitable, nothing to be done, just gotta hope it all works out.” (Notably, the timeline models sometimes imply longer timelines than the vibes coming out of the AI companies and Bay Area house parties.)