I’m also heartened by recent polling, and spend a lot of time these days thinking about how to argue for the importance of existential risks from artificial intelligence.
I’m guessing the main difference in our perspective here is that you see including existing harms in public messaging as “hiding under the banner” of another issue. In my mind, (1) existing harms are closely related to the threat models for existential risks (i.e. how do we get these systems to do the things we want and not do the other things); and (2) I think it’s just really important for advocates to try to build coalitions between different interest groups with shared instrumental goals (e.g. building voter support for AI regulation). I’ve seen a lot of social movements devolve into factionalism, and I see the early stages of that happening in AI safety, which I think is a real shame.
Like, one thing that would really help the safety situation is if frontier models were treated like nuclear power plants and couldn’t just be deployed at a single company’s whim without meeting a laundry list of safety criteria (both because of the direct effects of the safety criteria, and because such criteria literally just buy us some time). If it is the case that X-risk interest groups can build power and increase the chance of passing legislation by allying with others who want to include (totally legitimate) harms like intellectual property violations in that list of criteria, I don’t see that as hiding under another’s banner. I see it as building strategic partnerships.
Anyway, this all goes a bit further than the point I was making in my initial comment, which is that I think the public isn’t very sensitive to subtle differences in messaging — and that’s okay, because those subtle differences matter much more when you’re drafting legislation than when you’re building general public pressure.