Thanks for working on this, Holly. I really appreciate more people thinking through these issues, and I found this interesting and a good overview of considerations I had previously learned about.
I’m possibly much more concerned than you about politicization and a general vague feeling of downside risks. You write:
[Politization] is a real risk that any cause runs when it seeks public attention, and unfortunately I don’t think there’s much we can do to avoid it. Unfortunately, though, AI is going to become politicized whether we get involved in it or not. (I would argue that many of the predominant positions on AI in the community are already markers of grey tribe membership.)
I spontaneously feel like I’d want you to spend more time thinking about politicization risks than this cursory treatment here indicates.
E.g. politicization is probably not a binary, and I'd plausibly be very grateful for work that, on the margin, reduces the intensity of politicization.
E.g. politicization can probably take thousands of different shapes, some of which are much more conducive to policymakers still having reasonably sane discussions on issues relevant to existential risk.
More generally, I'm pretty positively surprised by how things are going on the political side of AI, and I'm a bit protective of it. While I don't have any insider knowledge and haven't thought much about all of this, I see bipartisan and sensible-sounding efforts in Congress, I see Ursula von der Leyen calling AI a potential existential risk in front of the European Parliament, I see the UK AI Safety Summit, I see the Frontier Model Forum, and the UN is saying things about existential risks. As a consequence, I'd spontaneously rather see more reasonable voices being supportive, encouraging, and protective of the current momentum, rather than potentially increasing the adversarial tone and "politicization noise", making things more hot-button, less open and transparent, etc.
One random concrete way public protests could affect things negatively: if AI pause protests had started half a year earlier, would e.g. Microsoft chief executives still have signed the CAIS open letter?