(Edit: Disclosure: I am executive director of CSER)
Re: your second question, I don’t personally have a good answer re: bad advice, as these consultations get hundreds of submissions and I haven’t read all or even most of them. (I do recall seeing some that dismissed or ridiculed AI x-risk as conceptually nonsensical.)
Submissions I’ve been involved in tend towards (a) summarising already-published work, (b) making sensible, noncontroversial recommendations, and (c) occasionally gently keeping the Overton window open (e.g. ‘many AI experts think AGI is plausible at some point in the future, but on a very uncertain timeline; we should take safe development seriously, and there is good work that can be done, and is being done, at present on technical AI safety’, as opposed to ‘AI x-risk is real, scary and imminent’). The aim of (c) is typically to counterbalance the ‘AI safety/alignment is nonsense and everyone working on it is deluded’ view rather than to promote specific action.
There are a few reasons for this. These open calls for evidence are noisy processes, and not the best way to influence policy on controversial topics or in very concrete ways. However, producing solid input for them is a good way to get established as a reputable, trustworthy source of expertise and partner. In particular, my impression is that it gives people in government, including those already concerned with these issues, greater scope to engage with orgs like ours in more in-depth conversation and analysis (which is more appropriate for the ‘controversial/concrete action-relevant’ engagement). It’s easier to justify investing time and resources in an org that’s been favourably featured in these processes than in a ‘random centre somewhere working on slightly unusual topics’. But it can be hard to disentangle exactly how much of a role these submissions play, versus the Cambridge/Oxford ‘brand’, a track record of academic success and publications, 1-1 meetings with policymakers that would have happened anyway, etc.