Thanks for the post—I think (unoriginally) there are some ways heavy regulation of AI could be very counterproductive or ineffective for safety:
If AI progress slows down enough in countries where safety-concerned people are especially influential, then these countries (and their companies) will fall behind internationally in AI development. This would eliminate much/most of safety-concerned people’s opportunities for impacting AI’s trajectory.
If China “catches up” to the US in AI (due to US over-regulation) when AI is looking increasingly economically and militarily important, that could motivate US policymakers to hit the gas on AI (which would at least undo some of the earlier slowing down of AI, and might spark an international race to the bottom on AI).
Also, you mention,
The community strategy (insofar as there even is one) is to bet everything on getting a couple of technical alignment folks onto the team at top research labs in the hopes that they will miraculously solve alignment before the mad scientists in the office next door turn on the doomsday machine.
From conversation, my understanding is that some governance/policy folks fortunately have (somewhat) more promising ideas than that. (This doesn’t show up much on this site, partly because these professionals tend to be busy and the ideas are fairly rough.) I hear there’s some work aimed at writing up and posting some of these ideas; until then, reaching out to these folks and chatting with them directly might be the best way to learn about the ideas.
Another (unoriginal) way that heavy AI regulation could be counterproductive for safety: AGI alignment research probably becomes more productive as you get close to AGI. So regulation in the jurisdictions whose actors are closest to AGI (currently, the US and UK) would give those actors less time to do high-productivity AGI alignment research before the second-place actor catches up.
And within a jurisdiction, you might think that responsible actors are the most likely to comply with regulation, differentially slowing them down.
If AI progress slows down enough in countries where safety-concerned people are especially influential, then these countries (and their companies) will fall behind internationally in AI development. This would eliminate much/most of safety-concerned people’s opportunities for impacting AI’s trajectory.
There’s a country-agnostic version of that argument about self-regulation: “If AGI companies in which safety-concerned people are especially influential allow safety concerns to slow down their progress towards AGI, then these companies will fall behind. This would eliminate much/most of safety-concerned people’s opportunities for impacting AI’s trajectory”.
Therefore, without any regulation, it’s not clear to what extent the presence of safety-concerned people in AGI companies will matter.
I’m mostly sympathetic; I’d add a few caveats:
Research has to slow down enough for an AI developer to fall behind; an AI developer that has some lead over their competition would have some slack, potentially enabling safety-concerned people to contribute. (That doesn’t necessarily mean companies should try to get a lead, though.)
It seems plausible for some useful regulation to take the form of industry self-regulation (which safety-concerned people at these companies could help advance).
Generally, I think self-regulation is promoted by industry actors in order to preempt actual regulation. Based on your username and a bit of internet research, you seem to be an AI Governance Research Contractor at a major AGI company. Is this correct? If so, I suggest disclosing that affiliation in your profile bio, considering that you engage with the topic of AI regulation on this forum.
(To be clear, your comments here seem consistent with you acting in good faith and having the best intentions.)
I’m still figuring out how I want to engage on this forum; for now, I tentatively prefer not to disclose personal information here. I’d encourage readers to conservatively assume I have conflicts of interest, and to assess my comments and posts on their merits. (My vague sense is that this is a common approach on this forum, common enough that non-disclosure doesn’t imply an absence of conflicts of interest, but maybe I’ve misread? I’m not confident about the approach I’m taking; feel free to message me on this forum if you’d like to discuss this further.)
On your other point, I agree that suspicion toward self-regulation is often warranted; I think my earlier point was sufficiently hedged (“plausible”; “some”) to be compatible with such suspicion.