If AI progress slows down enough in countries where safety-concerned people are especially influential, then these countries (and their companies) will fall behind internationally in AI development. This would eliminate much/most of safety-concerned people’s opportunities for impacting AI’s trajectory.
There’s a country-agnostic version of that argument about self-regulation: “If AGI companies in which safety-concerned people are especially influential allow safety concerns to slow down their progress towards AGI, then these companies will fall behind. This would eliminate much/most of safety-concerned people’s opportunities for impacting AI’s trajectory”.
Therefore, without any regulation, it’s not clear to what extent the presence of safety-concerned people in AGI companies will matter.
I’m mostly sympathetic—I’d add a few caveats:
Research has to slow down enough for an AI developer to fall behind; an AI developer that has some lead over their competition would have some slack, potentially enabling safety-concerned people to contribute. (That doesn’t necessarily mean companies should try to get a lead though.)
It seems plausible for some useful regulation to take the form of industry self-regulation (which safety-concerned people at these companies could help advance).
Generally, I think self-regulation is promoted by industry actors mainly in order to prevent actual regulation. Based on your username and a bit of internet research, you seem to be an AI Governance Research Contractor at a major AGI company. Is this correct? If so, I suggest that you disclose that affiliation on your profile bio (considering that you engage in the topic of AI regulation on this forum).
(To be clear, your comments here seem consistent with you acting in good faith and having the best intentions.)
I’m still figuring out how I want to engage on this forum; for now, I generally, tentatively prefer to not disclose personal information on here. I’d encourage readers to conservatively assume I have conflicts of interest, and to assess my comments and posts based on their merits. (My vague sense is that this is a common approach to this forum—common enough that non-disclosure doesn’t imply an absence of conflicts of interest—but maybe I’ve misread? I’m not confident about the approach I’m taking—feel free to message me on this forum if you’d like to discuss this further.)
On your other point, I agree that suspicion toward self-regulation is often warranted; I think my earlier point was sufficiently hedged (“plausible”; “some”) to be compatible with such suspicion.