I do make the “by default” claim, but I also give reasons why advocating for specific regulations can backfire. E.g., the environmentalist success with NEPA: environmentalists had huge success in getting the specific legal powers and constraints on government that they asked for, but those have since been repurposed in service of default government incentives. Also, advocacy for a specific set of regulations spills over onto others. When AI safety advocates make the case for fearing AI progress, they provide support for a wide range of responses to AI, including many nonsensical ones.
Yes, some regulations backfire, and that’s a good flag to keep in mind when designing policy. But to actually make the reference-class argument work, you’d have to show that this is what we should expect from AI policy, which would include showing that failures like NEPA are either much more relevant to the AI case or more numerous than other, more successful regulations, like (in my opinion) the Clean Air Act, Sarbanes-Oxley, and the bans on CFCs and leaded gasoline. I know it’s not quite as simple as “I would simply design good regulations instead of bad ones,” but it’s also not as simple as “some regulations are really counterproductive, so you shouldn’t advocate for any.” Among other things, the latter assumes that nobody else will be pushing for really counterproductive regulations!