> There are other safety problems—often ones that are more speculative—that the market is not incentivizing companies to solve.
My personal response would be as follows:
1. As Leopold presents it, the key pressure here that keeps labs in check is societal constraints on deployment, not perceived ability to make money. The hope is that society's response has the following properties:
    - thoughtful, prominent experts are attuned to these risks and demand rigorous responses
    - policymakers are attuned to (thoughtful) expert opinion
    - policy levers exist that provide policymakers with oversight / leverage over labs
2. If labs are sufficiently thoughtful, they'll notice that deploying models is in fact bad for them! Can't make profit if you're dead. *taps forehead knowingly*
    - but in practice I agree that lots of people are motivated by the tastiness of progress, pro-progress vibes, etc., and will not notice the skulls.
Counterpoints to 1:

- Good regulation of deployment is hard (though not impossible in my view).
    - reasonable policy responses are difficult to steer towards
    - attempts at raising awareness of AI risk could lead to policymakers getting too excited about the promise of AI while ignoring the risks
    - experts will differ; policymakers might not listen to the right experts
- Good regulation of development is much harder, and will eventually be necessary.
    - This is the really tricky one IMO. I think it requires pretty far-reaching regulations that would be difficult to get passed today, and would probably misfire a lot. But it doesn't seem impossible, and I know people are working on laying groundwork for this in various ways (e.g. pushing for labs to incorporate evals in their development process; see the sketch below).
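To make the "evals in the development process" idea a bit more concrete, here's a minimal, hypothetical sketch in Python of a pre-release gating check. Everything in it (the eval names, the thresholds, functions like `run_capability_eval`) is invented for illustration, not drawn from any actual lab's pipeline:

```python
# Hypothetical sketch: a gating check that runs before a new model
# checkpoint is cleared for release. All names and thresholds here
# (run_capability_eval, the eval names, the cutoffs) are invented
# for illustration, not taken from any real lab's pipeline.

from dataclasses import dataclass


@dataclass
class EvalResult:
    name: str         # e.g. "autonomous-replication" (hypothetical eval name)
    score: float      # fraction of dangerous-capability tasks the model passed
    threshold: float  # score at or above which we refuse to proceed

    @property
    def passed(self) -> bool:
        return self.score < self.threshold


def run_capability_eval(checkpoint: str, name: str, threshold: float) -> EvalResult:
    """Stub: a real version would run `checkpoint` on a battery of
    dangerous-capability tasks and return the measured score."""
    score = 0.0  # placeholder; no actual model is evaluated here
    return EvalResult(name, score, threshold)


def clear_for_release(checkpoint: str) -> bool:
    """Gate a checkpoint on a fixed set of risk evals; any failure blocks it."""
    risk_evals = {"autonomous-replication": 0.2, "bio-uplift": 0.1}
    results = [run_capability_eval(checkpoint, name, thr)
               for name, thr in risk_evals.items()]
    for r in results:
        status = "ok" if r.passed else "BLOCKED"
        print(f"{r.name}: score={r.score:.2f} (threshold {r.threshold:.2f}) -> {status}")
    return all(r.passed for r in results)


if __name__ == "__main__":
    # In a real pipeline this would run in CI, with deployment (or further
    # scaling) conditional on the gate passing.
    if not clear_for_release("checkpoint-v2"):
        raise SystemExit("release blocked pending safety review")
```

The point isn't the specific thresholds (which are made up) but the shape: evals run automatically as part of the development loop, with a hard stop rather than an after-the-fact judgment call.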