I’m not sure whether this counts as a heuristic or not, but anyway… I think an important point is that it’s not clear there really is an incentive problem, or (more weakly) that the incentive problem is nowhere near as bad as classical AI risk arguments suggest. Designers of AI systems have very strong incentives for those systems not to be as useless to their operators as classical AI risk arguments paint them. If you were running a paperclip company, you would very strongly want to figure out how to make a paperclip maximiser that doesn’t use people’s atoms to make paperclips, and there would be lots of opportunities to learn as you go on this front: there would be incentives for capabilities to develop in tandem with safety.
The incentive problem is also eased by the fact that the market for AI development is quite concentrated: there are fewer than ten major players who currently look most likely to build very advanced AGIs. Thus, you only need to get coordination on safety from fewer than ten actors. Biotechnology, by contrast, is a complete nightmare from a coordination point of view, requiring coordination among pretty much all states. Climate change also seems difficult on this front, though less bad than bio.