Specifically addressing your AI art point: In this case, you risk this fallacy being used to prop up technologies which solve problems they themselves created. Which, I suspect, is part of the popular backlash against AI art in the first place. These justifications continue to be used by fossil fuel companies in developing 'biofuels', 'sustainable aviation fuel', etc., and it's not possible to falsify the claim that some less harmful future iteration of the current technology might exist; meanwhile the companies continue to pollute, often at greater and greater scales. There is a big difference between these companies developing sustainable fuels on the side, and redirecting 100% of their resources to that development. I suspect you might feel the same way about AI safety vs. general AI development.
Maybe we can amend the framing to exclude this somehow, because I really like the rest of this (the nuclear energy example felt particularly salient). To differentiate your examples: nuclear power was intended to replace an existing harmful energy source, but AI art doesn't replace… harmful manual artists? So I would perhaps frame the fallacy as occurring only when a promising new technology has real harms today, but some long tail of probabilities could eventually make it less harmful (rather than merely better) than the technology it replaces.