I read about Stable Diffusion today.
Stable Diffusion is an uncensored AI art model.
It’s important to recognize the troubles posed by deepfakes. Stable Diffusion makes those troubles real. Its use, enhanced and unfettered, poses a genuine threat to human culture and society, because it makes fabricated images and video indistinguishable from authentically produced ones. Historical imagery can be reliably faked, along with crime footage, and so on[1]. But that is not why I wrote this shortform.
Stable Diffusion was put out, open-source, with no debate, and with no obstacles other than technical and funding ones. You AI safety folks know what this means: big players like Google, Microsoft, and OpenAI are producing AI art models with restrictions of various sorts, and then a start-up comes along and releases a similar product with no restriction. Granted, the license says you cannot use it for certain things, and the model ships with a safety feature. But the safety feature can be turned off, and I believe people are turning it off.
Everyone discussing DALL-E 2 and its competitors, with their restrictions and limitations, is no longer really having the same conversation. The conversation now is what to do when that technology is let loose on the web: unrestricted, open-source, free to be developed further, and put to any use.
I hope some element of the AI safety community is looking at how to handle the release of AGI software, without safeguards, into the global software community. Clearly, there is only so much you can do to put safeguards on AGI development. The real question is what to do when AGI development occurs with no safeguards and the technology is publicly available. I see the parallel easily: the same ethical concerns, the same genuine restraint on the part of large corporations, and the same “oh well” when some other company doesn’t show the same restraint.