The potential harms of these technologies come from their unbounded scope
Previous technologies also have quite unbounded scopes; this does not seem to me different from the technology of film. The example of film in the post you were replying to also has an unbounded scope.
This can therefore inform the kinds of models / training techniques that are more dangerous: e.g., those for which the scope is the widest
Technologies with a broad scope are more likely to be dangerous, but they are also more likely to be valuable.
If you look at the scope of Photoshop, it can already be used to make deepfake porn. It can also be used to print fake money.
Forbidding the deployment of broad-scope technologies would likely have prevented most of the progress of the last century and would put a huge damper on future progress as well.
When it comes to gene editing, our society has decided to regulate its application while remaining very open to the idea that developing the underlying technology is valuable.
The analogy to how we treat gene editing would be to pass laws that regulate image creation. The fact that deepfake porn is currently not heavily criminalized is a legislative choice; we could pass laws that regulate it the way we regulate other forms of sexual assault.
Instead of regulating at the point of technology creation, you could focus on regulating technology use. To the extent that we are currently doing a bad job at that, you could build a think tank that lobbies for laws to regulate problems like deepfake porn creation, and that constantly analyzes new problems and lobbies for them to be regulated.
When it comes to the issue of deepfake porn, it's also worth looking at why it's not criminalized. When Googling, I found https://inforrm.org/2022/07/19/deepfake-porn-and-the-law-commissions-final-report-on-intimate-image-abuse-some-initial-thoughts-colette-allen/ which makes the case that it should be regulated, but which cites a government report suggesting that creating deepfake porn should be legal while sharing it should not be. I would support making both illegal, but I think approaching the problem from the usage point of view is the right strategy.
Here, I would refer to the third principle proposed in the “What Do We Want” section as well (on cost-benefit evaluation): I think there should at least be more work done to try to anticipate and mitigate harms from these general technologies. For example, what is the rough likelihood of an extremely good outcome vs. an extremely bad outcome if model X is deployed? And does that change if I add modification Y to it?
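To make the kind of comparison I have in mind concrete, here is a minimal sketch in Python with entirely made-up numbers (the probabilities, values, and the "modification Y" are illustrative assumptions, not estimates from anywhere): it just compares the expected value of deploying a hypothetical model X with and without a hypothetical safety modification Y.

```python
# Minimal sketch of a deployment cost-benefit comparison.
# All numbers below are hypothetical, purely for illustration.

def expected_value(p_extreme_good: float, v_extreme_good: float,
                   p_extreme_bad: float, v_extreme_bad: float,
                   v_typical: float = 0.0) -> float:
    """Expected value of deploying a model, collapsing outcomes into
    'extremely good', 'extremely bad', and 'typical' buckets."""
    p_typical = 1.0 - p_extreme_good - p_extreme_bad
    return (p_extreme_good * v_extreme_good
            + p_extreme_bad * v_extreme_bad
            + p_typical * v_typical)

# Hypothetical model X deployed as-is ...
baseline = expected_value(p_extreme_good=0.05, v_extreme_good=100.0,
                          p_extreme_bad=0.02, v_extreme_bad=-500.0)

# ... and with a hypothetical modification Y that trims both tails.
modified = expected_value(p_extreme_good=0.04, v_extreme_good=100.0,
                          p_extreme_bad=0.005, v_extreme_bad=-500.0)

print(f"deploy X as-is:  {baseline:+.2f}")   # -5.00
print(f"deploy X with Y: {modified:+.2f}")   # +1.50
```

A real evaluation would obviously need far better outcome models and probability estimates; the point is only that the question "does modification Y change the sign of the expected value?" can be posed explicitly.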
I don’t think our views are actually inconsistent here: if society scopes down the allowed usage of a general technology to comply with a set of regulatory standards that are deemed safe, that would work for me.
My personal view is that the real danger here is that there isn't enough technical work to mitigate the misuse of models, or even to enforce compliance in a good way. We really need technical work on that, and only then can we start effectively asking the regulation question. Until then, we might want to delay the release of super-powerful successors to these technologies until we can give better performance guarantees for systems deployed this publicly.