We can also proactively create AI regulation aimed specifically at promoting individual autonomy and freedom. Some general objectives for such policies could include:
Establish a “right to refuse simulation” as a way of preempting the most extreme forms of targeted manipulation.
Can you elaborate on how you think such a regulation could be implemented? Currently the trend seems to be that AI will be able to emulate anything that it’s trained on. In essence, your proposal might look like ensuring that AIs are not trained on human data without permission. In practice, this might take the form of a very strict copyright regime. Is that what you suggest?
One alternative is that AIs should be allowed to train on other people’s data without restriction, but should refuse any request to emulate specific individuals at inference time. That sounds more sensible to me, and is in line with what OpenAI seems to be doing with DALL-E 3.
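For concreteness, an inference-time refusal of that kind might look roughly like the sketch below. Everything in it (the `PROTECTED_NAMES` registry, the pattern list, the `should_refuse` helper) is hypothetical and invented for illustration; it is not how OpenAI’s systems actually work, and a production version would need real entity recognition and a consent registry rather than string matching.

```python
import re

# Hypothetical registry of people who have opted out of being emulated.
# A real system would use a consent database plus named-entity recognition,
# not a hand-written set and substring matching.
PROTECTED_NAMES = {"jane doe", "john smith"}

# Crude patterns for "emulate this person" requests.
EMULATION_PATTERNS = [
    r"\b(impersonate|emulate|pretend to be|act as|respond as|roleplay as)\b",
    r"\bin the (voice|persona) of\b",
]

def should_refuse(prompt: str) -> bool:
    """Refuse only when the prompt both asks for emulation and names a protected person."""
    text = prompt.lower()
    asks_for_emulation = any(re.search(p, text) for p in EMULATION_PATTERNS)
    names_protected = any(name in text for name in PROTECTED_NAMES)
    return asks_for_emulation and names_protected

print(should_refuse("Impersonate Jane Doe in this conversation"))  # True
print(should_refuse("Rewrite this essay in a more formal style"))  # False
```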
I’m not opposed to training AIs on human data, so long as those AIs aren’t used to make non-consensual emulations of a particular person that are faithful enough that manipulation strategies optimized against the emulation also work very well on the real person. In practice, I think an AI has to be pretty deliberately set up to mirror a specific person before such approaches become extremely effective.
I’d be in favor of a somewhat more limited version of the restriction OpenAI is apparently imposing, where the thing that’s restricted is deliberately aiming to make really good emulations of a specific person[1]. E.g., “rewrite this stuff in X person’s style” is fine, but “gather a bunch of biometric and behavioral data on X, fit an AI to that data, then optimize visual stimuli against that AI until it likes Pepsi” isn’t.
Potentially with a further limitation that the restriction only applies to people who create the AI with the intent of manipulating the real version of the simulated person.
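To make the worry behind the Pepsi example concrete, here is a toy illustration of why a stimulus optimized purely against a person-fitted surrogate can transfer to the real person. The linear “person,” the data, and all names here are invented; real preferences are nothing this simple, but the transfer mechanism is the same in kind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for the target person's (hidden) preference response
# to a 2-D stimulus; real responses are nothing this simple.
true_weights = np.array([0.8, -0.3])

def person_response(stimulus: np.ndarray) -> float:
    """The real person's preference shift for a stimulus (unknown to the manipulator)."""
    return float(stimulus @ true_weights)

# Step 1: "gather behavioral data" -- observe noisy responses to random stimuli.
stimuli = rng.normal(size=(200, 2))
responses = stimuli @ true_weights + rng.normal(scale=0.1, size=200)

# Step 2: "fit an AI to that data" -- here, ordinary least squares as the surrogate.
surrogate_weights, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)

# Step 3: "optimize stimuli against the surrogate" -- maximize the surrogate's
# predicted response under a unit-norm budget; for a linear model the optimum
# is just the (normalized) fitted weight vector.
optimized_stimulus = surrogate_weights / np.linalg.norm(surrogate_weights)

# Transfer: the stimulus was optimized without ever querying the real person,
# yet it scores near the true maximum (the norm of true_weights).
print("response to optimized stimulus:", person_response(optimized_stimulus))
print("best achievable response:     ", float(np.linalg.norm(true_weights)))
```

The point is that step 3 never queries the real person at all, yet the optimized stimulus scores near the true maximum; that is exactly the property that makes high-fidelity non-consensual emulations a manipulation risk.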