Yes, the consequences are probably less severe in this context, which is why I wouldn't consider this a particularly strong argument. Imo, it's more important to understand this line of thinking for the purpose of modeling outsiders' reactions to potential censorship, as this seems to be how people irl are responding to the policy decisions of OpenAI et al.
I would also like to emphasize again that sometimes regulation is necessary, and I am not against it on principle, though I do believe it should be used with caution. This post critiques the details of how we are implementing censorship in large models, not so much its use in the first place.