you’ve implied elsewhere in this thread that you think that developers should be treated like pre-1938 drug manufacturers, with no rules.
I think you misread me. I’ve said across multiple comments that I favor targeted regulations that are based on foreseeable harms after we’ve gotten more acquainted with the technology. I don’t think that’s very similar to an indefinite pause, and it certainly isn’t the same as “no rules”.
That makes sense; I was confused because your comments said different things: some were subjunctive, and some explained why you disagree with proposed analogies.
Given your perspective, is loss-of-control from more capable and larger models not a foreseeable harm? If we see a single example of this, and we manage to shut it down, would you then be in favor of a regulate-before-training approach?