That makes sense—I was confused, since you said different things, and some of them were subjunctive, and some were speaking about why you disagree with proposed analogies.
Given your perspective, is loss-of-control from more capable and larger models not a foreseeable harm? If we see a single example of this, and we manage to shut it down, would you then be in favor of a regulate-before-training approach?