“And the truth is, as even Gabriel Weil accepts, it’s super unlikely that the training and deployment of highly capable advanced AI would fit the longstanding legal definition of an “abnormally dangerous activity”.”
This is not an accurate characterization of my views. Here’s a relevant quote from the linked paper:
“Training and deploying frontier AI models are clearly not activities of common usage, at least with current technology, given the enormous computational resource requirements of these systems. This means that, under the Restatement Third’s test, the key issue is whether training or deploying advanced AI systems satisfies the first criterion of creating a foreseeable and highly significant risk of harm even when reasonable care is exercised. This question is likely to be controversial. While I think the available evidence does support the conclusion that reasonable care may be insufficient to reduce the risk of catastrophic AI misalignment or misuse to below levels that would qualify as “highly significant,” recognizing any software development project as abnormally dangerous would represent a substantial doctrinal innovation.”
In the following paragraph, I further clarify: “To be clear, I do think an accurate understanding of the risks of advanced AI systems supports strict liability for the training and deployment of advanced AI systems, but it is important to recognize that this is not the likely outcome of a mechanical application of existing doctrine to harms caused by these systems.”
In a later section of the paper, I say “If courts are persuaded by the arguments summarized in part I that advanced AI systems do indeed pose a significant risk of causing human extinction, they should recognize training and deploying those systems as an abnormally dangerous activity. To be sure, treating software development as abnormally dangerous would represent a significant extension of existing strict liability, but it is entirely consistent with the doctrinal rationale for the abnormally dangerous activities category.”
I don’t see how you can read me as accepting that “it’s super unlikely that the training and deployment of highly capable advanced AI would fit the longstanding legal definition of an “abnormally dangerous activity”.”