Without having read the letter yet, why do you find it questionable?
I’ve read the letter and have the same question
A Twitter thread on the letter says more: essentially, the post-crisis liability that Anthropic proposes is basically equivalent to bankruptcy of the company, which is a pretty mild consequence given the risks at hand. Also, an AI regulatory agency is probably going to be necessary at some point in the future, so it’s better to set it up now and give it time to mature.
I don’t have Twitter so I can’t view the thread, but bankruptcy of a company for facilitating $500m of damage (the monetary threshold in the bill) doesn’t seem very mild?
Bankruptcy is an upper bound, not a lower bound. If you could pay out enough in damages and still stay solvent, you probably would. The (alleged/proposed) Anthropic changes aren’t “if you do at least $500M in damages, you’ll go bankrupt.” They’re more like “if you do at least $X in damages, the worst that can happen to you is that your company[1] will go bankrupt.”
(To be clear, not a lawyer/legislator, did not read the letter very carefully, etc)
If I understand correctly, they are also pushing for limited liability, so directors/executives are not responsible either.
https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=RfJsudqwEMwTR5S5q
TL;DR
Anthropic are pushing for two key changes:
1. Not to be accountable for “pre-harm” enforcement of AI safety standards (i.e., wait for a catastrophe before enforcing any liability).
2. “If a catastrophic event does occur … the quality of the company’s SSP should be a factor in determining whether the developer exercised ‘reasonable care.’” (i.e., if your safety protocols look good, you can be let off the hook for the consequences of a catastrophe).
They are also pushing to significantly weaken whistleblower protections.
Yeah this seems like a reasonable summary of why the letter is probably bad, but tbc I thought it was questionable before I was able to read the letter (so I don’t want to get credit for doing the homework).
Hypothetically, if companies like Anthropic or OpenAI wanted to create a set of heuristics that lets them acquire power while generating positive (or at least neutral) safety-washing PR among credulous nerds, they could have a modus operandi of:
a) publicly claiming to be positive on serious regulations with teeth, whistleblower protections, etc., and saying that the public should not sign a blank check for AI companies inventing some of the most dangerous technologies in history, while
b) privately do almost everything in their power to undermine serious regulations or oversight or public accountability.
If we live in that world (which tbc I’m not saying is certain), someone needs to say that the emperor has no clothes. I don’t like being that someone, but here we are.