I don’t understand this post, because it seems to be parodying Anthropic’s Responsible Scaling Policies (i.e., saying that the RSPs are not sufficient), but the analogy to nuclear power is confusing, since IMO nuclear power has in fact been harmfully over-regulated. Compared to a status quo where society hugely overreacted to the risks of nuclear power without properly weighing costs against benefits, advocating for a “balanced, pragmatic approach to mitigating potential harms from nuclear power” actually seems good.
Maybe you can imagine how confused I am if we take another example of an area where I think there is a harmful attitude of regulating entirely with a view towards avoiding visible errors of commission, while completely ignoring errors of omission:
Hi, we’re your friendly local pharma company. Many in our community have been talking about the need for “vaccine safety.”… We will conduct ongoing evaluations of whether our new covid vaccine might cause catastrophic harm (conservatively defined as >10,000 vaccine-side-effect-induced deaths).
We aren’t yet sure whether the vaccine will have rare serious side effects, since of course we haven’t yet deployed the vaccine in the full population, and we’re rushing to deploy it quickly in order to save the lives of the thousands of people dying of covid every day. But fortunately, our current research suggests that our vaccine is unlikely to cause unacceptable harm. The frequency and severity of side effects seen so far in medical trials of the vaccine are far below our threshold of concern… the data suggest that we don’t need to adopt additional safety measures at present.
To me, vaccine safety and nuclear safety seem like the least helpful possible analogies to the AI situation, since the FDA and NRC regulatory agencies are both heavily infected with an “avoid deaths of commission at nearly any cost” attitude, which ignores tradeoffs and creates a massive “invisible graveyard” of excess deaths-of-omission. What we want from AI regulation isn’t an insanely one-sided focus that greatly exaggerates certain small harms. Rather, for AI it’s perfectly sufficient to take the responsible, normal, commonsensical approach of balancing costs and benefits. The problem is just that the costs might be extremely high, like a significant chance of causing human extinction!!
Another specific bit of confusion: when you mention that Chernobyl only killed 50 people, is this supposed to convey:
1. This sinister company is deliberately lowballing the Chernobyl deaths in order to justify continuing to ignore real risks, since a linear-no-threshold model suggests that Chernobyl might indeed have caused tens of thousands of excess cancer deaths around the world? (I am pretty pro-nuclear power, but nevertheless the linear-no-threshold model seems plausible to me personally; see the back-of-the-envelope sketch after this list.)
2. That Chernobyl really did kill only 50 people, and therefore the company is actually correct to note that nuclear accidents aren’t a big deal? (But then I’m super-confused about the overall message of the post...)
3. That Chernobyl really did kill only 50 people, but NEVERTHELESS we need stifling regulation on nuclear power plants in order to prevent other rare accidents that might kill 50 people tops? (This seems like extreme over-regulation of a beneficial technology, compared to the much larger number of people who die from the smoke of coal-fired power plants and other power sources.)
4. That Chernobyl really did kill only 50 people, but NEVERTHELESS we need stifling regulation, because future accidents might indeed kill over 10,000 people? (This seems like it would imply some kind of conversation about first-principles reasoning and tail risks, but that isn’t present in the post?)
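To make the linear-no-threshold arithmetic in item 1 concrete, here is a minimal sketch. The dose and risk numbers are my own illustrative assumptions (published Chernobyl collective-dose estimates vary widely), not figures from the post:

```python
# Back-of-the-envelope LNT estimate of Chernobyl excess cancer deaths.
# Both inputs below are illustrative assumptions, not settled figures.

COLLECTIVE_DOSE_PERSON_SV = 400_000  # assumed global collective dose (person-sieverts);
                                     # literature estimates span a wide range
FATAL_RISK_PER_SV = 0.05             # assumed ~5% fatal-cancer risk per sievert,
                                     # in the ballpark of ICRP-style nominal coefficients

# Under linear-no-threshold, expected excess deaths scale linearly with
# collective dose, no matter how thinly that dose is spread over a population.
excess_deaths = COLLECTIVE_DOSE_PERSON_SV * FATAL_RISK_PER_SV
print(f"LNT excess-death estimate: {excess_deaths:,.0f}")  # -> 20,000
```

Under these assumptions the estimate lands in the tens of thousands, which is why the “only 50 deaths” framing turns entirely on whether you accept LNT.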
Thanks for your comment, Jackson! I’ve removed my post, since it seems it was too confusing. One message that I meant to convey is that the imaginary nuclear company essentially does not have any safety commitments currently in effect (“we aren’t sure yet how to operate our plant safely”) and is willing to accept any number of deaths below 10,000, despite adopting this “responsible nuclear policy.”