Executive summary: This exploratory essay argues that while it’s often impossible to determine the optimal value of a goal (like AI safety), it is still decision-relevant and tractable to assess whether it is undervalued or overvalued on the margins—and the author concludes that AI existential risk reduction is clearly undervalued and should receive greater policy attention today.
Key points:
- In high-uncertainty contexts, one doesn't need to calculate the total or optimal value of an option; it's often enough to judge whether it is undervalued or overvalued relative to the current benchmark (illustrated with examples from art markets and trading).
- This "marginal thinking" applies in politics: policymakers can ask whether a goal (e.g. crime prevention, welfare spending) should be weighted more or less heavily, even without knowing its exact optimal level.
- Applying this to AI existential risk, the author finds it difficult to calculate the "optimal" tradeoff between utopia and extinction scenarios, but argues that policymakers don't need this precision to make better decisions.
- On the margins, AI safety is severely undervalued: most politicians and the public barely recognize existential risk, and many low-cost, high-value policy improvements (e.g. AI developer safety protocols, whistleblower protections) remain unimplemented.
- While it's possible that AI safety could eventually be overemphasized, the author sees that risk as very distant; for now, greater prioritization is warranted.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.