Executive summary: The concept of a “safety tax function” provides a framework for analyzing the relationship between technological capability and safety investment requirements, reconciling the ideas of “solving” safety problems and paying ongoing safety costs.
Key points:
- Safety tax functions can represent both “once-and-done” and ongoing safety problems, as well as hybrid cases.
- Graphing safety requirements against capability levels on log-log axes allows analysis of safety tax dynamics across different technological eras (a minimal illustrative sketch follows this list).
- Key factors in safety coordination include the peak tax requirement, the suddenness and duration of peaks, and the asymptotic tax level.
- Safety is not binary; contours represent different risk tolerance levels as capabilities scale.
- The model could be extended to account for world-leading vs. minimum safety standards, non-scalar capabilities/safety, and sequencing effects.
- This framework may help provide an intuitive grasp of strategic dynamics in AI safety and other potentially dangerous technologies.
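As a concrete illustration of the graphing idea above, here is a minimal sketch. The functional form (a log-normal-shaped peak decaying toward a floor), the parameter names (`peak_at`, `width`, `floor`), and the scaling by risk tolerance are my own illustrative assumptions, not the post's actual model:

```python
# Illustrative sketch only: hypothetical safety tax functions plotted on
# log-log axes, with contours for different risk tolerance levels.
import numpy as np
import matplotlib.pyplot as plt

capability = np.logspace(0, 6, 400)  # capability level, arbitrary units

def safety_tax(c, peak_at=1e3, width=1.0, floor=0.0):
    """Hypothetical safety tax as a function of capability: a peak in
    log-capability space that decays toward an asymptotic 'floor' level.
    floor=0 models a once-and-done problem (tax vanishes after the peak);
    floor>0 models ongoing safety costs."""
    peak = np.exp(-((np.log10(c) - np.log10(peak_at)) ** 2) / (2 * width**2))
    return floor + (1 - floor) * peak

# Lower risk tolerance demands more safety investment at every capability
# level; each contour below corresponds to one tolerance level.
for tolerance, style in [(0.1, "-"), (0.01, "--"), (0.001, ":")]:
    tax = safety_tax(capability, floor=0.05) / tolerance
    plt.loglog(capability, tax, style, label=f"risk tolerance {tolerance}")

plt.xlabel("capability level (log scale)")
plt.ylabel("required safety investment (log scale)")
plt.title("Hypothetical safety tax contours")
plt.legend()
plt.show()
```

In this toy picture the peak height, its suddenness and width, and the asymptotic floor correspond to the coordination factors listed above, and the stacked contours capture the "safety is not binary" point: tightening the risk tolerance shifts the whole curve upward rather than flipping a single safe/unsafe switch.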
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.