Executive summary: Risk compensation, where safety improvements lead to more dangerous usage, can partially, fully, or more than fully offset the benefits of safety work, and deserves more attention in analyzing existential risk reduction efforts.
Key points:
Risk compensation occurs when safety improvements lead users to engage in more dangerous usage of a system, offsetting some or all of the safety benefits.
A simple model shows risk compensation can fully or more than fully negate safety improvements, depending on the survival function and the value placed on life without the risky system (see the illustrative sketch after this list).
Safety improvements effectively decrease the “price of capabilities”, inducing more dangerous usage, similar to how lowering the price of one good affects demand for its substitutes/complements.
Risk compensation is likely more significant when current usage is far below what would be chosen if safety were no concern.
Estimating the magnitude of risk compensation in existential risk domains is difficult but important.
To reduce risk, one should prefer safety work in domains with long lags before compensation and a lower value placed on life without the system, and prefer safety improvements that focus on lower-capability systems and that are perceived as less effective than they actually are.
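As a rough illustration of the model claim above, here is a minimal toy sketch in Python. All functional forms (the exponential survival function S(c) = exp(-c / s), the linear utility u0 + c, and the normalization of catastrophe to utility 0) are assumptions chosen for illustration, not taken from the original post: a user picks a capability level c to maximize expected utility, and we check how the survival probability at the chosen level responds as the safety parameter s improves.

```python
# Minimal toy model of risk compensation (all functional forms are
# illustrative assumptions, not taken from the original post).
#
# A user picks a capability level c >= 0 to maximize expected utility
#     U(c) = S(c) * (u0 + c),
# where S(c) = exp(-c / s) is an assumed survival function with safety
# parameter s (larger s = safer system), u0 is the value of surviving
# without the risky system, and catastrophe is normalized to utility 0.
import math

def optimal_usage(s: float, u0: float) -> float:
    """Utility-maximizing capability level; the interior optimum is c* = s - u0."""
    return max(s - u0, 0.0)

def survival_at_optimum(s: float, u0: float) -> float:
    """Probability of avoiding catastrophe at the chosen capability level."""
    return math.exp(-optimal_usage(s, u0) / s)

for u0 in (0.0, 1.0):
    print(f"value of life without the system u0 = {u0}")
    for s in (2.0, 4.0, 8.0):  # successively safer systems
        print(f"  safety s = {s}: chosen usage = {optimal_usage(s, u0):.2f}, "
              f"survival probability = {survival_at_optimum(s, u0):.3f}")
```

In this sketch, when no value is placed on life without the system (u0 = 0), the chosen survival probability stays at exp(-1) no matter how safe the system becomes (full offset), while a positive u0 makes the survival probability fall as safety improves (more than full offset); under other assumed survival functions the offset could instead be only partial.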
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.