Executive summary: Bayesian probability distributions, rather than single point estimates, should be used when reasoning about highly uncertain risks like existential threats from AI, but this approach has some counterintuitive and potentially problematic implications.
Key points:
In real-world Bayesian reasoning, both priors and likelihood ratios are often uncertain, so probability distributions should be used instead of single numbers.
This approach can be applied to estimating the probability of existential risks like AI doom scenarios, by assigning probabilities to different models of AI development.
However, taking the expected value over a wide probability distribution can lead to counterintuitive results, such as a “humble cosmologist” effectively assigning a relatively high probability to simulation shutdown risk (see the sketch after this list).
Naively using expected value in decision making could lead to taking drastic actions based on highly speculative risks.
This approach may privilege unfalsifiable hypotheses and seems to rate speculative risks as more probable than risks grounded in empirical evidence.
The author is uncertain about the full implications and potential solutions, but remains wary of relying on single-number probability estimates for existential risks.
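A minimal sketch of the expected-value-over-models calculation the summary describes, using hypothetical model names and numbers (not taken from the post): even modest credence in high-risk models pulls the overall expected probability well above what most of the credence-weighted models individually imply.

```python
# Hypothetical illustration: averaging P(doom) over uncertainty about which
# model of AI development is correct. The overall probability is the
# credence-weighted mean, which a small weight on high-risk models dominates.

models = {
    # model name: (credence in the model, P(doom) under that model)
    "scaling stalls":        (0.40, 0.001),
    "slow takeoff, aligned": (0.40, 0.01),
    "fast takeoff":          (0.15, 0.30),
    "worst-case model":      (0.05, 0.90),
}

p_doom = sum(credence * p for credence, p in models.values())
print(f"Expected P(doom) = {p_doom:.3f}")
# ≈ 0.094, even though 80% of the credence sits on models implying ≤ 1% risk
```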
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.