Don’t treat probabilities less than 0.5 as if they’re 0

Example: “[wishy-washy argument that AI isn’t risky], therefore we shouldn’t work on AI safety.” How confident are you about that? From your perspective, there’s a non-trivial possibility that you’re wrong. And I don’t even mean 1%, I mean more like 30%. Almost everyone working on AI safety thinks it has less than a 50% chance of killing everyone, but the expected value of working on it is still high.
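
To make the expected-value point concrete, here’s a minimal sketch with made-up numbers (the 30% credence, the stakes, and the cost are all hypothetical, just to show the arithmetic, not anyone’s actual estimates):

```python
# Minimal expected-value sketch with hypothetical numbers.
p_risk = 0.3            # your credence that the risk is real, well below 0.5
value_if_risky = 1000   # arbitrary units of good done in the worlds where it matters
cost = 10               # arbitrary units of effort spent either way

expected_value = p_risk * value_if_risky - cost
print(expected_value)   # 290.0 > 0: worth doing even at a 30% credence
```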

Example: “Shrimp are not moral patients, so we shouldn’t try to help them.” Again, how confident are you about that? There’s no way you can be confident enough for this argument to change your prioritization. The margin of error on the cost-effectiveness of any given intervention is far larger than the difference in subjective probability of “shrimp are sentient” between someone who cares about shrimp welfare and someone who doesn’t.
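
As a rough illustration (every number here is hypothetical): suppose the skeptic puts the probability of shrimp sentience at 0.2 and the advocate at 0.6, while the cost-effectiveness estimate itself is uncertain by a factor of ten. The credence gap gets swamped:

```python
# Hypothetical numbers: the credence gap between skeptic and advocate is ~3x,
# while the cost-effectiveness estimate is uncertain by ~10x, so the credence
# gap alone can't settle the prioritization.
p_sentient_skeptic = 0.2
p_sentient_advocate = 0.6
cost_effectiveness_low, cost_effectiveness_high = 10, 100  # arbitrary units of good per dollar

# Expected good per dollar under each credence, at each end of the estimate range.
for p in (p_sentient_skeptic, p_sentient_advocate):
    print(p, p * cost_effectiveness_low, p * cost_effectiveness_high)
# The skeptic's high-end value (0.2 * 100 = 20) beats the advocate's
# low-end value (0.6 * 10 = 6): the estimate's error bars dominate.
```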

EAs are better at avoiding this fallacy than pretty much any other group, but still broadly bad at it.

Edited to add more examples:

  1. “I think there will be a slow takeoff, therefore we will spend ~zero effort planning for what to do in case of an intelligence explosion.”

  2. “I think we can solve AI alignment by getting AI to do our homework. Therefore it’s fine for our alignment plans to critically depend on this working, and we don’t need to bother with a backup plan.”

  3. “I don’t foresee any discontinuous leaps in AI capabilities, so it’s safe to keep iteratively building more powerful models, and we can calibrate risk as we go.”

(I’m sure there are plenty of examples outside of AI safety, but that’s what I’ve been thinking about lately.)