Important point. I changed

… AI safety research seems unlikely to have strong enough negative unexpected consequences to outweigh the positive ones in expectation.

to

… Still, it’s possible that there will be a strong enough flow of negative (unforeseen) consequences to outweigh the positives. We should take these seriously, and try to make them less unforeseen so we can correct for them, or at least have more accurate expected-value estimates. But given what’s at stake, they would need to be pretty darn negative to pull down the expected values enough to outweigh a non-trivial risk of extinction.

I added an EDIT block in the first paragraph after quoting you (I had misinterpreted your sentence).