That’s fair; upon re-reading your comment it’s actually pretty obvious you meant the conditional probability, in which case I agree multiplying is fine.
I think the conditional statements are actually fairly straightforward: e.g. once we’ve built something far more capable than humanity and that system “rebels” against us, it’s pretty certain that we lose, and point (2) is the classic question of how hard alignment is. Your point (1), whether we build far-superhuman AGI in the next 30 years or so, seems like the most uncertain one here.
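For concreteness, here is a minimal sketch of the decomposition I have in mind, assuming the numbering is (1) we build far-superhuman AGI within ~30 years, (2) that system turns against us given (1), and (3) we lose given (1) and (2); this is just the chain rule:

$$P(\text{doom}) = P(1) \times P(2 \mid 1) \times P(3 \mid 1, 2)$$

Since each factor after the first is a conditional probability, no independence assumption is needed for the multiplication to be valid.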
Yeah, no worries, I was afraid I’d messed up the math for a second there!
It’s funny: my estimates are roughly the opposite of yours. I think (1) is probably the most likely, whereas I view (3) as vastly unlikely. None of the proposed takeover scenarios seem within the realm of plausibility, at least in the near future. But I’ve already stated my case elsewhere.