My biggest disagreement is over the probabilities assigned to something “other” going wrong, which still look somewhat too large to me after a decent attempt to think about what might fail.
After a lot of discussion in the original post, I made a new model where I (a) removed steps I thought were very likely to succeed and (b) removed most of the mass from “other” since discussing with people had increased my confidence that we’d been exhaustive: http://lesswrong.com/lw/fz9
Sorry, if I’d done my homework I’d have linked to that and might have said you agreed with me!
I was pointing out the disagreement not as a critique, but to highlight that in the piece Eliezer linked to as exhibiting the problems described, the biggest issue seemed to me to be a rather different one.