I believe you that you’re speaking honestly for your own views, and for the views of lots of other people in ML. From experience, I know that there are also lots of people in ML who do think AGI is likely to kill us all, and who choose to work on advancing capabilities anyway. (Some with the justification Eliezer highlighted, and in many cases with other justifications, though I don’t think these are adequate.)
I’d be interested to hear your views on this, and why you don’t think superintelligence risk is a reason to pause scaling today. I can imagine a variety of reasons someone might think that, but I have no idea which is yours, and I find that conversations about this are often quite productive.
I don’t see where Eliezer has said “plausibly including nuclear”. The point of mentioning nuclear was to highlight the scale of the risk on Eliezer’s model (“this is bad enough that even risking a nuclear confrontation would be preferable”), not to predict a nuclear confrontation.