That makes sense, yes, perhaps there are some fanaticism worries re my make-the-future-large approach, even more so than with x-risk work, and maybe I am less resistant to fanaticism-flavoured conclusions than you. That said, I think not all work like this need be fanatical—e.g. improving international cooperation and treaties for space exploration could be good in more frames (and bad in some frames you brought up, granted).
I don’t know lots about it, but I wonder if you’d prefer more of a satisficing decision theory, where we focus on getting a decent outcome rather than necessarily the best one (e.g. Bostrom’s ‘Maxipok’ rule). So I think not wholeheartedly going for maximum expected value isn’t a sign of irrationality, and could reflect a different but sound decision approach.