Perhaps the core theme here, which comes up a lot, goes something like this: there is clearly a strong incentive to ‘work on’ any imminent and unavoidable challenge whose resolution could require or result in “hard-to-reverse decisions with important and long-lasting consequences”. Current x-risks have been established as the ‘most obvious’ such challenges, in the sense that making wrong decisions potentially results in extinction, which obviously counts as ‘hard-to-reverse’ and has ‘long-lasting’ consequences. But can we think of any other such challenges, or any other category of them? I don’t know of any I’ve found anywhere near as convincing as the x-risk case, but I suppose that’s why the example project on case studies could be important?
Another thought: why might people who have been concerned about x-risk from misaligned AI pivot to asking about these other challenges? (I’m not saying Will counts as ‘pivoting’; I’m just asking the question in general.) One question I have in mind is: have we already reached a point of small (and diminishing) returns from putting today’s resources into the narrower goal of reducing x-risk from misaligned AI?
I think Neel makes a good point.
And to me the ‘other’ elephant in the room is the value of Wytham Abbey beyond thinking of it purely as an investment. In the comment you link, Owen Cotton-Barratt tried to explain his belief in the value of specialist venues that can host conferences, workshops, researcher meetings etc., and that are committed to promoting a certain flavour of “open-ended intellectual exploration”. One can reasonably disagree with him (and admittedly it is very difficult to put a monetary value on this stuff), but it probably at least deserves an explicit rebuttal?