I agree with the general thrust of this post, at least the weak version that we should treat this as an unexplored field worth putting some effort into. But I strongly disagree with the sentiment here:
> maybe we’re closer to the “ceiling” on Survival than we are to the “ceiling” of Flourishing.
Most people (though not everyone) think we’re much more likely than not to Survive this century. Metaculus puts *extinction* risk at about 4%; a survey of superforecasters put it at 1%; and Toby Ord put total existential risk this century at 16%.
It’s not that I disagree with their estimates (if anything I’d guess they’re too high); it’s that ‘this century’ is only a tiny fraction of the time in which we could go extinct before going interstellar. There are any number of Very Bad Things that could happen to us this century or soon thereafter that would substantially reduce our probability of surviving long term (in particular, of reaching a state in which we could survive the expansion of our sun).
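To put rough, purely illustrative numbers on that (the 2% per-century risk and 50-century timescale are assumptions for the sake of the arithmetic, not anyone’s actual estimates): even if per-century extinction risk were held to 2%, surviving the 50 centuries it might take to become robustly interstellar would come out to

$$0.98^{50} \approx 0.36,$$

i.e. roughly a one-in-three chance of making it, which doesn’t sound like being anywhere near the ceiling on Survival.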
Citation needed on this point. I think you’re underestimating the selection bias, for a start: it’s extremely hard to know how many people have engaged with and rejected the doomer arguments, since they have far less incentive to promote their views. And those who do engage often find sloppy arguments and gross misuse of the data in some of the prominent doomer cases. (I didn’t have to look too deeply to realise the orthogonality thesis was a substantial source of groupthink.)
Even among AI safety workers, it’s far from clear to me that the relationship you assert exists. My impression of the AI safety space is that many orgs are working on practical problems they take very seriously without putting much credence in human-extinction scenarios (FAR.AI, Epoch, UK AISI, off the top of my head).
One guy also looked at the explicit views of AI experts and found, if anything, an anticorrelation between their academic success and their extinction-related concern. That’s looking back a few years, and obviously a lot can change in that time, but the arguments for AI-driven extinction had already been around for well over a decade at the time of that survey.
This is true for forecasting in every domain. There are virtually always domain experts who have spent their careers thinking about any given question, and yet superforecasters seem to systematically outperform them. If this weren’t true, superforecasting wouldn’t be a field—we’d just go straight to the domain experts for our predictions.