Doom: The Politics of Catastrophe by Niall Ferguson examines the way governments have handled catastrophes in the past, with widely varying results.
I enjoyed his podcast with Tyler Cowen on it, which touches on AI risk:
https://conversationswithtyler.com/episodes/niall-ferguson/
“FERGUSON: I think the problem is that we are haunted by doomsday scenarios because they’re seared in our subconscious by religion, even though we think we’re very secular. We have this hunch that the end is nigh. The world is going to end in 12 years, or no, it must be 10. So I think part of the problem of modernity is that we’re still haunted by the end time.
We also have the nasty suspicion — this is there in Nick Bostrom’s work — that we’ve created a whole bunch of technologies that have actually increased the probability rather than reduced the probability of an extinction-level event. On the other hand, we’re told that there’s a singularity in prospect when all the technologies will come together to produce superhuman beings with massively extended lifespans and the added advantage of artificial general intelligence.
The epistemic problem, as I see it is — Ian Morris wrote this in one of his recent books — which is the scenario? Extinction-level events or the singularity? That seems a tremendously widely divergent set of scenarios to choose from. I sense that — perhaps this is just the historian’s instinct — that each of these scenarios is, in fact, a very low probability indeed, and that we should spend more time thinking about the more likely scenarios that lie between them.
Your essay, which I was prompted to read before our conversation, about the epistemic problem and consequentialism set me thinking about work I’d done on counterfactual history, for which I would have benefited from reading that essay sooner.
I think that if you ask what are the counterfactuals of the future, we spend too much time thinking about the quite unlikely scenarios of the end of the world through climate change or some other calamity of the sort that Bostrom talks about, or some extraordinary leap forward. I can’t help feeling that these are — not that we can attach probabilities; they lie in the realm of uncertainty — but they don’t seem likely scenarios to me.
I think we’ll end up with something that’s rather more mundane, and perhaps a relief if we’re really serious about the end of the world, or perhaps a disappointment if we’re serious about the singularity.”