I should point out that the natural tendency of civilizations to fall has historically applied to subsets of human civilization rather than to humanity as a whole. While locally catastrophic, these collapses were not existential: humanity survived and recovered.
I’d also argue that the collapse of a civilization requires far more probabilities to go to zero, and has larger and more complex causal consequences, than every time machine simply failing to work when tried.
And the reality is that we do not currently know whether the Non-Cancel Principle is true, or whether the universe will prevent time travel. Given that uncertainty, we face a dilemma: if we precommit to not developing time travel and time travel turns out to be possible, then we have limited ourselves and will probably be outcompeted by a civilization that develops it instead of us.
Of course, I meant not the Bronze Age collapse but the known plethora of existential risks. Your argument that others will outcompete us is valid, though, unless the totality of x-risks constitutes a universal Great Filter.