Side note: Bostrom does not hold, or argue for, 100% weight on total utilitarianism such that he would accept overwhelming losses on other moral views for tiny gains on total utilitarian grounds. In Superintelligence he specifically rejects an example of an extreme tradeoff of that magnitude: refusing to reserve even one galaxy's worth of resources (out of millions) for humanity/existing beings, even if posthumans would derive more wellbeing from a given unit of resources.
I also wouldn't actually accept a 10 million year delay in technological progress (and the death of all existing beings who would otherwise have enjoyed extended lives from advanced technology, etc.) in exchange for a 0.001% reduction in existential risk.