All good points, but Tarsney’s argument doesn’t depend on the assumption that longtermist interventions cannot accidentally increase x-risk. It just depends on the assumption that there’s some way that we could spend $1 million that would increase the epistemic probability that humanity survives the next thousand years by at least 2x10^-14.
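To see why a figure as tiny as 2x10^-14 can still do work in the argument, here is a minimal break-even sketch. The benchmark value (what $1 million buys in near-term good) is a made-up placeholder, not a number from Tarsney or from this comment; only the 2x10^-14 threshold comes from the text above.

```python
# Break-even sketch for the threshold in the comment above.
# delta_p is the minimum probability increase assumed in the argument;
# benchmark_value is a HYPOTHETICAL figure for the expected value (in
# arbitrary "value units") of spending the same $1 million on a
# near-term intervention instead.
delta_p = 2e-14
benchmark_value = 20_000  # hypothetical placeholder

# The longtermist spend matches the benchmark iff
#   delta_p * V >= benchmark_value,
# where V is the value of humanity surviving the next thousand years.
# So the break-even V is:
v_breakeven = benchmark_value / delta_p
print(f"break-even value of survival: {v_breakeven:.2e} value units")
# → break-even value of survival: 1.00e+18 value units
```

The point of the sketch: the smaller the probability increase you can credibly buy, the larger the value of survival has to be for the longtermist spend to win, and Tarsney's argument is that plausible estimates of that value are large enough to clear even a 2x10^-14 bar.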