Yes, it is not a consolation. It is an argument for focusing more on interventions with near-term benefits, like corporate campaigns for chicken welfare, instead of ones whose benefits may not be realised due to simulation shutdown.
I still don’t think this goes through either. I’m saying we should care about our world going extinct just as much as if it were the only world (given we can’t causally influence the others).
Agreed, but if the lifespan of the only world is much shorter due to the risk of simulation shutdown, the loss of value from extinction is smaller. In any case, this argument should be weighed together with many others. I personally still direct 100% of my donations to the Long-Term Future Fund, which essentially funds AI safety work. Thanks for your work in this space!
Thanks for your donations to the LTFF. I think they need to start funding work aimed at slowing AI down (/pushing for a global moratorium on AGI development). There’s not enough time for AI safety work to bear fruit otherwise.