I seem to remember a comment from Carl Shulman saying the risk of simulation shut-down should not be assumed to be less than 1 in 1 million per year (or maybe it was per century). This suggests shut-down is still a long way off. On the other hand, I would intuitively expect the risk to be higher if the time we are in really is special. I do not remember whether the comment took that into account.
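To make that concrete, here is a rough sketch of what a constant 1-in-a-million annual risk would imply (the figure itself is just my recollection, not a confirmed number):

```python
# Rough survival arithmetic, assuming an (unconfirmed) annual shut-down
# probability of 1 in 1 million that stays constant over time.
p_shutdown_per_year = 1e-6

# Expected time until shut-down under a simple geometric model.
expected_years = 1 / p_shutdown_per_year  # 1,000,000 years

# Chance the simulation is still running after 10,000 years.
p_survive_10k = (1 - p_shutdown_per_year) ** 10_000  # ~0.99

print(expected_years, p_survive_10k)
```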
And even if we are a simulation, to all intents and purposes it is real to us. It doesn't seem like much of a consolation that the simulators might restart the simulation after we go extinct (any more than the Many Worlds interpretation of Quantum Mechanics gives solace that many universes still exist nearby in probability space in the multiverse).
Yes, it is not a consolation. It is an argument for focussing more on interventions which have near-term benefits, like corporate campaigns for chicken welfare, instead of ones whose benefits may not be realised due to simulation shut-down.
I still don't think this goes through either. I'm saying we should care about our world going extinct just as much as if it were the only world (given we can't causally influence the others).
Agreed, but if the lifespan of the only world is much shorter due to risk of simulation shut-down, the loss of value due to extinction is smaller. In any case, this argument should be weighed together with many others. I personally still direct 100% of my donations to the Long-Term Future Fund, which is essentially funding AI safety work. Thanks for your work in this space!
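As a toy illustration of that weighting (numbers purely hypothetical): the value of benefits realised T years from now gets discounted by the probability the simulation survives that long, which barely touches near-term work but can matter over very long horizons.

```python
# Toy discounting of an intervention's benefits by simulation survival,
# assuming a constant (hypothetical) annual shut-down probability.
p_shutdown_per_year = 1e-6

def survival_discount(years: float) -> float:
    """Probability the simulation is still running after `years` years."""
    return (1 - p_shutdown_per_year) ** years

print(survival_discount(10))         # ~1.0: near-term benefits barely discounted
print(survival_discount(1_000_000))  # ~0.37: million-year benefits heavily discounted
```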
Thanks for your donations to the LTFF. I think they need to start funding stuff aimed at slowing AI down (/pushing for a global moratorium on AGI development). There's not enough time for AI Safety work to bear fruit otherwise.
Thanks for engaging, Greg!