Interesting. But how soon is “soon”? And even if we are a simulation, to all intents and purposes it is real to us. It doesn’t seem like much of a consolation that the simulators might restart the simulation after we go extinct (any more than the Many Worlds interpretation of Quantum Mechanics gives solace over many universes still existing nearby in probability space in the multiverse).
Maybe the simulators will stage an intervention before we reach the Singularity. I don't think we can rely on this though (indeed, this is one of the exotic scenarios that make up the ~10% chance that I think we aren't doomed from AGI by default).
Thanks for engaging, Greg!
I seem to remember a comment from Carl Shulman saying the risk of simulation shut-down should not be assumed to be less than 1 in 1 million per year (or maybe it was per century). This suggests there is still a long way to go before it happens. On the other hand, I would intuitively expect the risk to be higher if the time we are in really is special. I do not remember whether the comment took that into account.
Yes, it is not a consolation. It is an argument for focussing more on interventions which have near-term benefits, like corporate campaigns for chicken welfare, instead of ones whose benefits may not be realised due to simulation shut-down.
I still don’t think this goes through either. I’m saying we should care about our world going extinct just as much as if it were the only world (given we can’t causally influence the others).
Agreed, but if the lifespan of the only world is much shorter due to the risk of simulation shut-down, the loss of value from extinction is smaller. In any case, this argument should be weighed together with many others. I personally still direct 100% of my donations to the Long-Term Future Fund, which essentially funds AI safety work. Thanks for your work in this space!
Thanks for your donations to the LTFF. I think they need to start funding stuff aimed at slowing AI down (/pushing for a global moratorium on AGI development). There’s not enough time for AI Safety work to bear fruit otherwise.