We can’t say for certain that travel to other universes is impossible, so we can’t rule it out as a theoretical possibility.
How’s this argument different from saying, for example, that we can’t rule out God’s existence so we should take him into consideration? Or that we can’t rule out the possibility of the universe being suddenly magically replaced with a utilitarian-optimal one?
Alexey Turchin created this chart of theoretical ways that the heat death of the universe could be survived by our descendants.
The linked post is basically a definition of what “survival” means, without any argument on how any of it is at all plausible.
If you believe that a superintelligence causing torture is implausible, you also have to accept that a superintelligence creating a utopia is implausible.
How’s this argument different from saying, for example, that we can’t rule out God’s existence so we should take him into consideration? Or that we can’t rule out the possibility of the universe being suddenly magically replaced with a utilitarian-optimal one?
If you want to reduce the risk of going to some form of hell as much as possible, you ought to determine what sorts of “hells” have the highest probability of existing, and to what extent avoiding said hells is tractable. As far as I can tell, the “hells” that seem to be the most realistic are hells resulting from bad AI alignment, and hells resulting from living in a simulation. Hells resulting from bad AI alignment can be plausibly avoided by contributing in some way to solving the AI alignment problem. It’s not clear how hells resulting from living in a simulation could be avoided, but it’s possible that ways to avoid these sorts of hells could be discovered with further analysis of different theoretical types of simulations we may be living in, such as in this map. Robin Hanson explored some of the potential utilitarian implications of the simulation hypothesis in his article How To Live In A Simulation. Furthermore, mind enhancement could potentially reduce S-risks. If you manage to improve your general thinking abilities, you could potentially discover a new way to reduce S-risks.
A Christian or a Muslim could argue that you ought to convert to their religion in order to avoid going to hell. But a problem with Pascal’s Wager-type arguments is the issue of tradeoffs. It’s not clear that practicing a religion is the most effective way to avoid hell/S-risks. The time spent going to church, praying, and otherwise being dedicated to your religion is time not spent thinking about AI safety and strategizing ways to avoid S-risks. Working on AI safety, strategizing ways to avoid S-risks, and trying to improve your thinking abilities would probably be more effective at reducing your risk of going to some sort of hell than, say, converting to Christianity would.
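The tradeoff argument above is, at bottom, an expected-value comparison: which intervention removes more “hell” risk per unit of effort? As a toy sketch with entirely made-up numbers (none of these probabilities or risk reductions come from the discussion; they only illustrate the shape of the comparison):

```python
import math

# Toy expected-disutility comparison for Pascal's Wager-type tradeoffs.
# Every number below is a purely illustrative placeholder, not an estimate.

def risk_removed(p_hell, risk_reduction):
    """Amount of 'hell' probability removed by an intervention that
    multiplies the baseline risk p_hell by (1 - risk_reduction)."""
    return p_hell * risk_reduction

# Hypothetical baseline probabilities of each kind of "hell".
p_alignment_hell = 0.01   # hell from misaligned AI (illustrative)
p_religious_hell = 0.001  # hell as described by a religion (illustrative)

# Hypothetical fractional risk reductions from each strategy.
reduction_from_ai_safety_work = 0.10  # contributing to alignment (illustrative)
reduction_from_conversion = 0.50      # converting to the religion (illustrative)

removed_by_ai_safety = risk_removed(p_alignment_hell, reduction_from_ai_safety_work)
removed_by_conversion = risk_removed(p_religious_hell, reduction_from_conversion)

# Under these made-up numbers, alignment work removes 0.01 * 0.10 = 0.001
# of risk, versus 0.001 * 0.50 = 0.0005 for conversion, so it wins even
# though it reduces its own risk by a smaller fraction.
print(removed_by_ai_safety > removed_by_conversion)  # True under these assumptions
```

The point of the sketch is only that a modest reduction of a larger risk can beat a large reduction of a smaller one, which is why the baseline probabilities matter as much as the interventions.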
The linked post is basically a definition of what “survival” means, without any argument on how any of it is at all plausible.
It mentions finding ways to travel to other universes, sending information to other universes, creating a superintelligence to figure out ways to avoid heat death, convincing the creators of the simulation not to turn it off, etc. While these hypothetical ways to survive heat death do involve a lot of speculative physics, they are more than just “defining survival”.
I believe neither is plausible by mistake, i.e., that a superintelligence would produce torture or a utopia by accident.
Yet we live in a reality where happiness and suffering exist seemingly by mistake. Your nervous system is the result of millions of years of evolution, not the result of an intelligent designer.