So then chop your legs off if you care about maximizing your total amount of experience of being alive across the multiverse (though maybe check that your measure of such experience is well-defined before doing so), or don’t chop them off if you care about maximizing the fraction of high-quality subjective experience of being alive that you have.
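The difference between the two aggregation rules can be made concrete with a toy calculation. The numbers and the quality threshold below are entirely made up for illustration; each hypothetical multiverse branch is a (measure, quality) pair:

```python
# Toy illustration with made-up numbers: compare two aggregation rules
# over hypothetical multiverse branches. Each branch is (measure, quality),
# where quality is the subjective value of being alive in that branch.

def total_experience(branches):
    """Sum of measure-weighted experience across branches."""
    return sum(m * q for m, q in branches)

def quality_fraction(branches):
    """Fraction of measure-weighted experience that is high-quality
    (here arbitrarily defined as quality >= 0.9)."""
    total = sum(m * q for m, q in branches)
    high = sum(m * q for m, q in branches if q >= 0.9)
    return high / total if total else 0.0

# "Chop": you survive in more branches, but at lower quality.
chop = [(0.8, 0.5), (0.2, 0.5)]
# "Keep": you survive in fewer branches, all at high quality.
keep = [(0.3, 1.0)]

assert total_experience(chop) > total_experience(keep)  # total favors chopping
assert quality_fraction(keep) > quality_fraction(chop)  # fraction favors keeping
```

With these (arbitrary) numbers, the two rules recommend opposite actions, which is the whole point: the disagreement is in the preferences, not the decision theory.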
This seems more like an anthropics issue than a question that requires any fancy decision theory, though. It’s probably better to start by understanding decision theory without examples that involve existence or non-existence, since those introduce a bunch of weird complications about the nature of the multiverse and what it even means to exist (or fail to exist) in the first place.
Let’s stipulate you have good evidence that you are the only being in the universe, and no one else will exist in the future. You don’t care about what happens to anyone else.
OK. Simultaneously believing that and believing the truth of the original setup seems dangerously close to believing a contradiction.
But anyway, you don’t really need all those stipulations to decide not to chop your legs off; just don’t do that if you value your legs. (You also don’t need FDT to see that you should defect against CooperateBot in a prisoner’s dilemma, though of course FDT will give the same answer.)
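The CooperateBot point is simple enough to check mechanically. The payoff values below are the standard illustrative ones for a prisoner’s dilemma (temptation > reward > punishment > sucker), not from any particular source:

```python
# Row player's payoffs in a standard prisoner's dilemma.
# Values are illustrative; only the ordering T > R > P > S matters.
PAYOFF = {
    ("D", "C"): 5,  # temptation: I defect, they cooperate
    ("C", "C"): 3,  # reward: mutual cooperation
    ("D", "D"): 1,  # punishment: mutual defection
    ("C", "D"): 0,  # sucker: I cooperate, they defect
}

def cooperate_bot(_opponent_move):
    """CooperateBot ignores its opponent entirely and always cooperates."""
    return "C"

# Because CooperateBot's move is insensitive to mine, there is no
# logical correlation for FDT to exploit: defecting strictly dominates.
for my_move in ("C", "D"):
    assert cooperate_bot(my_move) == "C"

assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]  # defect beats cooperate here
```

No decision-theoretic machinery is needed: since nothing you do changes CooperateBot’s output, every theory (CDT, EDT, FDT) agrees on defecting.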
A couple of general points to keep in mind when dealing with thought experiments that involve thorny or exotic questions of (non-)existence:
“Entities that don’t exist don’t care that they don’t exist” is vacuously true, for most ordinary definitions of non-existence. If you fail to exist as a result of your decision process, that’s generally not a problem for you, unless you also have unusual preferences over or beliefs about the precise nature of existence and non-existence.[1]
If you make the universe inconsistent as a result of your decision process, that’s also not a problem for you (or for your decision process). Though it may be a problem for the universe creator, which in the case of a thought experiment could be said to be the author of that thought experiment.
An even simpler view is that logically inconsistent universes don’t actually exist at all—what would it even mean for there to be a universe (or even a thought experiment) in which, say, 1 + 2 = 4? Though if you accepted the simpler view, you’d probably also be a physicalist.
I continue to advise you to avoid confidently pontificating on decision theory thought experiments that directly involve non-existence, until you are more practiced at applying them correctly in ordinary situations.
It means your preference ordering says that it’s very good for you to be alive.
We can stipulate that you get decisive evidence that you’re not in a simulation.
[1] e.g. unless you’re Carissa Sevar