I obviously disagree that this is the conclusion of the LessWrong comments, many of which I think are just totally wrong! Notably, I haven’t replied to many of them because the LessWrong bot makes it impossible for me to post more than once per hour because I have negative Karma on recent posts.
Putting aside whether or not what you say is correct, do you think it’s possible that you have fallen prey to the overconfidence that you accuse Eliezer of? This post was very strongly written and it seems a fair number of people disagree with your arguments.
I mean, it’s always possible. But the views I defend here are utterly mainstream. Virtually no one in academia thinks that FDT, Eliezer’s anti-zombie argument, or animal nonconsciousness is correct.
You’ve commented 12 times so far on that post, including on all 4 of the top responses. My advice: try engaging from a perspective of inquiry and seeking understanding, rather than agreement / disagreement. This might take longer than making a bunch of rapid-fire responses to every negative comment, but will probably be more effective.
My own experience commenting and getting a response from you is that there’s not much room for disagreement on decision theory—the issue is more that you don’t have a solid grasp of the basics of the thing you’re trying to criticize, and I (and others) are explaining why. I don’t mind elaborating more for others, but I probably won’t engage further with you unless you change your tone and approach, or articulate a more informed objection.
Your response in the decision theory case was that there’s no way that a rational agent could be in that epistemic state. But we can just stipulate it for the purpose of the hypothetical.
In addition, the scenario doesn’t require absurdly low odds. Suppose that a demon has a 70% chance of creating people who will chop their legs off. You’ve been created and your actions will affect no one else. FDT implies that you have strong reason to chop your legs off even though it doesn’t benefit you at all.
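To spell the comparison out, here is one toy formalization; the dependence structure, the 0.7/0.3 split, and the utilities below are numbers I’m assuming for concreteness, since the scenario as stated doesn’t fix them:

```python
# Toy comparison of an FDT-style calculation vs. a causal one for the demon case.
# ASSUMED formalization (not from the scenario as stated): the demon inspects
# your decision procedure before creating you; choppers are created with
# probability 0.7, non-choppers with probability 0.3.
# ASSUMED utilities, consistent with "being alive is very good": alive with
# legs = 2.0, alive without legs = 1.5, never created = 0.0.

P_EXIST = {"chop": 0.7, "don't chop": 0.3}
U_ALIVE = {"chop": 1.5, "don't chop": 2.0}  # utility of the life you end up with
U_NOT_CREATED = 0.0

def fdt_style_ev(policy):
    """Evaluate the policy 'from before creation': the demon's choice is
    treated as depending on what your decision procedure outputs."""
    return P_EXIST[policy] * U_ALIVE[policy] + (1 - P_EXIST[policy]) * U_NOT_CREATED

def causal_ev(policy):
    """Evaluate the act given that you already exist: chopping has no causal
    effect on whether you were created, it just costs you your legs."""
    return U_ALIVE[policy]

for policy in ("chop", "don't chop"):
    print(f"{policy}: FDT-style EV = {fdt_style_ev(policy):.2f}, causal EV = {causal_ev(policy):.2f}")
# With these assumed numbers the FDT-style calculation favors "chop"
# (1.05 vs 0.60), while the causal calculation favors "don't chop" (2.00 vs 1.50).
```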
I did not say this.
OK, in that case, the agent in the hypothetical should probably consider whether they are in a short-lived simulation.
No, it might say that, depending on (among other things) what exactly it means to value your own existence.
It means your preference ordering says that it’s very good for you to be alive.
We can stipulate that you get decisive evidence that you’re not in a simulation.
So then chop your legs off if you care about maximizing your total amount of experience of being alive across the multiverse (though maybe check that your measure of such experience is well-defined before doing so), or don’t chop them off if you care about maximizing the fraction of high-quality subjective experience of being alive that you have.
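Here’s a toy sketch of how those two framings can come apart, under one assumed model of the demon; the existence probabilities and quality weights are made up purely for illustration, and “quality given alive” is only a rough stand-in for the “fraction of high-quality experience” framing:

```python
# Toy sketch: two different things you might be maximizing in the demon case.
# ASSUMED model (not from the scenario): you exist with probability 0.7 if your
# policy is "chop" and 0.3 if it is "don't chop".
# ASSUMED quality weights: a life without legs = 1.5, a life with legs = 2.0.

P_EXIST = {"chop": 0.7, "don't chop": 0.3}
QUALITY = {"chop": 1.5, "don't chop": 2.0}

def total_experience(policy):
    """'Maximize your total experience of being alive': more probability-weighted
    existence is better, regardless of the quality of that existence."""
    return P_EXIST[policy]

def quality_given_alive(policy):
    """'Maximize the quality of the experience you actually have': compare the
    lives themselves, conditional on existing at all."""
    return QUALITY[policy]

for policy in ("chop", "don't chop"):
    print(policy, "| total-experience:", total_experience(policy),
          "| quality-given-alive:", quality_given_alive(policy))
# The first criterion favors "chop" (0.7 > 0.3); the second favors "don't chop"
# (2.0 > 1.5). Which answer you get depends on which of these you actually value.
```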
This seems more like an anthropics issue than a question where you need any kind of fancy decision theory though. It’s probably better to start by understanding decision theory without examples that involve existence or not, since those introduce a bunch of weird complications about the nature of the multiverse and what it even means to exist (or fail to exist) in the first place.
Let’s stipulate you have good evidence that you are the only being in the universe, and no one else will exist in the future. You don’t care about what happens to anyone else.
OK. Simultaneously believing that and believing the truth of the original setup seems dangerously close to believing a contradiction.
But anyway, you don’t really need all those stipulations to decide not to chop your legs off; just don’t do that if you value your legs. (You also don’t need FDT to see that you should defect against CooperateBot in a prisoner’s dilemma, though of course FDT will give the same answer.)
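For the CooperateBot point, here’s a minimal payoff check with standard (assumed) prisoner’s dilemma payoffs; since CooperateBot’s move has no causal or subjunctive dependence on yours, CDT, EDT, and FDT all evaluate defection as better here:

```python
# Minimal check for the CooperateBot point: against an opponent that cooperates
# no matter what, your choice cannot change their action (causally or
# subjunctively), so the comparison is just your own payoff row.
# Standard prisoner's dilemma payoffs (assumed): T=5 (temptation), R=3 (reward),
# P=1 (punishment), S=0 (sucker), with T > R > P > S.

PAYOFF = {  # (your move, their move) -> your payoff
    ("defect", "cooperate"): 5,     # T
    ("cooperate", "cooperate"): 3,  # R
    ("defect", "defect"): 1,        # P
    ("cooperate", "defect"): 0,     # S
}

def cooperate_bot(_your_move=None):
    """CooperateBot ignores everything and always cooperates."""
    return "cooperate"

for my_move in ("cooperate", "defect"):
    their_move = cooperate_bot()
    print(my_move, "->", PAYOFF[(my_move, their_move)])
# defect yields 5, cooperate yields 3: defection is better against CooperateBot,
# and FDT agrees because CooperateBot's move doesn't depend on your decision.
```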
A couple of general points to keep in mind when dealing with thought experiments that involve thorny or exotic questions of (non-)existence:
“Entities that don’t exist don’t care that they don’t exist” is vacuously true, for most ordinary definitions of non-existence. If you fail to exist as a result of your decision process, that’s generally not a problem for you, unless you also have unusual preferences over or beliefs about the precise nature of existence and non-existence.[1]
If you make the universe inconsistent as a result of your decision process, that’s also not a problem for you (or for your decision process). Though it may be a problem for the universe creator, which in the case of a thought experiment could be said to be the author of that thought experiment.
An even simpler view is that logically inconsistent universes don’t actually exist at all—what would it even mean for there to be a universe (or even a thought experiment) in which, say, 1 + 2 = 4? Though if you accepted the simpler view, you’d probably also be a physicalist.
I continue to advise you to avoid confidently pontificating on decision theory thought experiments that directly involve non-existence, until you are more practiced at applying them correctly in ordinary situations.
[1] e.g. unless you’re Carissa Sevar