Note for readers: this was also posted on LessWrong, where it received a very different reception and a bunch of good responses. Summary: the author is confidently, egregiously wrong (or at least very confused) about most of the object-level points he accuses Eliezer and others of being mistaken or overconfident about.
Also, the writing here seems much more like it is deliberately engineered to get you to believe something (that Eliezer is bad) than anything Eliezer has ever actually written. If you initially found such arguments convincing, consider examining whether you have been “duped” by the author.
I don’t think you’ve summarised the LessWrong comments well. Currently, they don’t really engage with the substantive content of the post and/or aren’t convincing to me. They spend a lot of time criticising the tone of the post. The comments here by Dr. David Mathers are a far better critique than anything on LessWrong.
I do agree that the post title goes too far compared to what is actually argued.
Also, the writing here seems much more like it is deliberately engineered to get you to believe something (that Eliezer is bad) than anything Eliezer has ever actually written. If you initially found such arguments convincing, consider examining whether you have been “duped” by the author.
This paragraph seems in bad faith without substantiation; currently it’s just vague rhetoric. What do you mean by “deliberately engineered to get you to believe something”? That sounds to me like a way of framing “making an argument” to sound malicious.
I personally commented with an object-level objection; plenty of others have done the same.
I mostly take issue with the factual claims in the post, which I think is riddled with errors and misunderstandings (many of which have been pointed out), but the language is also unnecessarily emotionally charged and inflammatory in many places. A quick sampling:
But as I grew older and learned more, I realized it was all bullshit.
it becomes clear that his view is a house of cards, built entirely on falsehoods and misrepresentations.
And I spend much more time listening to Yukowsky’s followers spout nonsense than most other people.
(phrased in a maximally Eliezer like way): … (condescending chuckle)
I am frankly pretty surprised to see this so highly-upvoted on the EAF; the tone is rude and condescending, more so than anything I can recall Eliezer writing, and much more so than the usual highly-upvoted posts here.
The OP seems more interested in arguing about whatever “mainstream academics” believe than responding to (or even understanding) object-level objections. But even on that topic, they make a bunch of misstatements and overclaims. From a comment:
But the views I defend here are utterly mainstream. Virtually no people in academia think either FDT, Eliezer’s anti-zombie argument, or animal nonconsciousness are correct.
(Plenty of people who disagree with the author and agree or partially agree with Eliezer about the object-level topics are in academia. Some of them even post on LessWrong and the EAF!)
I obviously disagree that this is the conclusion of the LessWrong comments, many of which I think are just totally wrong! Notably, I haven’t replied to many of them because the LessWrong rate limit makes it impossible for me to post more than once per hour, since I have negative Karma on recent posts.
Putting aside whether or not what you say is correct, do you think it’s possible that you have fallen prey to the overconfidence that you accuse Eliezer of? This post was very strongly written and it seems a fair number of people disagree with your arguments.
I mean, it’s always possible. But the views I defend here are utterly mainstream. Virtually no people in academia think either FDT, Eliezer’s anti-zombie argument, or animal nonconsciousness are correct.
You’ve commented 12 times so far on that post, including on all 4 of the top responses. My advice: try engaging from a perspective of inquiry and seeking understanding, rather than agreement / disagreement. This might take longer than making a bunch of rapid-fire responses to every negative comment, but will probably be more effective.
My own experience commenting and getting a response from you is that there’s not much room for disagreement on decision theory—the issue is more that you don’t have a solid grasp of the basics of the thing you’re trying to criticize, and I (and others) are explaining why. I don’t mind elaborating more for others, but I probably won’t engage further with you unless you change your tone and approach, or articulate a more informed objection.
Your response in the decision theory case was that there’s no way that a rational agent could be in that epistemic state. But we can just stipulate it for the purpose of the hypothetical.
In addition, the scenario doesn’t require absurdly low odds. Suppose that a demon has a 70% chance of creating people who will chop their legs off. You’ve been created and your actions will affect no one else. FDT implies that you have strong reason to chop your legs off even though it doesn’t benefit you at all.
I did not say this.
OK, in that case, the agent in the hypothetical should probably consider whether they are in a short-lived simulation.
No, it might say that, depending on (among other things) what exactly it means to value your own existence.
It means your preference ordering says that it’s very good for you to be alive.
We can stipulate that you get decisive evidence that you’re not in a simulation.
So then chop your legs off if you care about maximizing your total amount of experience of being alive across the multiverse (though maybe check that your measure of such experience is well-defined before doing so), or don’t chop them off if you care about maximizing the fraction of high-quality subjective experience of being alive that you have.
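To make that fork concrete, here’s a toy expected-value sketch. Everything in it except the 70% figure is an assumption chosen for illustration (the demon’s chance of creating a non-chopper, and the utilities), so treat it as a sketch of the two value functions being contrasted, not a claim about what FDT “really” outputs:

```python
# Toy expected-value sketch of the demon scenario under two different
# things you might care about. Only the 70% figure comes from the
# scenario above; every other number is an illustrative assumption.

# Assumed setup: the demon is more likely to create agents whose decision
# procedure outputs "chop" (70%) than agents whose procedure outputs
# "keep" (assumed 30% -- not specified in the scenario).
P_CREATED = {"chop": 0.70, "keep": 0.30}

# Assumed per-lifetime utilities (arbitrary units).
U_ALIVE_LEGLESS = 5.0     # created, but you chop your legs off
U_ALIVE_WITH_LEGS = 10.0  # created, legs intact
U_NOT_CREATED = 0.0

def total_experience(policy):
    """Expected 'amount of being alive' across the multiverse."""
    p = P_CREATED[policy]
    u = U_ALIVE_LEGLESS if policy == "chop" else U_ALIVE_WITH_LEGS
    return p * u + (1 - p) * U_NOT_CREATED

def quality_given_existence(policy):
    """Quality of experience conditional on having been created at all."""
    return U_ALIVE_LEGLESS if policy == "chop" else U_ALIVE_WITH_LEGS

for policy in ("chop", "keep"):
    print(policy,
          "| total:", total_experience(policy),
          "| conditional quality:", quality_given_existence(policy))

# With these assumed numbers, "chop" wins on unconditional total
# experience (3.5 vs 3.0), while "keep" wins conditional on existing
# (10 vs 5) -- which is exactly the fork described above.
```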
This seems more like an anthropics issue than a question where you need any kind of fancy decision theory though. It’s probably better to start by understanding decision theory without examples that involve existence or not, since those introduce a bunch of weird complications about the nature of the multiverse and what it even means to exist (or fail to exist) in the first place.
Let’s stipulate you have good evidence that you are the only being in the universe, and no one else will exist in the future. You don’t care about what happens to anyone else.
OK. Simultaneously believing that and believing the truth of the original setup seems dangerously close to believing a contradiction.
But anyway, you don’t really need all those stipulations to decide not to chop your legs off; just don’t do that if you value your legs. (You also don’t need FDT to see that you should defect against CooperateBot in a prisoner’s dilemma, though of course FDT will give the same answer.)
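To spell out the CooperateBot aside, here’s the minimal payoff check, with the usual textbook prisoner’s dilemma numbers assumed (none were given here):

```python
# Defection dominates against CooperateBot under standard prisoner's
# dilemma payoffs. CooperateBot's move is fixed regardless of yours, so
# there is no correlation between your decision and its decision for any
# decision theory to exploit -- CDT, EDT, and FDT all say the same thing.

# (my payoff, opponent payoff) indexed by (my move, opponent move);
# textbook values with T=5 > R=3 > P=1 > S=0.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def cooperate_bot():
    """CooperateBot cooperates unconditionally, whatever you do."""
    return "C"

for my_move in ("C", "D"):
    my_payoff, _ = PAYOFFS[(my_move, cooperate_bot())]
    print(f"{my_move} vs CooperateBot -> {my_payoff}")

# C -> 3, D -> 5: defecting is strictly better against an opponent whose
# action cannot vary with yours.
```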
A couple of general points to keep in mind when dealing with thought experiments that involve thorny or exotic questions of (non-)existence:
“Entities that don’t exist don’t care that they don’t exist” is vacuously true, for most ordinary definitions of non-existence. If you fail to exist as a result of your decision process, that’s generally not a problem for you, unless you also have unusual preferences over or beliefs about the precise nature of existence and non-existence.[1]
If you make the universe inconsistent as a result of your decision process, that’s also not a problem for you (or for your decision process). Though it may be a problem for the universe creator, which in the case of a thought experiment could be said to be the author of that thought experiment.
An even simpler view is that logically inconsistent universes don’t actually exist at all—what would it even mean for there to be a universe (or even a thought experiment) in which, say, 1 + 2 = 4? Though if you accepted the simpler view, you’d probably also be a physicalist.
I continue to advise you to avoid confidently pontificating on decision theory thought experiments that directly involve non-existence, until you are more practiced at applying them correctly in ordinary situations.
[1] e.g. unless you’re Carissa Sevar