But I think R_UDT also has an important point in its disfavor. It fails to satisfy what might be called the “Don’t Make Things Worse Principle,” which says that it’s not rational to take decisions that will definitely make things worse. Will’s Bomb case is an example where R_UDT violates this principle, which is very similar to his “Guaranteed Payoffs Principle.”
I think “Don’t Make Things Worse” is a plausible principle at first glance.
One argument against this principle is that CDT endorses following it if you must, but would prefer to self-modify to stop following it (since doing so has higher expected causal utility). The general policy of following the “Don’t Make Things Worse Principle” makes things worse.
Once you’ve already adopted son-of-CDT, which says something like “act like UDT in future dilemmas insofar as the correlations were produced after I adopted this rule, but act like CDT in those dilemmas insofar as the correlations were produced before I adopted this rule”, it’s not clear to me why you wouldn’t just go: “Oh. CDT has lost the thing I thought made it appealing in the first place, this ‘Don’t Make Things Worse’ feature. If we’re going to end up stuck with UDT plus extra theoretical ugliness and loss-of-utility tacked on top, then why not just switch to UDT full stop?”
A more general argument against the Bomb intuition pump is that it involves trading away larger amounts of utility in most possible world-states, in order to get a smaller amount of utility in the Bomb world-state. From Abram Demski’s comments:
[...] In Bomb, the problem clearly stipulates that an agent who follows the FDT recommendation has a trillion trillion to one odds of doing better than an agent who follows the CDT/EDT recommendation. Complaining about the one-in-a-trillion-trillion chance that you get the bomb while being the sort of agent who takes the bomb is, to an FDT-theorist, like a gambler who has just lost a trillion-trillion-to-one bet complaining that the bet doesn’t look so rational now that the outcome is known with certainty to be the one-in-a-trillion-trillion case where the bet didn’t pay well.
[...] One way of thinking about this is to say that the FDT notion of “decision problem” is different from the CDT or EDT notion, in that FDT considers the prior to be of primary importance, whereas CDT and EDT consider it to be of no importance. If you had instead specified ‘bomb’ with just the certain information that ‘left’ is (causally and evidentially) very bad and ‘right’ is much less bad, then CDT and EDT would regard it as precisely the same decision problem, whereas FDT would consider it to be a radically different decision problem.
Another way to think about this is to say that FDT “rejects” decision problems which are improbable according to their own specification. In cases like Bomb where the situation as described is by its own description a one in a trillion trillion chance of occurring, FDT gives the outcome only one-trillion-trillion-th consideration in the expected utility calculation, when deciding on a strategy.
[...] This also hopefully clarifies the sense in which I don’t think the decisions pointed out in (III) are bizarre. The decisions are optimal according to the very probability distribution used to define the decision problem.
There’s a subtle point here, though, since Will describes the decision problem from an updated perspective—you already know the bomb is in front of you. So UDT “changes the problem” by evaluating “according to the prior”. From my perspective, because the very statement of the Bomb problem suggests that there were also other possible outcomes, we can rightly insist on evaluating expected utility in terms of those chances.
Perhaps this sounds like an unprincipled rejection of the Bomb problem as you state it. My principle is as follows: you should not state a decision problem without having in mind a well-specified way to predictably put agents into that scenario. Let’s call the way-you-put-agents-into-the-scenario the “construction”. We then evaluate agents on how well they deal with the construction.
For examples like Bomb, the construction gives us the overall probability distribution—this is then used for the expected value which UDT’s optimality notion is stated in terms of.
For other examples, as discussed in Decisions are for making bad outcomes inconsistent, the construction simply breaks when you try to put certain decision theories into it. This can also be a good thing; it means the decision theory makes certain scenarios altogether impossible.
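To make the arithmetic behind “evaluating according to the prior” concrete, here is a minimal sketch in Python (my own illustration, not part of the quoted comments). It assumes the usual presentation of Bomb, where the “safe” option costs a small amount (say $100) and the other option is free unless the one-in-a-trillion-trillion prediction failure puts a bomb in it; the dollar-equivalent disutility assigned to the bomb is an arbitrary assumption:

```python
# A minimal sketch (illustrative only) of the expected-utility comparison in
# Bomb, evaluated from the prior rather than after seeing the bomb.
# SMALL_COST and BOMB_DISUTILITY are assumed, hypothetical numbers.

EPSILON = 1e-24           # predictor failure rate: one in a trillion trillion
SMALL_COST = 100          # cost of the "safe" option
BOMB_DISUTILITY = -1e12   # assumed dollar-equivalent disutility of the bomb

# Policy "take the possibly-bomb-containing option": the predictor almost
# always foresees this and leaves it bomb-free, so you pay nothing; with
# probability EPSILON it errs and you get the bomb.
eu_fdt_style_policy = (1 - EPSILON) * 0 + EPSILON * BOMB_DISUTILITY

# Policy "take the safe option": you pay the small cost in every world-state.
eu_cdt_style_policy = -SMALL_COST

print(eu_fdt_style_policy)    # -1e-12
print(eu_cdt_style_policy)    # -100
print(-SMALL_COST / EPSILON)  # -1e+26: the bomb disutility at which the two
                              # policies would break even under the prior
```

On these (assumed) numbers, the bomb-risking policy only looks worse under the prior if the bomb is valued at worse than about -1e26 dollars, which is the “trillion trillion to one” point in the quoted passage.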
One argument against this principle is that CDT endorses following it if you must, but would prefer to self-modify to stop following it (since doing so has higher expected causal utility).
A more general argument against the Bomb intuition pump is that it involves trading away larger amounts of utility in most possible world-states, in order to get a smaller amount of utility in the Bomb world-state.
This just seems to be the point that R_CDT is self-effacing: It says that people should not follow P_CDT, because following other decision procedures will produce better outcomes in expectation.
I definitely agree that R_CDT is self-effacing in this way (at least in certain scenarios). The question is just whether self-effacingness or failure to satisfy “Don’t Make Things Worse” is more relevant when trying to judge the likelihood of a criterion of rightness being correct. I’m not sure whether it’s possible to do much here other than present personal intuitions.
The point that R_UDT violates the “Don’t Make Things Worse” principle only infrequently seems relevant, but I’m still not sure this changes my intuitions very much.
If we’re going to end up stuck with UDT plus extra theoretical ugliness and loss-of-utility tacked on top, then why not just switch to UDT full stop?
I may just be missing something, but I don’t see what this theoretical ugliness is. And I don’t intuitively find the ugliness/elegance of the decision procedure recommended by a criterion of rightness to be very relevant when trying to judge whether the criterion is correct.
[[EDIT: Just an extra thought on the fact that R_CDT is self-effacing. My impression is that self-effacingness is typically regarded as a relatively weak reason to reject a moral theory. For example, a lot of people regard utilitarianism as self-effacing both because it’s costly to directly evaluate the utility produced by actions and because others often react poorly to people who engage in utilitarian-style reasoning—but this typically isn’t regarded as a slam-dunk reason to believe that utilitarianism is false. I think the SEP article on consequentialism is expressing a pretty mainstream position when it says: “[T]here is nothing incoherent about proposing a decision procedure that is separate from one’s criterion of the right.… Criteria can, thus, be self-effacing without being self-refuting.” Insofar as people don’t tend to buy self-effacingness as a slam-dunk argument against the truth of moral theories, it’s not clear why they should buy it as a slam-dunk argument against the truth of normative decision theories.]]
is more relevant when trying to judge the likelihood of a criterion of rightness being correct
Sorry to drop in in the middle of this back and forth, but I am curious—do you think it’s quite likely that there is a single criterion of rightness that is objectively “correct”?
It seems to me that we have a number of intuitive properties (meta criteria of rightness?) that we would like a criterion of rightness to satisfy (e.g. “don’t make things worse”, or “don’t be self-effacing”). And so far there doesn’t seem to be any single criterion that satisfies all of them.
So why not just conclude that, similar to the case with voting and Arrow’s theorem, perhaps there’s just no single perfect criterion of rightness?
In other words, once we agree that CDT doesn’t make things worse, but that UDT is better as a general policy, is there anything left to argue about regarding which is “correct”?
EDIT: Decided I had better go and read your Realism and Rationality post, and ended up leaving a lengthy comment there.
Sorry to drop in in the middle of this back and forth, but I am curious—do you think it’s quite likely that there is a single criterion of rightness that is objectively “correct”?
It seems to me that we have a number of intuitive properties (meta criteria of rightness?) that we would like a criterion of rightness to satisfy (e.g. “don’t make things worse”, or “don’t be self-effacing”). And so far there doesn’t seem to be any single criterion that satisfies all of them.
So why not just conclude that, similar to the case with voting and Arrow’s theorem, perhaps there’s just no single perfect criterion of rightness?
Happy to be dropped in on :)
I think it’s totally conceivable that no criterion of rightness is correct (e.g. because the concept of a “criterion of rightness” turns out to be some spooky bit of nonsense that doesn’t really map onto anything in the real world.)
I suppose the main things I’m arguing are just that:
When a philosopher expresses support for a “decision theory,” they are typically saying that they believe some claim about what the correct criterion of rightness is.
Claims about the correct criterion of rightness are distinct from decision procedures.
Therefore, when a member of the rationalist community uses the word “decision theory” to refer to a decision procedure, they are talking about something that’s pretty conceptually distinct from what philosophers typically have in mind. Discussions about what decision procedure performs best or about what decision procedure we should build into future AI systems [[EDIT: or what decision procedure most closely matches our preferences about decision procedures]] don’t directly speak to the questions that most academic “decision theorists” are actually debating with one another.
I also think that, conditional on there being a correct criterion of rightness, R_CDT is more plausible than R_UDT. But this is a relatively tentative view. I’m definitely not a super hardcore R_CDT believer.
It seems to me that we have a number of intuitive properties (meta criteria of rightness?) that we would like a criterion of rightness to satisfy (e.g. “don’t make things worse”, or “don’t be self-effacing”). And so far there doesn’t seem to be any single criterion that satisfies all of them.
So why not just conclude that, similar to the case with voting and Arrow’s theorem, perhaps there’s just no single perfect criterion of rightness?
I guess here—in almost definitely too many words—is how I think about the issue. (Hopefully these comments are at least somewhat responsive to your question.)
It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn’t even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reaction, though, is to continue to believe that some things have property P—but just drop the assumption that these things will also have both property Q1 and property Q2.
This is obviously a pretty abstract description, so I’ll give a few examples. (No need to read the examples if the point seems obvious.)
Ethics: I might initially be inclined to think that it’s always ethical (property P) to maximize happiness and that it’s always unethical to torture people. But then I may realize that there’s an inconsistency here: in at least rare circumstances, such as ticking time-bomb scenarios where torture can extract crucial information, there may be no decision that is both happiness maximizing (Q1) and torture-avoiding (Q2). It seems like a natural reaction here is just to drop either the belief that maximizing happiness is always ethical or that torture is always unethical. It doesn’t seem like I need to abandon my belief that some actions have the property of being ethical.
Theology: I might initially be inclined to think that God is all-knowing, all-powerful, and all-good. But then I might come to believe (whether rightly or not) that, given the existence of evil, these three properties are inconsistent. I might then continue to believe that God exists, but just drop my belief that God is all-good. (To very awkwardly re-express this in the language of properties: This would mean dropping my belief that any entity that has the property of being God also has the property of being all-good).
Politician-bashing: I might initially be inclined to characterize some politician both as an incompetent leader and as someone who’s successfully carrying out an evil long-term plan to transform the country. Then I might realize that these two characterizations are in tension with one another. A pretty natural reaction, then, might be to continue to believe the politician exists—but just drop my belief that they’re incompetent.
To turn to the case of the decision-theoretic criterion of rightness, I might initially be inclined to think that the correct criterion of rightness will satisfy both “Don’t Make Things Worse” and “No Self-Effacement.” It’s now become clear, though, that no criterion of rightness can satisfy both of these principles. I think it’s pretty reasonable, then, to continue to believe that there’s a correct criterion of rightness—but just drop the belief that the correct criterion of rightness will also satisfy “No Self-Effacement.”
It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn’t even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reaction, though, is to continue to believe that some things have property P—but just drop the assumption that these things will also have both property Q1 and property Q2.
I think I disagree with the claim (or implication) that keeping P is more often more natural. Well, you’re just saying it’s “often” natural, and I suppose it’s natural in some cases and not others. But I think we may disagree on how often it’s natural, though hard to say at this very abstract level. (Did you see my comment in response to your Realism and Rationality post?)
In particular, I’m curious what makes you optimistic about finding a “correct” criterion of rightness. In the case of the politician, it seems clear that learning they don’t have some of the properties you thought they had shouldn’t call into question whether they exist at all.
But for the case of a criterion of rightness, my intuition (informed by the style of thinking in my comment), is that there’s no particular reason to think there should be one criterion that obviously fits the bill. Your intuition seems to be the opposite, and I’m not sure I understand why.
My best guess, particularly informed by reading through footnote 15 on your Realism and Rationality post, is that when faced with ethical dilemmas (like your torture vs lollipop examples), it seems like there is a correct answer. Does that seem right?
(I realize at this point we’re talking about intuitions and priors on a pretty abstract level, so it may be hard to give a good answer.)
I think I disagree with the claim (or implication) that keeping P is more often more natural. Well, you’re just saying it’s “often” natural, and I suppose it’s natural in some cases and not others. But I think we may disagree on how often it’s natural, though hard to say at this very abstract level. (Did you see my comment in response to your Realism and Rationality post?)
In particular, I’m curious what makes you optimistic about finding a “correct” criterion of rightness. In the case of the politician, it seems clear that learning they don’t have some of the properties you thought they had shouldn’t call into question whether they exist at all.
But for the case of a criterion of rightness, my intuition (informed by the style of thinking in my comment), is that there’s no particular reason to think there should be one criterion that obviously fits the bill. Your intuition seems to be the opposite, and I’m not sure I understand why.
Hey again!
I appreciated your comment on the LW post. I started writing up a response to this comment and your LW one, back when the thread was still active, and then stopped because it had become obscenely long. Then I ended up badly needing to procrastinate doing something else today. So here’s an over-long document I probably shouldn’t have written, which you are under no social obligation to read.
I think there’s a key piece of your thinking that I don’t quite understand / disagree with, and it’s the idea that normativity is irreducible.
I think I follow you that if normativity were irreducible, then it wouldn’t be a good candidate for abandonment or revision. But that seems almost like begging the question. I don’t understand why it’s irreducible.
Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn’t seem like it poses any challenge to your intuitions from footnote 6 in the document (starting with “I personally care a lot about the question: ‘Is there anything I should do, and, if so, what?’”). And at the same time it explains why there are weird edge cases where the concept seems to break down.
So few things in life seem to be irreducible. (E.g. neither Eric nor Ben is irreducible!) So why would normativity be?
[You also should feel under no social obligation to respond, though it would be fun to discuss this the next time we find ourselves at the same party, should such a situation arise.]
This is a good discussion! Ben, thank you for inspiring so many of these different paths we’ve been going down. :) At some point the hydra will have to stop growing, but I do think the intuitions you’ve been sharing are widespread enough that it’s very worthwhile to have public discussion on these points.
Therefore, when a member of the rationalist community uses the word “decision theory” to refer to a decision procedure, they are talking about something that’s pretty conceptually distinct from what philosophers typically have in mind. Discussions about what decision procedure performs best or about what decision procedure we should build into future AI systems don’t directly speak to the questions that most academic “decision theorists” are actually debating with one another.
On the contrary:
MIRI is more interested in identifying generalizations about good reasoning (“criteria of rightness”) than in fully specifying a particular algorithm.
MIRI does discuss decision algorithms in order to better understand decision-making, but this isn’t different in kind from the ordinary way decision theorists hash things out. E.g., the traditional formulation of CDT is underspecified in dilemmas like Death in Damascus. Joyce and Arntzenius’ response to this wasn’t to go “algorithms are uncouth in our field”; it was to propose step-by-step procedures that they think capture the intuitions behind CDT and give satisfying recommendations for how to act.
MIRI does discuss “what decision procedure performs best”, but this isn’t any different from traditional arguments in the field like “naive EDT is wrong because it performs poorly in the smoking lesion problem”. Compared to the average decision theorist, the average rationalist puts somewhat more weight on some considerations and less weight on others, but this isn’t different in kind from the ordinary disagreements that motivate different views within academic decision theory, and these disagreements about what weight to give categories of consideration are themselves amenable to argument.
As I noted above, MIRI is primarily interested in decision theory for the sake of better understanding the nature of intelligence, optimization, embedded agency, etc., not for the sake of picking a “decision theory we should build into future AI systems”. Again, this doesn’t seem unlike the case of philosophers who think that decision theory arguments will help them reach conclusions about the nature of rationality.
I think it’s totally conceivable that no criterion of rightness is correct (e.g. because the concept of a “criterion of rightness” turns out to be some spooky bit of nonsense that doesn’t really map onto anything in the real world.)
Could you give an example of what the correctness of a meta-criterion like “Don’t Make Things Worse” could in principle consist in?
I’m not looking here for a “reduction” in the sense of a full translation into other, simpler terms. I just want a way of making sense of how human brains can tell what’s “decision-theoretically normative” in cases like this.
Human brains didn’t evolve to have a primitive “normativity detector” that beeps every time a certain thing is Platonically Normative. Rather, different kinds of normativity can be understood by appeal to unmysterious matters like “things brains value as ends”, “things that are useful for various ends”, “things that accurately map states of affairs”...
When I think of other examples of normativity, my sense is that in every case there’s at least one good account of why a human might be able to distinguish “truly” normative things from non-normative ones. E.g. (considering both epistemic and non-epistemic norms):
1. If I discover two alien species who disagree about the truth-value of “carbon atoms have six protons”, I can evaluate their correctness by looking at the world and seeing whether their statement matches the world.
2. If I discover two alien species who disagree about the truth value of “pawns cannot move backwards in chess” or “there are statements in the language of Peano arithmetic that can neither be proved nor disproved in Peano arithmetic”, then I can explain the rules of ‘proving things about chess’ or ‘proving things about PA’ as a symbol game, and write down strings of symbols that collectively constitute a ‘proof’ of the statement in question.
I can then assert that if any member of any species plays the relevant ‘proof’ game using the same rules, from now until the end of time, they will never prove the negation of my result, and (paper, pen, time, and ingenuity allowing) they will always be able to re-prove my result.
(I could further argue that these symbol games are useful ones to play, because various practical tasks are easier once we’ve accumulated enough knowledge about legal proofs in certain games. This usefulness itself provides a criterion for choosing between “follow through on the proof process” and “just start doodling things or writing random letters down”.)
The above doesn’t answer questions like “do the relevant symbols have Platonic objects as truthmakers or referents?”, or “why do we live in a consistent universe?”, or the like. But the above answer seems sufficient for rejecting any claim that there’s something pointless, epistemically suspect, or unacceptably human-centric about affirming Gödel’s first incompleteness theorem. The above is minimally sufficient grounds for going ahead and continuing to treat math as something more significant than theology, regardless of whether we then go on to articulate a more satisfying explanation of why these symbol games work the way they do.
3. If I discover two alien species who disagree about the truth-value of “suffering is terminally valuable”, then I can think of at least two concrete ways to evaluate which party is correct. First, I can look at the brains of a particular individual or group, see what that individual or group terminally values, and see whether the statement matches what’s encoded in those brains. Commonly the group I use for this purpose is human beings, such that if an alien (or a housecat, etc.) terminally values suffering, I say that this is “wrong”.
Alternatively, I can make different “wrong” predicates for each species: wrong_human, wrong_alien1, wrong_alien2, wrong_housecat, etc.
This has the disadvantage of maybe making it sound like all these values are on “equal footing” in an internally inconsistent way (“it’s wrong to put undue weight on what’s wrong_human!”, where the first “wrong” is secretly standing in for “wrong_human”), but has the advantage of making it easy to see why the aliens’ disagreement might be important and substantive, while still allowing that aliens’ normative claims can be wrong (because they can be mistaken about their own core values).
The details of how to go from a brain to an encoding of “what’s right” seem incredibly complex and open to debate, but it seems beyond reasonable dispute that if the information content of a set of terminal values is encoded anywhere in the universe, it’s going to be in brains (or constructs from brains) rather than in patterns of interstellar dust, digits of pi, physical laws, etc.
If a criterion like “Don’t Make Things Worse” deserves a lot of weight, I want to know what that weight is coming from.
If the answer is “I know it has to come from something, but I don’t know what yet”, then that seems like a perfectly fine placeholder answer to me.
If the answer is “This is like the ‘terminal values’ case, in that (I hypothesize) it’s just an ineradicable component of what humans care about”, then that also seems structurally fine, though I’m extremely skeptical of the claim that the “warm glow of feeling causally efficacious” is important enough to outweigh other things of great value in the real world.
If the answer is “I think ‘Don’t Make Things Worse’ is instrumentally useful, i.e., more useful than UDT for achieving the other things humans want in life”, then I claim this is just false. But, again, this seems like the right kind of argument to be making; if CDT is better than UDT, then that betterness ought to consist in something.
I mostly agree with this. I think the disagreement between CDT and FDT/UDT advocates is less about definitions, and more about which of these things feels more compelling:
1. On the whole, FDT/UDT ends up with more utility.
(I think this intuition tends to hold more force with people the more emotionally salient “more utility” is to you. E.g., consider a version of Newcomb’s problem where two-boxing gets you $100, while one-boxing gets you $100,000 and saves your child’s life.)
2. I’m not the slave of my decision theory, or of the predictor, or of any environmental factor; I can freely choose to do anything in any dilemma, and by choosing to not leave money on the table (e.g., in a transparent Newcomb problem with a 1% chance of predictor failure where I’ve already observed that the second box is empty), I’m “getting away with something” and getting free utility that the FDT agent would miss out on.
(I think this intuition tends to hold more force with people the more emotionally salient it is to imagine the dollars sitting right there in front of you and you knowing that it’s “too late” for one-boxing to get you any more utility in this world.)
There are other considerations too, like how much it matters to you that CDT isn’t self-endorsing. CDT prescribes self-modifying in all future dilemmas so that you behave in a more UDT-like way. It’s fine to say that you personally lack the willpower to follow through once you actually get into the dilemma and see the boxes sitting in front of you; but it’s still the case that a sufficiently disciplined and foresightful CDT agent will generally end up behaving like FDT in the very dilemmas that have been cited to argue for CDT.
If a more disciplined and well-prepared version of you would have one-boxed, then isn’t there something off about saying that two-boxing is in any sense “correct”? Even the act of praising CDT seems a bit self-destructive here, inasmuch as (a) CDT prescribes ditching CDT, and (b) realistically, praising or identifying with CDT is likely to make it harder for a human being to follow through on switching to son-of-CDT (as CDT prescribes).
Mind you, if the sentence “CDT is the most rational decision theory” is true in some substantive, non-trivial, non-circular sense, then I’m inclined to think we should acknowledge this truth, even if it makes it a bit harder to follow through on the EDT+CDT+UDT prescription to one-box in strictly-future Newcomblike problems. When the truth is inconvenient, I tend to think it’s better to accept that truth than to linguistically conceal it.
But the arguments I’ve seen for “CDT is the most rational decision theory” to date have struck me as either circular, or as reducing to “I know CDT doesn’t get me the most utility, but something about it just feels right”.
It’s fine, I think, if “it just feels right” is meant to be a promissory note for some forthcoming account — a clue that there’s some deeper reason to favor CDT, though we haven’t discovered it yet. As the FDT paper puts it:
These are odd conclusions. It might even be argued that sufficiently odd behavior provides evidence that what FDT agents see as “rational” diverges from what humans see as “rational.” And given enough divergence of that sort, we might be justified in predicting that FDT will systematically fail to get the most utility in some as-yet-unknown fair test.
On the other hand, if “it just feels right” is meant to be the final word on why “CDT is the most rational decision theory”, then I feel comfortable saying that “rational” is a poor choice of word here, and neither maps onto a key descriptive category nor maps onto any prescription or norm worthy of being followed.
My impression is that most CDT advocates who know about FDT think FDT is making some kind of epistemic mistake, where the most popular candidate (I think) is some version of magical thinking.
Superstitious people often believe that it’s possible to directly causally influence things across great distances of time and space. At a glance, FDT’s prescription (“one-box, even though you can’t causally affect whether the box is full”) as well as its account of how and why this works (“you can somehow ‘control’ the properties of abstract objects like ‘decision functions’”) seem weird and spooky in the manner of a superstition.
FDT’s response: if a thing seems spooky, that’s a fine first-pass reason to be suspicious of it. But at some point, the accusation of magical thinking has to cash out in some sort of practical, real-world failure—in the case of decision theory, some systematic loss of utility that isn’t balanced by an equal, symmetric loss of utility from CDT. After enough experience of seeing a tool outperforming the competition in scenario after scenario, at some point calling the use of that tool “magical thinking” starts to ring rather hollow. At that point, it’s necessary to consider the possibility that FDT is counter-intuitive but correct (like Einstein’s “spukhafte Fernwirkung”), rather than magical.
In turn, FDT advocates tend to think the following reflects an epistemic mistake by CDT advocates:
2. I’m not the slave of my decision theory, or of the predictor, or of any environmental factor; I can freely choose to do anything in any dilemma, and by choosing to not leave money on the table (e.g., in a transparent Newcomb problem with a 1% chance of predictor failure where I’ve already observed that the second box is empty), I’m “getting away with something” and getting free utility that the FDT agent would miss out on.
The alleged mistake here is a violation of naturalism. Humans tend to think of themselves as free Cartesian agents acting upon the world, rather than as deterministic subprocesses of a larger deterministic process. If we consistently and whole-heartedly accepted the “deterministic subprocess” view of our decision-making, we would find nothing strange about the idea that it’s sometimes right for this subprocess to do locally incorrect things for the sake of better global results.
E.g., consider the transparent Newcomb problem with a 1% chance of predictor error. If we think of the brain’s decision-making as a rule-governed system whose rules we are currently determining (via a meta-reasoning process that is itself governed by deterministic rules), then there’s nothing strange about enacting a rule that gets us $1M in 99% of outcomes and $0 in 1% of outcomes; and following through when the unlucky 1% scenario hits us is nothing to agonize over, it’s just a consequence of the rule we already decided. In that regard, steering the rule-governed system that is your brain is no different than designing a factory robot that performs well enough in 99% of cases to offset the 1% of cases where something goes wrong.
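To put rough numbers on the 99%/1% point (a worked example of my own, not from the original comment, using the standard $1,000,000 and $1,000 payoffs):

```python
# Expected value of the two candidate rules in the transparent Newcomb
# problem with a 1% predictor error rate (illustrative sketch).

ERROR = 0.01                  # predictor is wrong 1% of the time
BIG, SMALL = 1_000_000, 1_000

# Rule "always one-box": 99% of the time the predictor fills the big box and
# you take it; 1% of the time the box is empty and you walk away with nothing.
ev_one_box_rule = (1 - ERROR) * BIG + ERROR * 0

# Rule "always two-box": 99% of the time the predictor leaves the big box
# empty and you get $1,000; 1% of the time it errs and you get both boxes.
ev_two_box_rule = (1 - ERROR) * SMALL + ERROR * (BIG + SMALL)

print(ev_one_box_rule)   # 990000.0
print(ev_two_box_rule)   # 11000.0
```

On these numbers, installing the one-boxing rule is worth roughly $990,000 in expectation versus roughly $11,000 for the two-boxing rule; the occasional empty-box case is just the price of the rule that wins the other 99% of the time.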
(Note how a lot of these points are more intuitive in CS language. I don’t think it’s a coincidence that people coming from CS were able to improve on academic decision theory’s ideas on these points; I think it’s related to what kinds of stumbling blocks get in the way of thinking in these terms.)
Suppose you initially tell yourself:
“I’m going to one-box in all strictly-future transparent Newcomb problems, since this produces more expected causal (and evidential, and functional) utility. One-boxing and receiving $1M in 99% of future states is worth the $1000 cost of one-boxing in the other 1% of future states.”
Suppose that you then find yourself facing the 1%-likely outcome where Omega leaves the box empty regardless of your choice. You then have a change of heart and decide to two-box after all, taking the $1000.
I claim that the above description feels from the inside like your brain is escaping the iron chains of determinism (even if your scientifically literate system-2 verbal reasoning fully recognizes that you’re a deterministic process). And I claim that this feeling (plus maybe some reluctance to fully accept the problem description as accurate?) is the only thing that makes CDT’s decision seem reasonable in this case.
In reality, however, if we end up not following through on our verbal commitment and we two-box in that 1% scenario, then this would just prove that we’d been mistaken about what rule we had successfully installed in our brains. As it turns out, we were really following the lower-global-utility rule from the outset. A lack of follow-through or a failure of will is itself a part of the decision-making process that Omega is predicting; however much it feels as though a last-minute swerve is you “getting away with something”, it’s really just you deterministically following through on an algorithm that will get you less utility in 99% of scenarios (while happening to be bad at predicting your own behavior and bad at following through on verbalized plans).
I should emphasize that the above is my own attempt to characterize the intuitions behind CDT and FDT, based on the arguments I’ve seen in the wild and based on what makes me feel more compelled by CDT, or by FDT. I could easily be wrong about the crux of disagreement between some CDT and FDT advocates.
In turn, FDT advocates tend to think the following reflects an epistemic mistake by CDT advocates:
I’m not the slave of my decision theory, or of the predictor, or of any environmental factor; I can freely choose to do anything in any dilemma, and by choosing to not leave money on the table (e.g., in a transparent Newcomb problem with a 1% chance of predictor failure where I’ve already observed that the second box is empty), I’m “getting away with something” and getting free utility that the FDT agent would miss out on.
The alleged mistake here is a violation of naturalism. Humans tend to think of themselves as free Cartesian agents acting upon the world, rather than as deterministic subprocesses of a larger deterministic process. If we consistently and whole-heartedly accepted the “deterministic subprocess” view of our decision-making, we would find nothing strange about the idea that it’s sometimes right for this subprocess to do locally incorrect things for the sake of better global results.
Is the following a roughly accurate re-characterization of the intuition here?
“Suppose that there’s an agent that implements P_UDT. Because it is following P_UDT, when it enters the box room it finds a ton of money in the first box and then refrains from taking the money in the second box. People who believe R_CDT claim that the agent should have also taken the money in the second box. But, given that the universe is deterministic, this doesn’t really make sense. From before the moment the agent entered the room, it was already determined that the agent would one-box. Since (in a physically deterministic sense) the P_UDT agent could not have two-boxed, there’s no relevant sense in which the agent should have two-boxed.”
If so, then I suppose my first reaction is that this seems like a general argument against normative realism rather than an argument against any specific proposed criterion of rightness. It also applies, for example, to the claim that a P_CDT agent “should have” one-boxed—since in a physically deterministic sense it could not have. Therefore, I think it’s probably better to think of this as an argument against the truth (and possibly conceptual coherence) of both R_CDT and R_UDT, rather than an argument that favors one over the other.
In general, it seems to me like all statements that evoke counterfactuals have something like this problem. For example, it is physically determined what sort of decision procedure we will build into any given AI system; only one choice of decision procedure is physically consistent with the state of the world at the time the choice is made. So—insofar as we accept this kind of objection from determinism—there seems to be something problematically non-naturalistic about discussing what “would have happened” if we built in one decision procedure or another.
Since (in a physically deterministic sense) the P_UDT agent could not have two-boxed, there’s no relevant sense in which the agent should have two-boxed.”
No, I don’t endorse this argument. To simplify the discussion, let’s assume that the Newcomb predictor is infallible. FDT agents, CDT agents, and EDT agents each get a decision: two-box (which gets you $1000 plus an empty box), or one-box (which gets you $1,000,000 and leaves the $1000 behind). Obviously, insofar as they are in fact following the instructions of their decision theory, there’s only one possible outcome; but it would be odd to say that a decision stops being a decision just because it’s determined by something. (What’s the alternative?)
I do endorse “given the predictor’s perfect accuracy, it’s impossible for the P_UDT agent to two-box and come away with $1,001,000”. I also endorse “given the predictor’s perfect accuracy, it’s impossible for the P_CDT agent to two-box and come away with $1,001,000”. Per the problem specification, no agent can two-box and get $1,001,000 or one-box and get $0. But this doesn’t mean that no decision is made; it just means that the predictor can predict the decision early enough to fill the boxes accordingly.
(Notably, the agent following P_CDT two-boxes because $1,001,000 > $1,000,000 and $1000 > $0, even though this “dominance” argument appeals to two outcomes that are known to be impossible just from the problem statement. I certainly don’t think agents “should” try to achieve outcomes that are impossible from the problem specification itself. The reason non-CDT agents get more utility than CDT agents in Newcomb’s problem is that they take into account that the predictor is a predictor when they construct their counterfactuals.)
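For concreteness, here is a quick enumeration (my own illustration) of the four nominal outcomes in the standard Newcomb payoff structure, marking which of them a perfect predictor actually allows:

```python
# The four nominal (action, prediction) pairs in Newcomb's problem with a
# perfect predictor. The two outcomes the "dominance" argument appeals to
# ($1,001,000 and $0) are exactly the ones the problem statement rules out.

payoffs = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("one-box", "predicted two-box"): 0,
    ("two-box", "predicted one-box"): 1_001_000,
    ("two-box", "predicted two-box"): 1_000,
}

for (action, prediction), payoff in payoffs.items():
    # With a perfect predictor, only the rows where the prediction matches
    # the action can actually occur.
    possible = prediction.endswith(action)
    status = "possible" if possible else "impossible"
    print(f"{action} | {prediction} | ${payoff:,} | {status}")
```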
In the transparent version of this dilemma, the agent who sees the $1M and one-boxes also “could have two-boxed”, but if they had two-boxed, it would only have been after making a different observation. In that sense, if the agent has any lingering uncertainty about what they’ll choose, the uncertainty goes away as soon as they see whether the box is full.
In general, it seems to me like all statements that evoke counterfactuals have something like this problem. For example, it is physically determined what sort of decision procedure we will build into any given AI system; only one choice of decision procedure is physically consistent with the state of the world at the time the choice is made. So—insofar as we accept this kind of objection from determinism—there seems to be something problematically non-naturalistic about discussing what “would have happened” if we built in one decision procedure or another.
No, there’s nothing non-naturalistic about this. Consider the scenario you and I are in. Simplifying somewhat, we can think of ourselves as each doing meta-reasoning to try to choose between different decision algorithms to follow going forward; where the new things we learn in this conversation are themselves a part of that meta-reasoning.
The meta-reasoning process is deterministic, just like the object-level decision algorithms are. But this doesn’t mean that we can’t choose between object-level decision algorithms. Rather, the meta-reasoning (in spite of having deterministic causes) chooses either “I think I’ll follow P_FDT from now on” or “I think I’ll follow P_CDT from now on”. Then the chosen decision algorithm (in spite of also having deterministic causes) outputs choices about subsequent actions to take. Meta-processes that select between decision algorithms (to put into an AI, or to run in your own brain, or to recommend to other humans, etc.) can make “real decisions”, for exactly the same reason (and in exactly the same sense) that the decision algorithms in question can make real decisions.
It isn’t problematic that all these processes require us to consider counterfactuals that (if we were omniscient) we would perceive as inconsistent/impossible. Deliberation, both at the object level and at the meta level, just is the process of determining the unique and only possible decision. Yet because we are uncertain about the outcome of the deliberation while deliberating, and because the details of the deliberation process do determine our decision (even as these details themselves have preceding causes), it feels from the inside of this process as though both options are “live”, are possible, until the very moment we decide.
I certainly don’t think agents “should” try to achieve outcomes that are impossible from the problem specification itself.
I think you need to make a clearer distinction here between “outcomes that don’t exist in the universe’s dynamics” (like taking both boxes and receiving $1,001,000) and “outcomes that can’t exist in my branch” (like there not being a bomb in the unlucky case). Because if you’re operating just in the branch you find yourself in, many outcomes whose probability an FDT agent is trying to affect are impossible from the problem specification (once you include observations).
And, to be clear, I do think agents “should” try to achieve outcomes that are impossible from the problem specification including observations, if certain criteria are met, in a way that basically lines up with FDT, just like agents “should” try to achieve outcomes that are already known to have happened from the problem specification including observations.
As an example, if you’re in Parfit’s Hitchhiker, you should pay once you reach town, even though reaching town has probability 1 in cases where you’re deciding whether or not to pay, and the reason for this is because it was necessary for reaching town to have had probability 1.
Notably, the agent following P_CDT two-boxes because $1,001,000 > $1,000,000 and $1000 > $0, even though this “dominance” argument appeals to two outcomes that are known to be impossible just from the problem statement. I certainly don’t think agents “should” try to achieve outcomes that are impossible from the problem specification itself.
Suppose that we accept the principle that agents never “should” try to achieve outcomes that are impossible from the problem specification—with one implication being that it’s false that (as R_CDT suggests) agents that see a million dollars in the first box “should” two-box.
This seems to imply that it’s also false that (as R_UDT suggests) an agent that sees that the first box is empty “should” one-box. By the problem specification, of course, one-boxing when there is no money in the first box is also an impossible outcome. Since decisions to two-box only occur when the first box is empty, this would then imply that decisions to two-box are never irrational in the context of this problem. But I imagine you don’t want to say that.
I think I probably still don’t understand your objection here—so I’m not sure this point is actually responsive to it—but I initially have trouble seeing what potential violations of naturalism/determinism R_CDT could be committing that R_UDT would not also be committing.
(Of course, just to be clear, both R_UDT and R_CDT imply that the decision to commit yourself to a one-boxing policy at the start of the game would be rational. They only diverge in their judgments of what actual in-room boxing decision would be rational. R_UDT says that the decision to two-box is irrational and R_CDT says that the decision to one-box is irrational.)
But the arguments I’ve seen for “CDT is the most rational decision theory” to date have struck me as either circular, or as reducing to “I know CDT doesn’t get me the most utility, but something about it just feels right”.
It seems to me like they’re coming down to saying something like: the “Guaranteed Payoffs Principle” / “Don’t Make Things Worse Principle” is more core to rational action than being self-consistent. Whereas others think self-consistency is more important.
Mind you, if the sentence “CDT is the most rational decision theory” is true in some substantive, non-trivial, non-circular sense
It’s not clear to me that the justification for CDT is more circular than the justification for FDT. Doesn’t it come down to which principles you favor?
Maybe you could say FDT is more elegant. Or maybe that it satisfies more of the intuitive properties we’d hope for from a decision theory (where elegance might be one of those). But I’m not sure that would make the justification less-circular per se.
I guess one way the justification for CDT could be more circular is if the key or only principle that pushes in favor of it over FDT can really just be seen as a restatement of CDT in a way that the principles that push in favor of FDT do not. Is that what you would claim?
Whereas others think self-consistency is more important.
The main argument against CDT (in my view) is that it tends to get you less utility (regardless of whether you add self-modification so it can switch to other decision theories). Self-consistency is a secondary issue.
It’s not clear to me that the justification for CDT is more circular than the justification for FDT. Doesn’t it come down to which principles you favor?
FDT gets you more utility than CDT. If you value literally anything in life more than you value “which ritual do I use to make my decisions?”, then you should go with FDT over CDT; that’s the core argument.
This argument for FDT would be question-begging if CDT proponents rejected utility as a desirable thing. But instead CDT proponents who are familiar with FDT agree utility is a positive, and either (a) they think there’s no meaningful sense in which FDT systematically gets more utility than CDT (which I think is adequately refuted by Abram Demski), or (b) they think that CDT has other advantages that outweigh the loss of utility (e.g., CDT feels more intuitive to them).
The latter argument for CDT isn’t circular, but as a fan of utility (i.e., of literally anything else in life), it seems very weak to me.
The main argument against CDT (in my view) is that it tends to get you less utility (regardless of whether you add self-modification so it can switch to other decision theories). Self-consistency is a secondary issue.
I do think the argument ultimately needs to come down to an intuition about self-effacingness.
The fact that agents earn less expected utility if they implement P_CDT than if they implement some other decision procedure seems to support the claim that agents should not implement P_CDT.
But there’s nothing logically inconsistent about believing both (a) that R_CDT is true and (b) that agents should not implement P_CDT. To again draw an analogy with a similar case, there’s also nothing logically inconsistent about believing both (a) that utilitarianism is true and (b) that agents should not in general make decisions by carrying out utilitarian reasoning.
So why shouldn’t I believe that R_CDT is true? The argument needs an additional step. And it seems to me like the most natural additional step here involves an intuition that the criterion of rightness would not be self-effacing.
More formally, it seems like the argument needs to be something along these lines:
1. Over their lifetimes, agents who implement P_CDT earn less expected utility than agents who implement certain other decision procedures.
2. (Assumption) Agents should implement whatever decision procedure will earn them the most expected lifetime utility.
3. Therefore, agents should not implement P_CDT.
4. (Assumption) The criterion of rightness is not self-effacing. Equivalently, if agents should not implement some decision procedure P_X, then it is not the case that R_X is true.
5. Therefore—as an implication of points (3) and (4)—R_CDT is not true.
Whether you buy the “No Self-Effacement” assumption in Step 4—or, alternatively, the countervailing “Don’t Make Things Worse” assumption that supports R_CDT—seems to ultimately be a matter of intuition. At least, I don’t currently know what else people can appeal to here to resolve the disagreement.
[[SIDENOTE: Step 2 is actually a bit ambiguous, since it doesn’t specify how expected lifetime utility is being evaluated. For example, are we talking about expected lifetime utility from a causal or evidential perspective? But I don’t think this ambiguity matters much for the argument.]]
[[SECOND SIDENOTE: I’m using the phrase “self-effacing” rather than “self-contradictory” here, because I think it’s more standard and because “self-contradictory” seems to suggest logical inconsistency.]]
But there’s nothing logically inconsistent about believing both (a) that R_CDT is true and (b) that agents should not implement P_CDT.
If the thing being argued for is “R_CDT plus P_SONOFCDT”, then that makes sense to me, but is vulnerable to all the arguments I’ve been making: Son-of-CDT is in a sense the worst of both worlds, since it gets less utility than FDT and lacks CDT’s “Don’t Make Things Worse” principle.
If the thing being argued for is “R_CDT plus P_FDT”, then I don’t understand the argument. In what sense is P_FDT compatible with, or conducive to, R_CDT? What advantage does this have over “R_FDT plus P_FDT”? (Indeed, what difference between the two views would be intended here?)
So why shouldn’t I believe that R_CDT is true? The argument needs an additional step. And it seems to me like the most natural additional step here involves an intuition that the criterion of rightness would not be self-effacing.
The argument against “R_CDT plus P_SONOFCDT” doesn’t require any mention of self-effacingness; it’s entirely sufficient to note that P_SONOFCDT gets less utility than P_FDT.
The argument against “R_CDT plus P_FDT” seems to demand some reference to self-effacingness or inconsistency, or triviality / lack of teeth. But I don’t understand what this view would mean or why anyone would endorse it (and I don’t take you to be endorsing it).
For example, are we talking about expected lifetime utility from a causal or evidential perspective? But I don’t think this ambiguity matters much for the argument.
We want to evaluate actual average utility rather than expected utility, since the different decision theories are different theories of what “expected utility” means.
Hm, I think I may have misinterpreted your previous comment as emphasizing the point that P_CDT “gets you less utility” rather than the point that P_SONOFCDT “gets you less utility.” So my comment was aiming to explain why I don’t think the fact that P_CDT gets less utility provides a strong challenge to the claim that R_CDT is true (unless we accept the “No Self-Effacement Principle”). But it sounds like you might agree that this fact doesn’t on its own provide a strong challenge.
If the thing being argued for is “R_CDT plus P_SONOFCDT”, then that makes sense to me, but is vulnerable to all the arguments I’ve been making: Son-of-CDT is in a sense the worst of both worlds, since it gets less utility than FDT and lacks CDT’s “Don’t Make Things Worse” principle.
In response to the first argument alluded to here: “Gets the most [expected] utility” is ambiguous, as I think we’ve both agreed.
My understanding is that P_SONOFCDT is definitionally the policy that, if an agent decided to adopt it, would cause the largest increase in expected utility. So—if we evaluate the expected utility of a decision to adopt a policy from a causal perspective—it seems to me that P_SONOFCDT “gets the most expected utility.”
If we evaluate the expected utility of a policy from an evidential or subjunctive perspective, however, then another policy may “get the most utility” (because policy adoption decisions may be non-causally correlated.)
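To spell out the ambiguity in rough notation (my own shorthand, not anything from the thread): the causal reading picks the policy whose adoption causally maximizes expected utility, while the evidential/subjunctive reading conditions on the adoption instead, and the two can pick different policies when adoption decisions are correlated with other facts:

```latex
% Rough shorthand, not standard notation from the thread.
\[
P_{\text{SONOFCDT}} \;=\; \arg\max_{\pi}\; \mathbb{E}\!\left[U \mid \mathrm{do}(\text{adopt } \pi)\right]
\qquad \text{(causal reading)}
\]
\[
P_{\text{alt}} \;=\; \arg\max_{\pi}\; \mathbb{E}\!\left[U \mid \text{adopt } \pi\right]
\qquad \text{(evidential reading; may select a different policy)}
\]
```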
Apologies if I’m off-base, but it reads to me like you might be suggesting an argument along these lines:
1. R_CDT says that it is rational to decide to follow a policy that would not maximize “expected utility” (defined in evidential/subjunctive terms).
2. (Assumption) But it is not rational to decide to follow a policy that would not maximize “expected utility” (defined in evidential/subjunctive terms).
3. Therefore R_CDT is not true.
The natural response to this argument is that it’s not clear why we should accept the assumption in Step 2. R_CDT says that the rationality of a decision depends on its “expected utility” defined in causal terms. So someone starting from the position that R_CDT is true obviously won’t accept the assumption in Step 2. R_EDT and R_FDT say that the rationality of a decision depends on its “expected utility” defined in evidential or subjunctive terms. So we might allude to R_EDT or R_FDT to justify the assumption, but of course this would also mean arguing backwards from the conclusion that the argument is meant to reach.
Overall at least this particular simple argument—that R_CDT is false because P_SONOFCDT gets less “expected utility” as defined in evidential/quasi-evidential terms—would seemingly fail due to circularity. But you may have in mind a different argument.
We want to evaluate actual average utility rather than expected utility, since the different decision theories are different theories of what “expected utility” means.
I felt confused by this comment. Doesn’t even R_FDT judge the rationality of a decision by its expected value (rather than its actual value)? And presumably you don’t want to say that someone who accepts unpromising gambles and gets lucky (ending up with high actual average utility) has made more “rational” decisions than someone who accepts promising gambles and gets unlucky (ending up with low actual average utility)?
You also correctly point out that the decision procedure that R_CDT implies agents should rationally commit to—P_SONOFCDT—sometimes outputs decisions that definitely make things worse. So “Don’t Make Things Worse” implies that some of the decisions outputted by P_SONOFCDT are irrational.
But I still don’t see what the argument is here unless we’re assuming “No Self-Effacement.” It still seems to me like we have a few initial steps and then a missing piece.
1. (Observation) R_CDT implies that it is rational to commit to following the decision procedure P_SONOFCDT.
2. (Observation) P_SONOFCDT sometimes outputs decisions that definitely make things worse.
3. (Assumption) It is irrational to take decisions that definitely make things worse. In other words, the “Don’t Make Things Worse” Principle is true.
4. Therefore, as an implication of Step 2 and Step 3, P_SONOFCDT sometimes outputs irrational decisions.
5. ???
6. Therefore, R_CDT is false.
The “No Self-Effacement” Principle is equivalent to the principle that: If a criterion of rightness implies that it is rational to commit to a decision procedure, then that decision procedure only produces rational actions. So if we were to assume “No Self-Effacement” in Step 5 then this would allow us to arrive at the conclusion that R_CDT is false. But if we’re not assuming “No Self-Effacement,” then it’s not clear to me how we get there.
Actually, in the context of this particular argument, I suppose we don’t really have the option of assuming that “No Self-Effacement” is true—because this assumption would be inconsistent with the earlier assumption that “Don’t Make Things Worse” is true. So I’m not sure it’s actually possible to make this argument schema work in any case.
There may be a pretty different argument here, which you have in mind. I at least don’t see it yet though.
Perhaps the argument is something like:
1. “Don’t make things worse” (DMTW) is one of the intuitions that leads us to favoring R_CDT.
2. But the actual policy that R_CDT recommends does not in fact follow DMTW.
3. So R_CDT only gets intuitive appeal from DMTW to the extent that DMTW was about R_′s, and not about P_′s.
4. But intuitions are probably(?) not that precisely targeted, so R_CDT shouldn’t get to claim the full intuitive endorsement of DMTW. (Yes, DMTW endorses it more than it endorses R_FDT, but R_CDT is still at least somewhat counter-intuitive when judged against the DMTW intuition.)
Here are two logically inconsistent principles that could be true:
Don’t Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don’t Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
I have strong intuitions that the first one is true. I have much weaker (comparatively negligible) intuitions that the second one is true. Since they’re mutually inconsistent, I reject the second and accept the first. I imagine this is also true of most other people who are sympathetic to R_CDT.
One could argue that R_CDT sympathists don’t actually have much stronger intuitions regarding the first principle than the second—i.e. that their intuitions aren’t actually very “targeted” on the first one—but I don’t think that would be right. At least, it’s not right in my case.
A more viable strategy might be to argue for something like a meta-principle:
The ‘Don’t Make Things Worse’ Meta-Principle: If you find “Don’t Make Things Worse” strongly intuitive, then you should also find “Don’t Commit to a Policy That In the Future Will Sometimes Make Things Worse” just about as intuitive.
If the meta-principle were true, then I guess this would sort of imply that people’s intuitions in favor of “Don’t Make Things Worse” should be self-neutralizing. They should come packaged with equally strong intuitions for another position that directly contradicts it.
But I don’t see why the meta-principle should be true. At least, my intuitions in favor of the meta-principle are way less strong than my intuitions in favor of “Don’t Make Things Worse” :)
Just to say slightly more on this, I think the Bomb case is again useful for illustrating my (I think not uncommon) intuitions here.
Bomb Case: Omega puts a million dollars in a transparent box if he predicts you’ll open it. He puts a bomb in the transparent box if he predicts you won’t open it. He’s only wrong about one in a trillion times.
Now suppose you enter the room and see that there’s a bomb in the box. You know that if you open the box, the bomb will explode and you will die a horrible and painful death. If you leave the room and don’t open the box, then nothing bad will happen to you. You’ll return to a grateful family and live a full and healthy life. You understand all this. You want so badly to live. You then decide to walk up to the bomb and blow yourself up.
Intuitively, this decision strikes me as deeply irrational. You’re intentionally taking an action that you know will cause a horrible outcome that you want badly to avoid. It feels very relevant that you’re flagrantly violating the “Don’t Make Things Worse” principle.
Now, let’s step back a time step. Suppose you know that you’re the sort of person who would refuse to kill yourself by detonating the bomb. You might decide that—since Omega is such an accurate predictor—it’s worth taking a pill that turns you into the sort of person who would open the box no matter what, to increase your odds of getting a million dollars. You recognize that this may lead you, in the future, to take an action that makes things worse in a horrifying way. But you calculate that the decision you’re making now is nonetheless making things better in expectation.
This decision strikes me as pretty intuitively rational. You’re violating the second principle—the “Don’t Commit to a Policy...” Principle—but this violation just doesn’t seem that intuitively relevant or remarkable to me. I personally feel like there is nothing too odd about the idea that it can be rational to commit to violating principles of rationality in the future.
(This is obviously just a description of my own intuitions, as they stand, though.)
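For what it's worth, the "better in expectation" claim about the pill is easy to make concrete. The sketch below uses assumed utility figures (especially the disutility assigned to death); only the ordering matters for the point:

```python
# Back-of-the-envelope expected values for the pill decision in the Bomb case.
# All utility figures are assumed for illustration.

P_ERROR = 1e-12           # Omega's stipulated error rate: one in a trillion
U_MILLION = 1_000_000     # open a box containing a million dollars
U_DEATH = -1_000_000_000  # assumed (large negative) utility of detonating the bomb
U_NOTHING = 0             # walk away; nothing happens

# Take the pill: you become the sort of person who opens the box no matter what,
# so Omega almost always predicts this and puts the money in.
ev_open_type = (1 - P_ERROR) * U_MILLION + P_ERROR * U_DEATH

# Refuse the pill: Omega almost always predicts you won't open the box and puts
# in the bomb, which you then leave alone; in the rare error case the money is
# there but you still don't open the box.
ev_refuse_type = (1 - P_ERROR) * U_NOTHING + P_ERROR * U_NOTHING

print(ev_open_type, ev_refuse_type)  # roughly 1,000,000 vs 0
```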
It feels very relevant that you’re flagrantly violating the “Don’t Make Things Worse” principle.
By triggering the bomb, you’re making things worse from your current perspective, but making things better from the perspective of earlier you. Doesn’t that seem strange and deserving of an explanation? The explanation from a UDT perspective is that by updating upon observing the bomb, you actually changed your utility function. You used to care about both the possible worlds where you end up seeing a bomb in the box, and the worlds where you don’t. After updating, you think you’re either a simulation within Omega’s prediction (so your action has no effect on yourself) or in the world with a real bomb; you no longer care about the version of you in the world with a million dollars in the box, and this accounts for the conflict/inconsistency.
Given the human tendency to change our (UDT-)utility functions by updating, it’s not clear what to do (or what is right), and I think this reduces UDT’s intuitive appeal and makes it less of a slam-dunk over CDT/EDT. But it seems to me that it takes switching to the UDT perspective to even understand the nature of the problem. (Quite possibly this isn’t adequately explained in MIRI’s decision theory papers.)
Don’t Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don’t Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
...
One could argue that R_CDT sympathists don’t actually have much stronger intuitions regarding the first principle than the second—i.e. that their intuitions aren’t actually very “targeted” on the first one—but I don’t think that would be right. At least, it’s not right in my case.
I would agree that, with these two principles as written, more people would agree with the first. (And I certainly believe you that that’s right in your case.)
But I feel like the second doesn’t quite capture what I had in mind regarding the DMTW intuition applied to P_′s.
Consider an alternate version:
If a decision would definitely make things worse, then taking that decision is not good policy.
Or alternatively:
If a decision would definitely make things worse, a rational person would not take that decision.
It seems to me that these two claims are naively intuitive on their face, in roughly the same way that the ”… then taking that decision is not rational.” version is. And it’s only after you’ve considered prisoners’ dilemmas or Newcomb’s paradox, etc. that you realize that good policy (or being a rational agent) actually diverges from what’s rational in the moment.
(But maybe others would disagree on how intuitive these versions are.)
EDIT: And to spell out my argument a bit more: if several alternate formulations of a principle are each intuitively appealing, and it turns out that whether some claim (e.g. R_CDT is true) is consistent with the principle comes down to the precise formulation used, then it’s not quite fair to say that the principle fully endorses the claim and that the claim is not counter-intuitive from the perspective of the original intuition.
Of course, this argument is moot if it’s true that the original DMTW intuition was always about rational in-the-moment action, and never about policies or actors. And maybe that’s the case? But I think it’s a little more ambiguous with the ”… is not good policy” or “a rational person would not...” versions than with the “Don’t commit to a policy...” version.
EDIT2: Does what I’m trying to say make sense? (I felt like I was struggling a bit to express myself in this comment.)
If the thing being argued for is “R_CDT plus P_SONOFCDT” … If the thing being argued for is “R_CDT plus P_FDT...
Just as a quick sidenote:
I’ve been thinking of P_SONOFCDT as, by definition, the decision procedure that R_CDT implies that it is rational to commit to implementing.
If we define P_SONOFCDT this way, then anyone who believes that R_CDT is true must also believe that it is rational to implement P_SONOFCDT.
The belief that R_CDT is true and the belief that it is rational to implement P_FDT would only then be consistent if P_SONOFCDT were equivalent to P_FDT (which of course it isn’t). So I would be inclined to say that no one should believe in both the correctness of R_CDT and the rationality of implementing P_FDT.
[[EDIT: Actually, I need to distinguish between the decision procedure that it would be rational to commit to yourself and the decision procedure that it would be rational to build into other agents. These can sometimes be different. For example, suppose that R_CDT is true and that you’re building twin AI systems and you would like them both to succeed. Then it would be rational for you to give them decision procedures that will cause them to cooperate if they face each other in a prisoner’s dilemma (e.g. some version of P_FDT). But if R_CDT is true and you’ve just been born into the world as one of the twins, it would be rational for you to commit to a decision procedure that would cause you to defect if you face the other AI system in a prisoner’s dilemma (i.e. P_SONOFCDT). I slightly edited the above comment to reflect this. My tentative view—which I’ve alluded to above—is that the various proposed criteria of rightness don’t in practice actually diverge all that much when it comes to the question of what sorts of decision procedures we should build into AI systems. Although I also understand that MIRI is not mainly interested in the question of what sorts of decision procedures we should build into AI systems.]]
I think “Don’t Make Things Worse” is a plausible principle at first glance.
One argument against this principle is that CDT endorses following it if you must, but would prefer to self-modify to stop following it (since doing so has higher expected causal utility). The general policy of following the “Don’t Make Things Worse Principle” makes things worse.
Once you’ve already adopted son-of-CDT, which says something like “act like UDT in future dilemmas insofar as the correlations were produced after I adopted this rule, but act like CDT in those dilemmas insofar as the correlations were produced before I adopted this rule”, it’s not clear to me why you wouldn’t just go: “Oh. CDT has lost the thing I thought made it appealing in the first place, this ‘Don’t Make Things Worse’ feature. If we’re going to end up stuck with UDT plus extra theoretical ugliness and loss-of-utility tacked on top, then why not just switch to UDT full stop?”
A more general argument against the Bomb intuition pump is that it involves trading away larger amounts of utility in most possible world-states, in order to get a smaller amount of utility in the Bomb world-state. From Abram Demski’s comments:
And:
This just seems to be the point that R_CDT is self-effacing: It says that people should not follow P_CDT, because following other decision procedures will produce better outcomes in expectation.
I definitely agree that R_CDT is self-effacing in this way (at least in certain scenarios). The question is just whether self-effacingness or failure to satisfy “Don’t Make Things Worse” is more relevant when trying to judge the likelihood of a criterion of rightness being correct. I’m not sure whether it’s possible to do much here other than present personal intuitions.
The point that R_UDT violates the “Don’t Make Things Worse” principle only infrequently seems relevant, but I’m still not sure this changes my intuitions very much.
I may just be missing something, but I don’t see what this theoretical ugliness is. And I don’t intuitively find the ugliness/elegance of the decision procedure recommend by a criterion of rightness to be very relevant when trying to judge whether the criterion is correct.
[[EDIT: Just an extra thought on the fact that R_CDT is self-effacing. My impression is that self-effacingness is typically regarded as a relatively weak reason to reject a moral theory. For example, a lot of people regard utilitarianism as self-effacing both because it’s costly to directly evaluate the utility produced by actions and because others often react poorly to people who engage in utilitarian-style reasoning -- but this typically isn’t regarded as a slam-dunk reason to believe that utilitarianism is false. I think the SEP article on consequentialism is expressing a pretty mainstream position when it says: “[T]here is nothing incoherent about proposing a decision procedure that is separate from one’s criterion of the right.… Criteria can, thus, be self-effacing without being self-refuting.” Insofar as people don’t tend to buy self-effacingness as a slam-dunk argument against the truth of moral theories, it’s not clear why they should buy it as a slam-dunk argument against the truth of normative decision theories.]]
Sorry to drop in in the middle of this back and forth, but I am curious—do you think it’s quite likely that there is a single criterion of rightness that is objectively “correct”?
It seems to me that we have a number of intuitive properties (meta criteria of rightness?) that we would like a criterion of rightness to satisfy (e.g. “don’t make things worse”, or “don’t be self-effacing”). And so far there doesn’t seem to be any single criterion that satisfies all of them.
So why not just conclude that, similar to the case with voting and Arrow’s theorem, perhaps there’s just no single perfect criterion of rightness.
In other words, once we agree that CDT doesn’t make things worse, but that UDT is better as a general policy, is there anything left to argue about about which is “correct”?
EDIT: Decided I had better go and read your Realism and Rationality post, and ended up leaving a lengthy comment there.
Happy to be dropped in on :)
I think it’s totally conceivable that no criterion of rightness is correct (e.g. because the concept of a “criterion of rightness” turns out to be some spooky bit of nonsense that doesn’t really map onto anything in the real world.)
I suppose the main things I’m arguing are just that:
When a philosopher expresses support for a “decision theory,” they are typically saying that they believe some claim about what the correct criterion of rightness is.
Claims about the correct criterion of rightness are distinct from decision procedures.
Therefore, when a member of the rationalist community uses the word “decision theory” to refer to a decision procedure, they are talking about something that’s pretty conceptually distinct from what philosophers typically have in mind. Discussions about what decision procedure performs best or about what decision procedure we should build into future AI systems [[EDIT: or what decision procedure most closely matches our preferences about decision procedures]] don’t directly speak to the questions that most academic “decision theorists” are actually debating with one another.
I also think that, conditional on there being a correct criterion of rightness, R_CDT is more plausible than R_UDT. But this is a relatively tentative view. I’m definitely not a super hardcore R_CDT believer.
I guess here—in almost definitely too many words—is how I think about the issue here. (Hopefully these comments are at least somewhat responsive to your question.)
It seems like the following general situation is pretty common: Someone is initially inclined to think that anything with property P will also have properties Q1 and Q2. But then they realize that properties Q1 and Q2 are inconsistent with one another.
One possible reaction to this situation is to conclude that nothing actually has property P. Maybe the idea of property P isn’t even conceptually coherent and we should stop talking about it (while continuing to independently discuss properties Q1 and Q2). Often the more natural reaction, though, is to continue to believe that some things have property P—but just drop the assumption that these things will also have both property Q1 and property Q2.
This is obviously a pretty abstract description, so I’ll give a few examples. (No need to read the examples if the point seems obvious.)
Ethics: I might initially be inclined to think that it’s always ethical (property P) to maximize happiness and that it’s always unethical to torture people. But then I may realize that there’s an inconsistency here: in at least rare circumstances, such as ticking time-bomb scenarios where torture can extract crucial information, there may be no decision that is both happiness maximizing (Q1) and torture-avoiding (Q2). It seems like a natural reaction here is just to drop either the belief that maximizing happiness is always ethical or that torture is always unethical. It doesn’t seem like I need to abandon my belief that some actions have the property of being ethical.
Theology: I might initially be inclined to think that God is all-knowing, all-powerful, and all-good. But then I might come to believe (whether rightly or not) that, given the existence of evil, these three properties are inconsistent. I might then continue to believe that God exists, but just drop my belief that God is all-good. (To very awkwardly re-express this in the language of properties: This would mean dropping my belief that any entity that has the property of being God also has the property of being all-good).
Politician-bashing: I might initially be inclined to characterize some politician both as an incompetent leader and as someone who’s successfully carrying out an evil long-term plan to transform the country. Then I might realize that these two characterizations are in tension with one another. A pretty natural reaction, then, might be to continue to believe the politician exists—but just drop my belief that they’re incompetent.
To turn to the case of the decision-theoretic criterion of rightness, I might initially be inclined to think that the correct criterion of rightness will satisfy both “Don’t Make Things Worse” and “No Self-Effacement.” It’s now become clear, though, that no criterion of rightness can satisfy both of these principles. I think it’s pretty reasonable, then, to continue to believe that there’s a correct criterion of rightness—but just drop the belief that the correct criterion of rightness will also satisfy “No Self-Effacement.”
Thanks! This is helpful.
I think I disagree with the claim (or implication) that keeping P is more often more natural. Well, you’re just saying it’s “often” natural, and I suppose it’s natural in some cases and not others. But I think we may disagree on how often it’s natural, though hard to say at this very abstract level. (Did you see my comment in response to your Realism and Rationality post?)
In particular, I’m curious what makes you optimistic about finding a “correct” criterion of rightness. In the case of the politician, it seems clear that learning they don’t have some of the properties you thought they had shouldn’t call into question whether they exist at all.
But for the case of a criterion of rightness, my intuition (informed by the style of thinking in my comment), is that there’s no particular reason to think there should be one criterion that obviously fits the bill. Your intuition seems to be the opposite, and I’m not sure I understand why.
My best guess, particularly informed by reading through footnote 15 on your Realism and Rationality post, is that when faced with ethical dilemmas (like your torture vs lollipop examples), it seems like there is a correct answer. Does that seem right?
(I realize at this point we’re talking about intuitions and priors on a pretty abstract level, so it may be hard to give a good answer.)
Hey again!
I appreciated your comment on the LW post. I started writing up a response to this comment and your LW one, back when the thread was still active, and then stopped because it had become obscenely long. Then I ended up badly needing to procrastinate doing something else today. So here’s an over-long document I probably shouldn’t have written, which you are under no social obligation to read.
Thanks! Just read it.
I think there’s a key piece of your thinking that I don’t quite understand / disagree with, and it’s the idea that normativity is irreducible.
I think I follow you that if normativity were irreducible, then it wouldn’t be a good candidate for abandonment or revision. But that seems almost like begging the question. I don’t understand why it’s irreducible.
Suppose normativity is not actually one thing, but is a jumble of 15 overlapping things that sometimes come apart. This doesn’t seem like it poses any challenge to your intuitions from footnote 6 in the document (starting with “I personally care a lot about the question: ‘Is there anything I should do, and, if so, what?’”). And at the same time it explains why there are weird edge cases where the concept seems to break down.
So few things in life seem to be irreducible. (E.g. neither Eric nor Ben is irreducible!) So why would normativity be?
[You also should feel under no social obligation to respond, though it would be fun to discuss this the next time we find ourselves at the same party, should such a situation arise.]
This is a good discussion! Ben, thank you for inspiring so many of these different paths we’ve been going down. :) At some point the hydra will have to stop growing, but I do think the intuitions you’ve been sharing are widespread enough that it’s very worthwhile to have public discussion on these points.
On the contrary:
MIRI is more interested in identifying generalizations about good reasoning (“criteria of rightness”) than in fully specifying a particular algorithm.
MIRI does discuss decision algorithms in order to better understand decision-making, but this isn’t different in kind from the ordinary way decision theorists hash things out. E.g., the traditional formulation of CDT is underspecified in dilemmas like Death in Damascus. Joyce and Arntzenius’ response to this wasn’t to go “algorithms are uncouth in our field”; it was to propose step-by-step procedures that they think capture the intuitions behind CDT and give satisfying recommendations for how to act.
MIRI does discuss “what decision procedure performs best”, but this isn’t any different from traditional arguments in the field like “naive EDT is wrong because it performs poorly in the smoking lesion problem”. Compared to the average decision theorist, the average rationalist puts somewhat more weight on some considerations and less weight on others, but this isn’t different in kind from the ordinary disagreements that motivate different views within academic decision theory, and these disagreements about what weight to give categories of consideration are themselves amenable to argument.
As I noted above, MIRI is primarily interested in decision theory for the sake of better understanding the nature of intelligence, optimization, embedded agency, etc., not for the sake of picking a “decision theory we should build into future AI systems”. Again, this doesn’t seem unlike the case of philosophers who think that decision theory arguments will help them reach conclusions about the nature of rationality.
Could you give an example of what the correctness of a meta-criterion like “Don’t Make Things Worse” could in principle consist in?
I’m not looking here for a “reduction” in the sense of a full translation into other, simpler terms. I just want a way of making sense of how human brains can tell what’s “decision-theoretically normative” in cases like this.
Human brains didn’t evolve to have a primitive “normativity detector” that beeps every time a certain thing is Platonically Normative. Rather, different kinds of normativity can be understood by appeal to unmysterious matters like “things brains value as ends”, “things that are useful for various ends”, “things that accurately map states of affairs”...
When I think of other examples of normativity, my sense is that in every case there’s at least one good account of why a human might be able to distinguish “truly” normative things from non-normative ones. E.g. (considering both epistemic and non-epistemic norms):
1. If I discover two alien species who disagree about the truth-value of “carbon atoms have six protons”, I can evaluate their correctness by looking at the world and seeing whether their statement matches the world.
2. If I discover two alien species who disagree about the truth value of “pawns cannot move backwards in chess” or “there are statements in the language of Peano arithmetic that can neither be proved nor disproved in Peano arithmetic”, then I can explain the rules of ‘proving things about chess’ or ‘proving things about PA’ as a symbol game, and write down strings of symbols that collectively constitute a ‘proof’ of the statement in question.
I can then assert that if any member of any species plays the relevant ‘proof’ game using the same rules, from now until the end of time, they will never prove the negation of my result, and (paper, pen, time, and ingenuity allowing) they will always be able to re-prove my result.
(I could further argue that these symbol games are useful ones to play, because various practical tasks are easier once we’ve accumulated enough knowledge about legal proofs in certain games. This usefulness itself provides a criterion for choosing between “follow through on the proof process” and “just start doodling things or writing random letters down”.)
The above doesn’t answer questions like “do the relevant symbols have Platonic objects as truthmakers or referents?”, or “why do we live in a consistent universe?”, or the like. But the above answer seems sufficient for rejecting any claim that there’s something pointless, epistemically suspect, or unacceptably human-centric about affirming Gödel’s first incompleteness theorem. The above is minimally sufficient grounds for going ahead and continuing to treat math as something more significant than theology, regardless of whether we then go on to articulate a more satisfying explanation of why these symbol games work the way they do.
3. If I discover two alien species who disagree about the truth-value of “suffering is terminally valuable”, then I can think of at least two concrete ways to evaluate which parties are correct. First, I can look at the brains of a particular individual or group, see what that individual or group terminally values, and see whether the statement matches what’s encoded in those brains. Commonly the group I use for this purpose is human beings, such that if an alien (or a housecat, etc.) terminally values suffering, I say that this is “wrong”.
Alternatively, I can make different “wrong” predicates for each species: wrong_human, wrong_alien1, wrong_alien2, wrong_housecat, etc.
This has the disadvantage of maybe making it sound like all these values are on “equal footing” in an internally inconsistent way (“it’s wrong to put undue weight on what’s wrong_human!”, where the first “wrong” is secretly standing in for “wrong_human”), but has the advantage of making it easy to see why the aliens’ disagreement might be important and substantive, while still allowing that aliens’ normative claims can be wrong (because they can be mistaken about their own core values).
The details of how to go from a brain to an encoding of “what’s right” seem incredibly complex and open to debate, but it seems beyond reasonable dispute that if the information content of a set of terminal values is encoded anywhere in the universe, it’s going to be in brains (or constructs from brains) rather than in patterns of interstellar dust, digits of pi, physical laws, etc.
If a criterion like “Don’t Make Things Worse” deserves a lot of weight, I want to know what that weight is coming from.
If the answer is “I know it has to come from something, but I don’t know what yet”, then that seems like a perfectly fine placeholder answer to me.
If the answer is “This is like the ‘terminal values’ case, in that (I hypothesize) it’s just an ineradicable component of what humans care about”, then that also seems structurally fine, though I’m extremely skeptical of the claim that the “warm glow of feeling causally efficacious” is important enough to outweigh other things of great value in the real world.
If the answer is “I think ‘Don’t Make Things Worse’ is instrumentally useful, i.e., more useful than UDT for achieving the other things humans want in life”, then I claim this is just false. But, again, this seems like the right kind of argument to be making; if CDT is better than UDT, then that betterness ought to consist in something.
I mostly agree with this. I think the disagreement between CDT and FDT/UDT advocates is less about definitions, and more about which of these things feels more compelling:
1. On the whole, FDT/UDT ends up with more utility.
(I think this intuition tends to hold more force with people the more emotionally salient “more utility” is to you. E.g., consider a version of Newcomb’s problem where two-boxing gets you $100, while one-boxing gets you $100,000 and saves your child’s life.)
2. I’m not the slave of my decision theory, or of the predictor, or of any environmental factor; I can freely choose to do anything in any dilemma, and by choosing to not leave money on the table (e.g., in a transparent Newcomb problem with a 1% chance of predictor failure where I’ve already observed that the second box is empty), I’m “getting away with something” and getting free utility that the FDT agent would miss out on.
(I think this intuition tends to hold more force with people the more emotionally salient it is to imagine the dollars sitting right there in front of you and you knowing that it’s “too late” for one-boxing to get you any more utility in this world.)
There are other considerations too, like how much it matters to you that CDT isn’t self-endorsing. CDT prescribes self-modifying in all future dilemmas so that you behave in a more UDT-like way. It’s fine to say that you personally lack the willpower to follow through once you actually get into the dilemma and see the boxes sitting in front of you; but it’s still the case that a sufficiently disciplined and foresightful CDT agent will generally end up behaving like FDT in the very dilemmas that have been cited to argue for CDT.
If a more disciplined and well-prepared version of you would have one-boxed, then isn’t there something off about saying that two-boxing is in any sense “correct”? Even the act of praising CDT seems a bit self-destructive here, inasmuch as (a) CDT prescribes ditching CDT, and (b) realistically, praising or identifying with CDT is likely to make it harder for a human being to follow through on switching to son-of-CDT (as CDT prescribes).
Mind you, if the sentence “CDT is the most rational decision theory” is true in some substantive, non-trivial, non-circular sense, then I’m inclined to think we should acknowledge this truth, even if it makes it a bit harder to follow through on the EDT+CDT+UDT prescription to one-box in strictly-future Newcomblike problems. When the truth is inconvenient, I tend to think it’s better to accept that truth than to linguistically conceal it.
But the arguments I’ve seen for “CDT is the most rational decision theory” to date have struck me as either circular, or as reducing to “I know CDT doesn’t get me the most utility, but something about it just feels right”.
It’s fine, I think, if “it just feels right” is meant to be a promissory note for some forthcoming account — a clue that there’s some deeper reason to favor CDT, though we haven’t discovered it yet. As the FDT paper puts it:
On the other hand, if “it just feels right” is meant to be the final word on why “CDT is the most rational decision theory”, then I feel comfortable saying that “rational” is a poor choice of word here, and neither maps onto a key descriptive category nor maps onto any prescription or norm worthy of being followed.
My impression is that most CDT advocates who know about FDT think FDT is making some kind of epistemic mistake, where the most popular candidate (I think) is some version of magical thinking.
Superstitious people often believe that it’s possible to directly causally influence things across great distances of time and space. At a glance, FDT’s prescription (“one-box, even though you can’t causally affect whether the box is full”) as well as its account of how and why this works (“you can somehow ‘control’ the properties of abstract objects like ‘decision functions’”) seem weird and spooky in the manner of a superstition.
FDT’s response: if a thing seems spooky, that’s a fine first-pass reason to be suspicious of it. But at some point, the accusation of magical thinking has to cash out in some sort of practical, real-world failure—in the case of decision theory, some systematic loss of utility that isn’t balanced by an equal, symmetric loss of utility from CDT. After enough experience of seeing a tool outperforming the competition in scenario after scenario, at some point calling the use of that tool “magical thinking” starts to ring rather hollow. At that point, it’s necessary to consider the possibility that FDT is counter-intuitive but correct (like Einstein’s “spukhafte Fernwirkung”), rather than magical.
In turn, FDT advocates tend to think the following reflects an epistemic mistake by CDT advocates:
The alleged mistake here is a violation of naturalism. Humans tend to think of themselves as free Cartesian agents acting upon the world, rather than as deterministic subprocesses of a larger deterministic process. If we consistently and whole-heartedly accepted the “deterministic subprocess” view of our decision-making, we would find nothing strange about the idea that it’s sometimes right for this subprocess to do locally incorrect things for the sake of better global results.
E.g., consider the transparent Newcomb problem with a 1% chance of predictor error. If we think of the brain’s decision-making as a rule-governed system whose rules we are currently determining (via a meta-reasoning process that is itself governed by deterministic rules), then there’s nothing strange about enacting a rule that gets us $1M in 99% of outcomes and $0 in 1% of outcomes; and following through when the unlucky 1% scenario hits us is nothing to agonize over, it’s just a consequence of the rule we already decided. In that regard, steering the rule-governed system that is your brain is no different than designing a factory robot that performs well enough in 99% of cases to offset the 1% of cases where something goes wrong.
(Note how a lot of these points are more intuitive in CS language. I don’t think it’s a coincidence that people coming from CS were able to improve on academic decision theory’s ideas on these points; I think it’s related to what kinds of stumbling blocks get in the way of thinking in these terms.)
Suppose you initially tell yourself that you’ll follow the one-boxing rule no matter what you observe, since that’s the rule that gets you $1M in 99% of outcomes. Suppose that you then find yourself facing the 1%-likely outcome where Omega leaves the box empty regardless of your choice. You then have a change of heart and decide to two-box after all, taking the $1000.
I claim that the above description feels from the inside like your brain is escaping the iron chains of determinism (even if your scientifically literate system-2 verbal reasoning fully recognizes that you’re a deterministic process). And I claim that this feeling (plus maybe some reluctance to fully accept the problem description as accurate?) is the only thing that makes CDT’s decision seem reasonable in this case.
In reality, however, if we end up not following through on our verbal commitment and we two-box in that 1% scenario, then this would just prove that we’d been mistaken about what rule we had successfully installed in our brains. As it turns out, we were really following the lower-global-utility rule from the outset. A lack of follow-through or a failure of will is itself a part of the decision-making process that Omega is predicting; however much it feels as though a last-minute swerve is you “getting away with something”, it’s really just you deterministically following through on an algorithm that will get you less utility in 99% of scenarios (while happening to be bad at predicting your own behavior and bad at following through on verbalized plans).
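To put rough numbers on the rule comparison (an illustrative sketch; in particular, it stipulates that the predictor fills the big box only for agents who would one-box even when it looks empty, apart from the 1% error rate):

```python
# Transparent Newcomb with a 1% predictor error rate. The mapping from rules
# to payoffs is stipulated for illustration, per the assumption stated above.

P_ERR = 0.01

# Rule A: one-box no matter what you observe.
#   99%: box is full  -> take it       -> $1,000,000
#    1%: box is empty -> still one-box -> $0
ev_rule_a = (1 - P_ERR) * 1_000_000 + P_ERR * 0

# Rule B: one-box if the box looks full, but swerve and two-box if it looks
# empty. (Stipulated) the predictor treats this as a failure to follow
# through and usually leaves the box empty:
#   99%: box is empty -> two-box -> $1,000
#    1%: box is full  -> one-box -> $1,000,000
ev_rule_b = (1 - P_ERR) * 1_000 + P_ERR * 1_000_000

print(ev_rule_a, ev_rule_b)  # 990000.0 vs 10990.0
```

Whatever exact numbers one plugs in under these assumptions, the follow-through rule wins by a wide margin in expectation, and the 1% case is simply the cost that rule accepted in advance.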
I should emphasize that the above is my own attempt to characterize the intuitions behind CDT and FDT, based on the arguments I’ve seen in the wild and based on what makes me feel more compelled by CDT, or by FDT. I could easily be wrong about the crux of disagreement between some CDT and FDT advocates.
Is the following a roughly accurate re-characterization of the intuition here?
“Suppose that there’s an agent that implements P_UDT. Because it is following P_UDT, when it enters the box room it finds a ton of money in the first box and then refrains from taking the money in the second box. People who believe R_CDT claim that the agent should have also taken the money in the second box. But, given that the universe is deterministic, this doesn’t really make sense. From before the moment the agent entered the room, it was already determined that the agent would one-box. Since (in a physically deterministic sense) the P_UDT agent could not have two-boxed, there’s no relevant sense in which the agent should have two-boxed.”
If so, then I suppose my first reaction is that this seems like a general argument against normative realism rather than an argument against any specific proposed criterion of rightness. It also applies, for example, to the claim that a P_CDT agent “should have” one-boxed—since in a physically deterministic sense it could not have. Therefore, I think it’s probably better to think of this as an argument against the truth (and possibly conceptual coherence) of both R_CDT and R_UDT, rather than an argument that favors one over the other.
In general, it seems to me like all statements that evoke counterfactuals have something like this problem. For example, it is physically determined what sort of decision procedure we will build into any given AI system; only one choice of decision procedure is physically consistent with the state of the world at the time the choice is made. So—insofar as we accept this kind of objection from determinism—there seems to be something problematically non-naturalistic about discussing what “would have happened” if we built in one decision procedure or another.
No, I don’t endorse this argument. To simplify the discussion, let’s assume that the Newcomb predictor is infallible. FDT agents, CDT agents, and EDT agents each get a decision: two-box (which gets you $1000 plus an empty box), or one-box (which gets you $1,000,000 and leaves the $1000 behind). Obviously, insofar as they are in fact following the instructions of their decision theory, there’s only one possible outcome; but it would be odd to say that a decision stops being a decision just because it’s determined by something. (What’s the alternative?)
I do endorse “given the predictor’s perfect accuracy, it’s impossible for the P_UDT agent to two-box and come away with $1,001,000”. I also endorse “given the predictor’s perfect accuracy, it’s impossible for the P_CDT agent to two-box and come away with $1,001,000”. Per the problem specification, no agent can two-box and get $1,001,000 or one-box and get $0. But this doesn’t mean that no decision is made; it just means that the predictor can predict the decision early enough to fill the boxes accordingly.
(Notably, the agent following P_CDT two-boxes because $1,001,000 > $1,000,000 and $1000 > $0, even though this “dominance” argument appeals to two outcomes that are known to be impossible just from the problem statement. I certainly don’t think agents “should” try to achieve outcomes that are impossible from the problem specification itself. The reason non-CDT agents get more utility than CDT agents in Newcomb’s problem is that they take into account that the predictor is a predictor when they construct their counterfactuals.)
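One way to restate the point about the dominance argument is to enumerate the action/prediction combinations and mark which ones are reachable given a perfect predictor. This is an illustrative sketch with the standard $1,000 / $1,000,000 payoffs assumed:

```python
# Enumerate action/prediction combinations in Newcomb's problem with a perfect
# predictor and mark which combinations are actually reachable.

outcomes = []
for action in ("one-box", "two-box"):
    for predicted in ("one-box", "two-box"):
        box_b = 1_000_000 if predicted == "one-box" else 0
        payoff = box_b if action == "one-box" else box_b + 1_000
        reachable = (action == predicted)  # perfect predictor
        outcomes.append((action, predicted, payoff, reachable))

for action, predicted, payoff, reachable in outcomes:
    status = "reachable" if reachable else "impossible"
    print(f"{action:8s} predicted={predicted:8s} payoff=${payoff:>9,} {status}")
# The $1,001,000 and $0 rows, the very ones the dominance argument leans on,
# are exactly the impossible ones.
```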
In the transparent version of this dilemma, the agent who sees the $1M and one-boxes also “could have two-boxed”, but if they had two-boxed, it would only have been after making a different observation. In that sense, if the agent has any lingering uncertainty about what they’ll choose, the uncertainty goes away as soon as they see whether the box is full.
No, there’s nothing non-naturalistic about this. Consider the scenario you and I are in. Simplifying somewhat, we can think of ourselves as each doing meta-reasoning to try to choose between different decision algorithms to follow going forward; where the new things we learn in this conversation are themselves a part of that meta-reasoning.
The meta-reasoning process is deterministic, just like the object-level decision algorithms are. But this doesn’t mean that we can’t choose between object-level decision algorithms. Rather, the meta-reasoning (in spite of having deterministic causes) chooses either “I think I’ll follow P_FDT from now on” or “I think I’ll follow P_CDT from now on”. Then the chosen decision algorithm (in spite of also having deterministic causes) outputs choices about subsequent actions to take. Meta-processes that select between decision algorithms (to put into an AI, or to run in your own brain, or to recommend to other humans, etc.) can make “real decisions”, for exactly the same reason (and in exactly the same sense) that the decision algorithms in question can make real decisions.
It isn’t problematic that all these processes require us to consider counterfactuals that (if we were omniscient) we would perceive as inconsistent/impossible. Deliberation, both at the object level and at the meta level, just is the process of determining the unique and only possible decision. Yet because we are uncertain about the outcome of the deliberation while deliberating, and because the details of the deliberation process do determine our decision (even as these details themselves have preceding causes), it feels from the inside of this process as though both options are “live”, are possible, until the very moment we decide.
(See also Decisions are for making bad outcomes inconsistent.)
I think you need to make a clearer distinction here between “outcomes that don’t exist in the universe’s dynamics” (like taking both boxes and receiving $1,001,000) and “outcomes that can’t exist in my branch” (like there not being a bomb in the unlucky case). Because if you’re operating just in the branch you find yourself in, many outcomes whose probability an FDT agent is trying to affect are impossible from the problem specification (once you include observations).
And, to be clear, I do think agents “should” try to achieve outcomes that are impossible from the problem specification including observations, if certain criteria are met, in a way that basically lines up with FDT, just like agents “should” try to achieve outcomes that are already known to have happened from the problem specification including observations.
As an example, if you’re in Parfit’s Hitchhiker, you should pay once you reach town, even though reaching town has probability 1 in cases where you’re deciding whether or not to pay, and the reason for this is because it was necessary for reaching town to have had probability 1.
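As a quick numerical gloss on the Parfit's Hitchhiker point (the utilities below are made up purely for illustration; only the comparison matters):

```python
# Parfit's Hitchhiker, evaluated at the level of policies: the driver rescues
# you only if they predict you will pay once you reach town. Utilities are
# assumed for illustration.

U_DEATH = -1_000_000   # left in the desert
U_RESCUED = 0          # baseline: safely in town
COST_OF_PAYING = -100  # the promised payment

# Policy 1: pay once you reach town. You get rescued, then pay.
ev_pay = U_RESCUED + COST_OF_PAYING

# Policy 2: refuse to pay. The driver predicts this and leaves you behind.
ev_refuse = U_DEATH

print(ev_pay, ev_refuse)  # -100 vs -1000000
# "Reaching town" has probability 1 only for agents whose policy is to pay,
# which is why paying once you're there is still the recommendation.
```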
+1, I agree with all this.
Suppose that we accept the principle that agents never “should” try to achieve outcomes that are impossible from the problem specification—with one implication being that it’s false that (as R_CDT suggests) agents that see a million dollars in the first box “should” two-box.
This seems to imply that it’s also false that (as R_UDT suggests) an agent that sees that the first box is empty “should” one-box. By the problem specification, of course, one-boxing when there is no money in the first box is also an impossible outcome. Since decisions to two-box only occur when the first box is empty, this would then imply that decisions to two-box are never irrational in the context of this problem. But I imagine you don’t want to say that.
I think I probably still don’t understand your objection here—so I’m not sure this point is actually responsive to it—but I initially have trouble seeing what potential violations of naturalism/determinism R_CDT could be committing that R_UDT would not also be committing.
(Of course, just to be clear, both R_UDT and R_CDT imply that the decision to commit yourself to a one-boxing policy at the start of the game would be rational. They only diverge in their judgments of what actual in-room boxing decision would be rational. R_UDT says that the decision to two-box is irrational and R_CDT says that the decision to one-box is irrational.)
That should be “a one-boxing policy”, right?
Yep, thanks for the catch! Edited to fix.
It seems to me like they’re coming down to saying something like: the “Guaranteed Payoffs Principle” / “Don’t Make Things Worse Principle” is more core to rational action than being self-consistent. Whereas others think self-consistency is more important.
It’s not clear to me that the justification for CDT is more circular than the justification for FDT. Doesn’t it come down to which principles you favor?
Maybe you could say FDT is more elegant. Or maybe that it satisfies more of the intuitive properties we’d hope for from a decision theory (where elegance might be one of those). But I’m not sure that would make the justification less-circular per se.
I guess one way the justification for CDT could be more circular is if the key or only principle that pushes in favor of it over FDT can really just be seen as a restatement of CDT in a way that the principles that push in favor of FDT do not. Is that what you would claim?
The main argument against CDT (in my view) is that it tends to get you less utility (regardless of whether you add self-modification so it can switch to other decision theories). Self-consistency is a secondary issue.
FDT gets you more utility than CDT. If you value literally anything in life more than you value “which ritual do I use to make my decisions?”, then you should go with FDT over CDT; that’s the core argument.
This argument for FDT would be question-begging if CDT proponents rejected utility as a desirable thing. But instead CDT proponents who are familiar with FDT agree utility is a positive, and either (a) they think there’s no meaningful sense in which FDT systematically gets more utility than CDT (which I think is adequately refuted by Abram Demski), or (b) they think that CDT has other advantages that outweigh the loss of utility (e.g., CDT feels more intuitive to them).
The latter argument for CDT isn’t circular, but as a fan of utility (i.e., of literally anything else in life), it seems very weak to me.
I do think the argument ultimately needs to come down to an intuition about self-effacingness.
The fact that agents earn less expected utility if they implement P_CDT than if they implement some other decision procedure seems to support the claim that agents should not implement P_CDT.
But there’s nothing logically inconsistent about believing both (a) that R_CDT is true and (b) that agents should not implement P_CDT. To again draw an analogy with a similar case, there’s also nothing logically inconsistent about believing both (a) that utilitarianism is true and (b) that agents should not in general make decisions by carrying out utilitarian reasoning.
So why shouldn’t I believe that R_CDT is true? The argument needs an additional step. And it seems to me like the most natural additional step here involves an intuition that the criterion of rightness would not be self-effacing.
More formally, it seems like the argument needs to be something along these lines:
1. Over their lifetimes, agents who implement P_CDT earn less expected utility than agents who implement certain other decision procedures.
2. (Assumption) Agents should implement whatever decision procedure will earn them the most expected lifetime utility.
3. Therefore, agents should not implement P_CDT.
4. (Assumption) The criterion of rightness is not self-effacing. Equivalently, if agents should not implement some decision procedure P_X, then it is not the case that R_X is true.
5. Therefore, as an implication of points (3) and (4), R_CDT is not true.
Whether you buy the “No Self-Effacement” assumption in Step 4 -- or, alternatively, the countervailing “Don’t Make Things Worse” assumption that supports R_CDT -- seems to ultimately be a matter of intuition. At least, I don’t currently know what else people can appeal to here to resolve the disagreement.
[[SIDENOTE: Step 2 is actually a bit ambiguous, since it doesn’t specify how expected lifetime utility is being evaluated. For example, are we talking about expected lifetime utility from a causal or evidential perspective? But I don’t think this ambiguity matters much for the argument.]]
[[SECOND SIDENOTE: I’m using the phrase “self-effacing” rather than “self-contradictory” here, because I think it’s more standard and because “self-contradictory” seems to suggest logical inconsistency.]]
If the thing being argued for is “R_CDT plus P_SONOFCDT”, then that makes sense to me, but is vulnerable to all the arguments I’ve been making: Son-of-CDT is in a sense the worst of both worlds, since it gets less utility than FDT and lacks CDT’s “Don’t Make Things Worse” principle.
If the thing being argued for is “R_CDT plus P_FDT”, then I don’t understand the argument. In what sense is P_FDT compatible with, or conducive to, R_CDT? What advantage does this have over “R_FDT plus P_FDT”? (Indeed, what difference between the two views would be intended here?)
The argument against “R_CDT plus P_SONOFCDT” doesn’t require any mention of self-effacingness; it’s entirely sufficient to note that P_SONOFCDT gets less utility than P_FDT.
The argument against “R_CDT plus P_FDT” seems to demand some reference to self-effacingness or inconsistency, or triviality / lack of teeth. But I don’t understand what this view would mean or why anyone would endorse it (and I don’t take you to be endorsing it).
We want to evaluate actual average utility rather than expected utility, since the different decision theories are different theories of what “expected utility” means.
Hm, I think I may have misinterpreted your previous comment as emphasizing the point that P_CDT “gets you less utility” rather than the point that P_SONOFCDT “gets you less utility.” So my comment was aiming to explain why I don’t think the fact that P_CDT gets less utility provides a strong challenge to the claim that R_CDT is true (unless we accept the “No Self-Effacement Principle”). But it sounds like you might agree that this fact doesn’t on its own provide a strong challenge.
In response to the first argument alluded to here: “Gets the most [expected] utility” is ambiguous, as I think we’ve both agreed.
My understanding is that P_SONOFCDT is definitionally the policy that, if an agent decided to adopt it, would cause the largest increase in expected utility. So—if we evaluate the expected utility of a decision to adopt a policy from a casual perspective—it seems to me that P_SONOFCDT “gets the most expected utility.”
If we evaluate the expected utility of a policy from an evidential or subjunctive perspective, however, then another policy may “get the most utility” (because policy adoption decisions may be non-causally correlated.)
Apologies if I’m off-base, but it reads to me like you might be suggesting an argument along these lines:
R_CDT says that it is rational to decide to follow a policy that would not maximize “expected utility” (defined in evidential/subjunctive terms).
(Assumption) But it is not rational to decide to follow a policy that would not maximize “expected utility” (defined in evidential/subjunctive terms).
Therefore R_CDT is not true.
The natural response to this argument is that it’s not clear why we should accept the assumption in Step 2. R_CDT says that the rationality of a decision depends on its “expected utility” defined in causal terms. So someone starting from the position that R_CDT is true obviously won’t accept the assumption in Step 2. R_EDT and R_FDT say that the rationality of a decision depends on its “expected utility” defined in evidential or subjunctive terms. So we might allude to R_EDT or R_FDT to justify the assumption, but of course this would also mean arguing backwards from the conclusion that the argument is meant to reach.
Overall at least this particular simple argument—that R_CDT is false because P_SONOFCDT gets less “expected utility” as defined in evidential/quasi-evidential terms—would seemingly fail to due circularity. But you may have in mind a different argument.
I felt confused by this comment. Doesn’t even R_FDT judge the rationality of a decision by its expected value (rather than its actual value)? And presumably you don’t want to say that someone who accepts unpromising gambles and gets lucky (ending up with high actual average utility) has made more “rational” decisions than someone who accepts promising gambles and gets unlucky (ending up with low actual average utility)?
You also correctly point out that the decision procedure that R_CDT implies agents should rationally commit to—P_SONOFCDT—sometimes outputs decisions that definitely make things worse. So “Don’t Make Things Worse” implies that some of the decisions outputted by P_SONOFCDT are irrational.
But I still don’t see what the argument is here unless we’re assuming “No Self-Effacement.” It still seems to me like we have a few initial steps and then a missing piece.
(Observation) R_CDT implies that it is rational to commit to following the decision procedure P_SONOFCDT.
(Observation) P_SONOFCDT sometimes outputs decisions that definitely make things worse.
(Assumption) It is irrational to take decisions that definitely make things worse. In other words, the “Don’t Make Things Worse” Principle is true.
Therefore, as an implication of Step 2 and Step 3, P_SONOFCDT sometimes outputs irrational decisions.
???
Therefore, R_CDT is false.
The “No Self-Effacement” Principle is equivalent to the principle that: If a criterion of rightness implies that it is rational to commit to a decision procedure, then that decision procedure only produces rational actions. So if we were to assume “No Self-Effacement” in Step 5 then this would allow us to arrive at the conclusion that R_CDT is false. But if we’re not assuming “No Self-Effacement,” then it’s not clear to me how we get there.
Actually, in the context of this particular argument, I suppose we don’t really have the option of assuming that “No Self-Effacement” is true—because this assumption would be inconsistent with the earlier assumption that “Don’t Make Things Worse” is true. So I’m not sure it’s actually possible to make this argument schema work in any case.
There may be a pretty different argument here, which you have in mind. I at least don’t see it yet though.
Perhaps the argument is something like:
“Don’t make things worse” (DMTW) is one of the intuitions that leads us to favoring R_CDT
But the actual policy that R_CDT recommends does not in fact follow DMTW
So R_CDT only gets intuitive appeal from DMTW to the extent that DMTW was about R_′s, and not about P_′s
But intuitions are probably(?) not that precisely targeted, so R_CDT shouldn’t get to claim the full intuitive endorsement of DMTW. (Yes, DMTW endorses it more than it endorses R_FDT, but R_CDT is still at least somewhat counter-intuitive when judged against the DMTW intuition.)
Here are two logically inconsistent principles that could be true:
Don’t Make Things Worse: If a decision would definitely make things worse, then taking that decision is not rational.
Don’t Commit to a Policy That In the Future Will Sometimes Make Things Worse: It is not rational to commit to a policy that, in the future, will sometimes output decisions that definitely make things worse.
I have strong intuitions that the first one is true. I have much weaker (comparatively negligible) intuitions that the second one is true. Since they’re mutually inconsistent, I reject the second and accept the first. I imagine this is also true of most other people who are sympathetic to R_CDT.
One could argue that people sympathetic to R_CDT don’t actually have much stronger intuitions regarding the first principle than the second—i.e. that their intuitions aren’t actually very “targeted” on the first one—but I don’t think that would be right. At least, it’s not right in my case.
A more viable strategy might be to argue for something like a meta-principle:
The ‘Don’t Make Things Worse’ Meta-Principle: If you find “Don’t Make Things Worse” strongly intuitive, then you should also find “Don’t Commit to a Policy That In the Future Will Sometimes Make Things Worse” just about as intuitive.
If the meta-principle were true, then I guess this would sort of imply that people’s intuitions in favor of “Don’t Make Things Worse” should be self-neutralizing. They should come packaged with equally strong intuitions for another position that directly contradicts it.
But I don’t see why the meta-principle should be true. At least, my intuitions in favor of the meta-principle are way less strong than my intuitions in favor of “Don’t Make Things Worse” :)
Just to say slightly more on this, I think the Bomb case is again useful for illustrating my (I think not uncommon) intuitions here.
Bomb Case: Omega puts a million dollars in a transparent box if he predicts you’ll open it. He puts a bomb in the transparent box if he predicts you won’t open it. He’s only wrong about one in a trillion times.
Now suppose you enter the room and see that there’s a bomb in the box. You know that if you open the box, the bomb will explode and you will die a horrible and painful death. If you leave the room and don’t open the box, then nothing bad will happen to you. You’ll return to a grateful family and live a full and healthy life. You understand all this. You want so badly to live. You then decide to walk up to the bomb and blow yourself up.
Intuitively, this decision strikes me as deeply irrational. You’re intentionally taking an action that you know will cause a horrible outcome that you want badly to avoid. It feels very relevant that you’re flagrantly violating the “Don’t Make Things Worse” principle.
Now, let’s step back a time step. Suppose you know that you’re currently the sort of person who would refuse to kill yourself by detonating the bomb. You might decide that—since Omega is such an accurate predictor—it’s worth taking a pill that turns you into the sort of person who would open the box no matter what, to increase your odds of getting a million dollars. You recognize that this may lead you, in the future, to take an action that makes things worse in a horrifying way. But you calculate that the decision you’re making now is nonetheless making things better in expectation.
This decision strikes me as pretty intuitively rational. You’re violating the second principle—the “Don’t Commit to a Policy...” Principle—but this violation just doesn’t seem that intuitively relevant or remarkable to me. I personally feel like there is nothing too odd about the idea that it can be rational to commit to violating principles of rationality in the future.
(This is obviously just a description of my own intuitions, as they stand, though.)
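To make the ex ante calculation behind the pill decision concrete, here is a toy sketch (my own illustration; the misprediction rate and especially the stand-in utility for a horrible death are made-up numbers):

```python
# Toy ex-ante comparison: commit now to opening the box no matter what,
# vs. stay the sort of person who refuses when there's a bomb inside.
# All utilities are made up; EPS is the hypothetical misprediction rate.

EPS = 1e-12
U_MONEY = 1_000_000     # getting the million dollars
U_DEATH = -1e12         # stand-in for a horrible and painful death
U_STATUS_QUO = 0        # walk away, return to your grateful family

def eu_commit_to_opening():
    # Omega almost certainly predicts an opener and supplies the money;
    # in the rare error case he planted a bomb and you open it anyway.
    return (1 - EPS) * U_MONEY + EPS * U_DEATH

def eu_stay_a_refuser():
    # Omega almost certainly predicts a refuser and plants a bomb,
    # which you never trigger; either way you walk away with nothing.
    return U_STATUS_QUO

print(eu_commit_to_opening(), eu_stay_a_refuser())  # ~999,999 vs. 0
# Committing wins ex ante unless death is worth less than about -U_MONEY / EPS.
```

On these (made-up) numbers the commitment looks better in expectation, even though the committed future self will, in the one-in-a-trillion branch, take an action that definitely makes things worse.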
By triggering the bomb, you’re making things worse from your current perspective, but better from the perspective of your earlier self. Doesn’t that seem strange and deserving of an explanation? The explanation, from a UDT perspective, is that by updating upon observing the bomb you effectively changed your utility function. You used to care about both the possible worlds where you end up seeing a bomb in the box and the worlds where you don’t. After updating, you think you’re either a simulation inside Omega’s prediction (in which case your action has no effect on the real you) or in the world with the real bomb; you no longer care about the version of you in the world with a million dollars in the box, and this accounts for the conflict/inconsistency.
Given the human tendency to change our (UDT-)utility functions by updating, it’s not clear what to do (or what is right), and I think this reduces UDT’s intuitive appeal and makes it less of a slam-dunk over CDT/EDT. But it seems to me that it takes switching to the UDT perspective to even understand the nature of the problem. (Quite possibly this isn’t adequately explained in MIRI’s decision theory papers.)
I would agree that, with these two principles as written, more people would agree with the first. (And I certainly believe you that that’s right in your case.)
But I feel like the second doesn’t quite capture what I had in mind regarding the DMTW intuition applied to P_'s.
Consider an alternate version:
If a decision would definitely make things worse, then taking that decision is not good policy.
Or alternatively:
A rational person would not take a decision that definitely makes things worse.
It seems to me that these two claims are naively intuitive on their face, in roughly the same way that the “… then taking that decision is not rational” version is. And it’s only after you’ve considered prisoners’ dilemmas or Newcomb’s paradox, etc., that you realize that good policy (or being a rational agent) actually diverges from what’s rational in the moment.
(But maybe others would disagree on how intuitive these versions are.)
EDIT: And to spell out my argument a bit more: if several alternate formulations of a principle are each intuitively appealing, and it turns out that whether some claim (e.g. R_CDT is true) is consistent with the principle comes down to the precise formulation used, then it’s not quite fair to say that the principle fully endorses the claim and that the claim is not counter-intuitive from the perspective of the original intuition.
Of course, this argument is moot if it’s true that the original DMTW intuition was always about rational in-the-moment action, and never about policies or actors. And maybe that’s the case? But I think it’s a little more ambiguous with the “… is not good policy” or “a rational person would not...” versions than with the “Don’t commit to a policy...” version.
EDIT2: Does what I’m trying to say make sense? (I felt like I was struggling a bit to express myself in this comment.)
Just as a quick sidenote:
I’ve been thinking of P_SONOFCDT as, by definition, the decision procedure that R_CDT implies that it is rational to commit to implementing.
If we define P_SONOFCDT this way, then anyone who believes that R_CDT is true must also believe that it is rational to implement P_SONOFCDT.
The belief that R_CDT is true and the belief that it is rational to implement P_FDT would then be consistent only if P_SONOFCDT were equivalent to P_FDT (which of course it isn’t). So I would be inclined to say that no one should believe in both the correctness of R_CDT and the rationality of implementing P_FDT.
[[EDIT: Actually, I need to distinguish between the decision procedure that it would be rational to commit to yourself and the decision procedure that it would be rational to build into other agents. These can sometimes be different. For example, suppose that R_CDT is true and that you’re building twin AI systems and you would like them both to succeed. Then it would be rational for you to give them decision procedures that will cause them to cooperate if they face each other in a prisoner’s dilemma (e.g. some version of P_FDT). But if R_CDT is true and you’ve just been born into the world as one of the twins, it would be rational for you to commit to a decision procedure that would cause you to defect if you face the other AI system in a prisoner’s dilemma (i.e. P_SONOFCDT). I slightly edited the above comment to reflect this. My tentative view—which I’ve alluded to above—is that the various proposed criteria of rightness don’t in practice actually diverge all that much when it comes to the question of what sorts of decision procedures we should build into AI systems. Although I also understand that MIRI is not mainly interested in the question of what sorts of decision procedures we should build into AI systems.]]
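To illustrate how the “build into other agents” and “commit to yourself” perspectives can come apart here, a toy twin prisoner’s dilemma (my own sketch; the payoffs are standard made-up prisoner’s-dilemma numbers):

```python
# Toy twin prisoner's dilemma. A builder who cares about both twins knows
# that identical procedures produce identical moves; an agent already inside
# the game, running a causal calculation, treats the twin's move as fixed.

PAYOFFS = {  # (my_move, twin_move) -> (my_payoff, twin_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def builder_value(procedure):
    # Only the diagonal is reachable when both twins run the same procedure.
    mine, twins = PAYOFFS[(procedure, procedure)]
    return mine + twins

def my_causal_eu(my_move, p_twin_cooperates):
    # The twin's (merely correlated, pre-existing) move is held fixed.
    return (p_twin_cooperates * PAYOFFS[(my_move, "C")][0]
            + (1 - p_twin_cooperates) * PAYOFFS[(my_move, "D")][0])

print(builder_value("C"), builder_value("D"))          # 6 vs. 2
print(my_causal_eu("D", 0.5), my_causal_eu("C", 0.5))  # 3.0 vs. 1.5
```

So the builder, even one who accepts R_CDT, prefers to hand both twins a cooperative procedure, while each twin’s own causal calculation, run from inside the dilemma with the correlation already in place, recommends defecting.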
Do you mean
It seems to better fit the pattern of the example just prior.