So, you would agree that the following is an English description of a theorem:
If an agent has complete, transitive preferences, and it does not pursue dominated strategies, then it must be representable as maximizing expected utility.
Yep, I agree with that.
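In symbols, one loose reading of that English sentence (my own formalization; exactly which antecedents belong on the left-hand side is part of what's at issue below):

```latex
\[
\bigl(\text{$\succeq$ complete}\ \wedge\ \text{$\succeq$ transitive}\ \wedge\ \text{the agent never pursues a dominated strategy}\bigr)
\;\Longrightarrow\;
\exists\,u\ \forall A,B:\ A \succeq B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]
\]
```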
I feel pretty fine with justifying the transitive part via theorems basically like the one I gave above.
Note that your money-pump justifies acyclicity (The agent does not strictly prefer A to B, B to C, and C to A) rather than the version of transitivity necessary for the VNM and Complete Class theorems (If the agent weakly prefers A to B, and B to C, then the agent weakly prefers A to C). Gustafsson thinks you need Completeness to get a money-pump for this version of transitivity working (see footnote 8 on page 3), and I’m inclined to agree.
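To make the two conditions concrete, here is a small Python sketch (my illustration, not from the thread) of a weak-preference relation that is acyclic but not transitive. It is also incomplete, since A and C are incomparable, which is exactly the loophole Gustafsson's point turns on:

```python
from itertools import permutations

# Weak preference as a set of ordered pairs: (x, y) means "x is weakly preferred to y".
# A and C stand in no preference relation either way, so the relation is incomplete.
R = {("A", "B"), ("B", "C")}
OPTIONS = ["A", "B", "C"]

def strict(r, x, y):
    """x is strictly preferred to y: x >= y holds but y >= x does not."""
    return (x, y) in r and (y, x) not in r

def is_acyclic(r, options):
    """No strict-preference cycle a > b > c > a (3-cycles suffice for 3 options)."""
    return not any(
        strict(r, a, b) and strict(r, b, c) and strict(r, c, a)
        for a, b, c in permutations(options, 3)
    )

def is_transitive(r, options):
    """Weak transitivity: x >= y and y >= z imply x >= z."""
    return all(
        (a, c) in r
        for a, b, c in permutations(options, 3)
        if (a, b) in r and (b, c) in r
    )

print(is_acyclic(R, OPTIONS))     # True: no cycle of strict preferences to money-pump
print(is_transitive(R, OPTIONS))  # False: A >= B and B >= C, but not A >= C
```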
when you have intransitive preferences it’s not even clear what a “dominated strategy” would be.
A dominated strategy would be a strategy which leads you to choose an option that is worse in some respect than another available option and not better than that other available option in any respect. For example, making all the trades and getting A- in the decision-situation below would be a dominated strategy, since you could have made no trades and got A:
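(The diagram isn't reproduced here. As a stand-in, here is a minimal Python sketch under assumed payoffs: A- is the same prize as A but one cent poorer, and "making all the trades" runs A → B → A-.)

```python
# Strategies and their final outcomes as (prize, money) pairs; the numbers are
# hypothetical illustrations, not the post's actual diagram.
STRATEGIES = {
    "make no trades":  ("A", 0),    # keep A
    "make all trades": ("A", -1),   # end up with A-, via an intermediate trade to B
}

def dominates(x, y):
    """x dominates y: y is worse in some respect (less money) and better in none
    (same prize, so there is no compensating respect in which y beats x)."""
    return x[0] == y[0] and x[1] > y[1]

for name, outcome in STRATEGIES.items():
    others = (o for n, o in STRATEGIES.items() if n != name)
    verdict = "dominated" if any(dominates(o, outcome) for o in others) else "not dominated"
    print(f"{name}: {verdict}")

# make no trades: not dominated
# make all trades: dominated (worse in one respect than "make no trades", better in none)
```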
So I still think that it is basically incorrect to say:
And yet the error seems to have gone uncorrected for more than a decade.
The error is claiming that
There exist theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.
I haven’t seen anyone point out that that claim is false.
That said, one could reason as follows:
Rohin, John, and others have argued that agents with incomplete preferences can act in accordance with policies that make them immune to pursuing dominated strategies.
Agents with incomplete preferences cannot be represented as maximizing expected utility.
So, if Rohin’s, John’s, and others’ arguments are sound, there cannot exist theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.
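Schematically, writing EU(a) for "a is representable as maximizing expected utility", Dom(a) for "a is liable to pursue dominated strategies", and Complete(a) for "a has complete preferences" (my notation, not the thread's):

```latex
\begin{align*}
&\text{(1)}\quad \exists\,a:\ \neg\mathrm{Complete}(a)\ \wedge\ \neg\mathrm{Dom}(a)\\
&\text{(2)}\quad \forall\,a:\ \neg\mathrm{Complete}(a)\ \Rightarrow\ \neg\mathrm{EU}(a)\\
&\text{(3)}\quad \therefore\ \neg\,\forall\,a:\ \bigl(\neg\mathrm{EU}(a)\ \Rightarrow\ \mathrm{Dom}(a)\bigr)
\end{align*}
```

The witness from (1) is, by (2), an agent that is not an EU maximizer and yet avoids dominated strategies, contradicting any theorem of the quoted form.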
Then one would have corrected the error. But since the availability of this kind of reasoning is easily missed, it seems worth correcting the error directly.
Okay, it seems like we agree on the object-level facts, and what’s left is a disagreement about whether people have been making a major error. I’m less interested in that disagreement so probably won’t get into a detailed discussion, but I’ll briefly outline my position here.
The error is claiming that
There exist theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy.
I haven’t seen anyone point out that that claim is false.
The main way in which this claim is false (on your way of using words) is that it fails to note some of the antecedents in the theorem (completeness, maybe transitivity).
But I don’t think this is a reasonable way to use words, and I don’t think it’s reasonable to read the quotes in your appendix as claiming what you say they claim.
Converting math into English is a tricky business. Often a lot of the important “assumptions” in a theorem are baked into things like the type signature of a particular variable or the definitions of some key terms; in my toy theorem above I give two examples (completeness and lack of time-dependence). You are going to lose some information about what the theorem says when you convert it from math to English; an author’s job is to communicate the “important” parts of the theorem (e.g. the conclusion, any antecedents that the reader may not agree with, implications of the type signature that limit the applicability of the conclusion), which will depend on the audience.
As a result, when you read an English description of a theorem, you should not expect it to state every antecedent. So it seems unreasonable to me to critique an English claim that a theorem exists purely because the claim didn’t list all the antecedents.
I think it is reasonable to critique a claim in English about a theorem on the basis that it didn’t highlight an important antecedent that limits its applicability. If you said “AI alignment researchers should make sure to highlight the Completeness axiom when discussing coherence theorems” I’d be much more sympathetic (though personally my advice would be “AI alignment researchers should make sure to either argue for or highlight as an assumption the point that the AI is goal-directed / has preferences”).
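As a concrete instance of the type-signature point above (my own illustration, not Rohin's toy theorem): a comparison function typed to always return a verdict has Completeness baked in before any axiom is stated, while widening the return type makes incomplete preferences expressible.

```python
from enum import Enum
from typing import Optional

class Verdict(Enum):
    FIRST = "prefers the first option"
    SECOND = "prefers the second option"
    EQUAL = "indifferent"

# Completeness is baked into this signature: every pair of options must come
# back with one of the three verdicts, so "no preference either way" is
# unrepresentable. A theorem quantifying over functions of this type has
# silently assumed Completeness.
def compare_complete(a: str, b: str) -> Verdict:
    return Verdict.EQUAL if a == b else Verdict.FIRST

# Widening the return type makes incompleteness expressible: None encodes
# incomparability between options the agent simply does not rank.
def compare_incomplete(a: str, b: str) -> Optional[Verdict]:
    if a == b:
        return Verdict.EQUAL
    return None  # hypothetical: the agent ranks neither option above the other
```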
Gustafsson thinks you need Completeness to get a money-pump for this version of transitivity working
Yup, good point, I think it doesn’t change the conclusion.
Often a lot of the important “assumptions” in a theorem are baked into things like the type signature of a particular variable or the definitions of some key terms; in my toy theorem above I give two examples (completeness and lack of time-dependence). You are going to lose some information about what the theorem says when you convert it from math to English; an author’s job is to communicate the “important” parts of the theorem (e.g. the conclusion, any antecedents that the reader may not agree with, implications of the type signature that limit the applicability of the conclusion), which will depend on the audience.
Yep, I agree with all of this.
Converting math into English is a tricky business.
Often, but not in this case. If authors understood the above points and meant to refer to the Complete Class Theorem, they need only have said:
If an agent has complete, transitive preferences, and it does not pursue dominated strategies, then it must be representable as maximizing expected utility.
(And they probably wouldn’t have mentioned Cox, Savage, etc.)
Yup, good point, I think it doesn’t change the conclusion.
I think it does. If the money-pump for transitivity needs Completeness, and Completeness is doubtful, then the money-pump for transitivity is doubtful too.
I think that’s right.
Yep, I agree with all of this.
Often, but not in this case. If authors understood the above points and meant to refer to the Complete Class Theorem, they need only have said:
If an agent has complete, transitive preferences, and it does not pursue dominated strategies, then it must be representable as maximizing expected utility.
(And they probably wouldn’t have mentioned Cox, Savage, etc.)
I think it does. If the money-pump for transitivity needs Completeness, and Completeness is doubtful, then the money-pump for transitivity is doubtful too.
Upon rereading I realize I didn’t state this explicitly, but my conclusion was the following:
Transitivity depending on completeness doesn’t invalidate that conclusion.
Ah I see! Yep, agree with that.