But my argument against proposals like yours is not that agents wouldn't have sufficiently good memories. The objection (following Broome and others) is that, with your policy, the agents at node 2 have no reason at that node to rule out option A-. The fact that A could have been chosen earlier should not concern you at node 2. A- is not dominated by any of the options available at node 2.
Regarding the inference being poor, my argument in the book has two parts: (1) the money pump for Completeness, which relies on Decision-Tree Separability, and (2) the defence of Decision-Tree Separability. It is (2) that rules out your proposal.
Regarding your two quick thoughts, lots of people may be irrational. So that argument does not work.
I think all of these objections would be excellent if I were arguing against this claim:
Agents are rationally required to satisfy the VNM axioms.
But I’m arguing against this claim:
Sufficiently-advanced artificial agents will satisfy the VNM axioms.
And given that, I think your objections miss the mark.
On your first point, I’m prepared to grant that agents have no reason to rule out option A- at node 2. All I need to claim is that advanced artificial agents might rule out option A- at node 2. And I think my argument makes that claim plausible.
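To make the shape of that claim concrete, here is a minimal sketch (my own gloss, not anything from the book) of the kind of non-separable policy at issue. The option names, the second node-2 option B (which I assume sits in a preferential gap with both A and A-), and the `permissible` helper are all illustrative assumptions:

```python
# A minimal sketch of the kind of non-separable policy under discussion.
# The option names (A, A_minus, B), the preference relation, and the
# helper functions are illustrative assumptions, not anything from the book.

# Strict preferences: A is strictly preferred to its soured version A-,
# while B is assumed to sit in a preferential gap with both.
strictly_preferred = {("A", "A_minus")}

def prefers(x, y):
    """True if x is strictly preferred to y."""
    return (x, y) in strictly_preferred

def permissible(options_here, previously_available):
    """Rule out any option that is strictly dispreferred to something that
    was available earlier in the tree. The policy is non-separable: what
    gets ruled out depends on parts of the tree no longer reachable."""
    return [
        x for x in options_here
        if not any(prefers(past, x) for past in previously_available)
    ]

# Node 1 offers A (or moving on to node 2); node 2 offers A- and B.
print(permissible(["A_minus", "B"], previously_available=["A"]))
# -> ['B']: A- is ruled out at node 2 because A was available at node 1,
# even though nothing available at node 2 dominates A-.
```

The point of the sketch is just that such a disposition is easy to implement given a memory of earlier nodes; whether the agent has a reason at node 2 is a separate question.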
On your second point, Decision-Tree Separability doesn’t rule out my proposal. What would rule it out is Decision-Tree Separability*:
sufficiently-advanced artificial agents’ dispositions to choose options at a choice node will not depend on other parts of the decision tree than those that can be reached from that node.
And whatever the merits of Decision-Tree Separability, Decision-Tree Separability* seems to me not very plausible.
On your third point: whether or not most humans are irrational, most humans are non-consequentialists. So even if it is no more plausible that artificial agents will be non-consequentialists than it is that humans are, it can still be plausible that artificial agents will be non-consequentialists. And it is relevant that advanced artificial agents could be better than humans at remembering their past decisions. That would make them better able to act in accordance with the policy that I suggest.