I think all of these objections would be excellent if I were arguing against this claim:
Agents are rationally required to satisfy the VNM axioms.
But I’m arguing against this claim:
Sufficiently-advanced artificial agents will satisfy the VNM axioms.
And given that, I think your objections miss the mark.
On your first point, I’m prepared to grant that agents have no reason to rule out option A- at node 2. All I need to claim is that advanced artificial agents might rule out option A- at node 2. And I think my argument makes that claim plausible.
On your second point, Decision-Tree Separability doesn’t rule out my proposal. What would rule it out is Decision-Tree Separability*:
Sufficiently-advanced artificial agents' dispositions to choose options at a choice node will not depend on other parts of the decision tree than those that can be reached from that node.
And whatever the merits of Decision-Tree Separability, Decision-Tree Separability* seems to me not very plausible.
On your third point, most humans are non-consequentialists (whether or not that makes them irrational). So even if artificial agents are no more likely to be non-consequentialists than humans are, it can still be plausible that artificial agents will be non-consequentialists. And it is relevant that advanced artificial agents could be better than humans at remembering their past decisions, which would make them better able to act in accordance with the policy that I suggest.
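To make the contrast concrete, here is a minimal sketch in Python of the kind of disposition I have in mind. Everything in it is my own illustrative assumption: the function name `choose`, the option labels, and the toy ranking on which A- is strictly worse than A. The point is only structural: the chooser's disposition at a node depends on parts of the tree that cannot be reached from that node, so an advanced agent built this way would falsify Decision-Tree Separability*.

```python
# Illustrative sketch only: a history-sensitive chooser. The option
# labels and the "strictly worse than" ranking are toy assumptions,
# not part of the argument above.

def choose(available, history):
    """Pick an option at the current node.

    'available' is what can still be reached from this node;
    'history' records options the agent previously passed up.
    A chooser conforming to Decision-Tree Separability* would
    have to ignore 'history' entirely.
    """
    strictly_worse_than = {"A-": "A"}  # A- is a strictly worse version of A
    # Policy: rule out any option strictly worse than one already passed up.
    permissible = [opt for opt in available
                   if strictly_worse_than.get(opt) not in history]
    return permissible[0] if permissible else available[0]

# At node 2 the agent faces A- and B. Having passed up A on the way
# there, it rules out A-...
print(choose(["A-", "B"], history=["A"]))  # -> B
# ...but on a path where A was never on offer, A- remains choosable.
print(choose(["A-", "B"], history=[]))     # -> A-
```

Note that the `history` argument is exactly what a forgetful agent could not supply, which is why better memory for past decisions would leave advanced artificial agents better placed to follow a policy like this.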