Thanks for the comment! In this context, where we’re arguing about whether sufficiently-advanced artificial agents will satisfy the VNM axioms, I only have to give up Decision-Tree Separability*:
Sufficiently-advanced artificial agents’ dispositions to choose options at a choice node will not depend on other parts of the decision tree than those that can be reached from that node.
And Decision-Tree Separability* isn’t particularly plausible. It’s false if any sufficiently-advanced artificial agent acts in accordance with the following policy: ‘if I previously turned down some option X, I will not choose any option that I strictly disprefer to X.’ And it’s easy to see why agents might act in accordance with that policy: it makes them immune to money-pumps for Completeness.
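To make the policy concrete, here is a minimal sketch of how an agent might apply it in a single-souring decision tree, assuming options A, A- and B with A strictly preferred to A- and B incomparable to both (the option names and tree structure are illustrative only, not taken from the post or the book):

```python
# A toy single-souring set-up (illustrative names, not from the post or book):
# the agent strictly prefers A to A-, and has no preference between B and
# either of A or A-.
strict_prefs = {("A", "A-")}  # (x, y) means: x is strictly preferred to y

def strictly_prefers(x, y):
    return (x, y) in strict_prefs

def permitted(option, turned_down):
    """The policy: rule out any option strictly dispreferred to an option
    that was previously turned down."""
    return not any(strictly_prefers(past, option) for past in turned_down)

turned_down = []

# Node 1: the agent holds A and may trade it for B. Since A and B are
# incomparable, trading is permissible; suppose the agent trades.
turned_down.append("A")

# Node 2: the agent holds B and may trade it for A-.
node_2_options = ["B", "A-"]
print([o for o in node_2_options if permitted(o, turned_down)])
# -> ['B']: A- is ruled out because A was turned down earlier, so the agent
#    never ends up with the dominated outcome A-.
```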
Also, it seems as if one of the major downsides of resolute choice is that agents sometimes have to act against their preferences. But, as I argue in the post, artificial agents with incomplete preferences who act in accordance with the policy above will never have to act against their preferences.
What you are suggesting is what I called “The Conservative Approach” to resolute choice, which I discuss critically on pages 73–74. It is not a new idea.
Note also that avoiding money pumps for Completeness cannot alone motivate your suggested policy, since you can also avoid them by satisfying Completeness. So that argument does not work (without assuming the point at issue).
Finally, I guess I don’t see why consequentialism would be less plausible for artificial agents than for other agents.
I didn’t mean to suggest it was new! I remember that part of your book.
Your second point seems to me to get the dialectic wrong. We can read coherence arguments as saying:
Sufficiently-advanced artificial agents won’t pursue dominated strategies, so they’ll have complete preferences.
I’m pointing out that that inference is poor. Advanced artificial agents might instead avoid dominated strategies by acting in accordance with the policy that I suggest.
I’m still thinking about your last point. Two quick thoughts:
It seems like most humans aren’t consequentialists.
Advanced artificial agents could have better memories of their past decisions than humans.
But my argument against proposals like yours is not that agents wouldn’t have sufficiently good memories. The objection (following Broome and others) is that agents at node 2 have no reason, at that node, to rule out option A- in the way your policy requires. The fact that A could have been chosen earlier should not concern you at node 2, and A- is not dominated by any of the options available at node 2.
Regarding the inference being poor, my argument in the book has two parts: (1) the money pump for Completeness, which relies on Decision-Tree Separability, and (2) the defence of Decision-Tree Separability. It is (2) that rules out your proposal.
Regarding your two quick thoughts, lots of people may be irrational. So that argument does not work.
I think all of these objections would be excellent if I were arguing against this claim:
Agents are rationally required to satisfy the VNM axioms.
But I’m arguing against this claim:
Sufficiently-advanced artificial agents will satisfy the VNM axioms.
And given that, I think your objections miss the mark.
On your first point, I’m prepared to grant that agents have no reason to rule out option A- at node 2. All I need to claim is that advanced artificial agents might rule out option A- at node 2. And I think my argument makes that claim plausible.
On your second point, Decision-Tree Separability doesn’t rule out my proposal. What would rule it out is Decision-Tree Separability*:
sufficiently-advanced artificial agents’ dispositions to choose options at a choice node will not depend on other parts of the decision tree than those that can be reached from that node.
And whatever the merits of Decision-Tree Separability, Decision-Tree Separability* seems to me not very plausible.
On your third point (whether or not most humans are irrational), most humans are non-consequentialists. So even if artificial agents are no more likely than humans to be non-consequentialists, it can still be plausible that artificial agents will be non-consequentialists. And it is relevant that advanced artificial agents could be better than humans at remembering their past decisions. That would make them better able to act in accordance with the policy that I suggest.