What you are suggesting is what I called “The Conservative Approach” to resolute choice, which I discuss critically on pages 73–74. It is not a new idea.
Note also that avoiding money pumps for Completeness cannot by itself motivate your suggested policy, since one can also avoid those money pumps by satisfying Completeness. So that argument does not work (without assuming the very point at issue).
Finally, I guess I don’t see why consequentialism would be less plausible for artificial agents than for other agents.
But my argument against proposals like yours is not that agents would lack sufficiently good memories. The objection (following Broome and others) is that, under your policy, an agent at node 2 has no reason at that node to rule out option A-. The fact that A could have been chosen earlier should not concern you at node 2: A- is not dominated by any of the options available at node 2 (see the sketch below).
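If it helps to see the structure, here is a minimal sketch in Python of why the two perspectives come apart. The names (A, B, A_MINUS) and the helper functions (strictly_prefers, is_dominated) are illustrative assumptions for this reply, not the formalism of the book:

```python
# Toy dominance check for the single-souring money pump.
# Preferences assumed: A is strictly preferred to A- (A soured by a
# small fee); A vs B and A- vs B are preferential gaps.

A, B, A_MINUS = "A", "B", "A-"

def strictly_prefers(x: str, y: str) -> bool:
    """The agent's only strict preference: A over A-."""
    return (x, y) == (A, A_MINUS)

def is_dominated(option: str, options: set) -> bool:
    """An option is dominated relative to a set of options iff some
    option in the set is strictly preferred to it."""
    return any(strictly_prefers(alt, option) for alt in options)

foregone_at_node_1 = {A}            # A could have been kept at node 1
available_at_node_2 = {B, A_MINUS}  # only B and A- remain at node 2

# The Conservative policy looks back at the whole tree: relative to
# everything that was ever available, A- is dominated (by A).
print(is_dominated(A_MINUS, available_at_node_2 | foregone_at_node_1))  # True

# Decision-Tree Separability looks only at node 2: there, A- is not
# dominated, so the agent has no node-2 reason to rule it out.
print(is_dominated(A_MINUS, available_at_node_2))  # False
```

The second check is the only one the agent at node 2 is licensed to run if Decision-Tree Separability holds, and it comes back negative.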
Regarding the inference being poor, my argument in the book has two parts: (1) the money pump for Completeness, which relies on Decision-Tree Separability, and (2) the defence of Decision-Tree Separability. It is (2) that rules out your proposal.
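For reference, Decision-Tree Separability says, roughly, that the rational status of the options at a choice node does not depend on any parts of the decision tree other than those that can be reached from that node. Given that principle, nothing that happened (or could have happened) before node 2 can bear on what is rationally choosable at node 2, which is exactly what the Conservative Approach needs to deny.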
Regarding your two quick thoughts: lots of people may be irrational, so that argument does not work.