and I would urge the author to create an actual concrete situation that doesn’t seem very dumb, in which a highly intelligent, powerful, and economically useful system has non-complete preferences
I’d be surprised if you couldn’t come up with situations where completeness isn’t worth the cost. For example: to close some preference gaps you’d have to think for 100x as long, but if you close them all arbitrarily then you end up with intransitivity.
This seems like a great point. Completeness requires closing all preference gaps, but if you close them inconsistently and thereby violate transitivity, you suddenly become vulnerable to money-pumping.
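
To make the money-pump failure mode concrete, here is a minimal Python sketch, assuming an agent whose preference gaps were closed arbitrarily into the cyclic strict preference A > B > C > A. The names (`PREFERS`, `accepts`, `money_pump`) and the fee threshold are all illustrative assumptions, not anything from the post under discussion:

```python
# A toy agent whose preference gaps were closed arbitrarily, leaving the
# intransitive cycle A > B > C > A. All names and numbers are hypothetical.

# Strict preference "x is preferred to y", encoded as pairs (x, y).
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # intransitive cycle

def accepts(offered: str, held: str, fee: float) -> bool:
    """The agent pays a fee to swap into any strictly preferred good."""
    return (offered, held) in PREFERS and fee < 1.0  # fee below perceived gain

def money_pump(start: str, fee: float, laps: int) -> float:
    """Lead the agent around the cycle, collecting a fee on every swap."""
    better_than = {worse: better for (better, worse) in PREFERS}
    held, extracted = start, 0.0
    for _ in range(3 * laps):             # three swaps complete one lap
        offer = better_than[held]
        assert accepts(offer, held, fee)  # each trade looks good locally
        held = offer
        extracted += fee
    assert held == start                  # yet the agent ends where it began,
    return extracted                      # poorer by fee * 3 * laps

if __name__ == "__main__":
    print(money_pump(start="B", fee=0.01, laps=100))  # 3.0 extracted for nothing
```

Each individual trade is rational by the agent’s own lights, which is exactly why the exploitation never requires deceiving it. An agent that instead leaves the gaps open (declines trades between incomparable goods) refuses the cycle and cannot be pumped this way, which is the sense in which incompleteness can be cheaper than arbitrary completion.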