The most important thing about your decision theory is that it shouldn’t predictably and in expectation leave you worse off than if you had used a different approach. My claim in the post is that we’re using such an approach, and it leaves us predictably worse off in certain specific cases.
This isn’t a problem with expected utility maximization (with counterfactuals), though, right? I think the use of counterfactuals is theoretically sound, but we may be incorrectly modelling counterfactuals.
It’s a problem with using expected utility maximization in a game-theoretic setup without paying attention to other players’ decisions and responses. That is, it comes from using counterfactuals that ignore what the other players do, rather than Shapley values, which are a game-theoretic solution to the problem of attributing value in multi-agent dilemmas.
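To make the contrast concrete, here is a minimal sketch of the two attribution rules. The two-funder game and its payoffs are invented for illustration; they are not from the post. It compares naive counterfactual impact (v(everyone) minus v(everyone except you)) with the Shapley value (your marginal contribution averaged over all orders in which players could join):

```python
from itertools import permutations
from math import factorial

# Toy game (players and payoffs are made up): two funders can each back a
# project, and the project succeeds (value 1) iff at least one of them funds it.
PLAYERS = ("A", "B")

def value(coalition):
    """Characteristic function v(S): payoff the coalition S achieves."""
    return 1.0 if len(coalition) >= 1 else 0.0

def naive_counterfactual_impact(player):
    """v(everyone) - v(everyone except this player)."""
    everyone = frozenset(PLAYERS)
    return value(everyone) - value(everyone - {player})

def shapley_values():
    """Average each player's marginal contribution over all join orders."""
    shares = {p: 0.0 for p in PLAYERS}
    for order in permutations(PLAYERS):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            shares[p] += value(frozenset(coalition)) - before
    n_orders = factorial(len(PLAYERS))
    return {p: s / n_orders for p, s in shares.items()}
```

In this toy game the naive counterfactual assigns each funder an impact of 0 (since the other funder alone would have sufficed), so a naive expected-value maximizer concludes neither should fund; the Shapley value instead splits the credit 0.5/0.5, which is the kind of multi-agent accounting the comment is pointing at.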