I may write up more object-level thoughts here, because this is interesting, but I just wanted to quickly emphasize the upshot that initially motivated me to write up this explanation.
(I don’t really want to argue here that non-naturalist or non-analytic naturalist normative realism of the sort I’ve just described is actually a correct view; I mainly wanted to give a rough sense of what the view consists of and what leads people to it. It may well be the case that the view is wrong, because all true normative-seeming claims are in principle reducible to claims about things like preferences. I think the comments you’ve just made cover some reasons to suspect this.)
The key point is just that when these philosophers say that “Action X is rational,” they are explicitly reporting that they do not mean “Action X suits my terminal preferences” or “Action X would be taken by an agent following a policy that maximizes lifetime utility” or any other such reduction.
I think that when people are very insistent that they don’t mean something by their statements, it makes sense to believe them. This implies that the question they are discussing—“What are the necessary and sufficient conditions that make a decision rational?”—is distinct from questions like “What decision would an agent that tends to win take?” or “What decision procedure suits my terminal preferences?”
It may be the case that the question they are asking is confused or insensible—because any sensible question would be reducible—but it’s in any case different. So I think it’s a mistake to interpret at least these philosophers’ discussions of “decision theories” or “criteria of rightness” as though they were discussions of things like terminal preferences or winning strategies. And it doesn’t seem to me like the answer to the question they’re asking (if it has an answer) would likely imply anything much about things like terminal preferences or winning strategies.
[[NOTE: Plenty of decision theorists are not non-naturalist or non-analytic naturalist realists, though. It’s less clear to me how related or unrelated the thing they’re talking about is to issues of interest to MIRI. I think that the conception of rationality I’m discussing here mainly just presents an especially clear case.]]