So, as an experiment, I’m going to be a very obstinate reductionist in this comment. I’ll insist that a lot of these hard-seeming concepts aren’t so hard.
Many of them are complicated, in the fashion of “knowledge”—they admit an endless variety of edge cases and exceptions—but these complications are quirks of human cognition and language rather than deep insights into ultimate metaphysical reality. And where there’s a simple core we can point to, that core generally isn’t mysterious.
It may be inconvenient to paraphrase the term away (e.g., because it packages together several distinct things in a nice concise way, or has important emotional connotations, or does important speech-act work like encouraging a behavior). But when I say it “isn’t mysterious”, I mean it’s pretty easy to see how the concept can crop up in human thought even if it doesn’t belong on the short list of deep fundamental cosmic structure terms.
I would say that there’s also at least a fourth way that philosophers often use the word “rational,” which is also the main way I use the word “rational.” This is to refer to an irreducibly normative concept.
Why is this a fourth way? My natural response is to say that normativity itself is either a messy, parochial human concept (like “love,” “knowledge,” “France”), or it’s not (in which case it goes in bucket 2).
Some examples of concepts that are arguably irreducible are “truth,” “set,” “property,” “physical,” “existence,” and “point.”
Picking on the concept here that seems like the odd one out to me: I feel confident that there isn’t a cosmic law (of nature, or of metaphysics, etc.) that includes “truth” as a primitive (unless the list of primitives is incomprehensibly long). I could see an argument for concepts like “intentionality/reference”, “assertion”, or “state of affairs”, though the former two strike me as easy to explain in simple physical terms.
Mundane empirical “truth” seems completely straightforward. Then there’s the truth of sentences like “Frodo is a hobbit”, “2+2=4”, “I could have been the president”, “Hamburgers are more delicious than battery acid”… Some of these are easier or harder to make sense of in the naive correspondence model, but regardless, it seems clear that our colloquial use of the word “true” to refer to all these different statements is pre-philosophical, and doesn’t reflect anything deeper than that “each of these sentences at least superficially looks like it’s asserting some state of affairs, and each sentence satisfies the conventional assertion-conditions of our linguistic community”.
I think that philosophers are really good at drilling down on a lot of interesting details and creative models for how we can try to tie these disparate speech-acts together. But I think there’s also a common failure mode in philosophy of treating these questions as deeper, more mysterious, or more joint-carving than the facts warrant. Just because you can argue about the truthmakers of “Frodo is a hobbit” doesn’t mean you’re learning something deep about the universe (or even something particularly deep about human cognition) in the process.
[Parfit:] It is hard to explain the concept of a reason, or what the phrase ‘a reason’ means. Facts give us reasons, we might say, when they count in favour of our having some attitude, or our acting in some way. But ‘counts in favour of’ means roughly ‘gives a reason for’. Like some other fundamental concepts, such as those involved in our thoughts about time, consciousness, and possibility, the concept of a reason is indefinable in the sense that it cannot be helpfully explained merely by using words.
Suppose I build a robot that updates hypotheses based on observations, then selects actions that its hypotheses suggest will help it best achieve some goal. When the robot is deciding which hypotheses to put more confidence in based on an observation, we can imagine it thinking, “To what extent is observation o a [WORD] to believe hypothesis h?” When the robot is deciding whether it assigns enough probability to h to choose an action a, we can imagine it thinking, “To what extent is P(h)=0.7 a [WORD] to choose action a?” As a shorthand, when observation o updates a hypothesis h that favors an action a, the robot can also ask to what extent o itself is a [WORD] to choose a.
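(To make the story slightly more concrete, here is a minimal Python sketch of one way the robot’s two “[WORD]” questions could be cashed out for a simple Bayesian, goal-directed robot. All of the function names, numbers, and utility assignments below are illustrative assumptions of mine, not anything the story commits to.)

```python
# Toy sketch: a "[WORD]" for belief as the size of a Bayesian update, and a
# "[WORD]" for action as expected goal-achievement relative to doing nothing.
# (Illustrative only; the names and numbers are assumptions of this sketch.)

def reason_to_believe(prior, p_obs_given_h, p_obs_given_not_h):
    """How much observation o counts toward hypothesis h:
    the posterior P(h|o) minus the prior P(h)."""
    p_obs = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    posterior = prior * p_obs_given_h / p_obs
    return posterior - prior

def reason_to_act(p_h, utility_if_h, utility_if_not_h, utility_of_doing_nothing=0.0):
    """How much the credence P(h) counts toward action a:
    a's expected utility (under the robot's goal) minus the default's."""
    expected_utility = p_h * utility_if_h + (1 - p_h) * utility_if_not_h
    return expected_utility - utility_of_doing_nothing

# An observation twice as likely under h as under not-h...
print(reason_to_believe(prior=0.5, p_obs_given_h=0.8, p_obs_given_not_h=0.4))  # ~0.17
# ...and a credence of 0.7 that makes action a look worthwhile.
print(reason_to_act(p_h=0.7, utility_if_h=10.0, utility_if_not_h=-5.0))        # 5.5
```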
When two robots meet, we can further suppose that they negotiate a joint “compromise” goal that allows them to work together rather than fight each other for resources. In communicating with each other, they then also start using “[WORD]” where an action is being evaluated relative to the joint goal, not just the robot’s original goal.
Thus when Robot A tells Robot B “I assign probability 90% to ‘it’s noon’, which is [WORD] to have lunch”, A may be trying to communicate that A wants to eat, or that A thinks eating will serve A and B’s joint goal. (This gets even messier if the robots have an incentive to obfuscate which actions and action-recommendations are motivated by the personal goal vs. the joint goal.)
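(Again purely illustratively, with utility numbers that are assumptions of mine rather than part of the story: the ambiguity can be made concrete, since the same credence can favor “have lunch” relative to A’s own goal while disfavoring it relative to the negotiated joint goal.)

```python
# Toy illustration of the two-robot case: the same credence, evaluated against
# two different goals. All numbers and names here are made up for the sketch.

def reason_to_act(p_h, utility_if_h, utility_if_not_h):
    """Expected utility of the action under a given goal's utility assignments."""
    return p_h * utility_if_h + (1 - p_h) * utility_if_not_h

p_noon = 0.9  # Robot A's credence that it's noon

# "Have lunch" scored against A's personal goal vs. the joint compromise goal.
personal = reason_to_act(p_noon, utility_if_h=5.0, utility_if_not_h=-1.0)   #  4.4
joint    = reason_to_act(p_noon, utility_if_h=1.0, utility_if_not_h=-15.0)  # -0.6

# A's utterance "P(noon)=0.9 is [WORD] to have lunch" is ambiguous between these:
print(f"relative to A's goal:   {personal:+.1f}")
print(f"relative to joint goal: {joint:+.1f}")
```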
If you decide to relabel “[WORD]” as “reason”, I claim that this captures a decent chunk of how people use the phrase “a reason”. “Reason” is a suitcase word, but that doesn’t mean there are no similarities between e.g. “data my goals endorse using to adjust the probability of a given hypothesis” and “probabilities-of-hypotheses my goals endorse using to select an action”, or that the similarity is mysterious and ineffable.
(I recognize that the above story leaves out a lot of important and interesting stuff. Though past a certain point, I think the details will start to become Gettier-case nitpicks, as with most concepts.)
For example, suppose we follow a suggestion once made by Eliezer to reduce the concept of “a rational choice” to the concept of “a winning choice” (or, in line with the type-2 conception you mention, a “utility-maximizing choice”).
That essay isn’t trying to “reduce” the term “rationality” in the sense of taking a pre-existing word and unpacking or translating it. The essay is saying that what matters is utility, and if a human being gets too invested in verbal definitions of “what the right thing to do is”, they risk losing sight of the thing they actually care about and were originally in the game to try to achieve (i.e., their utility).
Therefore: if you’re going to use words like “rationality”, make sure that the words in question won’t cause you to shoot yourself in the foot and take actions that will end up costing you utility (e.g., costing human lives, costing years of averted suffering, costing money, costing anything or everything). And if you aren’t using “rationality” in a safe “nailed-to-utility” way, make sure that you’re willing to turn on a dime and stop being “rational” the second your conception of rationality starts telling you to throw away value.
It ultimately seems hard, at least to me, to make non-vacuous true claims about what it’s “rational” to do without evoking a non-reducible notion of “rationality.”
“Rationality” is a suitcase word. It refers to lots of different things. On LessWrong, examples include not just “(systematized) winning” but (as noted in the essay) “Bayesian reasoning”, or in Rationality: Appreciating Cognitive Algorithms, “cognitive algorithms or mental processes that systematically produce belief-accuracy or goal-achievement”. In philosophy, the list is a lot longer.
The common denominator seems to largely be “something something reasoning / deliberation” plus (as you note) “something something normativity / desirability / recommendedness / requiredness”.
The idea of “normativity” doesn’t currently seem that mysterious to me either, though you’re welcome to provide perplexing examples. My initial take is that it seems to be a suitcase word containing a bunch of ideas tied to:
Goals/preferences/values, especially overridingly strong ones.
Encouraged, endorsed, mandated, or praised conduct.
Encouraging, endorsing, mandating, and praising are speech-acts that seem very central to how humans perceive and intervene on social situations; and social situations seem pretty central to human cognition overall. So I don’t think it’s particularly surprising that words associated with such loaded ideas have fairly distinctive connotations and seem to resist reduction, especially a reduction that neglects the pragmatic dimensions of human communication and considers only the semantic dimension.
I may write up more object-level thoughts here, because this is interesting, but I just wanted to quickly emphasize the upshot that initially motivated me to write up this explanation.
(I don’t really want to argue here that non-naturalist or non-analytic naturalist normative realism of the sort I’ve just described is actually a correct view; I mainly wanted to give a rough sense of what the view consists of and what leads people to it. It may well be the case that the view is wrong, because all true normative-seeming claims are in principle reducible to claims about things like preferences. I think the comments you’ve just made cover some reasons to suspect this.)
The key point is just that when these philosophers say that “Action X is rational,” they are explicitly reporting that they do not mean “Action X suits my terminal preferences” or “Action X would be taken by an agent following a policy that maximizes lifetime utility” or any other such reduction.
I think that when people are very insistent that they don’t mean something by their statements, it makes sense to believe them. This implies that the question they are discussing—“What are the necessary and sufficient conditions that make a decision rational?”—is distinct from questions like “What decision would an agent that tends to win take?” or “What decision procedure suits my terminal preferences?”
It may be the case that the question they are asking is confused or insensible—because any sensible question would be reducible—but it’s in any case different. So I think it’s a mistake to interpret at least these philosophers’ discussions of “decision theories” or “criteria of rightness” as though they were discussions of things like terminal preferences or winning strategies. And it doesn’t seem to me like the answer to the question they’re asking (if it has an answer) would likely imply anything much about things like terminal preferences or winning strategies.
[[NOTE: Plenty of decision theorists are not non-naturalist or non-analytic naturalist realists, though. It’s less clear to me how related or unrelated the thing they’re talking about is to issues of interest to MIRI. I think that the conception of rationality I’m discussing here mainly just presents an especially clear case.]]