Why is scope insensitivity considered a bias instead of just the way human values work?
Quoting Kelsey Piper:

If I tell you “I’m torturing an animal in my apartment,” do you go “well, if there are no other animals being tortured anywhere in the world, then that’s really terrible! But there are some, so it’s probably not as terrible. Let me go check how many animals are being tortured.”

(a minute later)

“Oh, like ten billion. In that case you’re not doing anything morally bad, carry on.”

I can’t see why a person’s suffering would be less morally significant depending on how many other people are suffering. And as a general principle, arbitrarily bounding variables because you’re distressed by their behavior at the limits seems risky.
Not a philosopher, but scope sensitivity follows from consistency (either in the sense of acting similarly in similar situations, or of maximizing a utility function). Suppose you’re willing to pay $1 to save 100 birds from oil; if you would do the same trade again at a roughly similar rate (assuming you don’t run out of money), your willingness to pay is roughly linear in the number of birds you save.

Scope insensitivity in practice is relatively extreme; in the original study, people were willing to pay $80 for 2,000 birds and $88 for 200,000 birds. So if you think this represents their true values, people were willing to pay $0.04 per bird for the first 2,000 birds but only $0.00004 per bird for the next 198,000 birds. That is roughly a factor of 1,000; most of the time when people show this kind of variation in price, they are either being irrational, or there are huge diminishing returns and they really value something else that we can identify. For example, if someone values the first 2 movie tickets at $1,000 each but further movie tickets at only $1, maybe they really enjoy the experience of going with a companion, and the feeling of happiness is not increased by a third ticket. So in the birds example it seems plausible that most people value the feeling of having saved some birds.
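To make the arithmetic above explicit, here is a minimal sketch (my own illustration, not part of the quoted argument); the dollar figures are the ones reported in the study, everything else is just the division spelled out:

```python
# Implied per-bird willingness to pay from the study numbers quoted above.

wtp_2000 = 80.0      # stated willingness to pay to save 2,000 birds
wtp_200000 = 88.0    # stated willingness to pay to save 200,000 birds

first_block = wtp_2000 / 2_000                       # $/bird for the first 2,000 birds
marginal_block = (wtp_200000 - wtp_2000) / 198_000   # $/bird for the next 198,000 birds

print(f"first 2,000 birds:  ${first_block:.5f} per bird")     # $0.04000
print(f"next 198,000 birds: ${marginal_block:.8f} per bird")  # $0.00004040
print(f"ratio: {first_block / marginal_block:.0f}x")          # 990x, i.e. roughly a factor of 1,000
```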
Why should you be consistent? One reason is the triage framing, which is given in Replacing Guilt. Another is the money-pump argument: if you value birds at $1 per 100 and $2 per 1,000, and are willing to make trades in either direction, there is a series of trades that leaves you with less money and fewer birds.
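To make the money pump concrete, here is a minimal sketch; the specific trade sequence and prices are my illustration, not the commenter’s. It assumes, as the argument does, that the agent prices each transaction only by the number of birds involved, independently of what has already happened (that context-independence is exactly what the objection below disputes):

```python
# Money-pump sketch under the assumption that the agent prices each block of
# birds by its size alone: a 100-bird block is worth $1, a 1,000-bird block $2.

def block_value(n_birds: int) -> float:
    """The agent's stated dollar value for a block of saved birds."""
    return {100: 1.0, 1000: 2.0}[n_birds]

money, birds_saved = 0.0, 1000   # start: 1,000 birds saved

# Trade 1: accept $2.10 to let all 1,000 birds die
# (acceptable to the agent, which values that block at only $2.00).
assert 2.10 >= block_value(1000)
money += 2.10
birds_saved -= 1000

# Trades 2-10: pay $0.90 apiece to re-save nine blocks of 100 birds
# (each acceptable, since each 100-bird block is worth $1.00 to the agent).
for _ in range(9):
    assert 0.90 <= block_value(100)
    money -= 0.90
    birds_saved += 100

print(f"net money: {money:+.2f}, birds saved: {birds_saved}")
# -> net money: -6.00, birds saved: 900
# Every trade was acceptable at the agent's own stated prices, yet it ends up
# with less money *and* fewer birds saved than it started with.
```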
All of this relies on you caring about consequences somewhat. If your morality is entirely duty-based or has some other foundation, there are other arguments but they probably aren’t as strong and I don’t know them.
I think the money-pump argument is wrong; you are practically assuming the conclusion. A scope-insensitive person would negatively value the total number of bird deaths, or perhaps positively value the total number of birds alive, so that each additional death is less bad when other birds are also dying. In that case it doesn’t make sense to talk about $1 per 100 avoided deaths in isolation.
This doesn’t follow for me. I agree that you can construct some set of preferences or utility function such that being scope-insensitive is rational, but you can do that for any policy.
Two empirical reasons not to take the extreme scope neglect in studies like the 2,000 vs. 200,000 birds one as directly reflecting people’s values:

First, the results of studies like this depend on how you ask the question. A simple variation which generally leads to more scope sensitivity is to present the two options side by side, so that the same people are asked both about the 2,000 birds and about the 200,000 birds (some call this “joint evaluation”, in contrast to “separate evaluation”). Other variations also generally produce more scope-sensitive results (this Wikipedia article seems uneven in quality but gives a flavor of some of those variations). The fact that these variations exist means that just taking people’s answers at face value does not work as a straightforward approach to understanding people’s values, and I think the studies which find more scope sensitivity often have a strong case for being better designed.
Second, there are variants of scope insensitivity which involve things other than people’s values. Christopher Hsee has done a number of studies in the context of consumer choice, where the quantity is something like the amount of ice cream you get or the number of entries in a dictionary, which find scope insensitivity under separate evaluation (but not under joint evaluation), and there is good reason to think that people do prefer more ice cream and more comprehensive dictionaries. Daniel Kahneman has argued that several different kinds of extension neglect all reflect similar cognitive processes, including scope neglect in the bird study, base rate neglect in the Tom W problem, and duration neglect in studies of colonoscopies. And superforecasting researchers have found that ordinary forecasters neglect scope in questions like (in 2012) “How likely is it that the Assad regime will fall in the next three months?” vs. “How likely is it that the Assad regime will fall in the next six months?”; superforecasters’ forecasts are more sensitive to the three-month vs. six-month quantity (there’s a passage in Superforecasting about this which I’ll leave as a reply, and a paper by Mellers & colleagues with more examples). These results suggest that people’s answers to questions about values-at-scale have a lot to do with how people think about quantities, that “how people think about quantities” is a fairly messy empirical matter, and that it’s fairly common for people’s thinking about quantities to involve errors, biases, or limitations which make their answers less sensitive to the size of the quantity.

This does not imply that the extreme scope sensitivity common in effective altruism matches people’s values; I think that claim requires a philosophical argument more than an empirical one. Just that the extreme scope insensitivity found in some studies probably doesn’t match people’s values.
A passage from Superforecasting:

Flash back to early 2012. How likely is the Assad regime to fall? Arguments against a fall include (1) the regime has well-armed core supporters; (2) it has powerful regional allies. Arguments in favor of a fall include (1) the Syrian army is suffering massive defections; (2) the rebels have some momentum, with fighting reaching the capital. Suppose you weight the strength of these arguments, they feel roughly equal, and you settle on a probability of roughly 50%.

But notice what’s missing? The time frame. It obviously matters. To use an extreme illustration, the probability of the regime falling in the next twenty-four hours must be less—likely a lot less—than the probability that it will fall in the next twenty-four months. To put this in Kahneman’s terms, the time frame is the “scope” of the forecast.

So we asked one randomly selected group of superforecasters, “How likely is it that the Assad regime will fall in the next three months?” Another group was asked how likely it was in the next six months. We did the same experiment with regular forecasters.

Kahneman predicted widespread “scope insensitivity.” Unconsciously, they would do a bait and switch, ducking the hard question that requires calibrating the probability to the time frame and tackling the easier question about the relative weight of the arguments for and against the regime’s downfall. The time frame would make no difference to the final answers, just as it made no difference whether 2,000, 20,000, or 200,000 migratory birds died. Mellers ran several studies and found that, exactly as Kahneman expected, the vast majority of forecasters were scope insensitive. Regular forecasters said there was a 40% chance Assad’s regime would fall over three months and a 41% chance it would fall over six months.

But the superforecasters did much better: They put the probability of Assad’s fall at 15% over three months and 24% over six months. That’s not perfect scope sensitivity (a tricky thing to define), but it was good enough to surprise Kahneman. If we bear in mind that no one was asked both the three- and six-month version of the question, that’s quite an accomplishment. It suggests that the superforecasters not only paid attention to the time frame in the question but also thought about other possible time frames—and thereby shook off a hard-to-shake bias.
Note: in the other examples studied by Mellers & colleagues (2015), regular forecasters were less sensitive to scope than they should’ve been, but they were not completely insensitive to scope, so the Assad example here (40% vs. 41%) is unusually extreme.
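One way to see how far these numbers are from “perfect” scope sensitivity (with the caveat the passage itself notes, that perfect scope sensitivity is hard to define): under a constant-hazard assumption, which is my illustrative benchmark and not something the book or the researchers commit to, a three-month probability p implies a six-month probability of 1 - (1 - p)^2.

```python
# Constant-hazard benchmark (illustrative assumption only): if the chance of
# the regime falling is uniform over time, then
#   P(falls within 6 months) = 1 - (1 - P(falls within 3 months))**2

def six_month_from_three(p3: float) -> float:
    return 1 - (1 - p3) ** 2

for label, p3, p6_reported in [("regular forecasters", 0.40, 0.41),
                               ("superforecasters", 0.15, 0.24)]:
    print(f"{label}: 3-month {p3:.0%}, reported 6-month {p6_reported:.0%}, "
          f"constant-hazard benchmark {six_month_from_three(p3):.0%}")
# -> regular forecasters: 3-month 40%, reported 6-month 41%, constant-hazard benchmark 64%
# -> superforecasters: 3-month 15%, reported 6-month 24%, constant-hazard benchmark 28%
```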
Hm, I think that most of the people who participated in this experiment:

three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88. This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
would agree after the results were shown to them that they were doing something irrational that they wouldn’t endorse if aware of it. (Example taken from here: https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity)
There’s also an essay from 2008 about the intuitions behind utilitarianism that you might find helpful for understanding why someone could consider scope insensitivity a bias instead of just the way human values work:
https://www.lesswrong.com/posts/r5MSQ83gtbjWRBDWJ/the-intuitions-behind-utilitarianism
I think scope insensitivity could be a form of risk aversion over the difference you make in the world (“difference-making”), or is at least related to it. I explain here why I think that risk aversion over the difference you make is irrational even though risk aversion over states of the world is not.
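As a toy illustration of the connection being drawn here (the square-root value function and the numbers are mine, purely for illustration): an agent that is risk-averse over the difference it makes will prefer a sure small difference to a gamble on a much larger one, which from the outside looks like insensitivity to the larger scope.

```python
import math

# Toy model (my illustration): value the *difference you make*, with a concave
# function, i.e. risk aversion over difference-making.
def value_of_difference(d: float) -> float:
    return math.sqrt(d)

sure_thing = value_of_difference(1)       # save 1 bird for certain   -> 1.00
gamble = 0.10 * value_of_difference(15)   # 10% chance of saving 15   -> ~0.39

print(sure_thing > gamble)   # True: the sure single rescue is preferred,
                             # even though the gamble saves 1.5 birds in expectation.
```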
I think it is basically not a bias in the way confirmation bias is, and anyone claiming otherwise is already presupposing linear aggregation of welfare. From a thing I wrote recently:

Scope neglect is not a cognitive bias like confirmation bias. I can want there to be ≥80 birds saved, but be indifferent about larger numbers: this does not violate the von Neumann-Morgenstern axioms (nor any other axiomatic system that underlies the alternatives to utility theory I know of). Similarly, I can most highly value there being exactly 3 flowers in the vase on the table (less being too sparse, and more being too busy). The pebble-sorters of course go the extra mile.

Calling scope neglect a bias presupposes that we ought to value certain things linearly (or at least monotonically). This does not follow from any mathematics I know of. Instead it tries to sneak in utilitarian assumptions by calling their violation “biased”.
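A minimal sketch of the kind of preference described in the quote (the specific function is my illustration): a utility function that is flat above 80 birds is still an ordinary utility function, and ranking lotteries by its expectation satisfies the von Neumann-Morgenstern axioms by construction.

```python
# Scope-insensitive but VNM-consistent preferences (illustrative only):
# utility rises up to 80 birds saved and is flat afterwards.
def utility(birds_saved: int) -> float:
    return float(min(birds_saved, 80))

def expected_utility(lottery):
    """lottery = [(probability, birds_saved), ...]"""
    return sum(p * utility(n) for p, n in lottery)

# Indifferent between saving 2,000 and 200,000 birds for sure...
print(expected_utility([(1.0, 2_000)]) == expected_utility([(1.0, 200_000)]))   # True
# ...but strictly prefers a sure 80 birds to a coin flip between 0 and 200,000.
print(expected_utility([(1.0, 80)]) > expected_utility([(0.5, 0), (0.5, 200_000)]))  # True
```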
Anything is VNM-consistent if your utility function is allowed to take universe-histories or sequences of actions as inputs. So you will have to make some assumptions.
Various social aggregation theorems (e.g. Harsanyi’s) show that “rational” people must aggregate welfare additively.
(I think this is a technical version of Thomas Kwa’s comment.)
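For reference, the rough shape of Harsanyi’s aggregation theorem (my paraphrase of the standard statement, not something spelled out in the comment): if every individual i and the social planner have von Neumann-Morgenstern preferences over lotteries, and the planner is indifferent whenever all individuals are, then the planner’s utility is an affine combination of the individual utilities,

$$W(x) = b + \sum_i a_i \, U_i(x),$$

with weights a_i that are nonnegative, and positive once a strong enough Pareto condition is added.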
To answer this question in short: it is considered a bias because it is innate. Like any other bias, scope insensitivity comes from within, for an individual as well as for an organization run by individuals. We may generalize it as a product of human values because of the long-running history of constant ‘self-value’ teachings (not the spiritual ones). But there will always be a disparity, given the ever-evolving nature of human values, especially in the current era.

--------

On the contrary, most of the time I do consider scope insensitivity to be the typical human way. One absurd reason I have identified is outward negligence toward the scope of sensitive issues. Whenever there are multiple issues at hand, there is always a demand for a big, attractive case to be made, and the ones with the ability to convince often get listened to. The result: insensitivity toward the scope of the issue posed by a commoner (someone less talented at making that case).

This is just one case. If we refrain from pinning blame, we can say that there is a very real imbalance between those who identify the scope of a problem and those who rectify it.
There’s a lot of interesting writing about the evolutionary biology and evolutionary psychology of genetic selfishness, nepotism, and tribalism, and why human values descriptively focus on the sentient beings that are more directly relevant to our survival and reproductive fitness—but that doesn’t mean our normative or prescriptive values should follow whatever natural selection and sexual selection programmed us to value.
Then what does scope sensitivity follow from?
Scope sensitivity, I guess, is the triumph of ‘rational compassion’ (as Paul Bloom talks about it in his book Against Empathy), quantitative thinking, and moral imagination, over human moral instincts that are much more focused on small-scope, tribal concerns.
But this is an empirical question in human psychology, and I don’t think there’s much research on it yet. (I hope to do some in the next couple of years though).
That explanation is a bit vague; I don’t understand what you mean.

By “quantitative thinking” do you mean something like having a textual-length simplicity prior over moralities?

By the triumph of moral imagination, do you mean somehow changing the mental representation of the world you are evaluating so that it better represents the state of the world?

Why do you call it a triumph (implying it’s good) over small-scope concerns?

Why do you say this is an empirical question? What do you plan on testing?