Let’s taboo the word “care”. I expect the average longtermist thinks that deaths from famines and floods are about as bad as the average non-longtermist EA. Problems do not become “less bad” simply because other problems exist.
Having different priorities, stemming from different beliefs about e.g. what things matter and how effectively we can address them, is orthogonal to relative evaluations of how bad any individual problem is.
They don’t become less bad, but we pay less attention and devote fewer resources to them, which is a very plausible way of interpreting “caring less”. On this interpretation, it’s psychologically implausible that we can care as much about these other problems as others can, even if our abstract utility functions don’t say they matter any less just because other things matter more.
I don’t think the right response is to argue some definitional point. We should just own that we care less on a commonsense interpretation of the word, and explain why that’s right to do.
I agree that the definitional point would be uninteresting, except that I think the commonsense interpretation bundles a bunch of connotations which are wrong (and negative). In context, people receiving this message will have systematically incorrect beliefs about longtermism and about those who use it as a framework for prioritization. This is plainly obvious if you e.g. go read pretty much any Twitter thread where people who are hearing about it for the first time (or were otherwise introduced to it in an adversarial context) are debating the subject.
They don’t become less bad, but we pay less attention and devote fewer resources to them, which is a very plausible way of interpreting “caring less”.
One meaning of “caring” (let’s call it Caring-1) is the kind of care a parent provides for their child. This is precisely the type of care you’re talking about here. It implies a responsibility to nurture, protect, and feel for an individual person, place, or thing. Common sense is that we have a responsibility to care for a very limited number of others in this way, and to at least be cognizant enough to do no harm to a much wider circle of others.
“Caring” can also refer to one’s receptivity to “chance encounters with other people’s problems.” Let’s call this Caring-2.
If you had a golden opportunity to help out with a certain problem, would you take it (“Do you want a hand with that?”)?
Do you approve of the fact that somebody out there is working on a certain problem (“X is doing amazing work on this problem!”)?
Do you feel and express sympathy for a certain problem when it is brought to your attention (“I’m so sorry”)?
Do you acknowledge the reality of the suffering various problems cause, even if you don’t personally work on that problem yourself (“that is a really serious issue”)?
Will you acknowledge that the problem seems like a plausible choice for extending Caring-1, even if you don’t personally choose to do so (“somebody should do something!”)?
Nobody can provide Caring-1 to every issue. The difference between short-termists and longtermists lies in which sorts of issues they extend Caring-2 to and which they withhold it from.
Longtermists may reject or downplay Caring-2 for major present-day issues (famines, floods, etc), in favor of extending either Caring-1 or Caring-2 for far-future issues (astronomical waste).
Short-termists may reject or downplay Caring-2 for far-future issues (astronomical waste) in order to focus more on present-day issues (famines and floods).
Hossenfelder expresses around 14:45 that she approves of extending Caring-2 to both the short-term and long-term future. What bothers her is the idea that we should extend no Caring-2 or Caring-1 to the present day, as well as some of the more far-out ideas longtermist thinkers have explored (e.g. simulation arguments).
Of course, Hossenfelder, a theoretical physicist, is smart enough to make this distinction herself. The fact that she chooses not to, and couches her argument in such heated language, says to me that this is just another crude political hit-job.
even if our abstract utility functions don’t say they matter any less just because other things matter more
A utility function can’t say anything else, in decision theory. Total caring is, roughly speaking, conserved.
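To make “conserved” concrete, here’s a minimal sketch, assuming (purely for illustration) a vNM agent whose utility decomposes additively over causes with normalized weights:

```latex
% Illustrative assumption: utility is additive over causes i,
% with weights normalized to sum to 1.
U(x) = \sum_i w_i \, u_i(x), \qquad \sum_i w_i = 1, \quad w_i \ge 0
% Under this normalization, increasing the weight on cause j by
% \delta forces \sum_{k \neq j} w_k to fall by \delta: caring
% more about one thing just is caring relatively less about the
% rest. There is no normalization-free notion of "total caring"
% to increase, since U is only defined up to positive affine
% transformation anyway.
```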
The pseudo utility functions that a hedonic utilitarian projects onto others can introduce more caring for one thing without reducing their caring for other things, but they’re irrelevant in this context. (And if you ask me, a preference utilitarian, they’re not very relevant in the context of utilitarianism either, but never mind that.)
Hmm, although I think I get what you mean, I’m not sure how it could actually be true, given that (preference) utility functions are scale- and offset-invariant, so the extent of an agent’s caring can only be described relative to the other things they care about?
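For reference, the invariance being appealed to here is the standard vNM uniqueness result:

```latex
% A vNM utility function u is unique only up to a positive
% affine transformation: for any a > 0 and any b,
u'(x) = a \, u(x) + b
% represents exactly the same preferences as u. Absolute levels
% of u therefore carry no information; only ratios of utility
% differences are invariant, e.g.
\frac{u(x) - u(y)}{u(z) - u(w)}
% so "how much" an agent cares about an outcome is only defined
% relative to how much they care about other outcomes.
```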