I feel sorely misunderstood by this post and I am annoyed at how highly upvoted it is. It feels like the sort of thing one writes / upvotes when one has heard of these fabled “longtermists” but has never actually met one in person.
That reaction is probably unfair, and in particular it would not surprise me to learn that some of these were relevant arguments that people newer to the community hadn’t really thought about before, and so were important for them to engage with. (Whereas I mostly know people who have been in the community for longer.)
Nonetheless, I’m writing down responses to each argument that come from this unfair reaction-feeling, to give a sense of how incredibly weird all of this sounds to me (and I suspect many other longtermists I know). It’s not going to be the fairest response, in that I’m not going to be particularly charitable in my interpretations, and I’m going to give the particularly emotional and selected-for-persuasion responses rather than the cleanly analytical responses, but everything I say is something I do think is true.
How much current animal suffering does longtermism let us ignore?
None of it? Current suffering is still bad! You don’t get the privilege of ignoring it, you sadly set it to the side because you see opportunities to do even more good.
(I would have felt so much better about “How much current animal suffering would longtermism ignore?” It’s really the framing that longtermism is doing you a favor by “letting you” ignore current animal suffering that rubs me wrong.)
A. If millions of people were being kept in battery cages, how much energy should we redirect away from longtermism to work on that?
[...]
Check out some images of battery cages and picture millions of humans kept in the equivalent for 100% of their adult lives, and suppose with some work we could free them: would you stick to your longtermist guns?
Yes! This is pretty close to the actual situation we are in! There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.
B. Power is exploited, and absolute power is exploited absolutely
[...]
But to me it seems quite likely that she thinks it’s barbarically inhumane whereas we broadly think it’s OK, or at least spend a lot more energy getting worked up about Will Smith or mask mandates or whatnot. Why do we think it’s OK?
????
I don’t think that animal suffering is OK! I would guess that most longtermists don’t think animal suffering is OK (except for those who have very confident views about particular animals not being moral patients).
Why on earth would you think that longtermists think that animal suffering is OK? Because they don’t personally work on it? I assume you don’t personally work on ending human slavery, presumably that doesn’t mean you think slavery is OK??
C. Sacrificing others makes sense
[...]
And the conclusion that the ongoing (at least plausible) suffering of billions of other creatures, inflicted for our species’ benefit, is less pressing than relatively theoretical future suffering, is convenient enough to be worth double-checking.
Convenient??? I feel like this is just totally misunderstanding how altruistic people tend to feel? It is not convenient for me that the correct response to hearing about millions of people in sexual slavery or watching baby chicks be fed into a large high-speed grinder is to say “sorry, I need to look at these plots to figure out why my code isn’t doing what I want it to do, that’s more important”.
Many of the longtermists I know were dragged to longtermism kicking and screaming, because of all of their intuitions telling them about how they were ignoring obvious moral atrocities right in front of them, and how it isn’t a bad thing if some people don’t get to exist in the future. I don’t know if this is a majority of longtermists.
(It’s probably a much lower fraction of people focused on x-risk reduction—you don’t need to be a longtermist to focus on x-risk reduction, I’m focusing here on the people who would continue to work on longtermist stuff even if it was unlikely to make a difference within their lifetimes.)
I guess maybe it’s supposed to be convenient in that you can have a more comfortable life or something? Idk, I feel like my life would be more comfortable if I just earned-to-give and donated to global poverty / animal welfare causes. And I’ve had to make significantly fewer sacrifices than other longtermists; I already had an incredibly useful background and an interest in computer science and AI before buying into longtermism.
D. Does longtermism mean ignoring current suffering until the heat death of the universe?
Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?
(EDIT: JackM points out that longtermists could increase total suffering, e.g. through population growth that increases both suffering and happiness, so my “obviously not” is technically false. Imagine that the question was about ignoring current utility instead of ignoring current suffering, which is how I interpreted it and how I expect the OP meant it to be interpreted.)
But it’s not really clear to me that in a 100 or 1,000 years the future won’t still loom large, especially if technological progress continues at any pace at all.
Yes, the future will still loom large? And this just seems fine?
Here’s an analogous argument:
“You say to me that I shouldn’t help my neighbor, and instead I should use my time and money to help people in Africa. But it’s not really clear to me that after we’ve successfully helped people in Africa, the rest of the world’s problems won’t still loom large. Wouldn’t you then want to help, say, people in South America?”
(I generated this argument by taking your argument and replacing the time dimension with a space dimension.)
E. Animals are part of longtermism
(Switching to analysis instead of emotion / persuasion because I don’t really know what your claim is here)
Given that your current post title is “How much current animal suffering does longtermism let us ignore?” I’m assuming that in this section you are trying to say that reducing current animal suffering is an important longtermist priority. (If you’re just saying “there exists some longtermist stuff that has something to do with animals”, I agree, but I’m also not sure why you’d bother talking about that.) I think this is mostly false. Looking at the posts you cite, they seem to be in two categories:
First, claims that animal welfare is a significant part of the far future, and so should be optimized (abrahamrowe and Fai). Both posts neglect the possibility that we transition to a world of digital people that doesn’t want biological animals any more (see this comment and a point added in the summary of Fai’s post after I had a conversation with them), and I think their conclusions are basically wrong for that reason.
Second, moral circle expansion is a part of longtermism, and animals are plausibly a good way to currently do moral circle expansion. But this doesn’t mean a focus on reducing current animal suffering! Some quotes from the posts:
Tobias: “a longtermist outlook implies a much stronger focus on achieving long-term social change, and (comparatively) less emphasis on the immediate alleviation of animal suffering”
Tobias: “If we take the longtermist perspective seriously, we will likely arrive at different priorities and focus areas: it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective.”
Jacy: “Therefore, I’m not particularly concerned about the factory farming of biological animals continuing into the far future.”
But the great thing about longtermist arguments is you only need a maybe.
That’s not true! You want the best possible maybe you can get; it’s not enough to just say “maybe this has a beneficial effect” and go do the thing.
There is an estimate of 24.9 million people in slavery, of which 4.8 million are sexually exploited! Very likely these estimates are exaggerated, and the conditions are not as bad as one would think hearing those words, and even if they were the conditions might not be as bad as battery cages, but my broader point is that the world really does seem like it is very broken and there are problems of huge scale even just restricting to human welfare, and you still have to prioritize, which means ignoring some truly massive problems.
Thanks for writing this comment.
I agree, there is already a lot of human suffering that longtermists de-prioritize. More concrete examples include:
The 0.57% of the US population that is imprisoned at any given time this year. (This might even be more analogous to battery cages than slavery.)
The 25.78 million people who live under the totalitarian North Korean regime.
The estimated 27.2% of the adult US population who live with more than one of these chronic health conditions: arthritis, cancer, chronic obstructive pulmonary disease, coronary heart disease, current asthma, diabetes, hepatitis, hypertension, stroke, and weak or failing kidneys.
The nearly 10% of the world population who live in extreme poverty, which is defined as a level of consumption equivalent to less than $2 of spending per day, adjusting for price differences between nations.
The 7 million Americans who are currently having their brains rot away, bit by bit, due to Alzheimer’s and other forms of dementia. Not to mention their loved ones who are forced to witness this.
The 6% of the US population who experienced at least one major depressive episode in the last year.
The estimated half a million people who are homeless in the United States.
The significant fraction of people who have profound difficulties with learning and performing work, and who disproportionately live in poverty and are isolated from friends and family.
EDIT: I made this comment assuming the comment I’m replying to is making a critique of longtermism, but I’m no longer convinced this is the correct reading 😅 Here’s the response anyway:
Well, it’s not so much that longtermists ignore such suffering; it’s that anyone who is choosing a priority (so any EA, regardless of their stance on longtermism) in our current broken system will end up ignoring (or at least not working on alleviating) many problems.
For example, the problem of adults with cancer in the US is undoubtedly tragic, but it is well understood and reasonably well funded by the government and charitable organizations; I would argue it fails the ‘neglectedness’ part of the traditional EA neglectedness, tractability, importance system. Another example: people trapped in North Korea, I think, would fail on tractability, given the lack of progress over the decades. I haven’t thought about those two particularly deeply and could be totally wrong, but this is just the traditional EA framework for prioritizing among different problems, even if those problems are heartbreaking to have to set aside.
I upvoted OP because I think the comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:
Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?
Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this because it expects that suffering to be swamped by increases in total welfare. Remember that one of the founding texts of longtermism says we should be maximising the probability that space colonisation will occur. Space colonisation will probably increase total suffering over the future simply because there will be so many more beings in total.
When OP says:
D. Does longtermism mean ignoring current suffering until the heat death of the universe?
My answer is “pretty much yes”. (Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation. Of course a (strong) longtermist can simply say “So what? I’m still maximising undiscounted utility over time” (see my comment here).
(Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation
But at the time of the heat death of the universe, the future is not vast in expectation? Am I missing something basic here?
(I’m ignoring weird stuff which I assume the OP was ignoring like acausal trade / multiverse cooperation, or infinitesimal probabilities of the universe suddenly turning infinite, or already being infinite such that there’s never a true full heat death and there’s always some pocket of low entropy somewhere, or believing that the universe’s initial state was selected such that at heat death you’ll transition to a new low-entropy state from which the universe starts again.)
It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this as they expect this suffering to be swamped by increases in total welfare.
Oh, yes, that’s plausible; just making a larger future will tend to increase the total amount of suffering (and the total amount of happiness), and this would be a bad trade in the eyes of a negative utilitarian.
In the context of the OP, I think that section was supposed to mean that longtermism would mean ignoring current utility until the heat death of the universe—the obvious axis of difference is long-term vs current, not happiness vs suffering (for example, you can have longtermist negative utilitarians). I was responding to that interpretation of the point, and accidentally said a technically false thing in response. Will edit.
No, you’re not missing anything that I can see. When OP says:
D. Does longtermism mean ignoring current suffering until the heat death of the universe?
I think they’re really asking:
[...]
Certainly the closer an impartial altruist is to heat death, the less forward-looking the altruist needs to be.
I have an issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I think this statement isn’t actually true, though I agree if you just mean that, pragmatically, most longtermists aren’t suffering focused.
Hilary Greaves and William MacAskill loosely define strong longtermism as “the view that impact on the far future is the most important feature of our actions today.” Longtermism is therefore completely agnostic about whether you’re a suffering-focused altruist or a traditional welfarist in line with Jeremy Bentham. It’s entirely consistent to prefer to minimize suffering over the long-run future and be a longtermist. Or, put another way, there are no major axiological commitments involved in being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future.
Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a negative utilitarian one. But it’s still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.
I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I’m unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will aim to maximise the number of beings in the universe.
What I view as the Standard Model of Longtermism is something like the following:
At some point we will develop advanced AI capable of “running the show” for civilization on a high level
The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
One possibility is that AI values will be alien. From a human perspective, this will either cause extinction or something equally bad.
To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.
This model doesn’t predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they’ll make it look a bit different than it otherwise would.
Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI.
I’m just making an observation that longtermists tend to be total utilitarians in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose.
Of course maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn’t right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I’m sceptical of this.
I think that there is something to the claim being made in the post, which is that longtermism as it currently stands is mostly about increasing the number of people in the future living good lives. It seems genuinely true that most longtermists are prioritising creating happiness over reducing suffering. This is the key factor which pushes me towards longtermist s-risk.
I agree with this sentiment.
As an instrumental thing, I am worried that this sort of post (posts like the OP) could backfire.
The original post or my comment? In either case, why?
I agree with your comment. It read to me that you were upset and offended and you wrote a lot in response.
I didn’t think the OP seemed good to me, either in content or rhetoric.
Below is a screenshot of a draft of a larger comment that I didn’t share until now, raising my concerns. (It’s a half-written draft; it just contains fragments of thoughts.)
I wish people could see what is possible and what has been costly in animal welfare.
I wish they knew how expensive it is to carry around certain beliefs, and I wish they could see who is bearing the cost for that.
Thanks. One response:
I wouldn’t say I was offended. Even if the author is wrong about some facts about me, it’s not like they should know those facts about me? Which seems like it would be needed for me to feel offended?
I was maybe a bit upset? I would have called it annoyance but “slightly upset” is reasonable as a descriptor. For A, B, D and E my reaction feels mostly like “I’m confused why this seems like a decent argument for your thesis”, and for C it was more like being upset.